**arXiv:** 2308.06115
**Title:** Approximation of (some) FPUT lattices by KdV Equations
**Abstract:** We consider a Fermi-Pasta-Ulam-Tsingou lattice with randomly varying coefficients. We discover a relatively simple condition which when placed on the nature of the randomness allows us to prove that small amplitude/long wavelength solutions are almost surely rigorously approximated by solutions of Korteweg-de Vries equations for very long times. The key ideas combine energy estimates with homogenization theory and the technical proof requires a novel application of autoregressive processes.
**Authors:** Joshua A. McGinnis, J. Douglas Wright
**Published:** 2023-08-11T13:10:22Z
**Link:** http://arxiv.org/abs/2308.06115v1
# Approximation of (some) random FPUT lattices by KdV equations ###### Abstract. We consider a Fermi-Pasta-Ulam-Tsingou lattice with randomly varying coefficients. We discover a relatively simple condition which when placed on the nature of the randomness allows us to prove that small amplitude/long wavelength solutions are almost surely rigorously approximated by solutions of Korteweg-de Vries equations for very long times. The key ideas combine energy estimates with homogenization theory and the technical proof requires a novel application of autoregressive processes. ## 1. Introduction Consider a variable mass Fermi-Pasta-Ulam-Tsingou (FPUT) lattice1 Footnote 1: We write the equations as a first order system as opposed to its possibly more familiar second order form \[m(j)\ddot{x}(j)=\mathcal{V}^{\prime}(x(j+1)-x(j))-\mathcal{V}^{\prime}(x(j)-x( j-1)).\] The change of variables leading from this to (1.1) is \(q(j)=x(j+1)-x(j)\) and \(p(j)=\dot{x}(j)\).: \[\dot{q}(j,t)=\delta^{+}p(j,t)\quad\text{and}\quad m(j)\dot{p}(j,t)=\delta^{-}[ \mathcal{V}^{\prime}(q)](j,t). \tag{1.1}\] Here \(t\in\mathbf{R}\), \(j\in\mathbf{Z}\) and the unknowns \(q(j,t)\) (the _relative displacement_) and \(p(j,t)\) (the _velocity_) are real-valued. The _mass coefficients_\(m(j)\) are strictly positive and \[\mathcal{V}(q):=\frac{1}{2}q^{2}+\frac{1}{3}q^{3} \tag{1.2}\] is the _spring potential2_. Lastly Footnote 2: This choice of the spring potential—which is an instance of the “\(\alpha\)-potential” from [9]—is made mainly for simplicity. We could allow more complicated potentials and, so long as we had \(\mathcal{V}^{\prime}(0)>0\) and \(\mathcal{V}^{\prime\prime}(0)\neq 0\), only minor changes to our results would occur. Lastly \[\delta^{+}f(j)=f(j+1)-f(j)\quad\text{and}\quad\delta^{-}f(j)=f(j)-f(j-1)\] are the _right and left finite-difference operators_. Models of this sort are ubiquitous in applications. A partial list: molecular dynamics, lamination, nondestructive testing, vehicular traffic, granular media, metamaterials, chemistry/biochemistry, and power generation [21]. The system (1.1) also plays a major role as a paradigm for the mathematical analysis of wave propagation--especially solitary waves--in nonlinear dispersive settings and it is the system's famous connection to the Korteweg-de Vries (KdV) equation wherein our interest lies. Here are several important mathematical results about that connection: * When \(m(j)\) is constant, long-wavelength (say like \(1/\epsilon\), where \(0<\epsilon\ll 1\)), small-amplitude (order of \(\epsilon^{2}\)) solutions are well-approximated over long time scales (order of \(1/\epsilon^{3}\)) by solutions of KdV equations. The relative \(\ell^{2}\)-error made by the approximation in this case is \(\mathcal{O}(\epsilon)\). See [26] for the earliest formal derivation and [23] for the first rigorous result. * The same sort of result holds when \(m(j)\) is \(N\)-periodic (that is, \(m(j+N)=m(j)\) for all \(j\)). Indeed, the spring potentials may also be taken to be \(N-\)periodic (for instance, replace \(\mathcal{V}(q)\) with \(\mathcal{V}_{j}(q)=\kappa(j)q^{2}/2+\beta(j)q^{3}/3\) where \(\kappa(j)\) and \(\beta(j)\) are \(N-\)periodic). See [2, 10]. While there have been a few derivations of KdV from random versions of the FPUT lattice previously (specifically [14, 25]), all have been purely formal with no rigorous quantitative results. Even conjectures for the size of the error have been absent. We have been working for some time to remedy this. 
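The following is a minimal numerical sketch, not from the paper, of the right-hand side of (1.1) with the spring potential (1.2). It assumes a finite lattice with periodic wrap-around (the paper works on all of \(\mathbf{Z}\)), and the names (`fput_rhs`, `n_sites`) are illustrative only.

```python
import numpy as np

def delta_plus(f):
    """Right finite difference: (delta^+ f)(j) = f(j+1) - f(j)."""
    return np.roll(f, -1) - f

def delta_minus(f):
    """Left finite difference: (delta^- f)(j) = f(j) - f(j-1)."""
    return f - np.roll(f, 1)

def v_prime(q):
    """Derivative of the spring potential V(q) = q^2/2 + q^3/3 from (1.2)."""
    return q + q**2

def fput_rhs(q, p, m):
    """Right-hand side of (1.1): dq/dt = delta^+ p and m * dp/dt = delta^-[V'(q)]."""
    return delta_plus(p), delta_minus(v_prime(q)) / m

# Example: constant masses and a small-amplitude, long-wavelength pulse.
eps, n_sites = 0.1, 512
j = np.arange(n_sites)
q = eps**2 / np.cosh(eps * (j - n_sites / 2))**2
p = -q                                   # roughly a right-moving wave, cf. (1.4)
dq_dt, dp_dt = fput_rhs(q, p, np.ones(n_sites))
```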
In our article [20] we discovered that if the \(m(j)\) are independent identically distributed (i.i.d.) random variables then the accuracy of long-wavelength approximations is substantially diminished and, consequently, only shorter time scales and the linear problem (that is when \(\mathcal{V}^{\prime}(q)=q\)) are within reach. Precisely, we showed3 that long-wavelength solutions (again like \(1/\epsilon\)) converge almost surely and strongly, but rather slowly, to solutions of a wave equation on time scales on the order of \(1/\epsilon\): the relative \(\ell^{2}\)-error made by the approximation is almost surely \(\mathcal{O}(\sqrt{\epsilon\ln|\ln(\epsilon)|})\). Numerics indicate that our error estimate is close to sharp. In [19], McGinnis proved a similar result for a 2D lattice. Footnote 3: We also allow for heterogeneity in \(\mathcal{V}\) as well as in \(m\) in [20]; our results apply to the case where \(\mathcal{V}^{\prime}(q)\) is replaced by \(\mathcal{V}^{\prime}_{j}(q)=\kappa(j)q\) with \(\kappa(j)\) another collection of i.i.d. random variables. Furthermore, formal and numerical studies of random FPUT and other similar random lattice problems report that the waves in such systems experience a notable deterioration of their amplitude as time evolves (see, for instance, [11, 15, 16, 18]). We have carried out our own simulations of the nonlinear problem (1.1) with i.i.d. random variables as coefficients in the long-wavelength/small amplitude regime. These simulations demonstrate that for time scales longer than \(1/\epsilon\), solutions of (1.1) attenuate substantially; KdV-like dynamics (namely, resolution into solitons) is not observed. We include the results of our simulations below in Section 7, Figure 1. _In short, we do not believe that when the coefficients are i.i.d. random variables a KdV approximation is appropriate or possible._ However, there are more sorts of randomness than simply taking the coefficients to be i.i.d. In this paper we consider the random case, but we restrict the randomness in such a way that we can prove a fully rigorous KdV approximation. We believe that this is the first example of such a result involving randomness and nonlinear dispersive systems, though there are several earlier results which carefully derive--but do not fully justify--KdV as an effective equation for the evolution of long water waves over randomly varying topography [22, 4, 6, 5]. Here is our assumption on the masses: **Hypothesis 1.1**.: _The masses are given by_ \[m(j)=1+\delta^{+}\delta^{-}\zeta(j) \tag{1.3}\] _where \(\zeta(j)\), \(j\in\mathbf{Z}\), are i.i.d. random variables with zero mean, variance \(\sigma^{2}\) and support contained in \((-1/4,1/4)\)._ We refer to (1.3) as a _transparency condition_ and we call (1.1) subject to Hypothesis 1.1 the _transparent random mass FPUT lattice_. The use of "transparent" here is due to an observation from simulations: if the masses meet this condition then waves propagate relatively cleanly through the lattice without too much "back scattering" or "internal reflection." Our idea for making this choice was inspired by the derivation of KdV as an effective equation for water waves over a random bottom in [22] where the topography is given as a perfect spatial derivative. The condition on the support of \(\zeta(j)\) is there to ensure that the \(m(j)\) are strictly positive (for if \(|\zeta(j)|<1/4\) then the triangle inequality tells us \(m(j)>0\)). It also guarantees that \(\sigma^{2}<\infty\). 
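A short sketch (an illustration, not the authors' code) of Hypothesis 1.1: it draws \(\zeta(j)\) uniformly from \((-1/4,1/4)\), forms \(m(j)=1+\delta^{+}\delta^{-}\zeta(j)\) on a finite lattice with periodic wrap-around, and checks the two consequences noted above (positivity of the masses and finiteness of \(\sigma^{2}\)).

```python
import numpy as np

def transparent_masses(n_sites, rng, half_width=0.25):
    """Masses from (1.3): m(j) = 1 + delta^+ delta^- zeta(j), with zeta(j) i.i.d.,
    mean zero, supported in (-1/4, 1/4); here zeta ~ Uniform(-half_width, half_width)."""
    zeta = rng.uniform(-half_width, half_width, n_sites)
    discrete_laplacian = np.roll(zeta, -1) - 2.0 * zeta + np.roll(zeta, 1)
    return 1.0 + discrete_laplacian, zeta

rng = np.random.default_rng(0)
m, zeta = transparent_masses(100_000, rng)
print(m.min() > 0.0)   # True: |zeta| < 1/4 forces m(j) > 0, as noted above
print(zeta.var())      # sample estimate of sigma^2 (= 1/48 for this uniform choice)
```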
Our main result in a nutshell: for suitable initial conditions, solutions of the transparent random mass FPUT lattice almost surely satisfy \[\begin{split} q(j,t)&=\epsilon^{2}\left[A( \epsilon(j-t),\epsilon^{3}t)+B(\epsilon(j+t),\epsilon^{3}t)\right]+\mathcal{O} _{\ell_{2}}(\epsilon^{2}\sqrt{|\ln(\epsilon)|})\\ p(j,t)&=\epsilon^{2}\left[-A(\epsilon(j-t), \epsilon^{3}t)+B(\epsilon(j+t),\epsilon^{3}t)\right]+\mathcal{O}_{\ell_{2}}( \epsilon^{2}\sqrt{|\ln(\epsilon)|})\end{split} \tag{1.4}\] for \(|t|\leq T_{0}/\epsilon^{3}\), where \(A\) and \(B\) solve the KdV equations \[2\partial_{T}A+\left(\frac{1}{12}+2\sigma^{2}\right)\partial_{w}^{3}A+ \partial_{w}A^{2}=0\quad\text{and}\quad 2\partial_{T}B-\left(\frac{1}{12}+2 \sigma^{2}\right)\partial_{l}^{3}B-\partial_{l}B^{2}=0.\] The fully technical version of our result appears in Theorem 6.1 below. **Remark 1.2**.: _To the uninitiated, it may look like the size of the error exceeds the size of the approximation. However, the long-wave scaling of the spatial coordinate gives \(\|\epsilon^{2}A(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}=\mathcal{O}( \epsilon^{3/2})\) which indicates a relative \(\ell^{2}\)-error of \(\mathcal{O}\left(\sqrt{\epsilon|\ln(\epsilon)|}\right)\)._ Our paper is organized as follows. Section 2 spells out some notation and other ground rules. Section 3 proves a general approximation theorem for (1.1). While motivated by KdV approximations, the result applies more broadly. Section 4 contains the derivation of KdV from (1.1) under Hypothesis 1.1; this is the heart of the paper. Section 5 contains a multitude of estimates which set the stage for the application of the general approximation theorem. It is in this section where probability plays a major role and where the technical guts of our result live. Section 6 ties everything together with the statement and proof of our main result, the technical version of (1.4). Then we present the result of supporting numerics in Section 7. We close out with a big list of open questions in Section 8. **Acknowledgements:** The authors would like to thank Amanda French, C. Eugene Wayne and Atilla Yilmaz for helpful conversations related to this project. Also, JDW would like to recognize the National Science Foundation who supported this research with grant DMS-2006172. ## 2. Preliminaries ### Function/sequence spaces For a doubly infinite sequence \(f:\mathbf{Z}\to\mathbf{R}\) we put, as per usual, \(\|f\|_{\ell^{2}}:=\sqrt{\sum_{j\in\mathbf{Z}}f^{2}(j)}\) and \(\|f\|_{\ell^{\infty}}:=\sup_{j\in\mathbf{Z}}|f(j)|\). Of course \(\ell^{2}\) and \(\ell^{\infty}\) are the sets of all sequences where the associated norms are finite. If we write \(\|f,g\|_{\ell^{2}}\) we mean \(\|f\|_{\ell^{2}}+\|g\|_{\ell^{2}}\), the norm on \(\ell^{2}\times\ell^{2}\). The analogous convention applies to \(\|f,g\|_{\ell^{\infty}}\). For functions \(F:\mathbf{R}\to\mathbf{R}\) and non-negative integers \(n\) and \(r\) we put \[\|F\|_{H^{n}(r)}:=\sqrt{\int_{\mathbf{R}}(1+X^{2})^{r}F^{2}(X)dX+\int_{\mathbf{ R}}(1+X^{2})^{r}(\partial_{X}^{n}F)^{2}(X)dX}\] and \(H^{n}(r)\) is the closure of the set of all smooth functions with respect to this norm. We define \(H^{n}:=H^{n}(0)\), \(L^{2}(r):=H^{0}(r)\) and \(L^{2}:=H^{0}(0)\). Next, \(\|F\|_{W^{n,\infty}}:=\sup_{X\in\mathbf{R}}|F(X)|+|\partial_{X}^{n}F(X)|\) and \(W^{n,\infty}\) is the associated function space. By \(L^{\infty}\) we mean \(W^{0,\infty}\). All of the spaces listed above are Banach spaces. 
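As a quick numerical companion to Remark 1.2 (a sketch under the assumption of a localized profile, using the \(\ell^{2}\) and \(L^{2}\) norms defined above), the long-wave scaling \(\|\epsilon^{2}A(\epsilon\cdot)\|_{\ell^{2}}\approx\epsilon^{3/2}\|A\|_{L^{2}}\) can be checked directly:

```python
import numpy as np

A = lambda w: np.exp(-w**2 / 2.0)        # any reasonably localized profile works

j = np.arange(-200_000, 200_001)         # lattice sites
x = np.linspace(-60.0, 60.0, 240_001)    # quadrature grid for the L^2 norm
dx = x[1] - x[0]
norm_A = np.sqrt(np.sum(A(x)**2) * dx)   # ||A||_{L^2} by a Riemann sum

for eps in (0.1, 0.05, 0.025):
    lattice_norm = np.sqrt(np.sum((eps**2 * A(eps * j))**2))
    print(eps, lattice_norm, eps**1.5 * norm_A)   # last two columns nearly agree
```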
### Probability All probabilistic components in the paper descend through the random variables \(\zeta(j)\). Associated probabilities are represented by \(\mathbb{P}\) and expectations by \(\mathbb{E}\). ### \(\mathcal{O}\), \(o\) and \(C\) notation We use the following version of Landau's "big \(\mathcal{O}\)/little \(o\)" notation. Given two real-valued functions, \(f(\epsilon)\) and \(g(\epsilon)\), we say \(f(\epsilon)=\mathcal{O}(g(\epsilon))\) if for some \(C_{\star}>0\) and \(\epsilon_{\star}>0\), \(|f(\epsilon)|\leq C_{\star}|g(\epsilon)|\) for \(\epsilon\in(0,\epsilon_{\star})\). We say \(f(\epsilon)=o(g(\epsilon))\) if \(\lim_{\epsilon\to 0^{+}}|f(\epsilon)/g(\epsilon)|=0\). If \(Y\) is a Banach space and we have a family of elements \(u_{\epsilon}\in Y\), we write \(u_{\epsilon}=\mathcal{O}_{Y}(g(\epsilon))\) if \(\|u_{\epsilon}\|_{Y}=\mathcal{O}(g(\epsilon))\) by the earlier definition. If we write \(f(\epsilon)=g(\epsilon)+\mathcal{O}(h(\epsilon))\) (or similar) we mean \(f(\epsilon)-g(\epsilon)=\mathcal{O}(h(\epsilon))\). We at times default to "big \(C\)" notation: if we simply write \(f(\epsilon)\leq Cg(\epsilon)\) and omit qualifiers then we mean \(f(\epsilon)=\mathcal{O}(g(\epsilon))\). Some quantities will depend on the random variables \(\zeta(j)\). For such quantities, if we write \(f(\epsilon)=\mathcal{O}(g(\epsilon))\) we mean this in an almost sure sense. Specifically, there exist constants \(C_{\star}>0\) (almost surely finite) and \(\epsilon_{\star}\) (almost surely positive) such that \(|f(\epsilon)|\leq C_{\star}|g(\epsilon)|\) for \(\epsilon\in(0,\epsilon_{\star})\). The values of \(C_{\star}\) and \(\epsilon_{\star}\) may depend upon the realization of the \(\zeta(j)\). The same "almost sure" point of view holds for \(\mathcal{O}_{Y}\) and \(o\) too. To be clear: we always mean \(\mathcal{O}\), \(o\) and their ilk rigorously, and we always mean them in the almost sure sense. ## 3. Approximation in general We begin by proving a general approximation theorem using the strategy described in Section 5.3 of [10] (itself inspired by [23]). Suppose that for \(\epsilon\in(0,1)\) we have some functions \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) (the _approximators_) that we expect are good approximations to solutions of (1.1) when \(\epsilon\) is small. By this we mean that we know that \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) nearly solve (1.1) in the sense that the _residuals_ \[\text{Res}_{1}:=\delta^{+}\widetilde{p}_{\epsilon}-\partial_{t}\widetilde{q}_ {\epsilon}\quad\text{and}\quad\text{Res}_{2}:=\frac{1}{m}\delta^{-}\left[ \mathcal{V}^{\prime}(\widetilde{q}_{\epsilon})\right]-\partial_{t}\widetilde{p }_{\epsilon} \tag{3.1}\] are small relative to \(\epsilon\). 
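Here is a hedged sketch of how one might measure the residuals (3.1) for candidate approximators; the centered time difference and the helper names are assumptions made for the illustration, not part of the paper.

```python
import numpy as np

def delta_plus(f):  return np.roll(f, -1) - f
def delta_minus(f): return f - np.roll(f, 1)
def v_prime(q):     return q + q**2

def residual_norms(q_tilde, p_tilde, m, t, dt=1e-4):
    """l^2 norms of Res_1 and Res_2 from (3.1), with d/dt replaced by a
    centered difference of width dt (an extra, purely numerical, approximation)."""
    dq_dt = (q_tilde(t + dt) - q_tilde(t - dt)) / (2.0 * dt)
    dp_dt = (p_tilde(t + dt) - p_tilde(t - dt)) / (2.0 * dt)
    res1 = delta_plus(p_tilde(t)) - dq_dt
    res2 = delta_minus(v_prime(q_tilde(t))) / m - dp_dt
    return np.sqrt(np.sum(res1**2)), np.sqrt(np.sum(res2**2))

# Example: a crude traveling-wave guess on a periodic lattice of 512 sites.
n, eps = 512, 0.1
j = np.arange(n)
q_tilde = lambda t: eps**2 / np.cosh(eps * (j - n / 2 - t))**2
p_tilde = lambda t: -q_tilde(t)
print(residual_norms(q_tilde, p_tilde, np.ones(n), t=0.0))
```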
To validate the approximation over the time scale \(|t|\leq T_{0}/\epsilon^{3}\) we need information about \[\begin{split}\alpha_{1}(\epsilon)&:=\sup_{|t|\leq T_{ 0}/\epsilon^{3}}\|\widetilde{q}_{\epsilon},\widetilde{p}_{\epsilon}\|_{\ell^{2 }},\\ \beta_{1}(\epsilon)&:=\inf_{|t|\leq T_{0}/\epsilon^{3 }}\|\widetilde{q}_{\epsilon},\widetilde{p}_{\epsilon}\|_{\ell^{2}},\\ \alpha_{2}(\epsilon)&:=\sup_{|t|\leq T_{0}/\epsilon^ {3}}\|\partial_{t}\widetilde{q}_{\epsilon}\|_{\ell^{\infty}}\quad\text{and}\\ \alpha_{3}(\epsilon)&:=\sup_{|t|\leq T_{0}/\epsilon^ {3}}\|\operatorname{Res}_{1}\|_{\ell^{2}}+\|\operatorname{Res}_{2}\|_{\ell^{2 }}.\end{split} \tag{3.2}\] In particular, we assume: \[\alpha_{1}(\epsilon)=o(1),\quad\alpha_{2}(\epsilon)=\mathcal{O}(\epsilon^{3} )\quad\text{and}\quad\alpha_{3}(\epsilon)=o(\beta_{1}(\epsilon)\epsilon^{3}). \tag{3.3}\] Our goal is to show that if we have approximators with these features then the true solution of (1.1) whose initial conditions are consistent with those of the approximators remains close over the long time scale. The result we prove here is specialized to FPUT lattices where the spring potentials are homogeneous and of the form (1.2), but requires only the following non-degeneracy condition on the masses: \[\inf_{j\in\mathbf{Z}}m(j)>0\quad\text{and}\quad\sup_{j\in\mathbf{Z}}m(j)<\infty. \tag{3.4}\] The condition on the support of \(\zeta(j)\) in Hypothesis 1.1 implies the above, though we do not require all of that hypothesis in this section. Here is the result: **Theorem 3.1**.: _Suppose that the mass coefficients satisfy (3.4), the approximators \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) meet (3.3) and the initial conditions for (1.1) satisfy_ \[\|q(0)-\widetilde{q}_{\epsilon}(0),p(0)-\widetilde{p}_{\epsilon}(0)\|_{\ell^{ 2}}=\mathcal{O}\left(\frac{\alpha_{3}(\epsilon)}{\epsilon^{3}}\right).\] _Then the solution \((q(t),p(t))\) of (1.1) satisfies the_ **absolute error estimate**__ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|q(t)-\widetilde{q}_{\epsilon}(t),p(t)- \widetilde{p}_{\epsilon}(t)\|_{\ell^{2}}=\mathcal{O}\left(\frac{\alpha_{3}( \epsilon)}{\epsilon^{3}}\right)\] _as well as the_ **relative error estimate**__ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\frac{\|q(t)-\widetilde{q}_{\epsilon}(t),p(t )-\widetilde{p}_{\epsilon}(t)\|_{\ell^{2}}}{\|q(t),p(t)\|_{\ell^{2}}}= \mathcal{O}\left(\frac{\alpha_{3}(\epsilon)}{\beta_{1}(\epsilon)\epsilon^{3}} \right).\] Proof.: We introduce the _errors_: \[\eta:=q-\widetilde{q}_{\epsilon}\quad\text{and}\quad\xi:=p-\widetilde{p}_{ \epsilon}\] where \(q(j,t)\) and \(p(j,t)\) solve (1.1). Time differentiation of these expressions together with (1.1) and some algebra get us: \[\dot{\eta}=\delta^{+}\xi+\operatorname{Res}_{1}\quad\text{and}\quad\dot{\xi}= \frac{1}{m}\delta^{-}(\mathcal{W}^{\prime}(\eta,\widetilde{q}_{\epsilon}))+ \operatorname{Res}_{2}. \tag{3.5}\] In the above \[\mathcal{W}(a,b):=\mathcal{V}(b+a)-\mathcal{V}(b)-\mathcal{V}^{\prime}(b)a=\frac{ 1}{2}(1+2b)a^{2}+\frac{1}{3}a^{3}\] so that \[\mathcal{W}^{\prime}(a,b):=\partial_{a}\mathcal{W}(a,b)=\mathcal{V}^{\prime}(b+ a)-\mathcal{V}^{\prime}(b)=(1+2b)a+a^{2}.\] Now we define the _energy_ functional: \[\mathcal{H}(u,v;t):=\sum_{j\in\mathbf{Z}}\left(\frac{1}{2}m_{j}v(j)^{2}+ \mathcal{W}(u(j),\widetilde{q}_{\epsilon}(j,t))\right).\] In the above \((u,v)=(u(j),v(j))\) is in \(\ell^{2}\times\ell^{2}\). 
Under our assumptions, the square root of this quantity is equivalent to the \(\ell^{2}\times\ell^{2}\) norm in the following sense: there exist \(\epsilon_{*}\in(0,1)\) and \(C_{*}>1\) such that \[\begin{split} 0<\epsilon<\epsilon_{*},\ \|u\|_{\ell^{2}}\leq 1,\ |t|\leq T_{0}/\epsilon^{3}\\ \Longrightarrow C_{*}^{-1}\|u,v\|_{\ell^{2}}\leq\sqrt{\mathcal{H} (u,v;t)}\leq C_{*}\|u,v\|_{\ell^{2}}.\end{split} \tag{3.6}\] Here are the details. First of all, simple estimation gives \[\frac{1}{2}\inf_{j\in\mathbf{Z}}m(j)\|v\|_{\ell^{2}}^{2}\leq\sum_{j\in\mathbf{ Z}}\frac{1}{2}m_{j}v(j)^{2}\leq\frac{1}{2}\sup_{j\in\mathbf{Z}}m(j)\|v\|_{ \ell^{2}}^{2}\] and thus (3.4) tells us that \(\sqrt{\sum_{j\in\mathbf{Z}}\frac{1}{2}m_{j}v(j)^{2}}\) is equivalent to \(\|v\|_{\ell^{2}}\). This gives the "\(v\)" part of (3.6). For the "\(u\)" part, recall that \(\|f\|_{\ell^{\infty}}\leq\|f\|_{\ell^{2}}\) and so the assumption \(\alpha_{1}(\epsilon)=o(1)\) implies that \(\|\widetilde{q}_{\epsilon}\|_{\ell^{\infty}}=o(1)\) as well. Thus for \(\epsilon\) sufficiently small we have \(\|\widetilde{q}_{\epsilon}\|_{\ell^{\infty}}\leq 1/30\). This, in conjunction with the assumption that \(\|u\|_{\ell^{2}}\leq 1\), gives us \[\left|\sum_{j\in Z}\left(\widetilde{q}_{\epsilon}(j,t)+\frac{1}{3}u(j)\right) u(j)^{2}\right|\leq\frac{11}{30}\|u\|_{\ell^{2}}^{2}.\] In turn the triangle inequality gives: \[\frac{2}{15}\|u\|_{\ell^{2}}^{2}\leq\sum_{j\in Z}\underbrace{\frac{1}{2}(1+2 \widetilde{q}_{\epsilon}(j,t)))u^{2}(j)+\frac{1}{3}u^{3}(j)}_{\mathcal{W}(u(j),\widetilde{q}_{\epsilon}(j,t))}\leq\frac{13}{15}\|u\|_{\ell^{2}}^{2}.\] So we have (3.6). For the next step, we suppose that \(\eta(j,t)\) and \(\xi(j,t)\) solve (3.5) and put \(H(t):=\mathcal{H}(\eta(t),\xi(t);t)\). Differentiation of \(H(t)\) with respect to \(t\) gives: \[\dot{H}(t)=\sum_{j\in\mathbf{Z}}\left(m_{j}\xi(j,t)\dot{\xi}(j,t)+\mathcal{W} ^{\prime}(\eta(j,t),\widetilde{q}_{\epsilon}(j,t))\dot{\eta}(j,t)+\partial_{b }\mathcal{W}(\eta(j,t),\widetilde{q}_{\epsilon}(j,t))\partial_{t}\widetilde{q }_{\epsilon}(j,t)\right).\] Using (3.5) (and suppressing some dependencies) results in: \[\dot{H}=\sum_{j\in\mathbf{Z}}\left(\xi(\delta^{-}(\mathcal{W}^{\prime}(\eta, \widetilde{q}_{\epsilon}))+\text{Res}_{2})+\mathcal{W}^{\prime}(\eta, \widetilde{q}_{\epsilon})(\delta^{+}\xi+\text{Res}_{1})+\partial_{b}\mathcal{W }(\eta,\widetilde{q}_{\epsilon})\partial_{t}\widetilde{q}_{\epsilon}\right).\] We sum by parts and terms cancel: \[\dot{H}=\sum_{j\in\mathbf{Z}}\left(\xi\operatorname{Res}_{2}+\mathcal{W}^{ \prime}(\eta,\widetilde{q}_{\epsilon})\operatorname{Res}_{1}+\partial_{b} \mathcal{W}(\eta,\widetilde{q}_{\epsilon})\partial_{t}\widetilde{q}_{\epsilon} \right).\] Subsequently, Cauchy-Schwarz and the like get us: \[\dot{H}\leq\|\xi\|_{\ell^{2}}\|\operatorname{Res}_{2}\|_{\ell^{2}}+\| \mathcal{W}^{\prime}(\eta,\widetilde{q}_{\epsilon})\|_{\ell^{2}}\| \operatorname{Res}_{1}\|_{\ell^{2}}+\|\partial_{b}\mathcal{W}(\eta,\widetilde{ q}_{\epsilon})\|_{\ell^{1}}\|\partial_{t}\widetilde{q}_{\epsilon}\|_{\ell^{ \infty}}.\] One easily computes that \(\partial_{b}\mathcal{W}(\eta,\widetilde{q}_{\epsilon})=\eta^{2}.\) In which case we conclude, using the earlier formula for \(\mathcal{W}^{\prime}\) and routine estimates, that \[\dot{H}\leq\|\xi\|_{\ell^{2}}\|\operatorname{Res}_{2}\|_{\ell^{2}}+\left((1+2 \|\widetilde{q}_{\epsilon}\|_{\ell^{\infty}})\,\|\eta\|_{\ell^{2}}+\|\eta\|_{ \ell^{2}}^{2}\right)\|\operatorname{Res}_{1}\|_{\ell^{2}}+\|\eta\|_{\ell^{2}} 
^{2}\|\partial_{t}\widetilde{q}_{\epsilon}\|_{\ell^{\infty}}.\] Next we use (3.3) to get: \[\dot{H}\leq 2\alpha_{3}(\epsilon)\left(\|\eta\|_{\ell^{2}}+\|\xi\|_{\ell^{2} }\right)+2\alpha_{2}(\epsilon)\|\eta\|_{\ell^{2}}^{2}.\] Then we use (3.6): \[\dot{H}\leq 2C_{*}^{2}\left(\alpha_{3}(\epsilon)H^{1/2}+\alpha_{2}(\epsilon) H\right).\] Since \(\dot{H}=2H^{1/2}\frac{d}{dt}H^{1/2}\) the above can be recast as \[\frac{d}{dt}H^{1/2}\leq C_{*}^{2}\left(\alpha_{3}(\epsilon)+\alpha_{2}( \epsilon)H^{1/2}\right).\] We have assumed \(\alpha_{2}(\epsilon)=\mathcal{O}(\epsilon^{3})\) so the above implies \[\frac{d}{dt}H^{1/2}\leq C_{*}^{2}\left(\alpha_{3}(\epsilon)+C_{2}\epsilon^{3} H^{1/2}\right)\] for a constant \(C_{2}>0.\) An application of Gronwall's inequality gets us: \[H^{1/2}(t)\leq C_{2}^{-1}\left(e^{C_{*}^{2}C_{2}\epsilon^{3}t}-1\right)\frac {\alpha_{3}(\epsilon)}{\epsilon^{3}}+e^{C_{*}^{2}C_{2}\epsilon^{3}t}H^{1/2}( 0).\] Using (3.6) again: \[\|\eta(t),\xi(t)\|_{\ell^{2}}\leq C_{*}^{2}C_{2}^{-1}\left(e^{C_{*}^{2}C_{2} \epsilon^{3}t}-1\right)\frac{\alpha_{3}(\epsilon)}{\epsilon^{3}}+C_{*}^{2}e^ {C_{*}^{2}C_{2}\epsilon^{3}t}\|\eta(0),\xi(0)\|_{\ell^{2}}.\] We take the supremum of this over \(|t|\leq T_{0}/\epsilon^{3}\) and get \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\eta(t),\xi(t)\|_{\ell^{2}}\leq C_{\star} \left(\frac{\alpha_{3}(\epsilon)}{\epsilon^{3}}+\|\eta(0),\xi(0)\|_{\ell^{2}} \right).\] The constant \(C_{\star}>0\) is independent of \(\epsilon.\) In conclusion, if we assume that \[\|\eta(0),\xi(0)\|_{\ell^{2}}=\mathcal{O}\left(\frac{\alpha_{3}(\epsilon)}{ \epsilon^{3}}\right)\] then we have shown \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\eta(t),\xi(t)\|_{\ell^{2}}=\mathcal{O} \left(\frac{\alpha_{3}(\epsilon)}{\epsilon^{3}}\right).\] This is the absolute error estimate. As for the relative error a standard reverse triangle inequality argument shows that \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\frac{\|\eta(t),\xi(t)\|_{\ell^{2}}}{\|q(t),p(t) \|_{\ell^{2}}}=\mathcal{O}\left(\frac{\alpha_{3}(\epsilon)}{\beta_{1}(\epsilon) \epsilon^{3}}\right).\] ## 4. Derivation of the effective equations Now that we have Theorem 3.1, we can move on to deriving the KdV equations from (1.1). The procedure for the derivation is a multiple scales expansion, inspired by [3]. We assume the following form of our approximators: \[\widetilde{q}_{\epsilon}(j,t)=\sum_{n=0}^{3}\epsilon^{n+2}Q_{n}(j,\epsilon j, \epsilon t,\epsilon^{3}t)\quad\text{and}\quad\widetilde{p}_{\epsilon}(j,t)= \sum_{n=0}^{3}\epsilon^{n+2}P_{n}(j,\epsilon j,\epsilon t,\epsilon^{3}t) \tag{4.1}\] where the \(Q_{n}=Q_{n}(j,X,\tau,T)\) and \(P_{n}=P_{n}(j,X,\tau,T)\) are maps \[\mathbf{Z}\times\mathbf{R}\times\mathbf{R}\times\mathbf{R}\to\mathbf{R}.\] Of course we are viewing \(\epsilon\) as being small. Given that we put \(X=\epsilon j\) in \(\widetilde{q}_{\epsilon}\) and \(\widetilde{p}_{\epsilon}\), we think of \(X\) as being the _long-wave length scale_ and \(j\) being the _microscopic length scale._ For expansions of the sort we are carrying out, it pays to be organized at the outset. First we define the following operators for functions \(U=U(j,X)\): \[S_{j}^{\pm}U(j,X):=U(j\pm 1,X)\quad\text{and}\quad\delta_{j}^{\pm}U(j,X):= \pm\left(U(j\pm 1,X)-U(j,X)\right).\] These are _partial shifts_ and _partial finite-differences_ with respect to \(j\). 
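As a concrete illustration of the ansatz (4.1), here is a small sketch (with a placeholder profile; the names are not from the paper) of how the approximator is assembled by evaluating each \(Q_{n}\) on the microscale and long-wave variables:

```python
import numpy as np

def assemble_ansatz(Q_list, j, t, eps):
    """q_tilde(j, t) = sum_{n=0}^{3} eps^(n+2) * Q_n(j, eps*j, eps*t, eps^3*t), cf. (4.1)."""
    X, tau, T = eps * j, eps * t, eps**3 * t
    return sum(eps**(n + 2) * Qn(j, X, tau, T) for n, Qn in enumerate(Q_list))

# Leading order only, with an arbitrary placeholder profile standing in for Q_0.
Q0 = lambda j, X, tau, T: np.exp(-(X - tau)**2)
zero = lambda j, X, tau, T: np.zeros_like(X)

j = np.arange(-2_000, 2_001, dtype=float)
q_tilde = assemble_ansatz([Q0, zero, zero, zero], j, t=5.0, eps=0.1)
```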
Next, for \(\epsilon>0\) put \[D^{+}U(j,X):=\pm(U(j\pm 1,X\pm\epsilon)-U(j,X)).\] If \(u(j)=U(j,\epsilon j)\) then \(\delta^{\pm}u(j)=D^{\pm}U(j,\epsilon j).\) That is to say, \(D^{\pm}\) are the _total finite-difference operators_. Expanding the right-hand sides of \(D^{\pm}U(j,X)\) in (formal) Taylor series with respect to \(\epsilon\) gives \(D^{\pm}U(j,X)=\delta_{j}^{\pm}U(j,X)+\sum_{n=1}^{\infty}\epsilon^{n}\frac{( \pm 1)^{n+1}}{n!}S_{j}^{\pm}\partial_{X}^{n}U(j,X).\) Truncating the sum at \(n=M\) would give a formal error on the order of \(\epsilon^{M+1}\) and so we define \[\epsilon^{M+1}E_{M}^{\pm}:=D^{\pm}-\delta_{j}^{\pm}-\sum_{n=1}^{M}\epsilon^{ n}\frac{(\pm 1)^{n+1}}{n!}S_{j}^{\pm}\partial_{X}^{n}.\] These operators give the exact error made by such a truncation. Note that for \(M=0\) we just ignore the sum, _i.e._\(\epsilon E_{0}^{\pm}:=D^{\pm}-\delta_{j}^{\pm}\). If we plug (4.1) into the residuals (3.1) and carry out some substantial algebra we find that \[\text{Res}_{1} =\epsilon^{2}Z_{12}+\epsilon^{3}Z_{13}+\epsilon^{4}Z_{14}+ \epsilon^{5}Z_{15}+\epsilon^{6}W_{1}\quad\text{and}\] \[\text{Res}_{2} =\frac{1}{m}\left(\epsilon^{2}Z_{22}+\epsilon^{3}Z_{23}+\epsilon^ {4}Z_{24}+\epsilon^{5}Z_{25}+\epsilon^{6}W_{2}\right)\] where \[Z_{12}:= \delta_{j}^{+}P_{0}\] \[Z_{22}:= \delta_{j}^{-}Q_{0}\] \[Z_{13}:= \delta_{j}^{+}P_{1}+S_{j}^{+}\partial_{X}P_{0}-\partial_{\tau}Q_{0}\] \[Z_{23}:= \delta_{j}^{-}Q_{1}+S_{j}^{-}\partial_{X}Q_{0}-m\partial_{\tau}P_ {0}\] \[Z_{14}:= \delta_{j}^{+}P_{2}+S_{j}^{+}\partial_{X}P_{1}+\frac{1}{2}S_{j}^{ +}\partial_{X}^{2}P_{0}-\partial_{\tau}Q_{1}\] \[Z_{24}:= \delta_{j}^{-}Q_{2}+S_{j}^{-}\partial_{X}Q_{1}-\frac{1}{2}S_{j}^{ -}\partial_{X}^{2}Q_{0}+\delta_{j}^{-}Q_{0}^{2}-m\partial_{\tau}P_{1}\] \[Z_{15}:= \delta_{j}^{+}P_{3}+S_{j}^{+}\partial_{X}P_{2}+\frac{1}{2}S_{j}^{ +}\partial_{X}^{2}P_{1}+\frac{1}{6}S_{j}^{+}\partial_{X}^{3}P_{0}\] \[-\partial_{\tau}Q_{2}-\partial_{T}Q_{0}\] \[Z_{25}:= \delta_{j}^{-}Q_{3}+S_{j}^{-}\partial_{X}Q_{2}-\frac{1}{2}S_{j}^{ -}\partial_{X}^{2}Q_{1}+\frac{1}{6}S_{j}^{-}\partial_{X}^{3}Q_{0}\] \[+2\delta_{j}^{-}(Q_{0}Q_{1})+\partial_{X}Q_{0}^{2}-m\partial_{ \tau}P_{2}-m\partial_{T}P_{0}\] and \[W_{1}:= \sum_{n=0}^{3}E_{3-n}^{+}P_{n}-\partial_{\tau}Q_{3}-\sum_{n=1}^{3 }\epsilon^{n-1}\partial_{T}Q_{n} \tag{4.2}\] \[W_{2}:= \sum_{n=0}^{3}E_{3-n}^{-}Q_{n}+E_{1}^{-}Q_{0}^{2}+2E_{0}^{-}(Q_{0 }Q_{1})\] \[+D^{-}\left(2Q_{0}Q_{2}+\left(Q_{1}+\epsilon Q_{2}+\epsilon^{2}Q _{3}\right)^{2}\right)-m\partial_{\tau}P_{3}-m\sum_{n=1}^{3}\epsilon^{n-1} \partial_{T}P_{n}.\] The usual way to proceed is to select the \(Q_{0},P_{0},\ldots,Q_{3},P_{3}\) so that each \(Z_{1k}=Z_{2k}=0\). In this case we would have \(\text{Res}_{1}=\epsilon^{6}W_{1}\) and \(\text{Res}_{2}=\frac{1}{m}\epsilon^{6}W_{2}\) which we can then estimate using the formulas for the \(Q_{n}\) and \(P_{n}\). This strategy works perfectly well in the homogeneous and periodic problems as all the terms are rigorously the size they formally appear to be, modulo an annoying factor of \(\epsilon^{-1/2}\) caused by the long-wave scaling. But it fails in the random problem; the randomness leads to terms which are much larger than they appear. Our modified strategy is to solve \[Z_{12}=Z_{22}=Z_{13}=Z_{23}=Z_{14}=Z_{24}=0\] (which will largely determine \(Q_{0},P_{0},\ldots,Q_{2},P_{2}\)) and then to do "something else" for \(Z_{15}\) and \(Z_{25}\). 
At the end of this, we find that \(\text{Res}_{1}=\epsilon^{5}Z_{15}+\epsilon^{6}W_{1}\) and \(\text{Res}_{2}=\frac{1}{m}\left(\epsilon^{5}Z_{25}+\epsilon^{6}W_{2}\right).\) In Section 5 we show that these are \(\mathcal{O}_{\ell^{2}}(\epsilon^{5}\sqrt{|\ln(\epsilon)|})\). This is enough to apply Theorem 3.1 and get the error estimates shown in (1.4). ### A tutorial on solving \(Z_{1k}=Z_{2k}=0\) Each pair of equations \(Z_{1k}=Z_{2k}=0\) will have the form \[\begin{split}&\delta_{j}^{+}P_{k-2}=\bar{F}_{0}(X,\tau,T)+\sum_{n=1}^{N}f_{n}(j)\bar{F}_{n}(X,\tau,T)\\ &\delta_{j}^{-}Q_{k-2}=\bar{G}_{0}(X,\tau,T)+\sum_{n=1}^{N}g_{n}(j)\bar{G}_{n}(X,\tau,T).\end{split} \tag{4.3}\] The sequences \(f_{n}(j)\) and \(g_{n}(j)\) are mean-zero random variables which come, in one way or another, from \(\zeta(j)\); they depend only on the microscale coordinate. The \(\bar{F}_{n}\) and \(\bar{G}_{n}\) functions do not depend on the microscale coordinate at all. They will be made up of pieces of the various \(P_{n}\) and \(Q_{n}\) where \(n<k-2\). In this way (4.3) allows us to figure out \(P_{k-2}\) and \(Q_{k-2}\) from the earlier functions. We decompose (4.3) into a "long-wave" part (those pieces that do not depend on the microscale coordinate \(j\) at all) and a "microscale" part (those that do). The long-wave part just consists of the terms \(\bar{F}_{0}\) and \(\bar{G}_{0}\) and so we set \[\bar{F}_{0}=0\quad\text{and}\quad\bar{G}_{0}=0. \tag{4.4}\] This is a sort of solvability condition that will wind up giving us the long-wave dynamics; how it all plays out will be seen when we get in the weeds below. The microscale part is what is left over: \[\delta_{j}^{+}P_{k-2}=\sum_{n=1}^{N}f_{n}(j)\bar{F}_{n}(X,\tau,T)\quad\text{and}\quad\delta_{j}^{-}Q_{k-2}=\sum_{n=1}^{N}g_{n}(j)\bar{G}_{n}(X,\tau,T). \tag{4.5}\] We can just write down a solution for this: \[\begin{split}& P_{k-2}(j,X,\tau,T)=\bar{P}_{k-2}(X,\tau,T)+\sum_{n=1}^{N}\chi_{n}(j)\bar{F}_{n}(X,\tau,T)\quad\text{and}\\ & Q_{k-2}(j,X,\tau,T)=\bar{Q}_{k-2}(X,\tau,T)+\sum_{n=1}^{N}\kappa_{n}(j)\bar{G}_{n}(X,\tau,T)\end{split} \tag{4.6}\] where we select \(\chi_{n}\) and \(\kappa_{n}\) so that \[\delta^{+}\chi_{n}=f_{n}\quad\text{and}\quad\delta^{-}\kappa_{n}=g_{n}.\] Solving these equations for \(\chi_{n}\) and \(\kappa_{n}\) from \(f_{n}\) and \(g_{n}\) is one of the key steps in the whole procedure and as we shall show the transparency condition makes this a relatively easy affair...at least at first. The functions \(\bar{P}_{k-2}(X,\tau,T)\) and \(\bar{Q}_{k-2}(X,\tau,T)\) are "constants of integration"; in most cases we determine these from (4.4) at a later point in the derivation. Now we get into actually solving the equations. ### \(Z_{12}=Z_{22}=0\) These read \(\delta_{j}^{+}P_{0}=0\) and \(\delta_{j}^{-}Q_{0}=0\) which tells us that \[Q_{0}(j,X,\tau,T)=\bar{Q}_{0}(X,\tau,T)\quad\text{and}\quad P_{0}(j,X,\tau,T)=\bar{P}_{0}(X,\tau,T). \tag{4.7}\] **Remark 4.1**.: _In this section any function with a "bar" on top will not depend on \(j\). We make this convention so that we do not need to perpetually clutter up our algebra with functional dependencies. 
For the same reason it is helpful to keep in mind that \(m\) and \(\zeta\) depend only on \(j\) and not on the other variables._ ### \(Z_{13}=Z_{23}=0\) Using (4.7) these equations become \[\delta_{j}^{+}P_{1}=\partial_{\tau}\bar{Q}_{0}-\partial_{X}\bar{P}_{0}\quad \text{and}\quad\delta_{j}^{-}Q_{1}=m\partial_{\tau}\bar{P}_{0}-\partial_{X} \bar{Q}_{0}.\] Using the transparency condition (1.3) converts these to \[\delta_{j}^{+}P_{1}=\partial_{\tau}\bar{Q}_{0}-\partial_{X}\bar{P}_{0}\quad \text{and}\quad\delta_{j}^{-}Q_{1}=\partial_{\tau}\bar{P}_{0}-\partial_{X} \bar{Q}_{0}+\delta^{+}\delta^{-}\zeta\partial_{\tau}\bar{P}_{0}.\] Following the steps from the tutorial in Section 4.1 we see that the long-wave part (4.4) of these equations is \[\partial_{\tau}\bar{Q}_{0}-\partial_{X}\bar{P}_{0}=0\quad\text{and}\quad \partial_{\tau}\bar{P}_{0}-\partial_{X}\bar{Q}_{0}=0. \tag{4.8}\] This is the wave equation wearing a fake mustache and glasses and we readily solve it: \[\bar{Q}_{0}=A(X-\tau,T)+B(X+\tau,T)\quad\text{and}\quad\bar{P}_{0}=-A(X-\tau,T )+B(X+\tau,T). \tag{4.9}\] **Remark 4.2**.: _We use the convention that_ \[w=X-\tau\quad\text{and}\quad l=X+\tau\] _so that \(A=A(w,T)\) and \(B=B(l,T)\). Note that \(A\) and \(B\) do not depend on \(j\). It is these functions that will ultimately solve KdV equations._ After (4.8) we are left with the microscale part \[\delta_{j}^{+}P_{1}=0\quad\text{and}\quad\delta_{j}^{-}Q_{1}=\delta^{+}\delta ^{-}\zeta\partial_{\tau}\bar{P}_{0}.\] The solution formula (4.6) gives \(Q_{1}=\bar{Q}_{1}+\chi\partial_{\tau}\bar{P}_{0}\) where we want \(\delta^{-}\chi=\delta^{+}\delta^{-}\zeta\). Finding \(\chi\) is easily done as we can simply cancel a \(\delta^{-}\) from both sides and put \(\chi=\delta^{+}\zeta\). This is so simple because of the transparency condition (1.3) and this is one of the reasons we have assumed it. Likewise (4.6) says that we should put \(P_{1}=\bar{P}_{1}\), but it will turn out that \(\bar{P}_{1}\) will be zero so we just enforce that now. In short we have \[Q_{1}=\bar{Q}_{1}+\delta_{j}^{+}\zeta\partial_{\tau}\bar{P}_{0}\quad\text{and }\quad P_{1}=0. \tag{4.10}\] Note that \(\delta_{j}^{+}\zeta\) is bounded in \(j\) because of the compact support assumption in Hypothesis 1.1. **Remark 4.3**.: _What if we had not made the transparency assumption but instead assumed that \(m(j)=1+z(j)\) where \(z(j)\) are i.i.d. mean zero random variables? The long-wave part is the same as above but now the microscale part is \(\delta_{j}^{-}Q_{1}=z\partial_{\tau}\bar{P}_{0}.\) To use the solution formula (4.6) we would want to find \(\chi\) so that \(\delta^{-}\chi=z\), or rather_ \[\chi(j)=\chi(j-1)+z(j).\] _This equation tells us that \(\chi(j)\) is a random walk with steps given by \(z(j)\) and as such we expect \(\chi\) to grow like \(\sqrt{j}\)._ _To see why this is an issue, notice that \(Q_{1}\) would include the term \(\chi(j)A_{w}(X-\tau,T),\) which then would show up in the approximator (4.1) as \(\epsilon^{3}\chi(j)A_{w}(\epsilon(j-t),\epsilon^{3}t).\) The term \(A_{w}\) is propagating to the right with roughly unit speed and thus when \(t\sim 1/\epsilon^{3}\) will be located at \(j\sim 1/\epsilon^{3}\). In turn this indicates \(\chi(j)\sim\epsilon^{-3/2}\) towards the end of the approximation time interval. 
Hence the term \(\epsilon^{3}\chi A_{w}\) would be substantially larger than it appears: the techniques from [20] show that almost surely_ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\epsilon^{3}\chi(\cdot)A_{w}(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}\leq C\epsilon\sqrt{\ln|\ln(\epsilon)|}.\] _Were \(\chi=\mathcal{O}_{\ell^{\infty}}(1)\) the right-hand side of the preceding estimate would be \(C\epsilon^{5/2}\) (see Lemma 5.6 below). And so we find that the "\(\epsilon^{3}\) term" in the approximator is more than an order of magnitude larger than it should be, bigger in fact than the leading order term in the approximation. Disaster!_ _The lesson learned: if a term in our approximation involves a random walk it will ultimately be at least \(\epsilon^{-3/2}\) larger than it formally appears to be. We call this difficulty_ **a random walk disaster.** ### \(Z_{14}=Z_{24}=0\) The relations (1.3), (4.7), (4.8), (4.10) and a little algebra convert these equations to \[\delta_{j}^{+}P_{2}=\partial_{\tau}\bar{Q}_{1}-\frac{1}{2}\partial_{X\tau}^{2}\bar{Q}_{0}+\delta_{j}^{+}\zeta\partial_{X}^{2}\bar{P}_{0}\quad\text{and}\quad\delta_{j}^{-}Q_{2}=-\partial_{X}\bar{Q}_{1}+\frac{1}{2}\partial_{X}^{2}\bar{Q}_{0}-\delta_{j}^{-}\zeta\partial_{X}^{2}\bar{Q}_{0}.\] The long-wave part of this is \[0=\partial_{\tau}\bar{Q}_{1}-\frac{1}{2}\partial_{X\tau}^{2}\bar{Q}_{0}\quad\text{and}\quad 0=-\partial_{X}\bar{Q}_{1}+\frac{1}{2}\partial_{X}^{2}\bar{Q}_{0}\] which can be solved by putting \[\bar{Q}_{1}=\frac{1}{2}\partial_{X}\bar{Q}_{0}=\frac{1}{2}\left(\partial_{w}A+\partial_{l}B\right). \tag{4.11}\] This leaves the microscale part \[\delta_{j}^{+}P_{2}=\delta_{j}^{+}\zeta\partial_{X}^{2}\bar{P}_{0}\quad\text{and}\quad\delta_{j}^{-}Q_{2}=-\delta_{j}^{-}\zeta\partial_{X}^{2}\bar{Q}_{0}, \tag{4.12}\] which as per (4.6) we solve by taking \[P_{2}=\bar{P}_{2}+\zeta\partial_{X}^{2}\bar{P}_{0}\quad\text{and}\quad Q_{2}=\bar{Q}_{2}-\zeta\partial_{X}^{2}\bar{Q}_{0}. \tag{4.13}\] Once again, the transparency condition (1.3) made finding this solution a simple matter of cancelation; it is the reason why the transparency condition has two finite-differences on \(\zeta\). If we had put only one finite-difference in (1.3) then another random walk disaster as described in Remark 4.3 would occur when we solve (4.12). ### Something else for \(Z_{15}\) and \(Z_{25}\) The relations (1.3), (4.7), (4.8), (4.10), (4.11), (4.13) and quite a bit of algebra get us: \[Z_{15}= -\partial_{T}\bar{Q}_{0}-\partial_{\tau}\bar{Q}_{2}+\partial_{X}\bar{P}_{2}+\frac{1}{6}\partial_{X}^{3}\bar{P}_{0}\] \[+\delta_{j}^{+}P_{3}+\left(\zeta+S_{j}^{+}\zeta\right)\partial_{X}^{3}\bar{P}_{0}\] \[Z_{25}= -\partial_{T}\bar{P}_{0}-\partial_{\tau}\bar{P}_{2}+\partial_{X}\bar{Q}_{2}-\left(\frac{1}{12}-2\sigma^{2}\right)\partial_{X}^{3}\bar{Q}_{0}+\partial_{X}(\bar{Q}_{0}^{2})\] \[+\delta_{j}^{-}Q_{3}-\left(\zeta+S_{j}^{-}\zeta+\frac{1}{2}\delta_{j}^{-}\zeta+\zeta\delta_{j}^{+}\delta_{j}^{-}\zeta+2\sigma^{2}\right)\partial_{X}^{3}\bar{Q}_{0}\] \[+\delta_{j}^{-}\delta_{j}^{+}\zeta\partial_{X}(\bar{Q}_{0}^{2})-\delta_{j}^{+}\delta_{j}^{-}\zeta\left(\partial_{T}\bar{P}_{0}+\partial_{\tau}\bar{P}_{2}\right).\] Recall that \(\sigma^{2}\) is the variance of \(\zeta(j)\). We need \(Z_{15}\) and \(Z_{25}\) to be small relative to \(\epsilon\), but zeroing them out completely happens to be too restrictive; we will need to modify the microscale part of the decomposition described in Section 4.1. But before we get there we deal with the long-wave part. 
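Before turning to the long-wave part, here is a brief simulation (an illustrative sketch, not from the paper) of the random walk disaster of Remark 4.3: solving \(\delta^{-}\chi=z\) with i.i.d. mean-zero \(z\) produces a random walk growing like \(\sqrt{j}\), while the transparency condition yields the uniformly bounded corrector \(\chi=\delta^{+}\zeta\) found above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.uniform(-0.25, 0.25, n)              # i.i.d., mean zero

chi_walk = np.cumsum(z)                      # solves delta^- chi = z: a random walk
zeta = rng.uniform(-0.25, 0.25, n)
chi_transparent = np.roll(zeta, -1) - zeta   # chi = delta^+ zeta under (1.3)

print(np.abs(chi_walk).max())          # grows like sqrt(n): typically in the tens here
print(np.abs(chi_transparent).max())   # stays below 1/2, uniformly in n
```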
#### 4.5.1. Kill the long-wave part with KdV equations As per normal, we zero out the long-wave parts of \(Z_{15}\) and \(Z_{25}\). We have conveniently arranged all such terms in the first line on the right of the preceding formulas for \(Z_{15}\) and \(Z_{25}\) and so we put \[\begin{split} 0&=-\partial_{T}\bar{Q}_{0}-\partial_{\tau}\bar{Q}_{2}+\partial_{X}\bar{P}_{2}+\frac{1}{6}\partial_{X}^{3}\bar{P}_{0}\\ 0&=-\partial_{T}\bar{P}_{0}-\partial_{\tau}\bar{P}_{2}+\partial_{X}\bar{Q}_{2}-\left(\frac{1}{12}-2\sigma^{2}\right)\partial_{X}^{3}\bar{Q}_{0}+\partial_{X}(\bar{Q}_{0}^{2}).\end{split} \tag{4.14}\] Within (4.14) lurk the KdV equations; here is how we coax them into the daylight. Let \[\bar{Q}_{2}(X,\tau,T) =A_{2}(X-\tau,X+\tau,T)+B_{2}(X-\tau,X+\tau,T)\] \[\text{and}\quad\bar{P}_{2}(X,\tau,T) =-A_{2}(X-\tau,X+\tau,T)+B_{2}(X-\tau,X+\tau,T)\] and use (4.9) in (4.14) to get \[0= \partial_{T}A+\partial_{T}B+2\partial_{l}A_{2}-2\partial_{w}B_{2}-\frac{1}{6}(-\partial_{w}^{3}A+\partial_{l}^{3}B)\] \[0= -\partial_{T}A+\partial_{T}B-2\partial_{l}A_{2}-2\partial_{w}B_{2}+\left(\frac{1}{12}-2\sigma^{2}\right)(\partial_{w}^{3}A+\partial_{l}^{3}B)\] \[-\left(\partial_{w}A^{2}+2\partial_{w}AB+2A\partial_{l}B+\partial_{l}B^{2}\right).\] Subtracting these gives: \[\begin{split} 0=& 2\partial_{T}A+\left(\frac{1}{12}+2\sigma^{2}\right)\partial_{w}^{3}A+\partial_{w}A^{2}\\ +& 4\partial_{l}A_{2}-\left(\frac{1}{4}-2\sigma^{2}\right)\partial_{l}^{3}B+\left(2\partial_{w}AB+2A\partial_{l}B+\partial_{l}B^{2}\right).\end{split} \tag{4.15}\] If we let \(\mathcal{B}\) be an \(l\)-antiderivative of \(B\) (specifically \(\mathcal{B}(l,T):=\int_{0}^{l}B(y,T)dy\)) and set \[A_{2}=\frac{1}{4}\left[\left(\frac{1}{4}-2\sigma^{2}\right)\partial_{l}^{2}B-\left(2\partial_{w}A\mathcal{B}+2AB+B^{2}\right)\right] \tag{4.16}\] many terms in (4.15) die. What survives is \[0=2\partial_{T}A+\left(\frac{1}{12}+2\sigma^{2}\right)\partial_{w}^{3}A+\partial_{w}A^{2}. \tag{4.17}\] This is a KdV equation! A parallel argument (after adding instead of subtracting the equations a few steps above) shows we should take \[B_{2}=\frac{1}{4}\left[\left(\frac{1}{4}-2\sigma^{2}\right)\partial_{w}^{2}A-\left(A^{2}+2AB+2\mathcal{A}\partial_{l}B\right)\right] \tag{4.18}\] with \(\mathcal{A}\) a \(w\)-antiderivative of \(A\) (specifically \(\mathcal{A}(w,T):=\int_{0}^{w}A(y,T)dy\)). In which case we get that \(B\) solves another KdV equation: \[0=2\partial_{T}B-\left(\frac{1}{12}+2\sigma^{2}\right)\partial_{l}^{3}B-\partial_{l}B^{2}. \tag{4.19}\] To summarize: taking \(A\), \(B\), \(A_{2}\) and \(B_{2}\) as we have just described means that (4.14) is satisfied. #### 4.5.2. Handle the microscopic part using autoregressive processes The next step in dealing with \(Z_{15}\) and \(Z_{25}\) is to control the microscopic parts that are left over after (4.14): \[\begin{split} Z_{15}=&\delta_{j}^{+}P_{3}+\left(\zeta+S_{j}^{+}\zeta\right)\partial_{X}^{3}\bar{P}_{0}\\ Z_{25}=&\delta_{j}^{-}Q_{3}-\left(\zeta+S_{j}^{-}\zeta+\frac{1}{2}\delta_{j}^{-}\zeta+\zeta\delta_{j}^{+}\delta_{j}^{-}\zeta+2\sigma^{2}\right)\partial_{X}^{3}\bar{Q}_{0}\\ &+\delta_{j}^{-}\delta_{j}^{+}\zeta\partial_{X}(\bar{Q}_{0}^{2})-\delta_{j}^{+}\delta_{j}^{-}\zeta\left(\partial_{T}\bar{P}_{0}+\partial_{\tau}\bar{P}_{2}\right).\end{split} \tag{4.20}\] Many, but not all, of these terms in \(Z_{25}\) can be eliminated with the same cancelation tricks that worked earlier. 
To see this, we let \[\begin{split} P_{3}&=\gamma_{1}\partial_{X}^{3} \bar{P}_{0}\quad\text{and}\\ Q_{3}&=\left(\gamma_{2}+\frac{1}{2}\zeta\right) \partial_{X}^{3}\bar{Q}_{0}-\delta_{j}^{+}\zeta\partial_{X}(\bar{Q}_{0}^{2})+ \delta_{j}^{+}\zeta\left(\partial_{T}\bar{P}_{0}+\partial_{\tau}\bar{P}_{2} \right)\end{split}\] where for the moment we leave \(\gamma_{1}=\gamma_{1}(j)\) and \(\gamma_{2}=\gamma_{2}(j)\) unspecified. Substituting the above into (4.20) gives \[\begin{split} Z_{15}=&\left[\delta_{j}^{+}\gamma_{1} +\zeta+S_{j}^{+}\zeta\right]\partial_{X}^{3}\bar{P}_{0}\\ Z_{25}=&\left[\delta_{j}^{-}\gamma_{2}-\left(\zeta+S _{j}^{-}\zeta+\zeta\delta_{j}^{+}\delta_{j}^{-}\zeta+2\sigma^{2}\right)\right] \partial_{X}^{3}\bar{Q}_{0}.\end{split} \tag{4.21}\] If we followed the strategy from the tutorial in Section 4.1, we would put \(\delta_{j}^{+}\gamma_{1}=-\zeta-S_{j}^{+}\zeta\) and \(\delta^{-}\gamma_{2}=\zeta+S^{-}\zeta+\zeta\delta^{+}\delta^{-}\zeta+2\sigma^{2}\) and get \(Z_{15}=Z_{25}=0\). Since the \(\zeta(j)\) are i.i.d. random variables we would then find that \(\gamma_{1}(j)\) and \(\gamma_{2}(j)\) are random walks, which leads us to another disaster as described in Remark 4.3 (this time in the residual terms). Why not just stack another finite-difference on \(\zeta\) in the transparency condition? This would help in \(Z_{15}\) but would not be useful in handling the parts stemming from \(\zeta\delta_{j}^{+}\delta_{j}^{-}\zeta\) in \(Z_{25}\). To avoid these problematic random walks we take \(\gamma_{1}\) and \(\gamma_{2}\) to solve \[\begin{split}\delta^{+}\gamma_{1}&=-\epsilon\, \mathrm{sgn}(j)\gamma_{1}-\left(\zeta+S^{+}\zeta\right)\quad\text{and}\\ \delta^{-}\gamma_{2}&=-\epsilon\,\mathrm{sgn}(j) \gamma_{2}+\left(\zeta+S^{-}\zeta+\zeta\delta^{+}\delta^{-}\zeta+2\sigma^{2} \right).\end{split} \tag{4.22}\] In which case we find that (4.21) becomes \[Z_{15}=-\epsilon\,\mathrm{sgn}(j)\gamma_{1}\partial_{X}^{3}\bar{P}_{0}\quad \text{and}\quad Z_{25}=-\epsilon\,\mathrm{sgn}(j)\gamma_{2}\partial_{X}^{3} \bar{Q}_{0}.\] The extra factors of \(\epsilon\) on the right-hand sides here means that our choices of \(P_{3}\) and \(Q_{3}\) are formally as good as putting \(Z_{15}=Z_{25}=0\). Estimates for \(P_{3}\) and \(Q_{3}\) (and consequently \(Z_{15}\), \(Z_{25}\) and the residuals) ultimately require us to understand \(\gamma_{1}\) and \(\gamma_{2}\). The equations in (4.22) are examples of _autoregressive processes_[12]. These are dissipative cousins of random walks and with classical probabilistic methods we will show that they roughly cost us a factor of \(\epsilon^{-1/2}\) (see Lemma 5.10 below) instead of the \(\epsilon^{-3/2}\) we get from using random walks. This is big but not too big for our estimates to handle. ### Summing up At this point we have completely determined all the functions \(P_{0},\ldots,Q_{3}\) in the approximation. As it can be challenging to sort through it all, we close out this section by summarizing the derivation. **Definition 4.4**.: _Suppose \(A(w,T)\) and \(B(l,T)\) solve the KdV equations (4.17) and (4.19) and \(\gamma_{1}(j)\) and \(\gamma_{2}(j)\) solve the autoregressive processes (4.22). Take \(A_{2}(w,l,T)\) and \(B_{2}(w,l,T)\) as in (4.16) and (4.18). 
Define \(Q_{k}(j,X,\tau,T)\) and \(P_{k}(j,X,\tau,T)\) via_ \[\begin{array}{l}Q_{0}=A+B\text{,}\\ Q_{1}=\frac{1}{2}\partial_{X}Q_{0}+\delta_{j}^{+}\zeta\partial_{\tau}P_{0}\text{,}\\ Q_{2}=A_{2}+B_{2}-\zeta\partial_{X}^{2}Q_{0}\\ Q_{3}=\left(\gamma_{2}+\frac{1}{2}\zeta\right)\partial_{X}^{3}Q_{0}-\delta_{j}^{+}\zeta\partial_{X}(Q_{0}^{2})\\ +\delta_{j}^{+}\zeta\left(\partial_{T}P_{0}-\partial_{\tau}A_{2}+\partial_{\tau}B_{2}\right)\end{array}\begin{array}{l}P_{0}=-A+B\\ P_{1}=0\\ P_{2}=-A_{2}+B_{2}+\zeta\partial_{X}^{2}P_{0}\\ P_{3}=\gamma_{1}\partial_{X}^{3}P_{0}\end{array}\] _where it is understood that \(w=X-\tau\) and \(l=X+\tau\). Then we call \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\), as defined in (4.1), the_ **extended KdV approximators**_._ In this section we have proven: **Lemma 4.5**.: _The extended KdV approximators have_ \[\mathrm{Res}_{1}=\epsilon^{6}\left(-\,\mathrm{sgn}(j)\gamma_{1}\partial_{X}^{3}P_{0}+W_{1}\right)\quad\text{and}\quad\mathrm{Res}_{2}=\frac{\epsilon^{6}}{m}\left(-\,\mathrm{sgn}(j)\gamma_{2}\partial_{X}^{3}Q_{0}+W_{2}\right)\] _with \(W_{1}\) and \(W_{2}\) given at (4.2)._ We move on to proving many estimates related to the extended KdV approximators. ## 5. Estimates on the approximators and residuals To streamline some of the forthcoming statements we put forth the following convention: **Definition 5.1**.: _We say \(A\) and \(B\) are_ **good solutions of KdV on \([-T_{0},T_{0}]\)** _if they satisfy (4.17) and (4.19) along with the estimate_ \[0<\sup_{|T|\leq T_{0}}\|A(\cdot,T)\|_{H^{7}(1)}+\|B(\cdot,T)\|_{H^{7}(1)}<\infty.\] **Remark 5.2**.: _The existence of good solutions of KdV on intervals of arbitrary length is by now classical (see [24]). The lower bound is just to guarantee that the approximation is not trivial._ In this section we prove: **Proposition 5.3**.: _Assume Hypothesis 1.1. Let \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) be the extended KdV approximators as in Definition 4.4 where we further assume that \(A\) and \(B\) are good solutions of KdV on \([-T_{0},T_{0}]\). Then almost surely the quantities defined at (3.2) satisfy_ \[\alpha_{1}(\epsilon)=\mathcal{O}(\epsilon^{3/2}),\ \alpha_{2}(\epsilon)=\mathcal{O}(\epsilon^{3}),\ \alpha_{3}(\epsilon)=\mathcal{O}(\epsilon^{5}\sqrt{|\ln(\epsilon)|})\ \ \text{and}\ \ \beta_{1}^{-1}(\epsilon)=\mathcal{O}(\epsilon^{-3/2}).\] Estimates on terms which do not involve \(\gamma_{1}\) or \(\gamma_{2}\) can be handled using well-understood techniques found in previous works, whereas the rest require new ideas. All dependence on \(\gamma_{1}\) and \(\gamma_{2}\) enters through \(P_{3}\) and \(Q_{3}\), the latter of which has some terms without them. And so we put \[Q_{3\gamma}:=\gamma_{2}\partial_{X}^{3}Q_{0}\quad\text{and}\quad Q_{30}:=Q_{3}-\gamma_{2}\partial_{X}^{3}Q_{0}. \tag{5.1}\] To be clear, \(Q_{30}\) has no instances of a \(\gamma\) within. Similarly if in the formulas for \(W_{1}\) and \(W_{2}\) we eliminate any term with a \(\gamma\) in it we get: \[W_{10}:= \sum_{n=0}^{2}E_{3-n}^{+}P_{n}-\sum_{n=1}^{2}\epsilon^{n-1}\partial_{T}Q_{n}-\epsilon^{2}\partial_{T}Q_{30}\] \[W_{20}:= \sum_{n=0}^{2}E_{3-n}^{-}Q_{n}+E_{0}^{-}Q_{30}+E_{1}^{-}Q_{0}^{2}+2E_{0}^{-}(Q_{0}Q_{1})\] \[+D^{-}\left(2Q_{0}Q_{2}+\left(Q_{1}+\epsilon Q_{2}+\epsilon^{2}Q_{30}\right)^{2}\right)-m\sum_{n=1}^{2}\epsilon^{n-1}\partial_{T}P_{n}.\] Thus the terms with a \(\gamma\) are: \[W_{1\gamma}:=W_{1}-W_{10}\quad\text{and}\quad W_{2\gamma}:=W_{2}-W_{20}. 
\tag{5.2}\] ### Terms without \(\gamma_{1}\) and \(\gamma_{2}\) In this part we prove: **Lemma 5.4**.: _Assume Hypothesis 1.1. Let \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) be the extended KdV approximators as in Definition 4.4 where we further assume that \(A\) and \(B\) are good solutions of KdV on \([-T_{0},T_{0}]\). Then_ \[\sum_{n=0}^{2}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|P_{n}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|Q_{n}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)=\mathcal{O}(\epsilon^{-1/2}),\] \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|Q_{30}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}=\mathcal{O}(\epsilon^{-1/2}),\] _and_ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|W_{10}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|W_{20}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)=\mathcal{O}(\epsilon^{-1/2}).\] **Remark 5.5**.: _Note that in Hypothesis 1.1 we assumed that \(|\zeta(j)|<1/4\) for all \(j\). A consequence of this is that none of the estimates in Lemma 5.4 depend on the realization of the \(\zeta(j)\). That is to say there is no probability needed to understand this lemma._ Proof.: The proof is similar to that of Proposition 4.2 of [10], though there are a few small, but substantive, differences. The main tool we need is: **Lemma 5.6**.: _Let \(M\geq 0\) be an integer. Suppose that \(f(j)\in\ell^{\infty}\) and \(F(X)\in H^{M+1}\). If \(u_{\epsilon}(j):=f(j)F(\epsilon j)\) then_ \[\|u\|_{\ell^{\infty}}\leq\|f\|_{\ell^{\infty}}\|F\|_{L^{\infty}},\] \[\|u\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|f\|_{\ell^{\infty}}\|F\|_{H^{1}},\] \[\|E_{M}^{\pm}u\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|f\|_{\ell^{\infty}}\|F\|_{H^{M+1}}\] _and_ \[\|D^{\pm}u\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|f\|_{\ell^{\infty}}\|F\|_{H^{1}}.\] _The constants \(C>0\) depend only on \(M\)._ Proof.: Lemma 4.3 of [10] is nearly identical to this, but has the requirement that \(f(j)\) be \(N\)-periodic. Still we can piggyback the proof of our result on that one. The first estimate is all but obvious. For the second we have the easy estimate \(\|u\|_{\ell^{2}}\leq\|f\|_{\ell^{\infty}}\|F_{\epsilon}\|_{\ell^{2}}\) where \(F_{\epsilon}(j):=F(\epsilon j)\), \(j\in\mathbf{Z}\). But then the second estimate of Lemma 4.3 of [10] applies and shows \(\|F_{\epsilon}\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|F\|_{H^{1}}\). For the third, a direct computation shows that \(E_{M}^{+}u(j)=f(j+1)(E_{M}^{+}F_{\epsilon})(j)\) which implies \(\|E_{M}^{+}u\|_{\ell^{2}}\leq\|f\|_{\ell^{\infty}}\|E_{M}^{+}F_{\epsilon}\|_{\ell^{2}}\). The third estimate from Lemma 4.3 of [10] implies that \(\|E_{M}^{+}F_{\epsilon}\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|F\|_{H^{M+1}}\). The estimate for \(E_{M}^{-}\) is similar. The final estimate, for \(D^{\pm}u\), follows from the definition of \(D^{\pm}\), the triangle inequality, and the second estimate in this lemma. We also need the following, to control the antiderivatives in \(A_{2}\) and \(B_{2}\): **Lemma 5.7**.: _Suppose that \(F(X)\in L^{2}(1)\); then \(\mathcal{F}(X):=\int_{0}^{X}F(y)dy\) is in \(L^{\infty}\) and \(\|\mathcal{F}\|_{L^{\infty}}\leq\sqrt{\pi}\|F\|_{L^{2}(1)}\)._ Proof.: We use Cauchy-Schwarz and the fact that \(\int_{\mathbf{R}}(1+y^{2})^{-1}dy=\pi\). 
To wit: \[|\mathcal{F}(X)|\leq \int_{0}^{X}(1+y^{2})^{-1/2}(1+y^{2})^{1/2}|F(y)|dy\] \[\leq \sqrt{\int_{\mathbf{R}}(1+y^{2})^{-1}dy}\sqrt{\int_{\mathbf{R}}(1 +y^{2})|F(y)|^{2}dy}=\sqrt{\pi}\|F\|_{L^{2}(1)}.\] Taking the supremum over \(X\) seals the deal. Armed with Lemmas 5.6 and 5.7 we can get into proving the estimates in the Lemma 5.4. There are many terms and handling each would inflate this paper like a bounce house. So we do not do that. Instead we show how to estimate a "prototype" term which captures the nuances. That term is \[g=E_{0}^{-}\left(\delta_{j}^{+}\zeta\mathcal{A}\partial_{l}^{2}B\right)\] which some digging will show appears in \(\operatorname{Res}_{2}\). Using the estimate for \(E_{0}^{-}\) from Lemma 5.6 we have \[\|g\|_{\ell^{2}}\leq C\epsilon^{-1/2}\|\delta_{j}^{+}\zeta\|_{\ell^{\infty}}\| \mathcal{A}\partial_{l}^{2}B\|_{H^{1}}.\] By the triangle inequality and the definition of \(\delta_{j}^{+}\) we have \(\|\delta_{j}^{+}\zeta\|_{\ell^{\infty}}\leq 2\|\zeta\|_{\ell^{\infty}}\) and the supposition that the support of \(\zeta(j)\) is in \((-1/4,1/4)\) ultimately gives \(\|\delta_{j}^{+}\zeta\|_{\ell^{\infty}}\leq 1/2\). Also classical Sobolev-Holder inequalities tell us that \(\|\mathcal{A}\partial_{l}^{2}B\|_{H^{1}}\leq\|\mathcal{A}\|_{W^{1,\infty}}\| \partial_{l}^{2}B\|_{H^{1}}\leq\|\mathcal{A}\|_{W^{1,\infty}}\|B\|_{H^{3}}.\) Since \(\|\mathcal{A}\|_{W^{1,\infty}}=\|\mathcal{A}\|_{L^{\infty}}+\|\mathcal{A}_{w} \|_{L^{\infty}}\) and \(\mathcal{A}\) is an antiderivative of \(A\) we can use Lemma 5.7 to conclude that \(\|\mathcal{A}\|_{L^{\infty}}\leq\sqrt{\pi}\|A\|_{L^{2}(1)}.\) Likewise, Sobolev's inequality tells us that \(\|\mathcal{A}_{w}\|_{L^{\infty}}=\|A\|_{L^{\infty}}\leq C\|A\|_{H^{1}}\). So all together we have \[\|g\|_{\ell^{2}}\leq C\epsilon^{-1/2}\left(\|A\|_{L^{2}(1)}+\|A\|_{H^{1}} \right)\|B\|_{H^{3}}.\] Since \(A\) and \(B\) are assumed to be good solutions of KdV on \([-T_{0},T_{0}]\) we get \(\sup_{|t|\leq T_{0}/\epsilon^{3}}\|g\|_{\ell^{2}}\leq C\epsilon^{-1/2}\), which is the targeted estimate. All the other terms are handled using the same sorts of steps used above. We close the proof with a comment on the regularity needed. The most smoothness required for \(A\) and \(B\) comes from the terms in \(\partial_{T}Q_{30}\). As in [23, 2, 10], one finds that \(\partial_{w}^{6}A\) and \(\partial_{l}^{6}B\) make an appearance and so, to deploy estimates like in Lemma 5.6, we need \(A\) and \(B\) to be in \(H^{7}\). ### The autoregressive part Now we need to put bounds on terms where \(\gamma_{1}\) and \(\gamma_{2}\) appear. The first question: how big are these sequences? The equations in (4.22) which these satisfy are examples of autoregressive models, specifically AR(1) processes [12]. We have the following almost sure estimate for solutions of such processes: **Lemma 5.8**.: _Suppose that \(z(n)\), \(n\geq 0\), are i.i.d. random variables with zero mean and compact support. Fix \(\theta\in(-1,1)\) and let_ \[\chi(n):=\sum_{k=0}^{n-1}\theta^{k}z(n-k). 
\tag{5.3}\] _Then there exists a constant \(C>0\) so that_ \[\sup_{n>0}\frac{|\chi(n)|}{\sqrt{\ln(e+n)}}\leq C\sqrt{\frac{1}{1-\theta^{2}}}.\] _The constant \(C\) depends on the realization of \(z(n)\) but does not depend on \(\theta\); it is almost surely finite._ Proof.: The result is a consequence of Hoeffding's inequality, whose proof can be found in [13]: **Theorem 5.9**.: _Let \(w(0),\ldots,w(n-1)\) be mean-zero, independent random variables with \(-b_{k}\leq w(k)\leq b_{k}\) almost surely and \(\chi(n)=\sum_{k=0}^{n-1}w(k).\) Then for any \(\mu\geq 0\)_ \[\mathbb{P}(|\chi(n)|\geq\mu)\leq 2\exp\left(-\frac{\mu^{2}}{2\sum_{k=0}^{n-1}b_{k}^{2}}\right).\] We apply this to (5.3); let \(w_{n}(k):=\theta^{k}z(n-k)\). Since \(\mathbb{E}[z(j)]=0\) we have \(\mathbb{E}[w_{n}(k)]=0\) for all choices of \(n\) and \(k\). Since the \(z(j)\) are independent it follows that, for fixed \(n\), the \(w_{n}(k)\) are independent with respect to \(k\). The support of \(z(j)\) is compact so there is \(a\geq 0\) for which the support lies in \([-a,a]\). Then the support of \(\theta^{k}z(n-k)\) is in \([-a\theta^{k},a\theta^{k}]\). Thus \(w_{n}(0),\ldots,w_{n}(n-1)\) pass the hypotheses of Theorem 5.9 with \(b_{k}=a\theta^{k}\) and we have: \[\mathbb{P}[|\chi(n)|\geq\mu]\leq 2\exp\left(-\frac{\mu^{2}}{2a^{2}\sum_{k=0}^{n-1}\theta^{2k}}\right)=2\exp\left(-\frac{\mu^{2}(1-\theta^{2})}{2a^{2}(1-\theta^{2n})}\right).\] Now let \(\mu(n):=\sqrt{\ln(e+n)\frac{4a^{2}(1-\theta^{2n})}{1-\theta^{2}}}\) so that \[\mathbb{P}[|\chi(n)|\geq\mu(n)]\leq 2\exp\left(-\frac{\mu^{2}(1-\theta^{2})}{2a^{2}(1-\theta^{2n})}\right)=2\exp\left(-2\ln(e+n)\right)=\frac{2}{(e+n)^{2}}.\] Since \(\sum_{n\geq 0}2/(e+n)^{2}\) is finite, the Borel-Cantelli Lemma [8] tells us that, almost surely, \(|\chi(n)|\geq\mu(n)\) happens for at most finitely many \(n\). For a given realization of \(z(n)\) let \(N_{\omega}\) be the largest value of \(n\) at which \(|\chi(n)|\geq\mu(n)\) and put \(c_{\omega}:=\max_{1\leq n\leq N_{\omega}}|\chi(n)|/\mu(n).\) Thus we have \[|\chi(n)|\leq 2ac_{\omega}\sqrt{\ln(e+n)\frac{1-\theta^{2n}}{1-\theta^{2}}}\leq 2ac_{\omega}\sqrt{\frac{\ln(e+n)}{1-\theta^{2}}}\] for all \(n\). Putting \(C=2ac_{\omega}\) completes the proof. With Lemma 5.8 we can prove **Lemma 5.10**.: _Take Hypothesis 1.1 as given. Suppose that \(\gamma_{1}(j)\) and \(\gamma_{2}(j)\) solve (4.22) and \(\gamma_{1}(0)=\gamma_{2}(0)=0\). Then there exists a constant \(C>0\) such that for all \(\epsilon\in(0,1)\) we have_ \[\sup_{j\in\mathbf{Z}}\frac{|\gamma_{1}(j)|+|\gamma_{2}(j)|}{\sqrt{\ln(e+|j|)}}\leq C\epsilon^{-1/2}.\] _The constant \(C\) depends on the realization of the \(\zeta(j)\) but is almost surely finite._ Proof.: We prove the estimate for \(\gamma_{2}\) as the one for \(\gamma_{1}\) is similar but easier. Taking \(j>0\) in the second equation in (4.22) gives \[\gamma_{2}(j)-\gamma_{2}(j-1)=-\epsilon\gamma_{2}(j)+\left(\zeta(j)+S^{-}\zeta(j)+\zeta(j)\delta^{+}\delta^{-}\zeta(j)+2\sigma^{2}\right)\] or rather \[\gamma_{2}(j)=\frac{1}{1+\epsilon}\gamma_{2}(j-1)+\frac{1}{1+\epsilon}\left(\zeta(j)+S^{-}\zeta(j)+\zeta(j)\delta^{+}\delta^{-}\zeta(j)+2\sigma^{2}\right).\] If we take \(\gamma_{2}(0)=0\) then we can find \(\gamma_{2}(j)\) (for \(j>0\)) from the above by iteration. 
In particular we have \[\gamma_{2}(j)= \frac{1}{1+\epsilon}\sum_{k=0}^{j-1}\theta_{\epsilon}^{k}\zeta(j-k)+\frac{1}{1+\epsilon}\sum_{k=0}^{j-1}\theta_{\epsilon}^{k}S^{-}\zeta(j-k)\] \[+ \frac{1}{1+\epsilon}\sum_{k=0}^{j-1}\theta_{\epsilon}^{k}\left(\zeta(j-k)\delta^{+}\delta^{-}\zeta(j-k)+2\sigma^{2}\right)\] \[=: \frac{1}{1+\epsilon}\left(\gamma_{21}(j)+\gamma_{22}(j)+\gamma_{23}(j)\right)\] where we have put \(\theta_{\epsilon}:=1/(1+\epsilon).\) To be clear \(\gamma_{21}(j)\), \(\gamma_{22}(j)\) and \(\gamma_{23}(j)\) correspond to the three sums in the order of their appearance. The random variables \(\zeta(j)\) meet the hypotheses of Lemma 5.8 and so we can apply the results to \(\gamma_{21}(j)\) and \(\gamma_{22}(j)\) forthwith to get: \[\sup_{j>0}\frac{|\gamma_{21}(j)|}{\sqrt{\ln(e+|j|)}}\leq C\sqrt{\frac{1}{1-\theta_{\epsilon}^{2}}}\quad\text{and}\quad\sup_{j>0}\frac{|\gamma_{22}(j)|}{\sqrt{\ln(e+|j|)}}\leq C\sqrt{\frac{1}{1-\theta_{\epsilon}^{2}}}\] for some \(C>0\) which is almost surely finite. An easy calculation shows that \(1/(1-\theta_{\epsilon}^{2})=(1+\epsilon)^{2}/(2\epsilon+\epsilon^{2})\leq 2/\epsilon\) when \(\epsilon\in(0,1)\). Thus we have \[\sup_{j\geq 0}\frac{|\gamma_{21}(j)|+|\gamma_{22}(j)|}{\sqrt{\ln(e+|j|)}}\leq C\epsilon^{-1/2}.\] Dealing with \(\gamma_{23}(j)\) is a bit more complicated because the summands are not independent. We have \(\gamma_{23}(j)=\sum_{k=0}^{j-1}\theta_{\epsilon}^{k}v(j-k)\) where \[v(j)=\zeta(j)\zeta(j+1)+\zeta(j)\zeta(j-1)-2\zeta(j)^{2}+2\sigma^{2}.\] From this we see that \(v(j)\) and \(v(j+1)\) are dependent. As are \(v(j)\) and \(v(j+2)\), since \(\zeta(j+1)\) appears in both. But \(v(j+3)\) and \(v(j)\) have no terms in common and it follows that they are independent. Thus \(\{v(3l)\}_{l\in\mathbf{Z}}\) is an i.i.d. collection of random variables. As are \(\{v(3l+1)\}_{l\in\mathbf{Z}}\) and \(\{v(3l+2)\}_{l\in\mathbf{Z}}\,.\) We break up \(\gamma_{23}(j)\) accordingly: \[\gamma_{23}(j)=\sum_{\begin{subarray}{c}0\leq k\leq j-1\\ k=0\text{ mod }3\end{subarray}}\theta_{\epsilon}^{k}v(j-k)+\sum_{\begin{subarray}{c}0\leq k\leq j-1\\ k=1\text{ mod }3\end{subarray}}\theta_{\epsilon}^{k}v(j-k)+\sum_{\begin{subarray}{c}0\leq k\leq j-1\\ k=2\text{ mod }3\end{subarray}}\theta_{\epsilon}^{k}v(j-k).\] Each of the three sums passes the hypotheses of Lemma 5.8, though there are some small subtleties. We estimate the first as the others are all but the same. Put \(k=3l\) to find \[\sum_{\begin{subarray}{c}0\leq k\leq j-1\\ k=0\text{ mod }3\end{subarray}}\theta_{\epsilon}^{k}v(j-k)=\sum_{l=0}^{\lfloor(j-1)/3\rfloor}\theta_{\epsilon}^{3l}v(j-3l).\] Then we have from Lemma 5.8: \[\left|\sum_{l=0}^{\lfloor(j-1)/3\rfloor}\theta_{\epsilon}^{3l}v(j-3l)\right|\leq C\sqrt{\ln(e+\lfloor(j-1)/3\rfloor)}\sqrt{\frac{1}{1-\theta_{\epsilon}^{6}}}.\] As \(\theta_{\epsilon}^{2}>0\) we find \(1/(1-\theta_{\epsilon}^{6})=1/\left[(1-\theta_{\epsilon}^{2})(1+\theta_{\epsilon}^{2}+\theta_{\epsilon}^{4})\right]\leq 1/(1-\theta_{\epsilon}^{2})\leq 2/\epsilon\). This, along with the fact that \(\ln\) is an increasing function, gives \[\left|\sum_{l=0}^{\lfloor(j-1)/3\rfloor}\theta_{\epsilon}^{3l}v(j-3l)\right|\leq C\sqrt{\ln(e+|j|)}\epsilon^{-1/2}\] which in turn leads to the estimate we are after. We need estimates for \(\gamma_{2}(j)\) when \(j<0\) too.
If we take \(j<0\) in the second equation of (4.22) we get \[\gamma_{2}(j)-\gamma_{2}(j-1)=\epsilon\gamma_{2}(j)+\left(\zeta(j)+S^{-}\zeta (j)+\zeta(j)\delta^{+}\delta^{-}\zeta(j)+2\sigma^{2}\right).\] We rearrange this: \[\gamma_{2}(j-1)=(1-\epsilon)\gamma_{2}(j)-\left(\zeta(j)+S^{-}\zeta(j)+\zeta (j)\delta^{+}\delta^{-}\zeta(j)+2\sigma^{2}\right).\] As we have taken \(\gamma_{2}(0)=0\) the above formula gives us \(\gamma_{2}(-1)\) and, more generally, \(\gamma_{2}(j)\), \(j<0\), by iteration. For \(j=-l<0\) we obtain: \[\gamma_{2}(-l)= -\sum_{k=0}^{l-1}\vartheta_{\epsilon}^{k}\zeta(-l+k+1)-\sum_{k=0 }^{l-1}\vartheta_{\epsilon}^{k}S^{-1}\zeta(-l+k+1)\] \[-\sum_{k=0}^{l-1}\vartheta_{\epsilon}^{k}\left(\zeta(-l+k+1) \delta^{+}\delta^{-}\zeta(-l+k+1)+2\sigma^{2}\right)\] where \(\vartheta_{\epsilon}:=1-\epsilon\). The first two sums pass the hypotheses of Lemma 5.8 and since \(1/(1-\vartheta_{\epsilon}^{2})=1/(2\epsilon-\epsilon^{2})<1/\epsilon\) when \(\epsilon\in(0,1)\) we can bound both as we did for \(\gamma_{21}\) and \(\gamma_{22}\) earlier. And the same skullduggery about independence that worked for \(\gamma_{23}\) works for the third sum. All together we get \[\sup_{j\leq 0}\frac{|\gamma_{2}(j)|}{\sqrt{\ln(e+|j|)}}\leq C\epsilon^{-1/2}.\] That completes the proof of Lemma 5.10. Next we prove the main workhorse lemma for controlling \(\gamma\) terms in our approximation: **Lemma 5.11**.: _If \(F=F(w,T)\) has \(\sup_{|T|\leq T_{0}}\|F(\cdot,T)\|_{H^{1}(1)}<\infty\) then_ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\gamma_{k}(\cdot)F(\epsilon(\cdot\pm t), \epsilon^{3}t)\|_{\ell^{2}}\leq C\epsilon^{-1}\sqrt{|\ln(\epsilon)|}\sup_{|T| \leq T_{0}}\|F(\cdot,T)\|_{H^{1}(1)} \tag{5.4}\] _for \(k=1,2\)._ _If in addition \(\sup_{|T|\leq T_{0}}\|F(\cdot,T)\|_{H^{2}(1)}<\infty\) then_ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|E_{0}^{\pm}\left(\gamma_{k}(\cdot)F( \epsilon(\cdot\pm t),\epsilon^{3}t)\right)\|_{\ell^{2}}\leq C\epsilon^{-1} \sqrt{|\ln(\epsilon)|}\sup_{|T|\leq T_{0}}\|F(\cdot,T)\|_{H^{2}(1)} \tag{5.5}\] _for \(k=1,2\). (The choices for \(+\) or \(-\) in \(E_{0}^{\pm}\) and \(F(\epsilon(\cdot\pm t),\epsilon^{3}t)\) are not linked.)_ _The constant \(C>0\) is almost surely finite._ Proof.: First we tackle (5.4). We handle \(k=1\) and the "\(-\)" sign. The other cases are no different. 
First \[\|\gamma_{1}(\cdot)F(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}^{2}=\sum_{j\in\mathbf{Z}}\gamma_{1}^{2}(j)F(\epsilon(j-t),\epsilon^{3}t)^{2}.\] Using the estimate from Lemma 5.10 gives \[\|\gamma_{1}(\cdot)F(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}^{2}\leq C\epsilon^{-1}\sum_{j\in\mathbf{Z}}\ln(e+|j|)F(\epsilon(j-t),\epsilon^{3}t)^{2}.\] A simple estimate leads us to \[\|\gamma_{1}(\cdot)F(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}^{2}\leq C\epsilon^{-1}\sup_{j\in\mathbf{Z}}\frac{\ln(e+|j|)}{1+(\epsilon(j-t))^{2}}\sum_{j\in\mathbf{Z}}(1+(\epsilon(j-t))^{2})F(\epsilon(j-t),\epsilon^{3}t)^{2}.\] We can apply the second estimate in Lemma 5.6 to the sum (with \(u=(1+(\epsilon(j-t))^{2})F(\epsilon(j-t),\epsilon^{3}t)^{2}\)) and find \[\|\gamma_{1}(\cdot)F(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}^{2}\leq C\epsilon^{-2}\sup_{j\in\mathbf{Z}}\frac{\ln(e+|j|)}{1+(\epsilon(j-t))^{2}}\|\sqrt{1+(\cdot)^{2}}F(\cdot,\epsilon^{3}t)\|_{H^{1}}^{2}\] \[\leq C\epsilon^{-2}\sup_{j\in\mathbf{Z}}\frac{\ln(e+|j|)}{1+(\epsilon(j-t))^{2}}\|F(\cdot,\epsilon^{3}t)\|_{H^{1}(1)}^{2}.\] Thus \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\gamma_{1}(\cdot)F(\epsilon(\cdot-t),\epsilon^{3}t)\|_{\ell^{2}}\leq C\epsilon^{-1}\sqrt{\sup_{|t|\leq T_{0}/\epsilon^{3}}\sup_{j\in\mathbf{Z}}\frac{\ln(e+|j|)}{1+(\epsilon(j-t))^{2}}}\sup_{|T|\leq T_{0}}\|F(\cdot,T)\|_{H^{1}(1)}.\] From this we see that the proof of (5.4) will be complete once we show that there is \(C=C(T_{0})>0\) such that \(0<\epsilon<1\) implies \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\sup_{j\in\mathbf{Z}}\frac{\ln(e+|j|)}{1+(\epsilon(j-t))^{2}}\leq C|\ln(\epsilon)|.\] The proof is mainly elementary Calculus, but that does not mean it is obvious. Here are the details. Let \( f_{\epsilon}(y,t):=\frac{\ln(e+|y|)}{1+(\epsilon(y-t))^{2}}\). We show \(\sup_{|t|\leq T_{0}/\epsilon^{3}}\sup_{y\in\mathbf{R}}f_{\epsilon}(y,t)\leq C|\ln(\epsilon)|\). Since \(f_{\epsilon}(y,t)=f_{\epsilon}(-y,-t)\) we have \(\sup_{|t|\leq T_{0}/\epsilon^{3}}\sup_{y\in\mathbf{R}}f_{\epsilon}(y,t)=\sup_{0\leq t\leq T_{0}/\epsilon^{3}}\sup_{y\in\mathbf{R}}f_{\epsilon}(y,t)\). If \(t\geq 0\) and \(y\geq 0\) then \(|y-t|\leq|-y-t|\) which implies \(f_{\epsilon}(-y,t)\leq f_{\epsilon}(y,t)\). Thus \(\sup_{|t|\leq T_{0}/\epsilon^{3}}\sup_{y\in\mathbf{R}}f_{\epsilon}(y,t)=\sup_{0\leq t\leq T_{0}/\epsilon^{3}}\sup_{y\geq 0}f_{\epsilon}(y,t)\). Next we argue that \(f_{\epsilon}(y,t)\) achieves its supremum at a point in \((0,\infty)\). Clearly \(f_{\epsilon}(y,t)\) is non-negative and \(f_{\epsilon}(y,t)\to 0\) as \(y\to\infty\). It is easy enough to show that \(\lim_{y\to 0^{+}}\partial_{y}f_{\epsilon}(y,t)>0\) when \(t\geq 0\). Since \(f_{\epsilon}\) is smooth (except at \(y=0\)), these considerations imply the existence of \(y_{\epsilon}(t)\in(0,\infty)\) for which \(f_{\epsilon}(y_{\epsilon}(t),t)=\sup_{y\in\mathbf{R}}f_{\epsilon}(y,t)\) and \(\partial_{y}f_{\epsilon}(y_{\epsilon}(t),t)=0\). So we search for solutions of \(\partial_{y}f_{\epsilon}(y,t)=0\) with \(y\geq 0\). We claim that, for \(t\geq 0\) and \(\epsilon\in(0,1)\), \[\partial_{y}f_{\epsilon}(y,t)=0\text{ and }y\geq 0\implies t\leq y\leq t+\frac{1}{\epsilon}.
\tag{5.6}\] Given the claim, \(t\leq y_{\epsilon}(t)\leq t+\epsilon^{-1}\) follows and as such: \[f_{\epsilon}(y_{\epsilon}(t),t)=\frac{\ln(e+y_{\epsilon}(t))}{1+(\epsilon(y_{\epsilon}(t)-t))^{2}}\leq\ln(e+t+\epsilon^{-1}).\] In turn we have \(\sup_{0\leq t\leq T_{0}/\epsilon^{3}}\sup_{y\geq 0}f_{\epsilon}(y,t)\leq\ln(e+T_{0}\epsilon^{-3}+\epsilon^{-1})\leq C|\ln(\epsilon)|\) for a constant depending only on \(T_{0}\). So we will be done if we establish the claim. Routine computations show that \(\partial_{y}f_{\epsilon}(y,t)=0\) if and only if \[\frac{1+\epsilon^{2}(y-t)^{2}}{2\epsilon^{2}(y-t)}=(e+y)\ln(e+y). \tag{5.7}\] Note that if \(0\leq y<t\) then the left-hand side of (5.7) is negative whereas the right-hand side is positive. So there can be no solutions with \(y<t\) and this implies the left-hand inequality in (5.6). For the right-hand inequality, let us assume that \(y-t>\epsilon^{-1}\) and \(t\geq 0\). This gives \[\frac{1+\epsilon^{2}(y-t)^{2}}{2\epsilon^{2}(y-t)}=\frac{1}{2\epsilon^{2}(y-t)}+\frac{y-t}{2}<\frac{1}{2\epsilon}+\frac{y-t}{2}.\] Next, since \(t\geq 0\) then \(y-t\leq y\) which implies \(\frac{1+\epsilon^{2}(y-t)^{2}}{2\epsilon^{2}(y-t)}<\frac{1}{2\epsilon}+\frac{y}{2}.\) Since \(y-t>\epsilon^{-1}\) and \(t\geq 0\) we have \(y>\epsilon^{-1}\). Thus \(\frac{1+\epsilon^{2}(y-t)^{2}}{2\epsilon^{2}(y-t)}<y.\) Also we clearly have \(y<(e+y)\ln(e+y)\) and so all told \[\frac{1+\epsilon^{2}(y-t)^{2}}{2\epsilon^{2}(y-t)}<(e+y)\ln(e+y)\] when \(y-t>\epsilon^{-1}\) and \(t\geq 0\). This precludes \(\partial_{y}f_{\epsilon}(y,t)=0\) and the right inequality in (5.6) follows. Thus we are done with the proof of (5.4). The estimate (5.5) follows from (5.4) with a few tricks. First we have by direct calculation \(E_{0}^{+}\left(\gamma_{1}F\right)=\gamma_{1}(j+1)(E_{0}^{+}F).\) Second, if we let \(\mathcal{I}_{\epsilon}G(X)=\epsilon^{-1}\int_{X}^{X+\epsilon}G(Y)dY\) then the Fundamental Theorem of Calculus and the definition of \(E_{0}^{+}\) tell us \(E_{0}^{+}F=\mathcal{I}_{\epsilon}\partial_{w}F\). One can show (see the argument that leads to equation (3.4) in [20]) that \(\|\mathcal{I}_{\epsilon}G\|_{H^{n}(r)}\leq C\|G\|_{H^{n}(r)}\). Putting it all together we get \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|E_{0}^{\pm}\left(\gamma_{k}(\cdot)F(\epsilon(\cdot\pm t),\epsilon^{3}t)\right)\|_{\ell^{2}} =\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\gamma_{k}(\cdot+1)\mathcal{I}_{\epsilon}\partial_{w}F(\epsilon(\cdot\pm t),\epsilon^{3}t)\|_{\ell^{2}}\] \[\leq C\epsilon^{-1}\sqrt{|\ln(\epsilon)|}\sup_{|T|\leq T_{0}}\|\mathcal{I}_{\epsilon}\partial_{w}F(\cdot,T)\|_{H^{1}(1)}\] \[\leq C\epsilon^{-1}\sqrt{|\ln(\epsilon)|}\sup_{|T|\leq T_{0}}\|F(\cdot,T)\|_{H^{2}(1)}.\] Now we can control all the \(\gamma\) dependent terms. **Lemma 5.12**.: _Assume Hypothesis 1.1. Let \(\widetilde{q}_{\epsilon}(j,t)\) and \(\widetilde{p}_{\epsilon}(j,t)\) be the extended KdV approximators as in Definition 4.4 where we further assume that \(A\) and \(B\) are good solutions of KdV on \([-T_{0},T_{0}]\).
Then almost surely_ \[\sup_{|t|\leq T_{0}\epsilon^{-3}}\left(\|P_{3}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|Q_{3\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)=\mathcal{O}(\epsilon^{-1}\sqrt{|\ln(\epsilon)|})\] _and_ \[\sup_{|t|\leq T_{0}\epsilon^{-3}}\left(\|W_{1\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|W_{2\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)=\mathcal{O}(\epsilon^{-1}\sqrt{|\ln(\epsilon)|}).\] Proof.: The estimates for \(P_{3}\) and \(Q_{3\gamma}\) are immediate from Lemma 5.11 and their definitions. A direct calculation shows that \[W_{1\gamma}=E_{0}^{+}P_{3}-\partial_{\tau}Q_{3\gamma}-\epsilon^{2}\partial_{T}Q_{3\gamma}.\] Each of these can be estimated with Lemma 5.11 as well. Another calculation gives \[W_{2\gamma}= E_{0}^{-}Q_{3\gamma}-m\partial_{\tau}P_{3}-m\epsilon^{2}\partial_{T}P_{3}\] \[+2\epsilon^{2}D^{-}(Q_{3\gamma}Q_{1})+2\epsilon^{3}D^{-}(Q_{3\gamma}Q_{2})+2\epsilon^{4}D^{-}(Q_{3\gamma}Q_{30})+\epsilon^{4}D^{-}Q_{3\gamma}^{2}.\] The first line of the above we estimate with Lemma 5.11. The ones in the second line all hinge on estimating terms of the form \(D^{-}(Q_{3\gamma}Q_{l})\) for different choices of \(l\). The definition of \(D^{-}\) and the triangle inequality give \(\|D^{-}(Q_{3\gamma}Q_{l})(\cdot,\epsilon\cdot)\|_{\ell^{2}}\leq 2\|Q_{3\gamma}(\cdot,\epsilon\cdot)Q_{l}(\cdot,\epsilon\cdot)\|_{\ell^{2}}\). Then we use the fact that \(\|fg\|_{\ell^{2}}\leq\|f\|_{\ell^{2}}\|g\|_{\ell^{2}}\) to get \(\|D^{-}(Q_{3\gamma}Q_{l})(\cdot,\epsilon\cdot)\|_{\ell^{2}}\leq 2\|Q_{3\gamma}(\cdot,\epsilon\cdot)\|_{\ell^{2}}\|Q_{l}(\cdot,\epsilon\cdot)\|_{\ell^{2}}\). At this point the remainder of the estimates follow from earlier estimates on the components \(Q_{k}\) and bookkeeping. ### Finishing up We are now in position to prove Proposition 5.3. Proof.: We begin with \(\alpha_{1}(\epsilon)\). From its definition, (4.1), (5.1) and the triangle inequality we have \[\alpha_{1}(\epsilon)\leq \sum_{n=0}^{2}\epsilon^{n+2}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|Q_{n}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|P_{n}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)\] \[+ \epsilon^{5}\sup_{|t|\leq T_{0}/\epsilon^{3}}\|Q_{30}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\epsilon^{5}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|Q_{3\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|P_{3}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right).\] Then Lemmas 5.4 and 5.12 give us \(\alpha_{1}(\epsilon)=\mathcal{O}(\epsilon^{3/2})\).
For \(\alpha_{3}(\epsilon)\), we use its definition, Lemma 4.5, (5.2) and the triangle inequality to obtain \[\alpha_{3}(\epsilon)\leq \epsilon^{6}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|\gamma_{1}(\cdot)\partial_{X}^{3}P_{0}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|\gamma_{2}(\cdot)\partial_{X}^{3}Q_{0}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)\] \[+ \epsilon^{6}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|W_{10}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|W_{20}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right)\] \[+ \epsilon^{6}\sup_{|t|\leq T_{0}/\epsilon^{3}}\left(\|W_{1\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}+\|W_{2\gamma}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{2}}\right).\] Then Lemmas 5.4, 5.11 and 5.12 give \(\alpha_{3}(\epsilon)=\mathcal{O}(\epsilon^{5}\sqrt{|\ln(\epsilon)|})\). To prove the estimate for \(\alpha_{2}(\epsilon)\), from (4.1) we have \[\partial_{t}\widetilde{q}_{\epsilon}=\epsilon^{3}\partial_{\tau}Q_{0}+\underbrace{\epsilon^{5}\partial_{T}Q_{0}+\epsilon^{4}\sum_{k=1}^{3}\left(\epsilon^{k}\partial_{\tau}Q_{k}+\epsilon^{k+2}\partial_{T}Q_{k}\right)}_{\epsilon^{4}\widetilde{h}_{\epsilon}}.\] Since \(\|f\|_{\ell^{\infty}}\leq\|f\|_{\ell^{2}}\) we have \(\|\widetilde{h}_{\epsilon}\|_{\ell^{\infty}}\leq\|\widetilde{h}_{\epsilon}\|_{\ell^{2}}.\) All terms appearing in \(\widetilde{h}_{\epsilon}\) have been estimated in one place or another previously and each is \(\mathcal{O}_{\ell^{2}}(\epsilon^{-1/2})\) at worst so that we get \(\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\epsilon^{4}\widetilde{h}_{\epsilon}\|_{\ell^{\infty}}=\mathcal{O}(\epsilon^{7/2})\). On the other hand using the first estimate in Lemma 5.6 shows that \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\epsilon^{3}\partial_{\tau}Q_{0}(\cdot,\epsilon\cdot,\epsilon t,\epsilon^{3}t)\|_{\ell^{\infty}}\leq\sup_{|t|\leq T_{0}/\epsilon^{3}}\epsilon^{3}\left(\|A_{w}(\cdot,\epsilon^{3}t)\|_{W^{1,\infty}}+\|B_{l}(\cdot,\epsilon^{3}t)\|_{W^{1,\infty}}\right)\leq C\epsilon^{3}.\] So all told we have \(\alpha_{2}(\epsilon)=\mathcal{O}(\epsilon^{3})\). Next, if we let \[\widehat{q}_{\epsilon}(j,t):=\sum_{k=1}^{3}\epsilon^{k+2}Q_{k}(j,\epsilon j,\epsilon t,\epsilon^{3}t)\quad\text{and}\quad\widehat{p}_{\epsilon}(j,t):=\sum_{k=1}^{3}\epsilon^{k+2}P_{k}(j,\epsilon j,\epsilon t,\epsilon^{3}t) \tag{5.8}\] then the estimates from Lemmas 5.4 and 5.12 lead to \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|\widehat{q}_{\epsilon},\widehat{p}_{\epsilon}\|_{\ell^{2}}=\mathcal{O}(\epsilon^{5/2}). \tag{5.9}\] (Here \(\widehat{q}_{\epsilon}\) and \(\widehat{p}_{\epsilon}\) denote the higher order parts of the extended approximators; they are not the full approximators from Definition 4.4.) So long as \(A\) and \(B\) are not both identically zero it is easy using the conservation laws of KdV to find that \(\inf_{|T|\leq T_{0}}\|A(\cdot,T)\|_{H^{7}(1)}+\|B(\cdot,T)\|_{H^{7}(1)}\geq b>0\), for some \(b\). This leads, by the triangle inequality, to \(\beta_{1}(\epsilon)=\inf_{|t|\leq T_{0}/\epsilon^{3}}\|\widetilde{q}_{\epsilon},\widetilde{p}_{\epsilon}\|_{\ell^{2}}\geq C\epsilon^{3/2}.\) This completes the proof.
Let \((q(j,t),p(j,t))\) be the solution of the transparent random mass FPUT lattice (1.1) with initial data_ \[q(j,0)=\epsilon^{2}\Phi(\epsilon j)\quad\text{and}\quad p(j,0)=\epsilon^{2}\Psi(\epsilon j).\] _Let \(A(w,T)\) and \(B(l,T)\) be the solutions of the KdV equations (4.17) and (4.19) with initial data_ \[A(w,0)=\frac{1}{2}\Phi(w)-\frac{1}{2}\Psi(w)\quad\text{and}\quad B(l,0)=\frac{1}{2}\Phi(l)+\frac{1}{2}\Psi(l).\] _Then there exists \(\epsilon_{\star}=\epsilon_{\star}(m,T_{0},\Phi,\Psi)\) (almost surely positive) and \(C_{\star}=C_{\star}(m,T_{0},\Phi,\Psi)>0\) (almost surely finite) such that, for all \(\epsilon\in(0,\epsilon_{\star})\), we have the absolute \(\ell^{2}\)-error estimates_ \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\big{\|}q(\cdot,t)-\epsilon^{2}\left[A(\epsilon(\cdot-t),\epsilon^{3}t)+B(\epsilon(\cdot+t),\epsilon^{3}t)\right]\big{\|}_{\ell^{2}}\leq C_{\star}\epsilon^{2}\sqrt{|\ln(\epsilon)|}\quad\text{and}\] \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\big{\|}p(\cdot,t)-\epsilon^{2}\left[-A(\epsilon(\cdot-t),\epsilon^{3}t)+B(\epsilon(\cdot+t),\epsilon^{3}t)\right]\big{\|}_{\ell^{2}}\leq C_{\star}\epsilon^{2}\sqrt{|\ln(\epsilon)|}.\] _If at least one of \(\Phi\) or \(\Psi\) is non-zero then the associated relative \(\ell^{2}\)-error estimates are \(\mathcal{O}(\sqrt{\epsilon|\ln(\epsilon)|})\)._ Proof.: Take \(A\) and \(B\) with the initial data as in the statement and form \(\widetilde{q}_{\epsilon}\) and \(\widetilde{p}_{\epsilon}\) as in Definition 4.4. Then we have the estimates for \(\alpha_{1}(\epsilon)\), \(\alpha_{2}(\epsilon)\), \(\alpha_{3}(\epsilon)\) and \(\beta_{1}(\epsilon)\) as in Proposition 5.3, which is to say we have met condition (3.3) from the statement of Theorem 3.1. Note that \[q(j,t)-\widetilde{q}_{\epsilon}(j,t)=q(j,t)-\epsilon^{2}[A(\epsilon(j-t),\epsilon^{3}t)+B(\epsilon(j+t),\epsilon^{3}t)]-\widehat{q}_{\epsilon}(j,t)\] where \(\widehat{q}_{\epsilon}\) is given in (5.8). In (5.9) we showed that \(\widehat{q}_{\epsilon}=\mathcal{O}_{\ell^{2}}(\epsilon^{5/2})\) for \(|t|\leq T_{0}/\epsilon^{3}\). The initial conditions for \(p\), \(q\), \(A\) and \(B\) are arranged so that \[q(j,0)-\widetilde{q}_{\epsilon}(j,0)=-\widehat{q}_{\epsilon}(j,0).\] And so we have \(\|q(0)-\widetilde{q}_{\epsilon}(0)\|_{\ell^{2}}=\|\widehat{q}_{\epsilon}(0)\|_{\ell^{2}}=\mathcal{O}(\epsilon^{5/2})\). Similarly we have \[p(j,t)-\widetilde{p}_{\epsilon}(j,t)=p(j,t)-\epsilon^{2}[-A(\epsilon(j-t),\epsilon^{3}t)+B(\epsilon(j+t),\epsilon^{3}t)]-\widehat{p}_{\epsilon}(j,t)\] and \(\|p(0)-\widetilde{p}_{\epsilon}(0)\|_{\ell^{2}}=\|\widehat{p}_{\epsilon}(0)\|_{\ell^{2}}=\mathcal{O}(\epsilon^{5/2})\). Since \(\alpha_{3}(\epsilon)/\epsilon^{3}=\mathcal{O}(\epsilon^{2}\sqrt{|\ln(\epsilon)|})\) and \(\epsilon^{5/2}=o(\epsilon^{2}\sqrt{|\ln(\epsilon)|})\), these estimates imply that we meet the condition on the initial data in the statement of Theorem 3.1.
Thus we have the conclusion \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\|q(t)-\widetilde{q}_{\epsilon}(t),p(t)-\widetilde{p}_{\epsilon}(t)\|_{\ell^{2}}=\mathcal{O}\left(\epsilon^{2}\sqrt{|\ln(\epsilon)|}\right).\] Then the triangle inequality plus (5.8) and (5.9) give \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\left\|q(\cdot,t)-\epsilon^{2}\left[A(\epsilon(\cdot-t),\epsilon^{3}t)+B(\epsilon(\cdot+t),\epsilon^{3}t)\right]\right\|_{\ell^{2}}=\mathcal{O}(\epsilon^{2}\sqrt{|\ln(\epsilon)|})\] and \[\sup_{|t|\leq T_{0}/\epsilon^{3}}\left\|p(\cdot,t)-\epsilon^{2}\left[-A(\epsilon(\cdot-t),\epsilon^{3}t)+B(\epsilon(\cdot+t),\epsilon^{3}t)\right]\right\|_{\ell^{2}}=\mathcal{O}(\epsilon^{2}\sqrt{|\ln(\epsilon)|}).\] This is the absolute error estimate in the theorem. The relative error estimate follows from the estimate on \(\beta_{1}(\epsilon)\). ## 7. Numerics In this section we report the outcomes of a variety of numerical simulations of solutions of (1.1). In all cases our methodology is to truncate (1.1) to \(|j|\leq M\) where \(M\gg 1\) and enforce periodic boundary conditions (\(M\) is always taken to be so incredibly vast that the solutions are never large anywhere near the edges of the computational domain). The resulting system is a large finite-dimensional ODE which we solve with a standard RK4 algorithm. This is essentially the same method as used in [10, 20]. The calculations were performed in MATLAB. ### Amplitude attenuation The first experiment simulates (1.1) with a number of choices for the mass coefficients \(m(j)\). These are: * \(m(j)\) is the same at every site, that is, the masses are constant. * \(m(j)\) meet the transparency condition (1.3), with the \(\zeta(j)\) drawn from a uniform distribution. * \(m(j)\) are i.i.d. random variables, drawn from a uniform distribution. For all these cases, we choose long-wave initial conditions as in (7.1). We take several values of \(\epsilon\) and simulate over a long time interval. Famously, solutions of KdV equations with smooth and localized initial data will, over time, resolve into the sum of separated solitary waves of fixed amplitude [7]. Thus, if the solution of the FPUT lattice is well-approximated by a KdV equation we expect the \(\ell^{\infty}\)-norm to at least roughly stabilize over long time periods. And so in Figure 1 we plot the scaled \(\ell^{\infty}\)-amplitude versus scaled time for several values of \(\epsilon\). (The scaling here is to be consistent with the long wave scaling so that we may compare various choices of \(\epsilon\) on the same plot.) We see exactly this stabilization in the plots for the constant, periodic and transparent cases. Furthermore, the stabilization becomes more pronounced as \(\epsilon\) decreases, which is consistent with the rigorous KdV approximation theorems here and in [23, 2, 10]. But when the masses are taken to be i.i.d., there is an obvious, pronounced decay of the amplitude; this attenuation (up to the scaling) becomes stronger as \(\epsilon\) decreases. This is why we said in the Introduction that a KdV approximation for the i.i.d. problem is not appropriate. ### Numerical computation of optimal error bound In the second experiment we aim to corroborate the conclusions of our main result, Theorem 6.1. We simulate (1.1) with \(m(j)\) subject to the transparency condition (1.3), where the \(\zeta(j)\) are drawn from a uniform distribution. We choose the initial data so that \(B\) is zero and \(A\) is an exact solitary wave solution of (4.17); that is to say, we take initial data as in (7.2). We simulate for several values of \(\epsilon\) and run each simulation over the long time interval in Theorem 6.1. (When \(\epsilon\) is small this takes a very long time!) To be clear, we fix a realization of the masses and then vary \(\epsilon\) as stated with the same realization used throughout. Then we compute the overall absolute \(\ell^{2}\)-error made by the approximation, which we denote \(E_{\epsilon}\). Then we repeat for another realization (ten different realizations all together).
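To make the simulation procedure concrete, here is a minimal sketch in Python of the truncated, periodic lattice solved with RK4. (The computations reported here were done in MATLAB.) Everything in the sketch is illustrative: the lattice size, time step, value of \(\epsilon\), and initial profile are not those used for the figures, and the explicit formula written for the transparent masses is an assumption, based on the description of the transparency condition (with two finite differences of i.i.d. \(\zeta(j)\)) given in Section 8 and the support \((-1/4,1/4)\) used in the proofs above.

```python
import numpy as np

# Illustrative sketch only; parameter values are NOT those used in the paper,
# and the mass formula below is an assumed stand-in for condition (1.3).

rng = np.random.default_rng(0)
M = 2**13                      # periodic lattice standing in for |j| <= M
eps = 0.1                      # long-wave parameter (illustrative value)

# "Transparent" masses: 1 plus a second finite difference of i.i.d. zeta(j).
zeta = rng.uniform(-0.25, 0.25, size=M)
m = 1.0 + np.roll(zeta, -1) - 2.0 * zeta + np.roll(zeta, 1)

def rhs(q, p):
    """Right-hand side of (1.1): q' = delta^+ p,  m p' = delta^- V'(q)."""
    Vp = q + q**2                         # V'(q) for V(q) = q^2/2 + q^3/3
    dq = np.roll(p, -1) - p               # delta^+ p
    dp = (Vp - np.roll(Vp, 1)) / m        # delta^- V'(q), divided by m(j)
    return dq, dp

def rk4_step(q, p, dt):
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = rhs(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = rhs(q + dt * k3q, p + dt * k3p)
    return (q + dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6.0,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0)

# Small-amplitude, long-wave initial data (a stand-in for (7.1)/(7.2)).
j = np.arange(M) - M // 2
q = eps**2 / np.cosh(eps * j)**2
p = np.zeros(M)

dt, T = 0.05, 1.0 / eps**3
amplitudes = []
for n in range(int(T / dt)):              # integrate out to t ~ eps^(-3)
    q, p = rk4_step(q, p, dt)
    if n % 200 == 0:
        amplitudes.append(np.max(np.abs(q)))   # track l^infinity amplitude
```

Tracking the maximum of \(|q|\), as in the last lines of the sketch, is what produces amplitude-versus-time curves of the kind shown in Figure 1; comparing the final state against the KdV profile gives the error \(E_{\epsilon}\) discussed next.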
If we plot \(E_{\epsilon}\) vs \(\epsilon\) on a loglog plot, Theorem 2 tells us the best fit line to the data should have slope somewhere around 2, or larger. The results are shown in Figure 2; all ten realizations on the same graph. The line of best fit has slope exceeding 2.5 in each case; they are 2.5715, 2.5982, 2.5677, 2.5445, 2.5778, 2.5948, 2.5510, 2.5760, 2.5499, 2.5563. This numerically computed slope is over 0.5 larger than what we expect from our rigorous estimate. That is to say, the numerics indicate that the approximation of the transparent mass FPUT lattice by KdV is a fair bit better than what our results from Theorem 6.1 show. We repeated the same experiment for many different realizations and the numerically computed slope was always near 2.5. Thus we conjecture that the absolute \(\ell^{2}\)-error is at worst \(\mathcal{O}(\epsilon^{5/2})\), which is the same size as the error for the constant and periodic problems. Of course we do not know how to prove such a thing at this time. Figure 1. Scaled \(\ell^{\infty}\)-amplitude vs scaled time for solutions of (1.1) with long-wave data as in (7.1). ## 8. What's next. Our results are the first piece of a much larger program aimed at bringing stochastic homogenization to nonlinear dispersive problems. Here are a number of open problems, some of which should be relatively straightforward given our results here and others of which will require substantial new technical ideas. 1. Prove a result analogous to Theorem 6.1 but in expectation instead of in the almost sure sense. In our work on the linear i.i.d. lattice [20] we proved approximation results in both senses and the strategy for the expectation result is almost surely transferable. 2. Study (1.1) but allow spatial heterogeneity in the spring potentials as well. We expect an analogous transparency condition can be used to achieve a result similar to the one here. 3. Confirm (or reject!) the conjecture that the sharp order of the KdV approximation error for the transparent mass FPUT lattice is smaller than \(\mathcal{O}(\epsilon^{5/2})\). The error estimate we prove here is due entirely to our use of the autoregressive processes in the extended approximation. But perhaps a yet more clever option exists to handle the terms which we encountered at \(Z_{15}\) and \(Z_{25}\). 4. If one can get the conjectured sharp error estimate, it opens the door to replacing the transparency condition (1.3) with \[m(j)=1+\delta^{-}\zeta(j).\] Figure 2. Ten realizations of absolute \(\ell^{2}\)-error vs \(\epsilon\), loglog plot, for solutions of (1.1) with “solitary wave-like” data as in (7.2).
Clearly KdV is the wrong approach, but perhaps there is some other modulation equation that can capture the dynamics. The articles [15, 16] suggest the use of nonlinear diffusion equations. On the other hand, an analysis of a spectral problem associated with linear random lattices by [1] indicates a connection to Anderson localization and the authors of [17] utilize Boltzmann models in a high-dimensional linear version of (1.1). 7. Can our approach be carried over to the problem of long water waves over random bathymetry? Our transparency condition is inspired by a similar condition on the bathymetry proposed in [22]. Can a continuous version of an autoregressive process (that is, an _Ornstein-Uhlenbeck process_) be used to control the residuals in that problem and prove a rigorous approximation? 8. How about nonlinear wave equations with random coefficients? Nonlinear Schrodinger equations (discrete or continuous)? Any of the multitude of equations named for Joseph Valentin Boussinesq? Or problems in higher spatial dimensions?
2304.11964
3rd Place Solution to Meta AI Video Similarity Challenge
This paper presents our 3rd place solution in both Descriptor Track and Matching Track of the Meta AI Video Similarity Challenge (VSC2022), a competition aimed at detecting video copies. Our approach builds upon existing image copy detection techniques and incorporates several strategies to exploit the properties of video data, resulting in a simple yet powerful solution. By employing our proposed method, we achieved substantial improvements in accuracy compared to the baseline results (Descriptor Track: 38% improvement, Matching Track: 60% improvement). Our code is publicly available here: https://github.com/line/Meta-AI-Video-Similarity-Challenge-3rd-Place-Solution
Shuhei Yokoo, Peifei Zhu, Junki Ishikawa, Rintaro Hasegawa
2023-04-24T10:00:09Z
http://arxiv.org/abs/2304.11964v2
# 3rd Place Solution to Meta AI Video Similarity Challenge ###### Abstract This paper presents our 3rd place solution in both Descriptor Track and Matching Track of the Meta AI Video Similarity Challenge (VSC2022), a competition aimed at detecting video copies. Our approach builds upon existing image copy detection techniques and incorporates several strategies to exploit the properties of video data, resulting in a simple yet powerful solution. By employing our proposed method, we achieved substantial improvements in accuracy compared to the baseline results (Descriptor Track: 38% improvement, Matching Track: 60% improvement). Our code is publicly available here: [https://github.com/line/Meta-AI-Video-Similarity-Challenge-3rd-Place-Solution](https://github.com/line/Meta-AI-Video-Similarity-Challenge-3rd-Place-Solution) ## 1 Introduction In recent years, with the rapid development of social media, issues such as plagiarism and unauthorized modification have become increasingly severe. Consequently, there is a growing demand for technologies capable of accurately and automatically detecting illegal content. In response to this situation, the field of copy detection has experienced a swift expansion in recent years, with research on image and video copy detection [12, 16, 5, 8, 6, 7] gaining significant attention. Notably, the Image Similarity Challenge at NeurIPS'21 (ISC2021) was organized as a competition for image copy detection [4, 11], followed by the recent organization of the Meta AI Video Similarity Challenge at CVPR'23 (VSC2022) focusing on video copy detection. VSC2022 has two tracks: descriptor and matching. In the descriptor track, the task is to compute embeddings (up to 512 dimensions) for each video. In the matching track, the task is to generate predicted matches containing the starting and ending timestamps of query-reference video pairs. Figure 1: Our solution pipeline overview of video copy detection for inference. The feature extraction is mainly based on an image copy detection model (ISC-dt1 [17]), and the temporal alignment is based on Temporal Network [14]. We implement several tricks to improve the detection performance. In this paper, we describe our approach that achieved third place in both tracks of the VSC2022 competition. Our solution can be summarized as follows: 1. We propose a simple yet powerful video copy detection pipeline, based on an image copy detection model (ISC-dt1 [17]). 2. Our approach utilizes test time augmentation based on the predictions of an editing prediction model. 3. We exploit the properties of videos through various techniques: emphasizing copy videos more in search results by calculating frame consistency, concatenating adjacent frames followed by dimensionality reduction using PCA, and localizing copied frames by employing Temporal Network [14]. ## 2 Method ### Pipeline Overview As previous works have shown that frame-level features are necessary to precisely locate copied segments and achieve better performance in video copy detection tasks [9, 13], our method also extracts frame-level features to retrieve copied pairs and applies video temporal alignment to localize copied segments. The overview of our method is shown in Figure 1. Details will be described in the following subsections. In video-level retrieval, the embeddings of query and reference videos are extracted respectively.
For query videos, since they can either be original videos (without any copied segments) or edited videos (with copied segments), we first implement a process to predict whether they are edited. Only videos predicted as edited are processed in the following steps. Next, multiple crop, which is a type of test time augmentation (TTA), is performed so that the query frames can be matched both globally and locally. On the other hand, since reference videos do not contain any copied segments or manipulations, they are directly passed to the feature extraction once the frames are extracted. For feature extraction, we use ISC-dt1 [17], which is the 1st place model of the ISC2021 Descriptor Track. Post-processing, including our proposed Consistency Weighting, Temporal Concat, and score normalization [12], is applied to further improve the accuracy. The last step in video-level retrieval is to search for copied pairs from the query and reference embeddings. We use an exhaustive search to obtain the top 1200 candidates for each query. To localize the starting and ending timestamps of the copied segments between the candidate copied pairs, we generate a frame-to-frame similarity matrix for each pair and apply temporal alignment. Figure 2: The three types of multi-crop used in our method. 2-views crop is for vertical or horizontal stacking, 4-views crop is for vertical plus horizontal stacking, and 5-views is for all other manipulations. Multi-crop enables the query videos to be matched both globally and locally and significantly improves the detection results. Credit of the original videos in this figure: (user name of Flickr): “madc0w”, “bamblue88”, “SupremeCrete”, “Wam Kat”, “permanently scatterbrained”, “jfgornet”, “Salim Virji”, “scott.fuhrman”, “terryballard”, “FriendishX”, “Steven Smith!”, “tristanf”, “itsbeach”. Various methods have been proposed for temporal alignment [14, 3, 2], and we choose a graph-based method named Temporal Network (TN) [14]. TN constructs a graph network that takes matched frames as nodes and similarities between frames as weights of links, and thus the path with maximized weight indicates the location of the copied segment. Finally, the video copy pairs and their matching results are obtained. ### Editing Prediction Editing prediction aims to classify what types of manipulations are used for copy videos. Knowing the types of manipulations helps us to choose proper augmentation methods that return the edited video to its original appearance. We build an editing prediction model by training with simulation data. We first create a simulated copy video dataset using the training reference of the VSC2022 data. We randomly select two videos from the training reference and copy a random segment from one video to another. To better reproduce the character of the challenge data, we use a data augmentation library named AugLy [1] to add manipulations including blending, altering brightness or contrast, blurring, stacking, etc. We consider editing prediction as a multi-label classification task and build a model based on ConvNeXt [10]. ### Multiple Crops The challenge data contains manipulations that concatenate multiple videos spatially, such as stacking and overlay. Only part of the query image can be matched to the reference image in such cases, and thus using multiple crops to match both globally and locally can be effective.
We design three types of multiple crops, 2-views for vertical or horizontal stacking, 4-views for vertical plus horizontal stacking and 5-views for the other manipulations. Examples are shown in Figure 2. The model inferences are performed on multiple crops, and multiple descriptors are extracted depending on the number of views. These are stored as separate descriptors in the temporal direction with repeated timestamps. ### Post-processing **Consistency Weighting.** We analyzed the search results in the train set and noticed that there are some incorrect prediction pairs with relatively high confidence scores. The commonality between them is that they are very similar in content, but not copies. It is important to distinguish between videos that are copies and videos that are simply quite similar because these incorrect predictions with high scores are predominantly detrimental to the competition metric. Therefore, we utilize the characteristics of copied videos and apply weighting to increase the confidence of copied videos. We call this method Consistency Weighting. Specifically, in the case of copied videos, a clip from another video is inserted in the middle, causing a scene change and resulting in lower consistency between frames. Therefore, for each video descriptor, we apply weighting as represented by the following formula: \[X^{\prime}=\frac{X}{\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}(X\cdot X^{T}) _{ij}},X\in\mathbb{R}^{n\times d} \tag{1}\] \(X\) is the matrix representing a video, containing \(d\)-dimensional descriptors for \(n\) frames. This weighting scheme can increase the inner product between a copied video descriptor and reference descriptors, emphasizing the predictions of copied videos more than non-copied videos. **Temporal Concat.** Since the frame-level features are extracted without considering any temporal information, we add a temporal post-processing to improve temporal robustness. We concatenate adjacent frame descriptors in a sliding-window manner and apply PCA to reduce their dimensions. In this report, we refer to this temporal post-processing as Temporal Concat. Figure 3 shows an overview of the Temporal Concat method. Figure 4 shows the visualization of the impact of Temporal Concat on the similarity matrix for a positive pair example. The area enclosed by the red box represents the copied segment. An ideal similarity matrix would have peak values along the diagonal within the segment. After applying Temporal Concat, the similarity peaks along the diagonal become more pronounced within the copied segment, while the overall similarity outside the copied segment decreases, indicating less noise. Figure 3: Overview of Temporal Concat. When concatenating descriptors, we assign larger weights to descriptors that are temporally closer to the center frame. This process is applied to all frame descriptors in a sliding-window manner. **Score normalization.** Score normalization [4, 12] is a widely-used trick for ranking systems. It aims to make the similarity score comparable across different queries. We use the same score normalization as described in [12]. The training reference of the VSC2022 data is used as the "noise" dataset. ## 3 Experiments ### Dataset VSC2022 provides a new video copy detection dataset composed of approximately 100,000 videos derived from the YFCC100M [15] dataset. The training dataset contains 8,404 query videos, 40,311 reference videos, and the ground truth for the query videos which contain content derived from reference videos. 
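The two descriptor-side post-processing steps just described are simple to express in code. The following is a schematic NumPy sketch, not the released implementation; the window length and weights in the Temporal Concat function are placeholders rather than the values used in our submission, and the PCA reduction mentioned in the text is omitted.

```python
import numpy as np

def consistency_weighting(X):
    """Eq. (1): X is an (n, d) array of the n frame descriptors of one video."""
    n = X.shape[0]
    mean_inner = (X @ X.T).sum() / n**2   # average pairwise inner product of frames
    return X / mean_inner                 # low frame consistency => descriptors emphasized

def temporal_concat(X, window=3, weights=(0.5, 1.0, 0.5)):
    """Concatenate each frame descriptor with its neighbours in a sliding window,
    giving larger weight to descriptors closer to the center frame."""
    n, d = X.shape
    pad = window // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)), mode="edge")
    out = np.concatenate(
        [w * Xp[i:i + n] for i, w in zip(range(window), weights)], axis=1)
    return out                            # shape (n, window*d); PCA would then reduce d
```

As described above, a copied video tends to contain a scene change and therefore has a smaller average pairwise inner product; dividing by that number enlarges its descriptors relative to non-copied videos in the search.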
Edited query videos may have been modified using a number of techniques including blending, \begin{table} \begin{tabular}{c c c c c} \hline \hline Multi-crop & Consistency Weighting & Temporal Concat & \(\mu\)AP (Descriptor) & \(\mu\)AP (Matching) \\ \hline & & & 0.6817 & 0.5706 \\ ✓ & & & 0.7715 & 0.7367 \\ ✓ & ✓ & & 0.8463 & 0.7525 \\ ✓ & ✓ & ✓ & 0.8692 & 0.7594 \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation from baseline to our final solution for both tracks. \(\mu\)AP (Descriptor) is a score of Descriptor Track, and \(\mu\)AP (Matching) is a score of Matching Track. \begin{table} \begin{tabular}{l c} \hline \hline Team & \(\mu\)AP \\ \hline do something & 0.8717 \\ FriendshipFirst1 & 0.8514 \\ **cvl-descriptor (Ours)** & **0.8362** \\ \hline Baseline & 0.6047 \\ \hline \hline \end{tabular} \end{table} Table 2: Leaderboard with Descriptor Track final results (only top 3 teams are listed here). Our results are in bold. \begin{table} \begin{tabular}{l c} \hline \hline Team & \(\mu\)AP \\ \hline do something more & 0.9153 \\ CompetitionSecond2 & 0.7711 \\ **cvl-matching (Ours)** & **0.7036** \\ \hline Baseline & 0.4411 \\ \hline \hline \end{tabular} \end{table} Table 3: Leaderboard with Matching Track final results (only top 3 teams are listed here). Our results are in bold. Figure 4: Visualization of the impact of Temporal Concat on the similarity matrix. The left matrix represents the similarity matrix before applying Temporal Concat, while the right matrix represents the matrix after applying Temporal Concat. The x-axis of the visualization graph corresponds to the sequential frame numbers of the reference video, and the y-axis corresponds to the sequential frame numbers of the query video. altering brightness or contrast, blurring, etc. In the final phase, a new corpus of approximately 8,000 query videos was provided to determine the final ranking. ### Ablations We provide a step-wise comparison with the baseline which only uses ISC-dt1 model and score normalization to extract embedding. In addition, TN is used for localization for Matching Track evaluation. The evaluation was performed on the training dataset, and the \(\upmu\)AP for both tracks are shown in Table 1. Multi-crop and Consistency Weighting improve \(\upmu\)AP by a large margin, and Temporal Concat also has some positive effects. ### VSC2022 Results The final VSC2022 results are shown in Table 2 and Table 3. There are 344 participants in the descriptor track and 212 participants in the matching track. With various tricks that exploit copy video characteristics, our simple pipeline achieved 3rd place for both tracks. ## 4 Conclusion This paper presents a simple yet powerful pipeline for video copy detection. We incorporate several tricks, such as TTA, Consistency Weighting, and Temporal Concat. Experiments show such tricks significantly improve detection performance. We achieved 3rd place for both the descriptor track and matching track of the VSC 2022.
2307.08965
Koopman spectral analysis of skew-product dynamics on Hilbert $C^*$-modules
We introduce a linear operator on a Hilbert $C^*$-module for analyzing skew-product dynamical systems. The operator is defined by composition and multiplication. We show that it admits a decomposition in the Hilbert $C^*$-module, called eigenoperator decomposition, that generalizes the concept of the eigenvalue decomposition. This decomposition reconstructs the Koopman operator of the system in a manner that represents the continuous spectrum through eigenoperators. In addition, it is related to the notions of cocycle and Oseledets subspaces and it is useful for characterizing coherent structures under skew-product dynamics. We present numerical applications to simple systems on two-dimensional domains.
Dimitrios Giannakis, Yuka Hashimoto, Masahiro Ikeda, Isao Ishikawa, Joanna Slawinska
2023-07-18T04:29:50Z
http://arxiv.org/abs/2307.08965v1
# Koopman spectral analysis of skew-product dynamics on Hilbert \(C^{*}\)-modules ###### Abstract We introduce a linear operator on a Hilbert \(C^{*}\)-module for analyzing skew-product dynamical systems. The operator is defined by composition and multiplication. We show that it admits a decomposition in the Hilbert \(C^{*}\)-module, called eigenoperator decomposition, that generalizes the concept of the eigenvalue decomposition. This decomposition reconstructs the Koopman operator of the system in a manner that represents the continuous spectrum through eigenoperators. In addition, it is related to the notions of cocycle and Oseledets subspaces and it is useful for characterizing coherent structures under skew-product dynamics. We present numerical applications to simple systems on two-dimensional domains. **Keywords: Koopman operator, transfer operator, operator cocycle, Hilbert \(C^{*}\)-module, skew-product dynamical system** ## 1 Introduction ### Background and motivation Operator-theoretic methods have been used extensively in analysis and computational techniques for dynamical systems. Let \(f:\mathcal{X}\rightarrow\mathcal{X}\) be a dynamical system on a state space \(\mathcal{X}\). Then, the Koopman operator \(U_{f}\) associated with \(f\) is defined as a composition operator on an \(f\)-invariant function space \(\mathcal{F}\) on \(\mathcal{X}\), \[U_{f}v=v\circ f,\] for \(v\in\mathcal{F}\)[1, 2]. In many cases, \(\mathcal{F}\) is chosen as a Banach space or Hilbert space, such as the Lebesgue spaces \(L^{p}(\mathcal{X})\) for a measure space \(\mathcal{X}\) and the Hardy space \(H^{p}(\mathbb{D})\) on the unit disk \(\mathbb{D}\), where \(p\in[1,\infty]\). Meanwhile, the Perron-Frobenius, or transfer, operator associated with \(f\) is defined as the adjoint \(P_{f}\) of the Koopman operator acting on the continuous dual \(\mathcal{F}^{\prime}\) of \(\mathcal{F}\), i.e., \(P_{f}\nu=\nu\circ U_{f}\)[3]. In a number of important cases (e.g., \(\mathcal{F}=L^{p}(\mathcal{X})\) with \(p\in[1,\infty)\) or \(\mathcal{F}=C(\mathcal{X})\) for a compact Hausdorff space \(\mathcal{X}\)), \(\mathcal{F}^{\prime}\) can be identified with a space of measures on \(\mathcal{X}\); the transfer operator is then identified with the pushforward map on measures, \(P_{f}\nu=\nu\circ f^{-1}\). When \(\mathcal{F}\) has a predual \(\mathcal{F}_{*}\subseteq\mathcal{F}^{\prime}\), it is common to define \(P_{f}\) as the predual of the Koopman operator, i.e., \((U_{f}v)\nu=v(P_{f}\nu)\); an important such example is \(\mathcal{F}=L^{\infty}(\mathcal{X})\) with \(\mathcal{F}_{*}=L^{1}(\mathcal{X})\). A central tenet of modern ergodic theory is to leverage the duality relationships between \(f:\mathcal{X}\rightarrow\mathcal{X}\), \(U_{f}:\mathcal{F}\rightarrow\mathcal{F}\), and \(P_{f}:\mathcal{F}^{\prime}\rightarrow\mathcal{F}^{\prime}\) to characterize properties of nonlinear dynamics such as ergodicity, mixing, and existence of factor maps, using linear operator-theoretic techniques [4]. Starting from work in the late 1990s and early 2000s [5, 6, 7, 8], operator-theoretic techniques have also proven highly successful in data-driven applications [9, 10, 11, 12]. A primary such application is the modal decomposition (e.g., [13, 14, 15, 16, 17, 18]). This approach applies eigenvalue decomposition to the Koopman operator to identify the long-term behavior of the dynamical system.
Assume \(\mathcal{F}\) is a Hilbert space equipped with an inner product \(\left\langle\cdot,\cdot\right\rangle\), and \(U_{f}\) is normal, bounded, and diagonalizable, with eigenvalues \(\lambda_{1},\lambda_{2},\ldots\in\mathbb{C}\) and corresponding basis of orthonormal eigenvectors \(v_{1},v_{2},\ldots\in\mathcal{F}\). Then, for \(u\in\mathcal{F}\) and a.e. \(x\in\mathcal{X}\) we have \(u(f^{i}(x))=U_{f}^{i}u(x)=\sum_{j=1}^{\infty}\lambda_{j}^{i}v_{j}(x)\left\langle v_{j},u\right\rangle\), where \(i\in\mathbb{N}\) represents discrete time. Therefore, the time evolution of observables is described by the Koopman eigenvalues and corresponding eigenvectors. By computing the eigenvalues of the Koopman operator, we obtain oscillating elements and decaying elements in the dynamical system. Several attempts have been made to generalize the above decomposition to the case where the Koopman operator has continuous or residual spectrum. Korda et al. [19] approximate the spectral measure of the Koopman operator on \(L^{2}(\mathcal{X})\) for measure-preserving dynamics using Christoffel-Darboux kernels in spectral space. Slipantschuk et al. [20] consider a rigged Hilbert space and extend the Koopman operator to a space of distributions so that it becomes compact. Colbrook and Townsend [21] employ a residual-based approach that consistently approximates the spectral measure by removing spurious eigenvalues from DMD-type spectral computations. Spectrally approximating the Koopman operator in measure-preserving, ergodic flows by compact operators on reproducing kernel Hilbert spaces (RKHSs) has also been investigated [22]. However, dealing with continuous and residual Koopman spectra is still a challenging problem. On the transfer operator side, popular approximation techniques are based on the Ulam method [23]. The Ulam method has been shown to yield spectrally consistent approximations for particular classes of systems such as expanding maps and Anosov diffeomorphisms on compact manifolds [5]. In some cases, spectral computations from the Ulam method have been shown to recover eigenvalues of transfer operators on anisotropic Banach spaces adapted to the expanding/contracting subspaces of such systems [24]; however, these results depend on carefully chosen state space partitions that may be hard to construct in high dimensions and/or under unknown dynamics. Various modifications of the basic Ulam method have been proposed that are appropriate for high-dimensional applications; e.g., sparse grid techniques [25]. ### Skew-product dynamical systems We focus on measure-preserving skew-product systems in discrete time, \(T(y,z)=(h(y),g(y,z))\), or continuous-time, \(\Phi_{t}(y,z)=(h_{t}(y),g_{t}(y,z))\), on a product space \(\mathcal{X}=\mathcal{Y}\times\mathcal{Z}\). Here, \(\mathcal{Y}\) and \(\mathcal{Z}\) are measure spaces, oftentimes referred to as the "base" and "fiber", respectively. In such systems, the driving dynamics on \(\mathcal{Y}\) is autonomous, but the dynamics on \(\mathcal{Z}\) depends on the configuration \(y\in\mathcal{Y}\). In many cases, one is interested in the time-dependent fiber dynamics, rather than the autonomous dynamics on the base. A typical example of skew-product dynamics is Lagrangian tracer advection under a time-dependent fluid flow [26, 27, 28], where \(\mathcal{Y}\) is the state space of the fluid dynamical equations of motion and \(\mathcal{Z}\) is the spatial domain where tracer advection takes place.
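A simple concrete instance of this structure, included here purely as an illustration, is the skew rotation \(T(y,z)=(y+\alpha,\,z+\beta(y))\) on the two-torus \(\mathcal{Y}\times\mathcal{Z}=S^{1}\times S^{1}\): the base map \(h(y)=y+\alpha\) is an autonomous rotation, while the fiber map \(g(y,z)=z+\beta(y)\) is a rotation whose angle is modulated by the driving coordinate \(y\).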
A well-studied approach for analysis of skew-product systems involves replacing the spectral decomposition of Koopman/transfer operators acting on functions on \(\mathcal{X}\) by decomposition of associated operator _cocycles_ acting on functions on \(\mathcal{Y}\) using multiplicative ergodic theorems. In a standard formulation of the multiplicative ergodic theorem, first proved by Oseledets [29], one considers an invertible measure-preserving map \(h:\mathcal{Y}\rightarrow\mathcal{Y}\) and the cocycle generated by a matrix-valued map \(A\) on \(\mathcal{Y}\). The multiplicative ergodic theorem then shows the existence of subspaces \(\mathcal{V}_{1}(y),\ldots,\mathcal{V}_{k}(y)\) such that \(A(y)\mathcal{V}_{j}(y)=\mathcal{V}_{j}(h(y))\). The subspace \(\mathcal{V}_{j}(y)\) is called an Oseledets subspace (or equivariant subspace). Each Oseledets subspace has an associated Lyapunov exponent and associated covariant vectors, which are the analogs of the eigenvalues and eigenvectors of Koopman/transfer operators, respectively, in the setting of cocycles. Since its inception, the multiplicative ergodic theorem has been extended in many ways to infinite-dimensional operator cocycles [30, 31, 32, 33, 34]. Under appropriate quasi-compactness assumptions, it has been shown that the Lyapunov exponent spectrum is at most countably infinite and the associated Oseledets subspaces are finite-dimensional, e.g., [34, Theorem A]. A primary application of Oseledets decompositions is the detection of coherent sets and coherent structures in natural and engineered systems [35]. A family of sets \(\{\mathcal{S}(y)\}_{y\in\mathcal{Y}}\) is called coherent if \(\nu(\mathcal{S}(y)\bigcap g(h(y),\mathcal{S}(y)))/\nu(\mathcal{S}(y))\) is large for a reference measure \(\nu\) on \(\mathcal{Z}\). If an Oseledets subspace \(\mathcal{V}(y)\) with respect to the transfer operator cocycle \(U^{-1}_{g(y,\cdot)}\) is represented as \(\mathcal{V}(y)=\mathrm{Span}\{v_{y}\}\) for a covariant vector \(v_{y}\in L^{2}(\mathcal{Z})\) satisfying \(U_{g(y,\cdot)}v_{h(y)}=v_{y}\), then setting \(\mathcal{S}(y)\) to a level set of \(v_{y}\) leads to a family of coherent sets. Finite-time coherent sets and Lagrangian coherent structures as the boundaries of the finite-time coherent sets have also been studied [26, 36, 37]. ### Eigenoperator decomposition In this paper, we investigate a different approach to deal with continuous and residual spectra of Koopman operators on \(L^{2}\) associated with skew-product dynamical systems. We propose a new decomposition, called _eigenoperator decomposition_, which reconstructs the Koopman operator from multiplication operators acting on certain subspaces, referred to here as generalized Oseledets spaces. These multiplication operators are obtained by solving an eigenvalue-type equation, but they can individually have continuous spectrum. Intuitively, this decomposition provides a factorization of the (potentially continuous) spectrum of the underlying Koopman operator into the spectra of eigenoperator families. Our approach is based on the theory of Hilbert \(C^{*}\)-modules [38], which generalizes Hilbert space theory by replacing the complex-valued inner product by a product that takes values in a \(C^{*}\)-algebra. In this work, we employ the \(C^{*}\)-algebra of bounded linear operators on \(L^{2}(\mathcal{Z})\), denoted by \(\mathcal{B}(L^{2}(\mathcal{Z}))\). 
A standard operator-theoretic approach for skew-product dynamics is to define the Koopman or transfer operator on the product Hilbert space \(\mathcal{H}=L^{2}(\mathcal{Y})\otimes L^{2}(\mathcal{Z})\)[28, 39]. In contrast, here we consider the Hilbert \(C^{*}\)-module \(\mathcal{M}=L^{2}(\mathcal{Y})\otimes\mathcal{B}(L^{2}(\mathcal{Z}))\) over \(\mathcal{B}(L^{2}(\mathcal{Z}))\). By considering \(\mathcal{B}(L^{2}(\mathcal{Z}))\) instead of \(L^{2}(\mathcal{Z})\), we aim to push information about the continuous spectrum of the Koopman operator onto the \(C^{*}\)-algebra \(\mathcal{B}(L^{2}(\mathcal{Z}))\). In more detail, starting from discrete-time systems, we define a \(\mathcal{B}(L^{2}(\mathcal{Z}))\)-linear operator \(K_{T}\) on \(\mathcal{M}\), which can be thought of as a lift of the standard Koopman operator on \(\mathcal{H}\) to the Hilbert \(C^{*}\)-module setting. In addition, \(K_{T}\) can be used to reconstruct the Koopman operator of the full skew-product system on \(\mathcal{X}\). We show that \(K_{T}\) admits a decomposition \[K_{T}\hat{w}_{i,j}=\hat{w}_{i,j}\cdot\hat{M}_{i,j}, \tag{1}\] where \(\hat{M}_{i,j}\) is a \(\mathcal{B}(L^{2}(\mathcal{Z}))\)-linear multiplication operator (which we call eigenoperator), and \(\hat{w}_{i,j}\in\mathcal{M}\) are eigenvectors associated with the operator cocycle on \(L^{2}(\mathcal{Z})\) induced by the skew-product dynamics. We also derive an analogous version of (1) for continuous-time systems, formulated in terms of the generator of the Koopman group \(\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) acting on \(\mathcal{M}\). A schematic overview of our approach for the continuous-time case is displayed in Fig. 1. The eigenoperator decomposition (1) and its continuous-time variant have associated equivariant subspaces of \(L^{2}(\mathcal{Z})\) as in the multiplicative ergodic theorem. In particular, to each eigenoperator \(\hat{M}_{i,j}\) there is an associated family \(\{\mathcal{V}_{j}(y)\}_{y\in\mathcal{Y}}\) of closed subspaces \(\mathcal{V}_{j}(y)\subseteq L^{2}(\mathcal{Z})\) such that \(U_{g(y,\cdot)}\) maps vectors in \(\mathcal{V}_{j}(h(y))\) to vectors in \(\mathcal{V}_{j}(y)\). Since we consider cocycles generated by unitary Koopman/transfer operators, the equivariant subspaces \(\mathcal{V}_{j}(y)\) can be infinite-dimensional. Therefore, we call them generalized Oseledets subspaces. Spectral analysis of \(\hat{M}_{i,j}\) then reveals coherent structures under the skew-product dynamics. The rest of this paper is organized as follows. In Section 2, we derive our eigenoperator decomposition for discrete-time systems, and establish the correspondence between \(K_{T}\) and the Koopman operator. We illustrate the decomposition in Section 3 by means of analytical examples with fiber dynamics on abelian and non-abelian groups. In these examples, the generalized Oseledets subspaces can be constructed explicitly, which provides intuition about the behavior of eigenoperator decomposition. In Section 4, we describe the construction of the infinitesimal generator and the associated eigenoperator decomposition for continuous-time systems. Section 5 contains numerical applications of the decomposition for continuous-time systems to simple time-dependent flows in two-dimensional domains. Section 6 contains a conclusory discussion. The paper includes an Appendix collecting auxiliary results. 
## 2 Discrete-time systems ### Skew product system and Koopman operator on Hilbert space Let \(\mathcal{Y}\) and \(\mathcal{Z}\) be separable measure spaces equipped with measures \(\mu\) and \(\nu\), respectively and let \(\mathcal{X}=\mathcal{Y}\times\mathcal{Z}\), the direct product measure space of \(\mathcal{Y}\) and \(\mathcal{Z}\). Let \(h:\mathcal{Y}\rightarrow\mathcal{Y}\) be a measure preserving and invertible map and let \(g:\mathcal{X}\rightarrow\mathcal{Z}\) be a measurable map such that \(g(y,\cdot)\) is measure preserving and invertible for any \(y\in\mathcal{Y}\). Consider the following skew product transformation \(T\) on \(\mathcal{X}\): \[T(y,z)=(h(y),g(y,z)).\] We consider the Koopman operator \(U_{T}\) on \(L^{2}(\mathcal{X})\). Note that since \(L^{2}(\mathcal{Y})\) and \(L^{2}(\mathcal{Z})\) are separable, their tensor product \(L^{2}(\mathcal{Y})\otimes L^{2}(\mathcal{Z})\) satisfies \[L^{2}(\mathcal{Y})\otimes L^{2}(\mathcal{Z})\simeq L^{2}(\mathcal{X}).\] Figure 1: Overview of eigenoperator decomposition for continuous-time systems. **Definition 1**.: The Koopman operator \(U_{T}\) on \(L^{2}(\mathcal{X})\) is defined as \[U_{T}f=f\circ T\] for \(f\in L^{2}(\mathcal{X})\). Since \(T\) is measure preserving, the Koopman operator \(U_{T}\) is an unitary operator, but \(U_{T}\) does not always have an eigenvalue decomposition since it has continuous spectrum in general. ### Operator on Hilbert \(\boldsymbol{C^{*}}\)-module related to the Koopman operator We extend the Koopman operator \(U_{T}\) to an operator on a Hilbert \(C^{*}\)-module. We first introduce Hilbert \(C^{*}\)-module [38, 40]. **Definition 2**.: For a module \(\mathcal{M}\) over a \(C^{*}\)-algebra \(\mathcal{A}\), a map \(\left\langle\cdot,\cdot\right\rangle_{\mathcal{M}}:\mathcal{M}\times\mathcal{ M}\rightarrow\mathcal{A}\) is referred to as an \(\mathcal{A}\)-valued inner product if it is \(\mathbb{C}\)-linear with respect to the second variable and has the following properties: For \(w_{1},w_{2},w_{3}\in\mathcal{M}\) and \(a,b\in\mathcal{A}\), 1. \(\left\langle w_{1},w_{2}a+w_{3}b\right\rangle_{\mathcal{M}}=\left\langle w_{1},w_{2}\right\rangle_{\mathcal{M}}a+\left\langle w_{1},w_{3}\right\rangle_{ \mathcal{M}}b\), 2. \(\left\langle w_{1},w_{2}\right\rangle_{\mathcal{M}}=\left\langle w_{2},w_{1} \right\rangle_{\mathcal{M}}^{*}\), 3. \(\left\langle w_{1},w_{1}\right\rangle_{\mathcal{M}}\) is positive, 4. If \(\left\langle w_{1},w_{1}\right\rangle_{\mathcal{M}}=0\) then \(w_{1}=0\). If \(\left\langle\cdot,\cdot\right\rangle\) satisfies the conditions 1\(\sim\)3, but not 4, then it is called a semi-inner product. Let \(\left\|w\right\|_{\mathcal{M}}=\left\|\left\langle w,w\right\rangle_{\mathcal{ M}}\right\|_{\mathcal{A}}^{1/2}\) for \(w\in\mathcal{M}\). Then \(\|\cdot\|_{\mathcal{M}}\) is a norm in \(\mathcal{M}\). **Definition 3**.: A Hilbert \(C^{*}\)-module over \(\mathcal{A}\) or Hilbert \(\mathcal{A}\)-module is a module over \(\mathcal{A}\) equipped with an \(\mathcal{A}\)-valued inner product and complete with respect to the norm induced by the \(\mathcal{A}\)-valued inner product. Let \(\mathcal{A}\) be the \(C^{*}\)-algebra \(\mathcal{B}(L^{2}(\mathcal{Z}))\). Let \[\mathcal{M}=L^{2}(\mathcal{Y})\otimes\mathcal{A},\] i.e., the (right) Hilbert \(\mathcal{A}\)-module defined by the tensor product of the Hilbert \(\mathbb{C}\)-module \(L^{2}(\mathcal{Y})\) and (right) Hilbert \(\mathcal{A}\)-module \(\mathcal{A}\)[38]. We now define an operator on a Hilbert \(C^{*}\)-module. 
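Before doing so, it may help to fix a toy instance of the objects introduced above. The following is a minimal numerical sketch (the finite spaces, maps, and sizes are illustrative choices, not part of the construction): \(\mathcal{Y}\) and \(\mathcal{Z}\) are finite cyclic sets with counting measure, \(h\) is a cyclic shift, and \(g(y,\cdot)\) is a \(y\)-dependent shift, so that \(L^{2}(\mathcal{X})\simeq\mathbb{C}^{|\mathcal{Y}||\mathcal{Z}|}\) and \(U_{T}\) is a permutation matrix whose unitarity can be checked directly.

```python
import numpy as np

# Illustrative finite skew product: Y = Z_5 and Z = Z_4 with counting measure,
# h a cyclic shift on Y, and g(y, .) a y-dependent shift on Z.
NY, NZ = 5, 4
h = lambda y: (y + 1) % NY
g = lambda y, z: (z + y) % NZ             # invertible on Z for every y

# Koopman operator U_T on L^2(X) ~ C^(NY*NZ): (U_T f)(y, z) = f(h(y), g(y, z)).
U_T = np.zeros((NY * NZ, NY * NZ))
for y in range(NY):
    for z in range(NZ):
        U_T[y * NZ + z, h(y) * NZ + g(y, z)] = 1.0

# T is invertible and preserves the counting measure, so U_T is unitary.
assert np.allclose(U_T @ U_T.T, np.eye(NY * NZ))

# Fiberwise Koopman operators U_{g(y,.)} on L^2(Z); these are the
# B(L^2(Z))-valued building blocks used in the module construction below.
U_fiber = []
for y in range(NY):
    A = np.zeros((NZ, NZ))
    for z in range(NZ):
        A[z, g(y, z)] = 1.0
    U_fiber.append(A)
assert all(np.allclose(A @ A.T, np.eye(NZ)) for A in U_fiber)
```

The same finite toy system is reused in the sketches below to illustrate the module-valued constructions.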
**Definition 4**.: We define a right \(\mathcal{A}\)-linear operator \(K_{T}\) on \(\mathcal{M}\) (i.e., \(K_{T}\) is linear and satisfies \(K_{T}(wa)=(K_{T}w)a\) for all \(a\in\mathcal{A}\) and \(w\in\mathcal{M}\)) by \[K_{T}(v\otimes a)(y)=v(h(y))U_{g(y,\cdot)}a\] for \(v\in L^{2}(\mathcal{Y})\), \(a\in\mathcal{A}\), and \(y\in\mathcal{Y}\). Here, for \(y\in\mathcal{Y}\), \(U_{g(y,\cdot)}\) is the Koopman operator on \(L^{2}(\mathcal{Z})\) with respect to the map \(g(y,\cdot)\). The well-definedness of \(K_{T}\) as an operator from \(\mathcal{M}\) to \(\mathcal{M}\) is not trivial; it is established in the following proposition. **Proposition 1**.: _The operator \(K_{T}\) is a right \(\mathcal{A}\)-linear unitary operator from \(\mathcal{M}\) to \(\mathcal{M}\)._ The proof of Proposition 1 is documented in the Appendix. The next proposition shows the relationship between \(U_{T}\) and \(K_{T}\), which enables us to connect existing studies of Koopman operators with our framework. **Proposition 2**.: _Let \(\{\gamma_{i}\}_{i=1}^{\infty}\) be an orthonormal basis of \(L^{2}(\mathcal{Z})\). Let \(\iota_{i}:L^{2}(\mathcal{X})\to\mathcal{M}\) and \(P_{i}:\mathcal{M}\to L^{2}(\mathcal{X})\) be linear operators defined as \(v\otimes u\mapsto v\otimes u\gamma_{i}^{\prime}\) and \(v\otimes a\mapsto v\otimes a\gamma_{i}\), respectively. Then, we have \(U_{T}=P_{i}K_{T}\iota_{i}\) for any \(i=1,2,\ldots\). Moreover, we have \(K_{T}=\sum_{i=1}^{\infty}\iota_{i}U_{T}P_{i}\), where the sum converges strongly to \(K_{T}\) in \(\mathcal{M}\)._ Proof.: For \(v\in L^{2}(\mathcal{Y})\), \(u\in L^{2}(\mathcal{Z})\), \(y\in\mathcal{Y}\), and \(i=1,2,\ldots\), we have \[K_{T}\iota_{i}(v\otimes u)(y) =K_{T}(v\otimes u\gamma_{i}^{\prime})(y)=v(h(y))U_{g(y,\cdot)}u \gamma_{i}^{\prime}\] \[=v(h(y))u(g(y,\cdot))\gamma_{i}^{\prime}=(\iota_{i}U_{T}(v\otimes u))(y). \tag{2}\] Applying \(P_{i}\) to both sides of Eq. (2), we have \(P_{i}K_{T}\iota_{i}(v\otimes u)=U_{T}(v\otimes u)\). Since \(U_{T}\) and \(K_{T}\) are bounded, \(P_{i}K_{T}\iota_{i}=U_{T}\) holds on \(L^{2}(\mathcal{X})\). Moreover, for \(v\in L^{2}(\mathcal{Y})\) and \(a\in\mathcal{A}\), we have \[\sum_{i=1}^{\infty}\iota_{i}P_{i}(v\otimes a)=\sum_{i=1}^{\infty}v\otimes a \gamma_{i}\gamma_{i}^{\prime}=v\otimes a, \tag{3}\] where the convergence is the strong convergence. Since \(K_{T}\) is bounded, by Eqs. (2) and (3), \(K_{T}(v\otimes a)=\sum_{i=1}^{\infty}\iota_{i}U_{T}P_{i}(v\otimes a)\) holds for \(v\in L^{2}(\mathcal{Y})\) and \(a\in\mathcal{A}\). Since \(U_{T}\) and \(K_{T}\) are bounded, \(K_{T}=\sum_{i=1}^{\infty}\iota_{i}U_{T}P_{i}\) holds on \(\mathcal{M}\). ### Decomposition of \(K_{T}\) We derive a decomposition of \(K_{T}\) called the eigenoperator decomposition. We first derive a fundamental decomposition using a cocycle on \(\mathcal{Y}\). Then, we refine the decomposition using generalized Oseledets subspaces. #### 2.3.1 Fundamental decomposition using cocycle We first define vectors to decompose the operator \(K_{T}\) using a cocycle on \(\mathcal{Y}\). **Definition 5**.: For \(i\in\mathbb{Z}\), we define a linear operator \(w_{i}:L^{2}(\mathcal{Z})\to L^{2}(\mathcal{X})\) as \[(w_{i}u)(y,z)=\left\{\begin{array}{ll}(U_{g(y,\cdot)}U_{g(h(y), \cdot)}\cdots U_{g(h^{i-1}(y),\cdot)}u)(z)&(i>0)\\ u(z)&(i=0)\\ (U_{g(h^{-1}(y),\cdot)}^{*}U_{g(h^{-2}(y),\cdot)}^{*}\cdots U_{g(h^{i}(y), \cdot)}^{*}u)(z)&(i<0).\end{array}\right.\] We can see that \(\mathcal{M}\) can also be regarded as a left \(\mathcal{A}\)-module. 
Thus, we can also consider left \(\mathcal{A}\)-linear operators on \(\mathcal{M}\). In the following, we denote the action of a left \(\mathcal{A}\)-linear operator \(M\) on a vector \(w\in\mathcal{M}\) by \(w\cdot M\). **Proposition 3**.: _For \(i\in\mathbb{Z}\), we have \(w_{i}\in\mathcal{M}\). Moreover, \(K_{T}w_{i}=w_{i+1}=w_{i}\cdot M_{i}\), where \(M_{i}\) is a left \(\mathcal{A}\)-linear multiplication operator on \(\mathcal{M}\) defined as \((w\cdot M_{i})(y)=w(y)U_{g(h^{i}(y),\cdot)}\)._ Proof.: We obtain \(w_{i}\in\mathcal{M}\) in the same manner as the proof of Proposition 1. The identities \(K_{T}w_{i}=w_{i+1}=w_{i}\cdot M_{i}\) follow from the definition of \(w_{i}\). The vectors \(w_{i}\) characterize the dynamics within \(\mathcal{Z}\), which are specific to skew product dynamical systems and of particular interest to us. **Proposition 4**.: _The action of the Koopman operator \(U_{T}\) is decomposed into two parts as_ \[U_{T}^{i}(v\otimes u)(y,z)=(U_{h}^{i}v)(y)\cdot(U_{T}^{i-1}(u\circ g))(y,z) \tag{4}\] _for \(v\in L^{2}(\mathcal{Y})\), \(u\in L^{2}(\mathcal{Z})\), and \(i\in\mathbb{Z}\)._ Proof.: For \(i>0\), Eq. (4) follows from the definition of \(U_{T}\). Regarding the case of \(i\leq 0\), \(U_{T}^{-1}\) is calculated as follows: since \(T\) is invertible and measure preserving, \[\langle v_{1}\otimes u_{1},U_{T}(v_{2}\otimes u_{2})\rangle =\int_{z\in\mathcal{Z}}\int_{y\in\mathcal{Y}}\overline{v_{1}(y)u_ {1}(z)}v_{2}(h(y))u_{2}(g(y,z))\mathrm{d}\mu(y)\mathrm{d}\nu(z)\] \[=\int_{z\in\mathcal{Z}}\int_{y\in\mathcal{Y}}\overline{v_{1}(h^{ -1}(y))u_{1}(g_{h^{-1}(y)}^{-1}(z))}v_{2}(y)u_{2}(z)\mathrm{d}\mu(y)\mathrm{d}\nu(z),\] where for \(y\in\mathcal{Y}\), the map \(g_{y}:\mathcal{Z}\to\mathcal{Z}\) is defined as \(g_{y}(z)=g(y,z)\). Thus, we have \(U_{T}^{-1}(v\otimes u)(y,z)=v(h^{-1}(y))u(g_{h^{-1}(y)}^{-1}(z))\). As a result, Eq. (4) also holds for \(i\leq 0\). We define a submodule \(\mathcal{W}\) of \(\mathcal{M}\), which is composed of the vectors \(w_{i}\) (\(i\in\mathbb{Z}\)). Let \[\mathcal{W}_{0}=\bigg{\{}\sum_{i\in F}w_{i}c_{i}\,\mid\,F\subseteq\mathbb{Z}: \text{ finite set, }c_{i}\in\mathcal{A}\bigg{\}}\] and let \(\mathcal{W}\) be the completion of \(\mathcal{W}_{0}\) with respect to the norm in \(\mathcal{M}\). Note that \(\mathcal{W}\) is a submodule of \(\mathcal{M}\) and a Hilbert \(\mathcal{A}\)-module. Moreover, for \(u\in L^{2}(\mathcal{Z})\), let \(\tilde{w}_{u,i}\in L^{2}(\mathcal{X})\) be defined as \(\tilde{w}_{u,i}(y,z)=u(g(h^{i-1}(y),\ldots,g(h(y),g(y,z))\ldots))\) for \(i>0\), \(\tilde{w}_{u,0}(y,z)=u(z)\), and \(\tilde{w}_{u,i}(y,z)=u(g(h^{i}(y),\ldots,g(h^{-2}(y),g(h^{-1}(y),z))\ldots))\) for \(i<0\). Let \[\tilde{\mathcal{W}}_{0}=\bigg{\{}\sum_{j=1}^{n}\sum_{i\in F}c_{i}\tilde{w}_{ u_{j},i}\,\mid\,n\in\mathbb{N},\,\,F\subseteq\mathbb{Z}:\text{ finite set, }c_{i}\in\mathbb{C},\,\,u_{j}\in L^{2}(\mathcal{Z})\bigg{\}}\] and \(\tilde{\mathcal{W}}\) be the completion of \(\tilde{\mathcal{W}}_{0}\) with respect to the norm in \(L^{2}(\mathcal{X})\). We now show the connection between the operator \(K_{T}\) restricted to \(\mathcal{W}\) and the Koopman operator \(U_{T}\). 
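Before doing so, the cocycle relation in Proposition 3 can be checked numerically. The following sketch reuses the illustrative finite system above, represents an element of \(\mathcal{M}\) as a map \(y\mapsto\) matrix on \(L^{2}(\mathcal{Z})\), and verifies \(K_{T}w_{i}=w_{i+1}=w_{i}\cdot M_{i}\) (only \(i\geq 0\) is implemented, for brevity; all choices are illustrative).

```python
import numpy as np

# Illustrative finite skew product (same toy system as in the earlier sketch).
NY, NZ = 5, 4
h = lambda y: (y + 1) % NY
g = lambda y, z: (z + y) % NZ

def Ug(y):
    """Matrix of the fiber Koopman operator U_{g(y,.)}: (U u)(z) = u(g(y, z))."""
    A = np.zeros((NZ, NZ))
    for z in range(NZ):
        A[z, g(y, z)] = 1.0
    return A

def h_pow(y, i):
    """h^i(y) for i >= 0."""
    for _ in range(i):
        y = h(y)
    return y

def w(i):
    """w_i as a map y -> operator on L^2(Z), for i >= 0."""
    out = []
    for y in range(NY):
        A = np.eye(NZ)
        for k in range(i):                 # U_{g(y,.)} U_{g(h(y),.)} ... U_{g(h^{i-1}(y),.)}
            A = A @ Ug(h_pow(y, k))
        out.append(A)
    return out

def K_T(wvec):
    """K_T acts fiberwise: (K_T w)(y) = U_{g(y,.)} w(h(y))  (Definition 4)."""
    return [Ug(y) @ wvec[h(y)] for y in range(NY)]

def dot_M(wvec, i):
    """(w . M_i)(y) = w(y) U_{g(h^i(y),.)}  (Proposition 3)."""
    return [wvec[y] @ Ug(h_pow(y, i)) for y in range(NY)]

i = 3
assert all(np.allclose(a, b) for a, b in zip(K_T(w(i)), w(i + 1)))
assert all(np.allclose(a, b) for a, b in zip(dot_M(w(i), i), w(i + 1)))
```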
**Proposition 5**.: _With the notation defined in Proposition 2, we have \(K_{T}|_{\mathcal{W}}\,\iota_{i}|_{\tilde{\mathcal{W}}}=\iota_{i}U_{T}|_{ \tilde{\mathcal{W}}}\) for i=1,2,...._ Proof.: For \(u\in L^{2}(\mathcal{Z})\), \(j\in\mathbb{Z}\), and \(i>0\), we have \[(\iota_{i}\tilde{w}_{u,j})(y) =u(g(h^{i-1}(y),\ldots,g(h(y),g(y,\cdot))\ldots))\gamma_{i}^{\prime}\] \[=U_{g(h^{i-1}(y),\cdot)}\cdots U_{g(h(y),\cdot)}U_{g(h(y),\cdot)} u\gamma_{i}^{\prime}=w_{i}(y)(u\gamma_{i}^{\prime}).\] Thus, we obtain \(\iota_{i}\tilde{w}_{u,j}\in\mathcal{W}\). We obtain \(\iota_{i}\tilde{w}_{u,j}\in\mathcal{W}\) for \(i\leq 0\) in the same manner as the case of \(i>0\). Therefore, the range of \(\iota_{i}|_{\tilde{\mathcal{W}}}\) is contained in \(\mathcal{W}\). The equality is deduced by the definitions of \(K_{T}\) and \(U_{T}\). We can describe the decomposition proposed in Proposition 3 using operators on Hilbert \(C^{*}\)-modules. Let \[\mathcal{C}_{0} =\{(\ldots,c_{-1},c_{0},c_{1},\ldots)\,\mid\,c_{i}\in\mathcal{A},\, \,c_{i}=0\text{ for all but finite }i\in\mathbb{Z}\},\] \[\mathcal{C}^{\prime}_{0} =\{(\ldots,A_{-1},A_{0},A_{1},\ldots)\,\mid\,A_{i}:\text{ left }\mathcal{A}\text{-linear operator on }\mathcal{W},\] \[A_{i} =0\text{ for all but finite }i\in\mathbb{Z}\}.\] We can see \(\mathcal{C}_{0}\) and \(\mathcal{C}^{\prime}_{0}\) are right \(\mathcal{A}\)-modules. We define \(\mathcal{A}\)-valued semi-inner products in \(\mathcal{C}_{0}\) and \(\mathcal{C}^{\prime}_{0}\) as \[\langle(\ldots,c_{-1},c_{0},c_{1},\ldots),(\ldots,d_{-1},d_{0},d _{1},\ldots)\rangle_{\mathcal{C}_{0}}=\sum_{i,j\in\mathbb{Z}}c_{i}^{*}\left<w_ {i},w_{j}\right>_{\mathcal{M}}d_{j},\] \[\langle(\ldots,A_{-1},A_{0},A_{1},\ldots),(\ldots,B_{-1},B_{0},B _{1},\ldots)\rangle_{\mathcal{C}^{\prime}_{0}}=\sum_{i,j\in\mathbb{Z}}\left<w _{i}\cdot A_{i},w_{j}\cdot B_{j}\right>_{\mathcal{M}},\] respectively. We define an equivalent relation \(c\sim d\) by \(c-d\in\mathcal{N}\) for \(c,d\in\mathcal{C}_{0}\), where \(\mathcal{N}=\{c\in\mathcal{C}_{0}\,\mid\,\left<c,c\right>_{\mathcal{C}_{0}}=0\}\). There is an \(\mathcal{A}\)-valued inner product on \(\mathcal{C}_{0}/\sim\) given by \(\langle c,d\rangle=\langle c+\mathcal{N},d+\mathcal{N}\rangle\). We denote by \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) the completions of \(\mathcal{C}_{0}/\sim\) and \(\mathcal{C}^{\prime}_{0}/\sim\) with respect to the norms induced by the above inner products. We abuse the notation and denote by \((\ldots,c_{-1},c_{0},c_{1},\ldots)\) the equivalent class of \((\ldots,c_{-1},c_{0},c_{1},\ldots)\) with respect to Let \(W\) be a right \(\mathcal{A}\)-linear operator from \(\mathcal{C}^{\prime}\) to \(\mathcal{W}\) defined as \[W(\ldots,A_{-1},A_{0},A_{1},\ldots)=\sum_{i\in\mathbb{Z}}w_{i}\cdot A_{i}\] for \((\ldots,A_{-1},A_{0},A_{1},\ldots)\in\mathcal{C}^{\prime}_{0}\) and let \(X\) be a right \(\mathcal{A}\)-linear operator from \(\mathcal{W}\) to \(\mathcal{C}\) defined as \[X\sum_{i\in F}w_{i}c_{i}=(\ldots,c_{-1},c_{0},c_{1},\ldots)\] for a finite set \(F\subseteq\mathbb{Z}\). In addition, let \(M\) be a right \(\mathcal{A}\)-linear operator from \(\mathcal{C}\) to \(\mathcal{C}^{\prime}\) defined as \[M(\ldots,c_{-1},c_{0},c_{1},\ldots)=(\ldots,M_{-1}c_{-1},M_{0}c_{0},M_{1}c_{1},\ldots)\] for \((\ldots,c_{-1},c_{0},c_{1},\ldots)\in\mathcal{C}_{0}\), which is formally denoted by \(\operatorname{diag}\{M_{i}\}_{i\in\mathbb{Z}}\). Here, \(M_{i}\) is the multiplication operator defined in Proposition 3. 
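For a finite truncation, the factorization through \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) amounts to the identity \(K_{T}\big(\sum_{i}w_{i}c_{i}\big)(y)=\sum_{i}w_{i}(y)U_{g(h^{i}(y),\cdot)}c_{i}\), which the following sketch checks numerically. The toy system, the index set, and the random coefficients \(c_{i}\in\mathcal{A}\) are illustrative choices; only finitely many indices are used, as in \(\mathcal{W}_{0}\).

```python
import numpy as np

# Same illustrative finite system as in the previous sketches.
NY, NZ = 5, 4
h = lambda y: (y + 1) % NY
g = lambda y, z: (z + y) % NZ

def Ug(y):
    A = np.zeros((NZ, NZ))
    for z in range(NZ):
        A[z, g(y, z)] = 1.0
    return A

def h_pow(y, i):
    for _ in range(i):
        y = h(y)
    return y

def w(i):
    out = []
    for y in range(NY):
        A = np.eye(NZ)
        for k in range(i):
            A = A @ Ug(h_pow(y, k))
        out.append(A)
    return out

# An element of W_0: sum_i w_i c_i with finitely many coefficients c_i in A.
rng = np.random.default_rng(0)
idx = [0, 2, 3]
c = {i: rng.standard_normal((NZ, NZ)) for i in idx}
wsum = [sum(w(i)[y] @ c[i] for i in idx) for y in range(NY)]

# K_T applied directly ...
lhs = [Ug(y) @ wsum[h(y)] for y in range(NY)]
# ... agrees with W M X: apply the diagonal of multiplication operators M_i to
# the coefficients and reassemble, i.e. sum_i w_i(y) U_{g(h^i(y),.)} c_i.
rhs = [sum(w(i)[y] @ Ug(h_pow(y, i)) @ c[i] for i in idx) for y in range(NY)]
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
```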
**Proposition 6**.: _The operators \(W\) and \(X\) are unitary operators. Therefore, \(XW\) is an unitary operator from \(\mathcal{C}^{\prime}\) to \(\mathcal{C}\)._ Proof.: Let \(\tilde{W}:\mathcal{W}\rightarrow\mathcal{C}^{\prime}\) be the right \(\mathcal{A}\)-linear operator defined as \(\tilde{W}(\sum_{i\in F}w_{i}c_{i})=(\ldots,C_{-1},C_{0},C_{1},\ldots)\), where \(C_{i}\) is the left \(\mathcal{A}\)-linear multiplication operator on \(\mathcal{W}\) with respect to the constant function \(c_{i}\). In addition, let \(\tilde{X}:\mathcal{C}\rightarrow\mathcal{W}\) be the right \(\mathcal{A}\)-linear operator defined as \(\tilde{X}(\ldots,c_{-1},c_{0},c_{1},\ldots)=\sum_{i\in F}w_{i}c_{i}\). Then, \(\tilde{W}\) and \(\tilde{X}\) are the inverses of \(W\) and \(X\), respectively. Moreover, for \((\ldots,A_{-1},A_{0},A_{1},\ldots),(\ldots,B_{-1},B_{0},B_{1},\ldots)\in\mathcal{C} _{0}^{\prime},\) we have \[\left\langle W(\ldots,A_{-1},A_{0},A_{1},\ldots),W(\ldots,B_{-1},B_{0},B_{1}, \ldots)\right\rangle_{\mathcal{M}}=\left\langle\sum_{i\in\mathbb{Z}}w_{i} \cdot A_{i},\sum_{i\in\mathbb{Z}}w_{i}\cdot B_{i}\right\rangle_{\mathcal{M}}\] \[=\left\langle(\ldots,A_{-1},A_{0},A_{1},\ldots),(\ldots,B_{-1},B_{0},B_{1}, \ldots)\right\rangle_{\mathcal{C}^{\prime}}\] and for \(c_{i},d_{i}\in\mathcal{A},\) finite subsets \(F\) and \(G\) of \(\mathbb{Z},\) we have \[\left\langle X\sum_{i\in F}w_{i}c_{i},X\sum_{i\in G}w_{i}d_{i} \right\rangle_{\mathcal{C}} =\left\langle(\ldots,c_{-1},c_{0},c_{1},\ldots),(\ldots,d_{-1},d _{0},d_{1},\ldots)\right\rangle_{\mathcal{C}}\] \[=\left\langle\sum_{i\in F}w_{i}c_{i},\sum_{i\in G}w_{i}d_{i} \right\rangle_{\mathcal{M}}.\] **Proposition 7**.: _The operator \(M\) is well-defined and \(K_{T}|_{\mathcal{W}}=WMX\)._ Proof.: Since \(K_{T}\) is unitary, we have \[\left\langle w_{i+1},w_{j+1}\right\rangle=\left\langle K_{T}w_{i},K_{T}w_{j} \right\rangle=\left\langle w_{i},w_{j}\right\rangle \tag{5}\] for \(i,j\in\mathbb{Z}.\) Assume \((\ldots,c_{-1},c_{0},c_{1},\ldots)=0.\) Then, by the equation (5), we obtain \[\left\langle M(\ldots,c_{-1},c_{0},c_{1},\ldots),M(\ldots,c_{-1}, c_{0},c_{1},\ldots)\right\rangle_{\mathcal{C}^{\prime}}\] \[\qquad=\left\langle(\ldots,M_{-1}c_{-1},M_{0}c_{0},M_{1}c_{1}, \ldots),(\ldots,M_{-1}c_{-1},M_{0}c_{0},M_{1}c_{1},\ldots)\right\rangle_{ \mathcal{C}^{\prime}}\] \[\qquad=\sum_{i,j\in F}\left\langle w_{i}\cdot M_{i}c_{i},w_{j} \cdot M_{j}c_{j}\right\rangle_{\mathcal{M}}=\sum_{i,j\in F}\left\langle w_{i+ 1}c_{i},w_{j+1}c_{j}\right\rangle_{\mathcal{M}}\] \[\qquad=\sum_{i,j\in F}\left\langle w_{i}c_{i},w_{j}c_{j}\right\rangle _{\mathcal{M}}=0,\] which shows the well-definiteness of \(M.\) The decomposition \(K_{T}|_{\mathcal{W}}=WMX\) is derived by Proposition 3. In summary, we obtain the following commutative diagram: #### 2.3.2 Further decomposition We further decompose \(w_{i}\) and \(M_{i}\) and obtain a more detailed decomposition of \(K_{T}|_{\mathcal{W}}.\) Let \(\mathcal{V}_{1},\mathcal{V}_{2},\ldots\) be a sequence of maps from \(\mathcal{Y}\) to the set of all closed subspaces of \(L^{2}(\mathcal{Z})\) satisfying \(L^{2}(\mathcal{Z})=\overline{\operatorname{Span}\{\bigcup_{j=1}^{\infty} \mathcal{V}_{j}(y)\}}\) for a.s. \(y\in\mathcal{Y}.\) Let \(p_{j}(y)\in\mathcal{A}\) be the projection onto \(\mathcal{V}_{j}(y)\), i.e., it satisfies \(p_{j}(y)^{2}=p_{j}(y)\) and \(p_{j}(y)^{*}=p_{j}(y)\). 
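A family of this kind, which moreover satisfies the equivariance assumption imposed in Theorem 8 below, can be written down explicitly when every fiber map is a rotation. The following sketch (the cyclic fiber \(\mathbb{Z}_{N}\) and the shifts \(\tilde{g}(y)\) are illustrative choices, anticipating the group examples of Section 3) takes \(\mathcal{V}_{k}(y)\) to be the span of the \(k\)-th Fourier mode, independent of \(y\), and checks that the corresponding projections resolve the identity and that each \(U_{g(y,\cdot)}\) acts on \(\mathcal{V}_{k}\) by a unimodular scalar.

```python
import numpy as np

# Illustrative fiber rotations on Z = Z_6: g(y, z) = z + gtilde(y) mod N.
N, NY = 6, 5
h = lambda y: (y + 1) % NY
gtilde = [0, 2, 1, 4, 3]                   # illustrative y-dependent shifts

def Ug(y):
    A = np.zeros((N, N))
    for z in range(N):
        A[z, (z + gtilde[y]) % N] = 1.0
    return A

e = lambda k: np.exp(2j * np.pi * k * np.arange(N) / N) / np.sqrt(N)
p = lambda k: np.outer(e(k), e(k).conj())  # projection onto V_k = span{e_k}

# The projections resolve the identity, and every fiber Koopman operator acts
# on V_k by a unimodular scalar, so U_{g(y,.)} V_k(h(y)) ⊆ V_k(y) holds
# trivially (this is the equivariance required in Theorem 8 below).
assert np.allclose(sum(p(k) for k in range(N)), np.eye(N))
for y in range(NY):
    for k in range(N):
        lam = np.exp(2j * np.pi * k * gtilde[y] / N)
        assert np.allclose(Ug(y) @ e(k), lam * e(k))
        assert np.allclose(p(k) @ Ug(y) @ p(k), Ug(y) @ p(k))
```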
For \(i\in\mathbb{Z}\) and \(j=1,2,\ldots\), we define a linear map \(\hat{w}_{i,j}\) from \(L^{2}(\mathcal{Z})\) to \(L^{2}(\mathcal{X})\) as \((\hat{w}_{i,j}u)(y,z)=(w_{i}(y)p_{j}(h^{i}(y))u)(z)\). We decompose \(K_{T}|_{\mathcal{W}}\) using \(\hat{w}_{i,j}\). For each \(j=1,2,\ldots\), the following theorem holds: **Theorem 8** (Eigenoperator decomposition for discrete-time systems).: _Assume \(\mathcal{V}_{j}\) satisfies \(U_{g(y,\cdot)}\mathcal{V}_{j}(h(y))\subseteq\mathcal{V}_{j}(y)\) for a.s. \(y\in\mathcal{Y}\). Assume in addition, for any \(u\in L^{2}(\mathcal{Z})\) and \(i\in\mathbb{Z}\), the map \((y,z)\mapsto(p_{j}(h^{i}(y))u)(z)\) is measurable. Then, \(\hat{w}_{i,j}\) is contained in \(\mathcal{M}\) and we have \(K_{T}\hat{w}_{i,j}=\hat{w}_{i+1,j}=\hat{w}_{i,j}\cdot\hat{M}_{i,j}\). Here, \(\hat{M}_{i,j}\) is a left \(\mathcal{A}\)-linear multiplication operator on \(\mathcal{M}\) defined as \((w\cdot\hat{M}_{i,j})(y)=w(y)U_{g(h^{i}(y),\cdot)}p_{j}(h^{i}(y))\)._ Proof.: We have \[K_{T}\hat{w}_{i,j}(y)=U_{g(y,\cdot)}\hat{w}_{i,j}(h(y))=w_{i+1}(y)p_{j}(h^{i+1} (y))=\hat{w}_{i+1,j}(y).\] In addition, since by the assumption, the range of \(U_{g(h^{i}(y),\cdot)}p_{j}(h^{i+1}(y))\) is contained in \(\mathcal{V}(h^{i}(y))\), we have \[K_{T}\hat{w}_{i,j}(y) =w_{i}(y)U_{g(h^{i}(y),\cdot)}p_{j}(h^{i+1}(y))\] \[=w_{i}(y)p_{j}(h^{i}(y))U_{g(h^{i}(y),\cdot)}p_{j}(h^{i+1}(y))= \hat{w}_{i,j}(y)\cdot\hat{M}_{i,j}.\] **Corollary 9**.: _By replacing \(\{w_{i}\}_{i\in\mathbb{Z}}\) and \(\{M_{i}\}_{i\in\mathbb{Z}}\) by \(\{\hat{w}_{i,j}\}_{i\in\mathbb{Z},j\in\mathbb{N}}\) and \(\{\hat{M}_{i,j}\}_{i\in\mathbb{Z},j\in\mathbb{N}}\), respectively, we define \(\hat{\mathcal{W}}\), \(\hat{\mathcal{C}}\), \(\hat{\mathcal{C}}^{\prime}\), \(\hat{W}\), \(\hat{X}\), and \(\hat{M}\) in the same manner as \(\mathcal{W}\), \(\mathcal{C}\), \(\mathcal{C}^{\prime}\), \(W\), \(X\), and \(M\), respectively. Then, under the assumptions of Theorem 8, we obtain \(K_{T}|_{\mathcal{W}}=\hat{W}\hat{M}\hat{X}\)._ We call \(\hat{M}_{i,j}\) an eigenoperator and \(\hat{w}_{i,j}\) an eigenvector. In addition, we call the subspace \(\mathcal{V}_{j}(y)\) satisfying the assumption in Theorem 8 generalized Oseledets space. If \(\mathcal{V}_{j}(y)\) is a finite-dimensional space, then we can explicitly calculate the spectrum of \(\hat{M}_{i,j}\) as follows. **Proposition 10**.: _Assume \(\dim(\mathcal{V}_{j}(y))\) is finite and constant with respect to \(y\in\mathcal{Y}\). Then, \(\sigma(\hat{M}_{i,j})=\{\lambda\in\mathbb{C}\,\mid\,^{\forall}\epsilon>0,\, \,\mu(\{y\in\mathcal{Y}\,\mid\,\lambda\in\sigma_{\epsilon}(f_{i,j}(y))\})>0\}\). Here, \(f_{i,j}(y)=U_{g(h^{i}(y),\cdot)}p_{j}(h^{i}(y))\). In addition, \(\sigma(a)\) and \(\sigma_{\epsilon}(a)\) for \(a\in\mathcal{A}\) are the spectrum and the essential spectrum of \(a\), respectively._ Proof.: For \(\lambda\in\mathbb{C}\), \(\lambda I-\hat{M}_{i,j}\) is not invertible if and only if there exists \(w\in\mathcal{M}\) with \(w\neq 0\) such that \(w(y)(\lambda I-f_{i,j}(y))=0\) for a.s. \(y\in\mathcal{Y}\), which is equivalent to \[\mu(\{y\in\mathcal{Y}\,\mid\,\lambda I-f_{i,j}(y)\text{ is not invertible}\})>0.\] Assume \(\lambda I-f_{i,j}(y)\) is invertible for a.s. \(y\in\mathcal{Y}\). Then, we have \((\lambda I-\hat{M}_{i,j})^{-1}w(y)=w(y)(\lambda I-f_{i,j}(y))^{-1}\). 
For \(w\in\mathcal{M}\), we have \[\|(\lambda I-\hat{M}_{i,j})^{-1}w\|_{\mathcal{M}}^{2}=\bigg{\|}\int_{\mathcal{ Y}}(\lambda I-f_{i,j}(y))^{-*}w(y)^{*}w(y)(\lambda I-f_{i,j}(y))^{-1}\mathrm{d}\mu(y) \bigg{\|}_{\mathcal{A}}\] \[\leq\operatorname{tr}\bigg{(}\int_{\mathcal{Y}}(\lambda I-f_{i,j}(y))^ {-*}w(y)^{*}w(y)(\lambda I-f_{i,j}(y))^{-1}\mathrm{d}\mu(y)\bigg{)}\] \[=\operatorname{tr}\bigg{(}\int_{\mathcal{Y}}w(y)(\lambda I-f_{i,j }(y))^{-*}(\lambda I-f_{i,j}(y))^{-1}w(y)^{*}\mathrm{d}\mu(y)\bigg{)}.\] Thus, we have \[\frac{1}{d_{j}}\bigg{\|}\int_{\mathcal{Y}}w(y)(\lambda I-f_{i,j}( y))^{-*}(\lambda I-f_{i,j}(y))^{-1}w(y)^{*}\mathrm{d}\mu(y)\bigg{\|}_{\mathcal{A}} \leq\|(\lambda I-\hat{M}_{i,j})^{-1}w\|_{\mathcal{M}}^{2}\] \[\qquad\leq d_{j}\bigg{\|}\int_{\mathcal{Y}}w(y)(\lambda I-f_{i,j }(y))^{-*}(\lambda I-f_{i,j}(y))^{-1}w(y)^{*}\mathrm{d}\mu(y)\bigg{\|}_{ \mathcal{A}},\] where \(d_{j}=\dim(\mathcal{V}_{j}(y))\). Assume for any \(\epsilon>0\), \(\mu(\{y\in\mathcal{Y}\,\mid\,\|(\lambda I-f_{i,j}(y))^{-1}\|_{\mathcal{A}}>1 /\epsilon\})>0\). We set \(a_{i,j}(y)=v_{i,j}(y)v_{i,j}(y)^{*}\), where \(v_{i,j}(y)\) is the orthonormal eigenvector corresponding to the largest eigenvalue of \((\lambda I-f_{i,j}(y))^{-*}\lambda I-f_{i,j}(y))^{-1}\). Then, we have \[\|(\lambda I-\hat{M}_{i,j})^{-1}w\|_{\mathcal{M}}^{2}\geq\frac{1 }{d_{j}}\bigg{\|}\bigg{[}\int_{\mathcal{Y}}\|(\lambda I-f_{i,j}(y))^{-1}\|_{ \mathcal{A}}^{2}w(y)a_{i,j}(y)w(y)^{*}\mathrm{d}\mu(y)\bigg{\|}_{\mathcal{A}}\] \[\qquad\geq\mu(\{y\in\mathcal{Y}\,\mid\,\|(\lambda I-f_{i,j}(y))^{- 1}\|_{\mathcal{A}}>1/\epsilon\})\frac{1}{\epsilon^{2}d_{j}}\bigg{\|}\bigg{[} \int_{\mathcal{Y}}w(y)a(y)w(y)^{*}\mathrm{d}\mu(y)\bigg{\|}_{\mathcal{A}}.\] Setting \(w(y)=a_{i,j}(y)\), we derive that \((\lambda I-\hat{M}_{i,j})^{-1}\) is unbounded. Conversely, assume \((\lambda I-\hat{M}_{i,j})^{-1}\) is unbounded. Then, we obtain \[\|(\lambda I-\hat{M}_{i,j})^{-1}w\|_{\mathcal{M}}^{2}\leq d_{j} \operatorname*{ess\,sup}_{y\in\mathcal{Y}}\|(\lambda I-f_{i,j}(y))^{-1}\|^{2} \bigg{\|}\int_{\mathcal{Y}}w(y)^{*}w(y)\mathrm{d}\mu(y)\bigg{\|}_{\mathcal{A}}\] \[\qquad\leq d_{j}\operatorname*{ess\,sup}_{y\in\mathcal{Y}}\|( \lambda I-f_{i,j}(y))^{-1}\|^{2}\|w\|_{\mathcal{M}}^{2}.\] Thus, for any \(\epsilon>0\), \(\mu(\{y\in\mathcal{Y}\,\mid\,\|(\lambda I-f_{i,j}(y))^{-1}\|_{\mathcal{A}}>1 /\epsilon\})>0\), which completes the proof. #### 2.3.3 Construction of the generalized Oseledets space \(\mathcal{V}_{j}(y)\) For cocycles generated by matrices, the existence of the Oseledets space is guaranteed by the multiplicative ergodic theorem. This theorem has been generalized for cocycles generated by compact operators or operators that have similar properties to the compactness [31, 32]. In our case, since the cocycle is generated by a unitary operator, we can construct \(\mathcal{V}_{j}\) explicitly if \(h\) is periodic. **Proposition 11**.: _Assume \(h^{n}(y)=y\) for any \(y\in\mathcal{Y}\). Let \(U:\mathcal{Y}\to\mathcal{B}(\oplus_{k=1}^{n}L^{2}(\mathcal{Z}))\) be defined as_ \[U(y)=(U_{g(h(y),\cdot)}\oplus U_{g(h^{2}(y),\cdot)}\oplus\cdots\oplus U_{g(h^{n -1}(y),\cdot)}\oplus U_{g(y,\cdot)})S,\] _where \(S\) is the permutation operator defined as \(S\oplus_{k=1}^{n}u_{k}=\oplus_{k=1}^{n-1}u_{k+1}\oplus u_{1}\). Let \(E(y)\) be the spectral measure with respect to \(U(y)\) and \(\mathcal{T}\subset[0,2\pi)\) be a subset of \([0,2\pi)\). 
Let \(\tilde{\mathcal{V}}_{k}(y)\) be the range of \(P_{k}E(y)(T)\) and let \(\mathcal{V}(y)=\tilde{\mathcal{V}}_{n}(y)\), where \(P_{k}:\oplus_{l=1}^{n}L^{2}(\mathcal{Z})\to L^{2}(\mathcal{Z})\) is the projection defined as \(\oplus_{l=1}^{n}u_{l}\mapsto u_{k}\). Then, we have \(U_{g(y,.)}\mathcal{V}(h(y))\subseteq\mathcal{V}(y)\)._ Proof.: We first show \(\oplus_{k=1}^{n}\tilde{\mathcal{V}}_{k}(y)\) is an invariant subspace of \(U(y)\). Let \(u\in E(y)(\mathcal{T})\). Then, we have \[U(y)\tilde{P}_{k}u=U_{g(h^{k-1}(y),.)}u_{k}=\tilde{P}_{k}U(y)u\] for \(k=1,\ldots,n\). Here, \(\tilde{P}_{k}:\oplus_{l=1}^{n}L^{2}(\mathcal{Z})\to\oplus_{l=1}^{n}L^{2}( \mathcal{Z})\) is the linear operator defined as \(\oplus_{l=1}^{n}u_{l}\mapsto\oplus_{l=1}^{n}\tilde{u}_{l}\), where \(\tilde{u}_{l}=0\) for \(l\neq k\) and \(\tilde{u}_{l}=u_{k}\) for \(l=k\). Since \(E(y)(\mathcal{T})\) is an invariant subspace of \(U(y)\), we have \(U(y)u\in E(y)(\mathcal{T})\). Thus, we have \(U(y)\oplus_{k=1}^{n}\tilde{\mathcal{V}}_{k}(y)\subseteq\oplus_{k=1}^{n} \tilde{\mathcal{V}}_{k}(y)\). Therefore, we have \(U_{g(y,.)}\tilde{\mathcal{V}}_{n}(h(y))\subseteq\tilde{\mathcal{V}}_{n-1}(h(y))\). In addition, for \(\oplus_{k=1}^{n}u_{k}\in\oplus_{k=1}^{n}L^{2}(\mathcal{Z})\), we have \[S^{-1}U(h(y))S\oplus_{k=1}^{n}u_{k} =S^{-1}U(h(y))\oplus_{k=1}^{n}u_{k+1}=S^{-1}\oplus_{k=1}^{n}U_{g( h^{k+1}(y),.)}(y)u_{k+2}\] \[=\oplus_{k=1}^{n}U_{g(h^{k}(y),.)}(y)u_{k+1}=U(y)\oplus_{k=1}^{n} u_{k},\] where \(u_{k+n}=u_{k}\) for \(k=1,2\). Thus, we have \(S^{-1}U(h(y))S=U(y)\), and the spectral measure \(E(h(y))\) of \(U(h(y))\) is represented as \(E(h(y))=SE(y)S^{-1}\). Therefore, for \(k=1,2,\ldots\), we obtain \[P_{k}E(h(y))(\mathcal{T})=P_{k}SE(y)(\mathcal{T})S^{-1}=P_{k+1}E(y)(\mathcal{T })S^{-1},\] where \(P_{k+1}=P_{1}\), which implies \(\tilde{\mathcal{V}}_{k}(h(y))=\tilde{\mathcal{V}}_{k+1}(y)\). Combining this identity with the inclusion \(U_{g(y,.)}\mathcal{V}_{n}(h(y))\subseteq\mathcal{V}_{n-1}(h(y))\), we have \(U_{g(y,.)}\tilde{\mathcal{V}}_{n}(h(y))\subseteq\tilde{\mathcal{V}}_{n}(y)\). **Corollary 12**.: _Let \(\mathcal{T}_{1},\mathcal{T}_{2},\ldots\subset[0,2\pi)\) be a sequence of countable disjoint subsets of \([0,2\pi)\) such that \([0,2\pi)=\bigcup_{j=1}^{\infty}\mathcal{T}_{j}\). If we set \(\mathcal{V}_{j}(y)\) as \(\mathcal{V}(y)\) in Proposition 11 by replacing \(\mathcal{T}\) with \(\mathcal{T}_{j}\), it satisfies the assumption in Theorem 8._ #### 2.3.4 Connection with Koopman operator on Hilbert space Assume \(p_{j}(y)=\hat{p}_{j}\) for any \(y\in\mathcal{Y}\), where \(\hat{p}_{j}\in\mathcal{A}\) is a projection. By Proposition 8, we obtain the following commutative diagram: where \(\hat{\mathcal{A}}_{j}=\{\hat{p}_{j}a\hat{p}_{j}\ |\ a\in\mathcal{A}\}\) is a \(C^{*}\)-subalgebra of \(\mathcal{A}\) and \(P_{j}:\mathcal{M}\to L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\) is defined as \(w\cdot P_{j}=w\hat{p}_{j}\). If \(\mathcal{V}_{j}\) is a finite dimensional space, then \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\) is isomorphic to a Hilbert space and the action of \(K_{T}\) on \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\) is reduced to that of \(U_{T}\). **Proposition 13**.: _Assume \(\mathcal{V}_{j}\) is an \(n\)-dimensional space. 
Let \(\{\gamma_{1},\ldots,\gamma_{n}\}\) be an orthonormal basis of \(\mathcal{V}_{j}\) and let \(\lambda_{j}:L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\rightarrow\oplus_{i=1 }^{n}L^{2}(\mathcal{X})\) be a linear operator defined as \(\lambda_{j}(v\otimes a)=(v\otimes a\gamma_{1},\ldots,v\otimes a\gamma_{n})\). Then, \(\lambda_{j}\) is an isomorphism and we have the following commutative diagram:_ Proof.: Let \(\tilde{\lambda}_{j}:\oplus_{i=1}^{n}L^{2}(\mathcal{X})\to L^{2}( \mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\) be a linear operator defined as \(\tilde{\lambda}_{j}(v_{1}\otimes u_{1},\ldots,v_{n}\otimes u_{n})=\sum_{i=1}^ {n}v_{i}\otimes u_{i}\gamma_{i}^{\prime}\). Then, \(\tilde{\lambda}_{j}\) is the inverse of \(\lambda_{j}\). In addition, we have \[\langle\lambda_{j}(v\otimes a),\lambda_{j}(v\otimes a)\rangle_{ \oplus_{i=1}^{n}L^{2}(\mathcal{X})}=\sum_{i=1}^{n}\left\langle v\otimes a \gamma_{i},v\otimes a\gamma_{i}\right\rangle_{L^{2}(\mathcal{X})}\] \[\qquad=\langle v,v\rangle_{L^{2}(\mathcal{Y})}\sum_{i=1}^{n} \left\langle\gamma_{i},a^{*}a\gamma_{i}\right\rangle_{L^{2}(\mathcal{Z})}\leq n \|\left\langle v,v\right\rangle_{L^{2}(\mathcal{Y})}a^{*}a\|_{\mathcal{A}}=n \|v\otimes a\|_{\mathcal{M}}^{2}.\] Thus, \(\lambda_{j}\) is an isomorphism. The commutativity of the diagram is derived by Proposition 2. ## 3 Examples ### The case of \(\mathcal{Z}\) is a compact Hausdorff group Let \(\mathcal{Z}\) be a compact Hausdorff group equipped with the (normalized) Haar measure \(\nu\). Let \(\hat{\mathcal{Z}}\) be the set of equivalent classes of irreducible unitary representations. For an irreducible representation \(\rho\), let \(\mathcal{E}_{\rho}\) be the representation space of \(\rho\) and let \(n_{\rho}\) be the dimension of \(\mathcal{E}_{\rho}\). Note that since \(\mathcal{Z}\) is a compact group, \(n_{\rho}\) is finite. Let \(\{e_{\rho,1},\ldots,e_{\rho,n_{\rho}}\}\) be an orthonormal basis of \(\mathcal{E}_{\rho}\) and let \(\gamma_{\rho,i,j}:\mathcal{Z}\rightarrow\mathbb{C}\) be the matrix coefficient defined as \(\gamma_{\rho,i,j}(z)=\langle e_{\rho,i},\rho(z)e_{\rho,j}\rangle\). By the Peter-Weyl theorem, \(\bigcup_{[\rho]\in\hat{\mathcal{Z}}}\{\gamma_{\rho,i,j}\mid\ i,j=1,\ldots,n_{ \rho}\}\) is an orthonormal basis of \(L^{2}(\mathcal{Z})\), where \([\rho]\) is the equivalent class of an irreducible representation \(\rho\). We set the map \(g:\mathcal{Y}\times\mathcal{Z}\rightarrow\mathcal{Z}\) as \(g(y,z)=z\tilde{g}(y)\), where \(\tilde{g}:\mathcal{Y}\rightarrow\mathcal{Z}\) is a measurable map. Let \(\Gamma_{\rho,i}:\mathcal{E}_{\rho}\to L^{2}(\mathcal{Z})\) be the linear operator defined as \(e_{\rho,j}\mapsto\gamma_{\rho,i,j}\) for \(i,j=1,\ldots,n_{\rho}\). Note that the adjoint \(\Gamma_{\rho,i}^{*}:L^{2}(\mathcal{Z})\rightarrow\mathcal{E}_{\rho}\) is written as \(u\mapsto\sum_{j=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle e_{ \rho,j}\). 
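Before carrying out the general computation, the invariance that drives this example can be checked directly for a small non-abelian group. In the sketch below (illustrative only), the compact group is the dihedral group \(D_{3}\cong S_{3}\) realized as \(2\times 2\) orthogonal matrices, \(\rho(z)=z\) is then a two-dimensional unitary irreducible representation, and the span of the matrix coefficients \(\gamma_{\rho,i,\cdot}\) with fixed row index \(i\) is verified to be invariant under every right translation.

```python
import numpy as np

# The dihedral group D_3 (= S_3) realized as 2x2 orthogonal matrices; the
# identity map rho(z) = z is a two-dimensional unitary irreducible representation.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])            # rotation by 120 degrees
F = np.array([[1.0, 0.0], [0.0, -1.0]])    # reflection
G = [np.linalg.matrix_power(R, k) @ np.linalg.matrix_power(F, f)
     for k in range(3) for f in range(2)]

def index(M):
    """Locate a group element (needed to realize right translation on L^2(Z))."""
    return next(i for i, A in enumerate(G) if np.allclose(A, M))

# Matrix coefficients gamma_{i,j}(z) = <e_i, rho(z) e_j> as vectors in L^2(Z).
gamma = lambda i, j: np.array([A[i, j] for A in G])

def U_right(a):
    """Koopman operator of right translation: (U_a u)(z) = u(z a)."""
    P = np.zeros((len(G), len(G)))
    for z, Z in enumerate(G):
        P[z, index(Z @ a)] = 1.0
    return P

# U_a gamma_{i,j} = sum_k rho(a)_{k j} gamma_{i,k}, so for each fixed i the span
# of {gamma_{i,1}, gamma_{i,2}} is invariant under every right translation.
for a in G:
    for i in range(2):
        for j in range(2):
            lhs = U_right(a) @ gamma(i, j)
            rhs = sum(a[k, j] * gamma(i, k) for k in range(2))
            assert np.allclose(lhs, rhs)
```

The computation below establishes the same invariance for a general compact Hausdorff group.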
Then, regarding the Koopman operator \(U_{g(y,\cdot)}\) on \(L^{2}(\mathcal{Z})\), we have \[U_{g(y,\cdot)}\Gamma_{\rho,i}\Gamma_{\rho,i}^{*}u(z)=U_{g(y, \cdot)}\sum_{j=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle\gamma _{\rho,i,j}(z)=\sum_{j=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle \gamma_{\rho,i,j}(\tilde{g}(y)z)\\ =\sum_{j=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle \left\langle e_{\rho,i},\rho(z\tilde{g}(y))e_{\rho,j}\right\rangle=\sum_{j=1}^ {n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle\left\langle\rho(z)^{* }e_{\rho,i},\rho(\tilde{g}(y))e_{\rho,j}\right\rangle\] \[=\sum_{j=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle \left\langle\rho(z)^{*}e_{\rho,i},\sum_{k=1}^{n_{\rho}}\left\langle e_{\rho,k}, \rho(\tilde{g}(y))e_{\rho,j}\right\rangle e_{\rho,k}\right\rangle\] \[=\sum_{j,k=1}^{n_{\rho}}\left\langle\gamma_{\rho,i,j},u\right\rangle \left\langle\rho(\tilde{g}(y))^{*}e_{\rho,k},e_{\rho,j}\right\rangle\gamma_{ \rho,i,k}(z)\] \[=\sum_{k=1}^{n_{\rho}}\left\langle\rho(\tilde{g}(y))^{*}e_{\rho,k },\Gamma_{\rho,i}^{*}u\right\rangle\gamma_{\rho,i,k}(z)=\bigg{(}\Gamma_{\rho,i }\sum_{k=1}^{n_{\rho}}\left\langle\rho(\tilde{g}(y))^{*}e_{\rho,k},\Gamma_{ \rho,i}^{*}u\right\rangle e_{\rho,k}\bigg{)}(z)\] \[=\Gamma_{\rho,i}\rho(\tilde{g}(y))\Gamma_{\rho,i}^{*}u(z)\] for \(u\in L^{2}(\mathcal{Z})\), \(z\in\mathcal{Z}\), and \(i=1,\ldots,n_{\rho}\). Thus, we have \(U_{g(y,\cdot)}=\sum_{[\rho]\in\hat{\mathcal{Z}}}\sum_{i=1}^{n_{\rho}}\Gamma_{ \rho,i}\rho(\tilde{g}(y))\Gamma_{\rho,i}^{*}\). Therefore, the range of \(\Gamma_{\rho,i}\) is an invariant subspace of \(U_{g(y,\cdot)}\) for any \(y\in\mathcal{Y}\). Thus, we set \(\mathcal{V}_{[\rho],j}\) as the constant map which takes its value the range of \(\Gamma_{\rho,j}\), and apply Proposition 8. In this case, the multiplication operator \(\hat{M}_{i,[\rho],j}\) is calculated as \((w\cdot\hat{M}_{i,[\rho],j})(y)=w(y)\Gamma_{\rho,j}\rho(\tilde{g}(h^{i}(y))) \Gamma_{\rho,j}^{*}\), and by Proposition 10, its spectrum is calculated as \[\sigma(\hat{M}_{i,[\rho],j})=\{\lambda\in\mathbb{C}\ \mid\ ^{\forall}\epsilon>0,\ \mu(\{y\in\mathcal{Y}\ \mid\ \lambda\in\sigma_{\epsilon}(\Gamma_{\rho,j}\rho(\tilde{g}(h^{i}(y)))\Gamma_{ \rho,j}^{*})\})>0\}\] Note that since \(\rho(\tilde{g}(h^{i}(y)))\) is a linear operator on a finite dimensional space, it has only point spectra. By Corollary 9, we obtain a discrete decomposition of \(K_{T}|_{\mathcal{W}}\) with the multiplication operators \(\hat{M}_{i,[\rho],j}\). Let \(\hat{p}_{[\rho],j}=\Gamma_{\rho,j}\Gamma_{\rho,j}^{*}\). Then, \(K_{T}\) maps \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{[\rho],j}\) to \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{[\rho],j}\), where \(\hat{\mathcal{A}}_{[\rho],j}=\{\hat{p}_{[\rho],j}a\hat{p}_{[\rho],j}\ \mid\ a\in\mathcal{A}\}\). Since \(\mathcal{V}_{[\rho],j}\) is a finite dimensional space, by Proposition 13, the action of \(K_{T}\) restricted to \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{[\rho],j}\) is reduced to that of \(\otimes_{i=1}^{n_{\rho}}U_{T}\) on \(\otimes_{i=1}^{n_{\rho}}(L^{2}(\mathcal{X}))\) as \(K_{T}=\lambda_{j,[\rho]}(\otimes_{i=1}^{n_{\rho}}U_{T})\lambda_{[\rho],j}^{-1}\), where \(\lambda_{[\rho],j}(v\otimes a)=(v\otimes a\gamma_{\rho,j,1},\ldots,v\otimes a \gamma_{\rho,j,n_{\rho}})\). ### The case of \(\mathcal{Z}=\mathbb{Z}\) Let \(\mathcal{Z}=\mathbb{Z}\) equipped with the counting measure. 
We set the map \(g:\mathcal{Y}\times\mathcal{Z}\rightarrow\mathcal{Z}\) as \(g(y,z)=z+\tilde{g}(y)\), where \(\tilde{g}:\mathcal{Y}\rightarrow\mathcal{Z}\) is a measurable map. For \(i\in\mathbb{Z}\), let \(e_{i}:\mathbb{T}\rightarrow\mathbb{C}\) be defined as \(e_{i}(\omega)=\mathrm{e}^{\sqrt{-1}i\omega}\), where \(\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}\). Note that \(\{e_{i}\ \mid\ i\in\mathbb{Z}\}\) is an orthonormal basis of \(L^{2}(\mathbb{T})\). In addition, for \(i\in\mathbb{Z}\), let \(\gamma_{i}:\mathbb{Z}\rightarrow\mathbb{C}\) be defined as \(\gamma_{i}(i)=1\), \(\gamma_{i}(z)=0\ (z\neq i)\). Note also that \(\{\gamma_{i}\ \mid\ i\in\mathbb{Z}\}\) is an orthonormal basis of \(L^{2}(\mathbb{Z})\). Let \(\Gamma:L^{2}(\mathbb{T})\to L^{2}(\mathbb{Z})\) be the linear operator defined as \(e_{i}\mapsto\gamma_{i}\) for any \(i\in\mathbb{Z}\) and let \(\phi_{y}(\omega)=\mathrm{e}^{\sqrt{-1}\tilde{g}(y)\omega}\). Then, we have \[\Gamma M_{\phi_{y}}\Gamma^{*}\gamma_{i}=\Gamma M_{\phi_{y}}e_{i}=\Gamma e_{i} \phi=\Gamma e_{i+\tilde{g}(y)}=\gamma_{i+\tilde{g}(y)}=U_{g(y,\cdot)}\gamma_{i},\] where \(M_{\phi_{y}}\) is the multiplication operator on \(L^{2}(\mathbb{T})\) defined as \(M_{\phi_{y}}u(\omega)=u(\omega)\phi_{y}(\omega)=u(\omega)\mathrm{e}^{\sqrt{-1} \tilde{g}(y)\omega}\). Thus, we have the spectral decomposition \(U_{g(y,\cdot)}=\int_{\omega\in\mathbb{T}}\mathrm{e}^{\sqrt{-1}\tilde{g}(y) \omega}\mathrm{d}E(\omega)\), where \(E\) is the spectral measure defined as \(E(\Omega)=\Gamma M_{\chi_{\Omega}}\Gamma^{*}\) for a Borel set \(\Omega\) and \(\chi_{\Omega}\) is the characteristic function of \(\Omega\). Let \(T_{1},T_{2},\ldots\) be a sequence of countable disjoint subsets of \(\mathbb{T}\) such that \(\mathbb{T}=\bigcup_{j=1}^{\infty}T_{j}\). Then, the range of \(E(T_{j})\) is an invariant subspace of \(U_{g(y,\cdot)}\) for any \(y\in\mathcal{Y}\). Thus, we set \(\mathcal{V}_{j}\) as the constant map which takes its value the range of \(E(T_{j})\), and apply Proposition 8. In this case, \(\hat{M}_{i,j}\) is calculated as \((w\cdot\hat{M}_{i,j})(y)=w(y)\Gamma M_{\phi_{h^{i}(y)}}M_{\chi_{T_{j}}}\Gamma^{*}\). Let \(\hat{p}_{j}=E(T_{j})\). Then, \(K_{T}\) maps \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\) to \(L^{2}(\mathcal{Y})\otimes\hat{\mathcal{A}}_{j}\). Since \(\mathcal{V}_{j}\) is an infinite dimensional space, we cannot reduce the action of \(K_{T}\) restricted to \(L^{2}(\mathcal{Y})\otimes\hat{A}_{j}\) to that of \(U_{T}\) on a Hilbert space. However, by Corollary 9, we obtain a discrete decomposition of \(K_{T}|_{\mathcal{W}}\) in the Hilbert \(C^{*}\)-module even in this case of the spectral decomposition of \(U_{g(y,\cdot)}\) is continuous. ## 4 Continuous-time systems ### Skew product system and Koopman operator on Hilbert space As in the Section 2, let \(\mathcal{Y}\) and \(\mathcal{Z}\) be separable measure spaces equipped with measures \(\mu\) and \(\nu\), respectively and let \(\mathcal{X}=\mathcal{Y}\times\mathcal{Z}\), the direct product measure space of \(\mathcal{Y}\) and \(\mathcal{Z}\). Let \(h:\mathbb{R}\times\mathcal{Y}\rightarrow\mathcal{Y}\) be a map such that for any \(t\in\mathbb{R}\), \(h(t,\cdot)\) is a measure preserving and invertible map on \(\mathcal{Y}\). 
Moreover, let \(g:\mathbb{R}\times\mathcal{X}\rightarrow\mathcal{Z}\) be a map such that for any \(t\in\mathbb{R}\), \(g(t,\cdot,\cdot)\) is a measurable map from \(\mathcal{X}\) to \(\mathcal{Z}\) and for any \(y\in\mathcal{Y}\), \(g(t,y,\cdot)\) is measure preserving and invertible on \(\mathcal{Z}\). Consider the following skew product flow on \(\mathcal{X}\): \[\Phi(t,y,z)=(h(t,y),g(t,y,z))\] that satisfies \(\Phi(0,y,z)=(y,z)\) and \(\Phi(s,\Phi(t,y,z))=\Phi(s+t,y,z)\) for any \(s,t\in\mathbb{R}\), \(y\in\mathcal{Y}\), and \(z\in\mathcal{Z}\). We denote \(\Phi(t,\cdot,\cdot)=\Phi_{t}\), \(h(t,\cdot)=h_{t}\), and \(g(t,\cdot,\cdot)=g_{t}\), respectively. For \(t\in\mathbb{R}\), we consider the Koopman operator \(U_{\Phi_{t}}\) on \(L^{2}(\mathcal{X})\). Instead of \(U_{T}\) for discrete systems, we consider a family of Koopman operators \(\{U_{\Phi_{t}}\}_{t\in\mathbb{R}}\) for continuous systems. ### Operator on Hilbert \(C^{*}\)-module related to the Koopman operator Analogous to the case of discrete systems, we extend the Koopman operator \(U_{\Phi_{t}}\) to an operator on the Hilbert \(C^{*}\)-module \(\mathcal{M}\). **Definition 6**.: For \(t\in\mathbb{R}\), we define a right \(\mathcal{A}\)-linear operator \(K_{\Phi_{t}}\) on \(\mathcal{M}\) by \[K_{\Phi_{t}}(v\otimes a)(y)=v(h_{t}(y))U_{g_{t}(y,\cdot)}a\] for \(v\in L^{2}(\mathcal{Y})\), \(a\in\mathcal{A}\), and \(y\in\mathcal{Y}\). Here, for \(x\in\mathcal{Y}\), \(U_{g_{t}(y,\cdot)}\) is the Koopman operator on \(L^{2}(\mathcal{Z})\) with respect to the map \(g_{t}(y,\cdot)\). _Remark 1_.: The operator family \(\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) satisfies \(K_{\Phi_{s}}K_{\Phi_{t}}=K_{\Phi_{s+t}}\) for any \(s,t\in\mathbb{R}\) and \(K_{\Phi_{0}}=I\). However, it is not strongly continuous even for a simple case. Let \(\mathcal{Z}=\mathbb{R}/2\pi\mathbb{Z}\) equipped with the normalized Haar measure on \(\mathcal{Z}\). Let \(g_{t}(y,z)=z+t\alpha\) for \(\alpha\neq 0\). For \(v\equiv 1\) and \(a=I\), we have \[\|K_{\Phi_{t}}v\otimes a-v\otimes a\|_{\mathcal{M}}^{2}=\left\|\int_{y\in \mathcal{Y}}(U_{g_{t}(y,\cdot)}-I)^{*}(U_{g_{t}(y,\cdot)}-I)\mathrm{d}\mu(y) \right\|_{\mathcal{A}}\] \[=\left\|\int_{y\in\mathcal{Y}}(2I-U_{g_{t}(y,\cdot)}-U_{g_{t}(y, \cdot)}^{*})\mathrm{d}\mu(y)\right\|_{\mathcal{A}}\] \[=\|2I-\tilde{U}^{*}M_{t}\tilde{U}-\tilde{U}^{*}M_{t}^{*}\tilde{U} \|_{\mathcal{A}}\] \[=\|2I-M_{t}-M_{t}^{*}\|_{\mathcal{A}}\] \[=\sup_{n\in\mathbb{Z}}|2-\mathrm{e}^{\sqrt{-1}nt\alpha}-\mathrm{e }^{-\sqrt{-1}nt\alpha}|,\] where \(\tilde{U}:L^{2}(\mathcal{Z})\to L^{2}(\mathbb{Z})\) the unitary operator defined as \(\gamma_{i}\mapsto e_{i}\), \(\gamma_{i}(z)=\mathrm{e}^{\sqrt{-1}tz}\), and \(e_{i}\) is the map on \(\mathbb{Z}\) defined as \(e_{i}(i)=1\) and \(e_{i}(n)=0\) for \(n\neq i\). Moreover, \(M_{t}:L^{2}(\mathbb{Z})\to L^{2}(\mathbb{Z})\) is the multiplication operator with respect to the map \(n\mapsto\mathrm{e}^{\sqrt{-1}\alpha tn}\). The third equality holds since \[\tilde{U}^{*}M_{t}\tilde{U}\gamma_{i}=\tilde{U}^{*}\mathrm{e}^{\sqrt{-1} \alpha ti}e_{i}=\mathrm{e}^{\sqrt{-1}\alpha ti}\gamma_{i}=U_{g_{t}(y,\cdot)} \gamma_{i}.\] Let \(\epsilon=|2-\mathrm{e}^{\sqrt{-1}\alpha}-\mathrm{e}^{-\sqrt{-1}\alpha}|\). For any \(\delta>0\), let \(n_{0}\in\mathbb{Z}\) such that \(n_{0}\geq 1/\delta\) and let \(t=1/n_{0}\). 
Then, we have \[\|K_{\Phi_{t}}v\otimes a-v\otimes a\|_{\mathcal{M}}^{2}\geq|2-\mathrm{e}^{ \sqrt{-1}n_{0}t\alpha}-\mathrm{e}^{-\sqrt{-1}n_{0}t\alpha}|=\epsilon.\] We adopt the generator defined using a weaker topology than the topology of the Hilbert \(C^{*}\)-module. **Definition 7** (Equicontinuous \(C_{0}\)-group [41]).: Let \(M\) be a sequentially complete locally convex space and for any \(t\in\mathbb{R}\), let \(\kappa_{t}:M\to M\) be a linear operator on \(M\) which satisfies 1. \(\kappa_{0}=I\), 2. \(\kappa_{s}\kappa_{t}=\kappa_{s+t}\) for any \(s,t\in\mathbb{R}\), 3. \(\lim_{t\to 0}\kappa_{t}w=w\) for any \(w\in M\), 4. For any continuous seminorm \(p\) on \(M\), there exists a continuous seminorm \(q\) such that \(p(\kappa_{t}w)\leq q(w)\) for any \(w\in M\) and \(t\in\mathbb{R}\). The family \(\{\kappa_{t}\}_{t\in\mathbb{R}}\) is called an equicontinuous \(C_{0}\)-group. **Proposition 14**.: _The space \(\mathcal{M}\subseteq\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\) equipped with the strong operator topology is a sequentially complete locally convex space. In addition, assume \(\mathcal{Y}\) and \(\mathcal{Z}\) are locally compact Hausdorff spaces, \(\mu\) and \(\nu\) are regular probability measures, and \(h\) and \(g\) are continuous. Then, \(\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) is an equicontinuous \(C_{0}\)-group._ To prove Proposition 14, we use the following lemma: **Lemma 15**.: _Let \(\Omega\) and \(\mathcal{X}\) be topological spaces. If a map \(\Psi:\Omega\times\mathcal{X}\to\mathbb{C}\) is continuous and compactly supported, then the map \(\Omega\ni t\mapsto\Psi(t,\cdot)\in C_{c}(\mathcal{X})\) is continuous. Here, \(C_{c}(\mathcal{X})\) is the space of compactly supported continuous functions on \(\mathcal{X}\)._ Proof.: The statement follows from Lemma 4.16 by Eisner et al. [42]. _Proof of Proposition 14._ (\(\mathcal{M}\) **is a sequentially complete locally convex space**) For \(p\in L^{2}(\mathcal{Z})\), let \(\|\cdot\|_{p}:\mathcal{M}\to\mathbb{R}_{+}\) be defined as \(\|w\|_{p}=\|wp\|_{L^{2}(\mathcal{X})}\) for \(w\in\mathcal{M}\subseteq\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\). Then, \(\|\cdot\|_{p}\) is a seminorm in \(\mathcal{M}\). Moreover, let \(\{w_{i}\}_{i\in\mathbb{N}}\) be a countable Cauchy sequence in \(\mathcal{M}\). Then, for any \(v\in L^{2}(\mathcal{Z})\), \(\{w_{i}v\}_{i\in\mathbb{N}}\) is a Cauchy sequence in the Hilbert space \(L^{2}(\mathcal{X})\). Thus, there exists \(\tilde{w}\in L^{2}(\mathcal{X})\) such that \(\lim_{i\to\infty}w_{i}v=\tilde{w}\). Let \(w:L^{2}(\mathcal{Z})\to L^{2}(\mathcal{X})\) be the map defined as \(w:v\mapsto\tilde{w}\). Then, \(w\) is linear and \[\|wv\|_{L^{2}(\mathcal{X})}=\|\lim_{i\to\infty}w_{i}v\|_{L^{2}(\mathcal{X})} \leq\sup_{i\in\mathbb{N}}\|w_{i}v\|_{L^{2}(\mathcal{X})}\leq\sup_{i\in\mathbb{ N}}\|w_{i}\|_{\mathcal{M}}\,\|v\|_{L^{2}(\mathcal{Z})}\] for \(v\in L^{2}(\mathcal{Z})\). By the uniform boundedness principle, \(\sup_{i\in\mathbb{N}}\|w_{i}\|<\infty\). Thus, \(w\in\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\). Since \(\mathcal{M}\subseteq\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\) is closed with respect to the strong operator topology, we obtain \(w\in\mathcal{M}\). Therefore, \(\{w_{i}\}_{i\in\mathbb{N}}\) converges to \(w\) in \(\mathcal{M}\). 
\((\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) **is an equicontinuous \(C_{0}\)-group**) For any \(v\in L^{2}(\mathcal{Y})\), \(a\in\mathcal{A}\), and \(u\in L^{2}(\mathcal{Z})\), we have \[\|(K_{\Phi_{t}}v\otimes a)u\|_{L^{2}(\mathcal{X})}^{2} =\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}|v(h_{t}(y))(au)(g_{t }(y,z))|^{2}\mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}|v(y)(au)(z)|^{2} \mathrm{d}\nu(z)\mathrm{d}\mu(y)=\|(v\otimes a)u\|_{L^{2}(\mathcal{X})},\] which shows that the condition 4 of Definition 7 is satisfied. Regarding the condition 3, let \(\epsilon>0\), let \(\{\gamma_{i}\}_{i=1}^{\infty}\) be an orthonormal basis of \(L^{2}(\mathcal{Z})\), and let \(\mathcal{D}=\{\sum_{i\in F}c_{i}\gamma_{i}\,\mid\,F\subset\mathbb{Z}:\,\, \text{finite},\,c_{i}\in\mathbb{C}\}\). Since \(C_{c}(\mathcal{Z})\), \(C_{c}(\mathcal{Y})\), and \(\mathcal{D}\) are dense in \(L^{2}(\mathcal{Z})\), \(L^{2}(\mathcal{Y})\), and \(L^{2}(\mathcal{Z})\), respectively, for any \(i\in\mathbb{N}\) and any \(v\in L^{2}(\mathcal{Y})\), \(a\in\mathcal{A}\), and \(u\in L^{2}(\mathcal{Z})\), there exist \(\tilde{v}\in C_{c}(\mathcal{Y})\), \(\tilde{\gamma}_{i}\in C_{c}(\mathcal{Z})\), and \(\tilde{u}\in\mathcal{D}\) such that \(\|\tilde{v}-v\|_{L^{2}(\mathcal{Y})}\leq\epsilon\), \(\|\tilde{\gamma}_{i}-a\gamma_{i}\|_{L^{2}(\mathcal{Z})}\leq\epsilon/(\sqrt{2 ^{i}})\), and \(\|\tilde{u}-u\|_{L^{2}(\mathcal{Z})}\leq\epsilon\). Let \(\tilde{a}=\sum_{i=1}^{\infty}\tilde{\gamma}_{i}\gamma_{i}^{\prime}\), where the limit is taken with respect to the strong operator topology. The operator \(\tilde{a}\) is bounded since we have \[\|\tilde{a}u\|_{L^{2}(\mathcal{Z})}\leq\|au\|_{L^{2}(\mathcal{Z})}+\|\tilde{a }u-au\|_{L^{2}(\mathcal{Z})}\leq\|a\|_{\mathcal{A}}\|u\|_{L^{2}(\mathcal{Z})}+ \bigg{\|}\sum_{i=1}^{\infty}(a\gamma_{i}\gamma_{i}^{\prime}u-\tilde{\gamma}_{i }\gamma_{i}^{\prime}u)\bigg{\|}_{L^{2}(\mathcal{Z})}\] Thus, we have \(\tilde{a}\in\mathcal{A}\). In addition, we have \[\|(K_{\Phi_{t}}\tilde{v}\otimes\tilde{a})\tilde{u}-(\tilde{v} \otimes\tilde{a})\tilde{u}\|_{L^{2}(\mathcal{X})}^{2}\] \[\qquad=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}|\tilde{v}(h_{t }(y))(U_{g_{t}(y,\cdot)}\tilde{a}\tilde{u})(z)-\tilde{v}(y)(\tilde{a}\tilde{u} )(z)|^{2}\mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[\qquad=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}|\tilde{v}(h_{t }(y))(\tilde{a}\tilde{u})(g_{t}(y,z))-\tilde{v}(y)(\tilde{a}\tilde{u})(z)|^{2} \mathrm{d}\nu(z)\mathrm{d}\mu(y).\] Let \(\Psi:\mathbb{R}\times\mathcal{Y}\times\mathcal{Z}\to\mathbb{C}\) be defined as \((t,y,z)\mapsto\tilde{v}(h_{t}(y))(\tilde{a}\tilde{u})(g_{t}(y,z))-\tilde{v}(y)( \tilde{a}\tilde{u})(z)\). Since \(\Psi\) is continuous, by Lemma 15, the map \(t\mapsto\Psi(t,\cdot,\cdot)\in C_{c}(\mathcal{X})\) is also continuous. Thus, we have \[\lim_{t\to 0}\|(K_{\Phi_{t}}\tilde{v}\otimes\tilde{a})\tilde{u}-( \tilde{v}\otimes\tilde{a})\tilde{u}\|_{L^{2}(\mathcal{X})} \leq\lim_{t\to 0}\|(K_{\Phi_{t}}\tilde{v}\otimes\tilde{a}) \tilde{u}-(\tilde{v}\otimes\tilde{a})\tilde{u}\|_{L^{\infty}(\mathcal{X})}\] \[=\lim_{t\to 0}\|\Psi_{t}\|_{\infty}=0,\] where \(\|\cdot\|_{\infty}\) is the sup norm in \(C_{c}(\mathcal{X})\). Therefore, \(\lim_{t\to 0}\|(K_{\Phi_{t}}v\otimes a)u-(v\otimes a)u\|_{L^{2}(\mathcal{X})}=0\). 
Indeed, we have \[\|(K_{\Phi_{t}}v\otimes a)u-(K_{\Phi_{t}}\tilde{v}\otimes\tilde{ a})\tilde{u}\|_{L^{2}(\mathcal{X})}=\|(v\otimes a)u-(\tilde{v}\otimes\tilde{a}) \tilde{u}\|_{L^{2}(\mathcal{X})}\] \[\quad\leq\|(\tilde{v}\otimes(a-\tilde{a}))\tilde{u}\|_{L^{2}( \mathcal{X})}+\|((v-\tilde{v})\otimes a)\tilde{u}\|_{L^{2}(\mathcal{X})}+\|(v \otimes a)(u-\tilde{u})\|_{L^{2}(\mathcal{X})}\] \[\quad\leq\|\tilde{v}\|_{L^{2}(\mathcal{Y})}\|(a-\tilde{a})\tilde{ u}\|_{L^{2}(\mathcal{Z})}+\|v-\tilde{v}\|_{L^{2}(\mathcal{Y})}\|a\tilde{u}\|_{L^{2}( \mathcal{Z})}+\|v\|_{L^{2}(\mathcal{Y})}\|a\|_{\mathcal{A}}\|u-\tilde{u}\|_{L^ {2}(\mathcal{Z})}\] \[\quad\leq(\|v\|_{L^{2}(\mathcal{Y})}+\epsilon)\|u\|_{L^{2}( \mathcal{Z})}\epsilon+\epsilon\|au\|_{L^{2}(\mathcal{Z})}+\|v\|_{L^{2}(\mathcal{ Y})}\|a\|_{\mathcal{A}}\epsilon.\] As a result, \(\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) satisfies the condition 3 of Definition 7. **Definition 8**.: The generator \(L_{\Phi}\) of \(\{K_{\Phi_{t}}\}_{t\in\mathbb{R}}\) is defined as \[L_{\Phi}w=\lim_{t\to 0}\frac{K_{\Phi_{t}}w-w}{t},\] where the limit is with respect to the strong operator topology in \(\mathcal{M}\). **Proposition 16** (Choe, 1985 [41]).: _The generator \(L_{\Phi}\) is a densely defined linear operator in \(\mathcal{M}\) with respect to the strong operator topology._ ### Decomposition of \(K_{\Phi_{t}}\) and \(L_{\Phi}\) We derive the eigenoperator decomposition for continuous systems. In the following, we assume \(\mathcal{Y}\) and \(\mathcal{Z}\) are differentiable manifolds, \(\mu\) and \(\nu\) are regular probability measures, and \(h\) and \(g\) are differentiable. #### 4.3.1 Fundamental decomposition We first define vectors to decompose the operator \(K_{\Phi_{t}}\) using the cocycle. **Definition 9**.: For \(s\in\mathbb{R}\), we define a linear operator \(w_{s}:L^{2}(\mathcal{Z})\to L^{2}(\mathcal{X})\) as \[(w_{s}u)(y,z)=(U_{g_{s}(y,\cdot)}u)(z).\] **Proposition 17**.: _For \(s\in\mathbb{R}\), we have \(w_{s}\in\mathcal{M}\). Moreover, \(K_{\Phi_{t}}w_{s}=w_{s+t}=w_{s}\cdot M_{s,t}\), where \(M_{s,t}\) is a left \(\mathcal{A}\)-linear multiplication operator on \(\mathcal{M}\) defined as \((w\cdot M_{s,t})(y)=w(y)U_{g_{t}(h_{s}(y),\cdot)}\)._ Proof.: We obtain \(w_{s}\in\mathcal{M}\) by Lemma 30. Moreover, we have \[K_{\Phi_{t}}w_{s}=U_{g_{t}(y,\cdot)}U_{g_{s}(h_{t}(y),\cdot)}=U_{g_{s}(h_{t}(y), g_{t}(y,\cdot))}=U_{g_{s+t}(y,\cdot)}=U_{g_{s}(y,\cdot)}U_{g_{t}(h_{s}(y), \cdot)}.\] **Proposition 18**.: _For \(s\in\mathbb{R}\) and \(u\in C^{1}_{c}(\mathcal{Z})\), let \(\tilde{w}_{s,u}(y,z)=\frac{\partial u\circ g}{\partial t}(s,y,z)\). Then, \((L_{\Phi}w_{s})u=\tilde{w}_{s,u}\) and \(L_{\Phi}w_{s}=w_{s}\cdot N_{s}\), where \((w\cdot N_{s})(y)=w(y)(M_{\frac{\partial g}{\partial t}(0,h_{s}(y),\cdot)} \frac{\partial}{\partial z})\). 
Here, \(C^{1}_{c}(\mathcal{Z})\) is the space of compactly supported and continuously differentiable functions on \(\mathcal{Z}\)._ Proof.: For \(u\in C^{1}_{c}(\mathcal{Z})\), we have \[\left\|\frac{1}{t}(K_{\Phi_{t}}w_{s}-w_{s})u-\tilde{w}_{s,u}\right\| _{L^{2}(\mathcal{X})}^{2}\] \[= \int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\left|\frac{1}{t}(U_ {g_{t}(y,\cdot)}U_{g_{s}(h_{t}(y),\cdot)}-U_{g_{s}(y,\cdot)})u(z)-\tilde{w}_{ s,u}(y,z)\right|^{2}\mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[= \int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\left|\frac{1}{t}(U_ {g_{s+t}(y,\cdot)}-U_{g_{s}(y,\cdot)})u(z)-\tilde{w}_{s,u}(y,z)\right|^{2} \mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[= \int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\left|\frac{1}{t}(u(g _{s+t}(y,z))-u(g_{s}(y,z)))-\tilde{w}_{s,u}(y,z)\right|^{2}\mathrm{d}\nu(z) \mathrm{d}\mu(y).\] Since \(g\) is continuous, there exists \(D>0\) such that for any \(s\in\mathbb{R}\), \(y\in\mathcal{Y}\), and \(z\in\mathcal{Z}\), \(|\frac{\partial u\circ g}{\partial t}(s,y,z)|<D\). By the mean-value theorem, for any \(y\in\mathcal{Y}\) and \(z\in\mathcal{Z}\), there exists \(c\in(s,s+t)\) for \(t>0\) or \(c\in(s+t,s)\) for \(t<0\) such that \[\left|\frac{1}{t}(u(g_{s+t}(y,z))-u(g_{s}(y,z)))\right|=\left|\frac{\partial u \circ g}{\partial t}(c,y,z)\right|\leq D.\] Thus, by the Lebesgue's dominated convergence theorem, we obtain \[\lim_{t\to 0}\left\|\frac{1}{t}(K_{\Phi_{t}}w_{s}-w_{s})u- \tilde{w}_{s,u}\right\|_{L^{2}(\mathcal{X})}^{2}\\ =\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\lim_{t\to 0} \left|\frac{1}{t}(u(g_{s+t}(y,z))-u(g_{s}(y,z)))-\tilde{w}_{s,u}(y,z)\right|^{2} \mathrm{d}\nu(z)\mathrm{d}\mu(y)=0.\] Thus, we have \((L_{\Phi}w_{s})u=\tilde{w}_{s,u}\). Moreover, \(\tilde{w}_{s,u}\) is represented as \[\tilde{w}_{s,u}(y,z) =\frac{\partial u\circ g}{\partial t}(s,y,z)=\frac{\partial u}{ \partial z}(g(s,y,z))\frac{\partial g}{\partial t}(s,y,z)\] \[=\left(U_{g_{s}(y,\cdot)}\frac{\partial u}{\partial z}\right)(z) \frac{\partial g}{\partial t}(s,y,z)=U_{g_{s}(y,\cdot)}M_{\frac{\partial g}{ \partial t}(s,y,g_{s}^{-1}(y,\cdot))}\frac{\partial}{\partial z}u(z).\] Since \(g_{s}(y,g_{-s}(h_{s}(y),z))=g_{s}(h_{-s}(h_{s}(y)),g_{-s}(h_{s}(y),z))=g_{0}(h_ {s}(y),z)=z\), \(g_{s}(y,\cdot)^{-1}=g_{-s}(h_{s}(y),\cdot)\). Thus, we have \[\frac{\partial g}{\partial t}(s,y,g_{s}^{-1}(y,z))=\frac{\partial g}{\partial t }(s,h_{-s}(h_{s}(y)),g_{-s}(h_{s}(y),z))=\frac{\partial g}{\partial t}(0,h_{s} (y),z).\] The vectors \(w_{s}\) describe the dynamics on \(\mathcal{Z}\), which is specific for the skew product dynamical systems and we are interested in. **Proposition 19**.: _The action of the Koopman operator \(U_{\Phi_{t}}\) is decomposed into two parts as_ \[U_{\Phi_{t}}(v\otimes u)=U_{h_{t}}v\otimes U_{\Phi_{s}}u\circ g_{t-s}\] _for \(v\in L^{2}(\mathcal{Y})\), \(u\in L^{2}(\mathcal{Z})\), and \(s,t\in\mathbb{R}\)._ Proof.: By the definition of \(U_{\Phi_{t}}\), we have \[U_{\Phi_{t}}(v\otimes u)(y,z) =v(h_{t}(y))u(g_{t}(y,z))=v(h_{t}(y))u(g_{t-s}(h_{s}(y),g_{s}(y,z)))\] \[=U_{h_{t}}v(y)U_{\Phi_{s}}u\circ g_{t-s}(y,z).\] Let \[\mathcal{W}_{0}=\bigg{\{}\sum_{s\in F}w_{s}c_{s}\,\mid\,F\subseteq\mathbb{R}: \text{ finite set},\ c_{s}\in\mathcal{A}\bigg{\}}\] and \(\mathcal{W}\) be the completion of \(\mathcal{W}_{0}\) with respect to the norm in \(\mathcal{M}\) (\(\mathcal{W}\) is a submodule of \(\mathcal{M}\) and Hilbert \(\mathcal{A}\)-module). 
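The flow relations underlying Propositions 17 and 18 can be checked numerically for a concrete flow. The sketch below (the parameter values are illustrative) uses the torus flow of Example 1 below, for which \(h_{t}(y)=y+t\) and \(g_{t}(y,z)=z+\alpha(t+\beta(\sin(y+t)-\sin y))\), and verifies the cocycle identity \(g_{s+t}(y,z)=g_{s}(h_{t}(y),g_{t}(y,z))\) together with the formula for \((L_{\Phi}w_{s})u\) via a finite difference.

```python
import numpy as np

# Flow of Example 1 below: dy/dt = 1, dz/dt = alpha*(1 + beta*cos y) on the
# torus. Its time-t maps are h_t(y) = y + t and
# g_t(y, z) = z + alpha*(t + beta*(sin(y + t) - sin y)); the identification
# mod 2*pi is left implicit since only 2*pi-periodic functions are evaluated.
alpha, beta = 1.3, 0.4                     # illustrative parameter values
h_t = lambda t, y: y + t
g_t = lambda t, y, z: z + alpha * (t + beta * (np.sin(y + t) - np.sin(y)))

rng = np.random.default_rng(1)
y, z = rng.uniform(0.0, 2.0 * np.pi, 2)
s, t = 0.7, 0.3

# Cocycle identity used in the proof of Proposition 17:
# g_{s+t}(y, z) = g_s(h_t(y), g_t(y, z)).
assert np.isclose(g_t(s + t, y, z), g_t(s, h_t(t, y), g_t(t, y, z)))

# Proposition 18 (sketch): (L_Phi w_s)u(y, z) = d/dt u(g_t(y, z))|_{t=s}
# = u'(g_s(y, z)) * (dg/dt)(0, h_s(y), .), checked for u = sin by a central
# finite difference in t.
u, du = np.sin, np.cos
eps = 1e-6
fd = (u(g_t(s + eps, y, z)) - u(g_t(s - eps, y, z))) / (2.0 * eps)
formula = du(g_t(s, y, z)) * alpha * (1.0 + beta * np.cos(y + s))
assert np.isclose(fd, formula, atol=1e-5)
```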
Moreover, for \(s\in\mathbb{R}\) and \(u\in L^{2}(\mathcal{Z})\), let \(\tilde{w}_{u,s}\in L^{2}(\mathcal{X})\) be defined as \(\tilde{w}_{u,s}(y,z)=u(g_{s}(y,z))\). Let \[\tilde{\mathcal{W}}_{0}=\bigg{\{}\sum_{j=1}^{n}\sum_{s\in F}c_{s}\tilde{w}_{u _{j},s}\,\mid\,n\in\mathbb{N},\ F\subseteq\mathbb{R}:\text{ finite set},\ c_{s}\in\mathbb{C},\ u_{j}\in L^{2}(\mathcal{Z})\bigg{\}}\] and \(\tilde{\mathcal{W}}\) be the completion of \(\tilde{\mathcal{W}}_{0}\) with respect to the norm in \(L^{2}(\mathcal{X})\). **Proposition 20**.: _With the notation defined in Proposition 2, we have \(K_{\Phi_{t}}|_{\mathcal{W}}\iota_{i}|_{\tilde{\mathcal{W}}}=\iota_{i}U_{\Phi_ {t}}|_{\tilde{\mathcal{W}}}\) for \(t\in\mathbb{R}\) and \(i=1,2,\ldots\)._ Proof.: For \(u\in L^{2}(\mathcal{Z})\), \(s\in\mathbb{R}\), and \(i>0\), we have \[(\iota_{i}\tilde{w}_{u,s})(y)=u(g_{s}(y,\cdot))\gamma_{i}^{\prime}=U_{g_{s}(y,\cdot)}u\gamma_{i}^{\prime}=w_{s}(y)(u\gamma_{i}^{\prime}).\] Thus, we obtain \(\iota_{i}\tilde{w}_{u,s}\in\mathcal{W}\). Therefore, the range of \(\iota_{i}|_{\tilde{\mathcal{W}}}\) is contained in \(\mathcal{W}\). The equality is deduced by the definitions of \(K_{\Phi_{t}}\) and \(U_{\Phi_{t}}\). #### Further decomposition We further decompose \(w_{s}\) and \(N_{s}\) and obtain a more detailed decomposition of \(L_{\Phi}|_{\mathcal{W}}\). For \(y\in\mathcal{Y}\), let \(\mathcal{V}_{1}(y),\mathcal{V}_{2}(y),\ldots\) be a sequence of closed subspaces of \(L^{2}(\mathcal{Z})\) which satisfies \(L^{2}(\mathcal{Z})=\overline{\operatorname{Span}\{\bigcup_{j=1}^{\infty} \mathcal{V}_{j}(y)\}}\) for a.s. \(y\in\mathcal{Y}\). For \(s\in\mathbb{R}\) and \(j=1,2,\ldots\), we define a linear map \(\hat{w}_{s,j}\) from \(L^{2}(\mathcal{Z})\) to \(L^{2}(\mathcal{X})\) as \((\hat{w}_{s,j}u)(y,z)=(w_{s}(y)p_{j}(h_{s}(y))u)(y,z)\), where \(p_{j}(y):L^{2}(\mathcal{Z})\rightarrow\mathcal{V}_{j}(y)\) is the projection onto \(\mathcal{V}_{j}(y)\). Assume \(p_{j}(y)\) satisfies \((y,z)\mapsto(p_{j}(y)u)(z)\in L^{2}(\mathcal{X})\). We denote by \(p_{j}\) the linear operator from \(L^{2}(\mathcal{Z})\to L^{2}(\mathcal{X})\) defined as \(p_{j}u(y,z)=(p_{j}(y)u)(z)\). For each \(j=1,2,\ldots\), the following proposition holds. Here, we define a differential operator \(V_{\Phi}\) by \[V_{\Phi}v(y,z)=\frac{\partial v}{\partial y}(y,z)\frac{\partial h}{\partial t}(0,y)+\frac{\partial v}{\partial z}(y,z)\frac{\partial g}{\partial t}(0,y,z) \tag{6}\] for \(v\in C^{1}_{c}(\mathcal{X})\). **Theorem 21** (Eigenoperator decomposition for continuous-time systems).: _For \(s\in\mathbb{R}\) and \(u\in L^{2}(\mathcal{Z})\), let_ \[\tilde{w}_{s,u,j}(y,z)=\frac{\partial(p_{j}(h_{t}(y))u\circ g(t,y,z))}{\partial t }\bigg{|}_{t=s}.\] _Assume for any \(u\in p_{j}^{-1}(C^{1}_{c}(\mathcal{X}))\) and any \(y\in\mathcal{Y}\), \(\frac{\partial(p_{j}(h_{t}(y))u)(g_{t}(y,\cdot))}{\partial t}\big{|}_{t=0}=(V _{\Phi}p_{j}u)(y,\cdot)\in\mathcal{V}_{j}(y)\). Then, \((L_{\Phi}\hat{w}_{s,j})u=\tilde{w}_{s,u,j}=(\hat{w}_{s,j}\cdot\hat{N}_{s,j})u\), where \(\hat{N}_{s,j}\) is defined as \((w\cdot\hat{N}_{s,j})u(y,z)=w(y)(V_{\Phi}p_{j}u)(h_{s}(y),z)\)._ We call \(\hat{N}_{s,j}\) an eigenoperator and \(\hat{w}_{s,j}\) an eigenvector. 
Proof.: For \(u\in p_{j}^{-1}(C^{1}_{c}(\mathcal{X}))\), we have \[\bigg{\|}\frac{1}{t}(K_{\Phi_{t}}\hat{w}_{s,j}-\hat{w}_{s,j})u- \tilde{w}_{s,u,j}\bigg{\|}_{L^{2}(\mathcal{X})}^{2}\] \[=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\bigg{|}\frac{1}{t} (U_{g_{t}(y,\cdot)}U_{g_{s}(h_{t}(y),\cdot)}p_{j}(h_{s+t}(y))-U_{g_{s}(y,\cdot )}p_{j}(h_{s}(y)))u(z)-\tilde{w}_{s,u,j}(y,z)\bigg{|}^{2}\mathrm{d}\nu(z) \mathrm{d}\mu(y)\] \[=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\bigg{|}\frac{1}{t} (U_{g_{s+t}(y,\cdot)}p_{j}(h_{s+t}(y))-U_{g_{s}(y,\cdot)}p_{j}(h_{s}(y)))u(z)- \tilde{w}_{s,u,j}(y,z)\bigg{|}^{2}\mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[=\int_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}\bigg{|}\frac{1}{t} ((p_{j}(h_{s+t}(y))u)(g_{s+t}(y,z))-(p_{j}(h_{s}(y))u)(g_{s}(y,z)))-\tilde{w}_ {s,u,j}(y,z)\bigg{|}^{2}\mathrm{d}\nu(z)\mathrm{d}\mu(y)\] \[\to 0\ (t\to 0).\] Thus, we have \((L_{\Phi}\hat{w}_{s,j})u=\tilde{w}_{s,u,j}\). Moreover, \(\tilde{w}_{s,u,j}\) is represented as \[\tilde{w}_{s,u,j}(y,z) =\frac{\partial}{\partial t}(U_{g_{t}(y,\cdot)}p_{j}(h_{t}(y))u)(z )\bigg{|}_{t=s}=U_{g_{s}(y,\cdot)}\frac{\partial}{\partial t}(U_{g_{t}(h_{s}(y ),\cdot)}p_{j}(h_{s+t}(y))u)(z)\bigg{|}_{t=0}\] \[=U_{g_{s}(y,\cdot)}\frac{\partial(p_{j}(h_{t}(h_{s}(y)))u)(g(t,h_{ s}(y),z))}{\partial t}\bigg{|}_{t=0}=U_{g_{s}(y,\cdot)}(V_{\Phi}p_{j}u)(h_{s}(y),z)\] \[=U_{g_{s}(y,\cdot)}p_{j}(h_{s}(y))(V_{\Phi}p_{j}u)(h_{s}(y),z)= \hat{w}_{s,j}(y)(V_{\Phi}p_{j}u)(h_{s}(y),z)=((\hat{w}_{s,j}\cdot\hat{N}_{s,j} )u)(y,z),\] where the assumption of the theorem is applied at the point \(h_{s}(y)\) to identify the derivative with \((V_{\Phi}p_{j}u)(h_{s}(y),\cdot)\in\mathcal{V}_{j}(h_{s}(y))\), which is fixed by \(p_{j}(h_{s}(y))\). _Remark 2_.: Assume that \(U_{g_{t}(y,\cdot)}\mathcal{V}_{j}(h_{t}(y))\subseteq\mathcal{V}_{j}(y)\) holds for any \(t\in\mathbb{R}\) and a.s. \(y\in\mathcal{Y}\), and that the derivative below exists. Then, we have \[\frac{\partial}{\partial t}(U_{g_{t}(y,\cdot)}p_{j}(h_{t}(y))u)(z) \bigg{|}_{t=0}=\lim_{t\to 0}\frac{1}{t}(U_{g_{t}(y,\cdot)}p_{j}(h_{t}(y))-p_{j}(y))u(z)\] \[\quad=\lim_{t\to 0}\frac{1}{t}p_{j}(y)(U_{g_{t}(y,\cdot)}p_{j}(h_{t }(y))-p_{j}(y))u(z)=p_{j}(y)\frac{\partial}{\partial t}(U_{g_{t}(y,\cdot)}p_{j} (h_{t}(y))u)(z)\bigg{|}_{t=0}.\] Thus, the assumption \(\frac{\partial(p_{j}(h_{t}(y))u)(g_{t}(y,\cdot))}{\partial t}\big{|}_{t=0}\in \mathcal{V}_{j}(y)\) in Theorem 21 is satisfied. _Example 1_.: Let \(\mathcal{Y}=\mathcal{Z}=\mathbb{R}/2\pi\mathbb{Z}=:\mathbb{T}\). For \(\alpha,\beta>0\), consider the following continuous dynamical system: \[\left(\frac{\mathrm{d}y(t)}{\mathrm{d}t},\frac{\mathrm{d}z(t)}{ \mathrm{d}t}\right)=(1,\alpha(1+\beta\cos(y(t)))). \tag{7}\] In this case, we have \(\frac{\partial g}{\partial t}(0,y,z)=\alpha(1+\beta\cos(y(0)))=\alpha(1+\beta \cos(y))\) and \(h_{s}(y)=y+s\). Let \(\gamma_{k,j}(y,z)=\mathrm{e}^{\sqrt{-1}(ky+jz)}\). Then, we have \[V_{\Phi}\gamma_{k,j}(y,z)=\big{(}\sqrt{-1}k+\sqrt{-1}j\alpha(1+ \beta\cos(y))\big{)}\gamma_{k,j}(y,z).\] Let \(\mathcal{V}_{j}=\overline{\mathrm{Span}\{\gamma_{k,j}\;\mid\;k\in\mathbb{Z}\}}\). We can see \(\mathcal{V}_{j}\) is an invariant subspace of \(V_{\Phi}\). In addition, let \(\mathcal{V}_{j}(y)=\overline{\mathrm{Span}\{\gamma_{k,j}(y,\cdot)\;\mid\;k \in\mathbb{Z}\}}\), and let \(p_{j}(y)\) be the projection onto \(\mathcal{V}_{j}(y)\). 
Then, since we have \[(V_{\Phi}p_{j})(y)\gamma_{k,j}(y,\cdot)=(V_{\Phi}\gamma_{k,j})(y,\cdot),\] the spectrum of \((V_{\Phi}p_{j})(y)\) is calculated as \[\sigma((V_{\Phi}p_{j})(y))=\{\sqrt{-1}k+\sqrt{-1}j\alpha(1+\beta \cos(y))\;\mid\;k\in\mathbb{Z}\}.\] Therefore, we have \[\bigcup_{y\in\mathcal{Y}}\sigma((V_{\Phi}p_{j})(y))=\bigcup_{y \in\mathcal{Y}}\sigma((V_{\Phi}p_{j})(h_{s}(y)))=\bigcup_{y\in\mathcal{Y}}\{ \sqrt{-1}k+\sqrt{-1}j\alpha(1+\beta\cos(y))\;\mid\;k\in\mathbb{Z}\}.\] Regarding \(\hat{w}_{s,j}\), we have \[U_{g_{s}(y,\cdot)}\gamma_{k,j}(y,\cdot) =\gamma_{k,j}(y,\cdot+\alpha(s+\beta(\sin(y+s)-\sin(y))))\] \[=\gamma_{k,j}(y,\cdot)\mathrm{e}^{\sqrt{-1}j\alpha(s+\beta(\sin( y+s)-\sin(y)))}.\] Thus, the spectrum of the family of operators \(\{\hat{w}_{s,j}(y)\}_{s}\) on \(L^{2}(\mathcal{Z})\) is \(\gamma_{j}\big{(}\alpha s+\alpha\beta(\sin(y+s)-\sin(y))\big{)}\). In the following subsections, we will generalize the arguments in Example 1. 3.3 Construction of the generalized Oseledets space \(\mathcal{V}_{j}(y)\) using a function space on \(\mathcal{X}\) We show how we can construct the generalized Oseledets space \(\mathcal{V}_{j}(y)\) required for obtaining \(p_{j}\) appearing in Theorem 21. In this subsection, we assume \(\mathcal{Y}\) is compact. Let \[\mathcal{N}=C(\mathcal{Y})\otimes L^{2}(\mathcal{Z}) \tag{8}\] be the Hilbert \(C(\mathcal{Y})\)-module. Note that a Hilbert \(C^{*}\)-module is also a Banach space. Here, we just regard \(\mathcal{N}\) as a Banach space equipped with the norm \(\sup_{y\in\mathcal{Y}}\int_{z\in\mathcal{Z}}|a(y)u(z)|^{2}\mathrm{d}\nu(z).\) For \(t\in\mathbb{R}\), let \(U_{\Phi_{t}}\) be the Koopman operator on \(\mathcal{N}\) with respect to \(\Phi_{t}\). **Proposition 23**.: _The family of operators \(\{U_{\Phi_{t}}\}_{t\in\mathbb{R}}\) is a strongly continuous one-parameter group._ Proof.: Let \(v\in C(\mathcal{Y})\otimes_{\mathrm{alg}}C_{c}(\mathcal{Z})\) and let \(\epsilon>0\). Then, there exists \(\delta>0\) such that for any \(|t|\leq\delta\), \(y\in\mathcal{Y}\), and \(z\in\mathcal{Z}\), \(|v(h_{t}(y),g_{t}(y,z))-v(y,z)|\leq\epsilon\). Thus, we have \[\|U_{\Phi_{t}}v-v\|_{\mathcal{N}}=\sup_{y\in\mathcal{Y}}\bigg{(}\int_{z\in \mathcal{Z}}|v(h_{t}(y),g_{t}(y,z))-v(y,z)|^{2}\mathrm{d}\nu(z)\bigg{)}^{1/2} \leq\epsilon. \tag{9}\] In addition, for any \(v\in C(\mathcal{Y})\otimes_{\mathrm{alg}}L^{2}(\mathcal{Z})\), we have \[\|U_{\Phi_{t}}v\|_{\mathcal{N}} =\sup_{y\in\mathcal{Y}}\bigg{(}\int_{z\in\mathcal{Z}}|v(h_{t}(y), g_{t}(y,z))|^{2}\mathrm{d}\nu(z)\bigg{)}^{1/2}\] \[=\sup_{y\in\mathcal{Y}}\bigg{(}\int_{z\in\mathcal{Z}}|v(y,z)|^{2} \mathrm{d}\nu(z)\bigg{)}^{1/2}=\|v\|_{\mathcal{N}}.\] Since \(C(\mathcal{Y})\otimes_{\mathrm{alg}}C_{c}(\mathcal{Z})\) is dense in \(\mathcal{N}\), Eq. (9) is satisfied for any \(v\in\mathcal{N}\). We note that the generator of \(\{U_{\Phi_{t}}\}_{t\in\mathbb{R}}\) is \(V_{\Phi}\) defined in Eq. (6). If we set \(\mathcal{V}_{j}\) as \(\mathcal{V}\) in the following proposition, it satisfies the assumption of Theorem 21 (see also Remark 2). **Proposition 24**.: _Let \(\mathcal{V}\) be an invariant subspace of \(U_{\Phi_{t}}\) and let \(\mathcal{V}(y)=\overline{R_{y}\mathcal{V}}\). Then, we have \(U_{g_{t}(y,\cdot)}\mathcal{V}(h_{t}(y))\subseteq\mathcal{V}(y)\). 
Here, \(R_{y}:\mathcal{N}\to L^{2}(\mathcal{Z})\) be a linear map defined as \(R_{y}(a\otimes u)(z)=a(y)u(z)\) for \(y\in\mathcal{Y}\)._ Proof.: For \(t\in\mathbb{R}\), \(y\in\mathcal{Y}\), and \(v\in C(\mathcal{Y})\otimes_{\mathrm{alg}}L^{2}(\mathcal{Z})\), we have \[U_{g_{t}(y,\cdot)}R_{h_{t}(y)}v(z)=v(h_{t}(y),g_{t}(y,z))=R_{y}U_{\Phi_{t}}v(z).\] Thus, we have \(U_{g_{t}(y,\cdot)}R_{h_{t}(y)}=R_{y}U_{\Phi_{t}}\). Since \(\mathcal{V}\) is an invariant subspace of \(U_{\Phi_{t}}\), we have \(U_{g_{t}(y,\cdot)}\mathcal{V}(h_{t}(y))\subseteq\mathcal{V}(y)\). The following proposition shows an example of \(\mathcal{V}\) constructed in Proposition 24. It is for a simple case where \(V_{\Phi}\) has an eigenvalue, but provides us with an intuition of what the eigenoperators describe. **Proposition 25**.: _Assume there exists \(y\in\mathcal{Y}\) such that \(\{h(t,y)\,\mid\,t\in\mathbb{R}\}\) is dense in \(\mathcal{Y}\). Assume \(V_{\Phi}\) has an eigenvalue \(\lambda\) and the corresponding eigenvector \(\tilde{v}\in C^{1}_{c}(\mathcal{X})\). Then, there exists \(C\geq 0\) such that for a.s. \(y\in\mathcal{Y}\), \(\|\tilde{v}(y,\cdot)\|_{L^{2}(\mathcal{Z})}=C\). Assume \(C>0\) and let \(p(y)u=v(y,\cdot)v(y,\cdot)^{*}u\) for \(u\in L^{2}(\mathcal{Z})\), where \(v=\tilde{v}/C\). Then, \(p(y)(V_{\Phi}pu)(y,\cdot)=(V_{\Phi}pu)(y,\cdot)\) and_ \[\sigma((V_{\Phi}p)(y))=\lambda-\int_{\mathcal{Z}}\frac{\partial v}{\partial y}( y,z)\overline{v}(y,z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y)=\int_{ \mathcal{Z}}\frac{\partial v}{\partial z}(y,z)\frac{\partial g}{\partial t}(0, y,z)\overline{v}(y,z)\mathrm{d}\nu(z).\] _Moreover, \(\sigma((V_{\Phi}p)(y))\subseteq\sqrt{-1}\mathbb{R}\)._ Proof.: The vector \(\tilde{v}\) is an eigenvector of \(U_{\Phi_{t}}\) for any \(t\in\mathbb{R}\), and its corresponding eigenvalue is \(\mathrm{e}^{\lambda t}\) (\(\lambda\in\sqrt{-1}\mathbb{R}\)). Thus, we have \[\int_{\mathcal{Z}}\tilde{v}(y,z)\overline{\tilde{v}}(y,z)\mathrm{d }\nu(z)= \int_{\mathcal{Z}}\mathrm{e}^{-\lambda t}U_{\Phi_{t}}\tilde{v}(y,z) \overline{\mathrm{e}^{-\lambda t}U_{\Phi_{t}}\tilde{v}(y,z)}\mathrm{d}\nu(z)\] \[= \int_{\mathcal{Z}}\tilde{v}(h_{t}(y),g_{t}(y,z))\overline{\tilde{ v}}(h_{t}(y),g_{t}(y,z))\mathrm{d}\nu(z)\] \[= \int_{\mathcal{Z}}\tilde{v}(h_{t}(y),z)\overline{\tilde{v}}(h_{t }(y),z)\mathrm{d}\nu(z).\] For \(u\in L^{2}(\mathcal{Z})\), we have \[(V_{\Phi}pu)(y,z)\] \[=\bigg{(}\frac{\partial v}{\partial y}(y,z)\int_{\mathcal{Z}} \overline{v}(y,z)u(z)\mathrm{d}\nu(z)+v(y,z)\int_{\mathcal{Z}}\frac{\partial \overline{v}}{\partial y}(y,z)u(z)\mathrm{d}\nu(z)\bigg{)}\frac{\partial h}{ \partial t}(0,y)\] \[\qquad\qquad+\frac{\partial v}{\partial z}(y,z)\int_{\mathcal{Z} }\overline{v}(y,z)u(z)\mathrm{d}\nu(z)\frac{\partial g}{\partial t}(0,y,z)\] \[=(V_{\Phi}v)(y,z)\int_{\mathcal{Z}}\overline{v}(y,z)u(z)\mathrm{ d}\nu(z)+v(y,z)\int_{\mathcal{Z}}\frac{\partial\overline{v}}{\partial y}(y,z)u (z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y)\] \[=\lambda v(y,z)\int_{\mathcal{Z}}\overline{v}(y,z)u(z)\mathrm{d} \nu(z)+v(y,z)\int_{\mathcal{Z}}\frac{\partial\overline{v}}{\partial y}(y,z)u (z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y).\] Thus, \(p(y)(V_{\Phi}pu)(y)=(V_{\Phi}pu)(y)\). 
In addition, we have \[(V_{\Phi}p)(y)v(y,\cdot) =\lambda v(y,\cdot)\int_{\mathcal{Z}}\overline{v}(y,z)v(y,z) \mathrm{d}\nu(z)+v(y,\cdot)\int_{\mathcal{Z}}\frac{\partial\overline{v}}{ \partial y}(y,z)v(y,z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y)\] \[=\bigg{(}\lambda+\int_{\mathcal{Z}}\frac{\partial\overline{v}}{ \partial y}(y,z)v(y,z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y)\bigg{)} v(y,\cdot).\] Moreover, we have \[0= \int_{\mathcal{Z}}\frac{\partial v(y,z)\overline{v}(y,z)}{ \partial y}\mathrm{d}\nu(z)= \int_{\mathcal{Z}}\frac{\partial\overline{v}}{\partial y}(y,z)v(y,z)\mathrm{d }\nu(z)+\int_{\mathcal{Z}}\frac{\partial v}{\partial y}(y,z)\overline{v}(y,z )\mathrm{d}\nu(z)\] \[= 2\Re\bigg{(}\int_{\mathcal{Z}}\frac{\partial\overline{v}}{ \partial y}(y,z)v(y,z)\mathrm{d}\nu(z)\bigg{)},\] and \[\int_{\mathcal{Z}}\frac{\partial\overline{v}}{\partial y}(y,z)v(y,z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y) =-\int_{\mathcal{Z}}\frac{\partial v}{\partial y}(y,z)\overline{ v}(y,z)\mathrm{d}\nu(z)\frac{\partial h}{\partial t}(0,y)\] \[=-\int_{\mathcal{Z}}\bigg{(}V_{\Phi}v(y,z)-\frac{\partial v}{ \partial z}(y,z)\frac{\partial g}{\partial t}(0,y,z)\bigg{)}\overline{v}(y,z )\mathrm{d}\nu(z)\] \[= \int_{\mathcal{Z}}\bigg{(}-\lambda v(y,z)+\frac{\partial v}{ \partial z}(y,z)\frac{\partial g}{\partial t}(0,y,z)\bigg{)}\overline{v}(y,z) \mathrm{d}\nu(z).\] _Remark 3_.: Proposition 25 implies that the eigenoperators have the information of the dynamics on \(\mathcal{Z}\) for each \(y\), which cannot be extracted by eigenvalues of \(V_{\Phi}\). Indeed, if an eigenvector \(v_{j}\) of \(V_{\Phi}\) depends only on \(y\), then \(\sigma(\hat{N}_{s,j})=\sigma(V_{\Phi}p(y))=0\). On the other hand, the corresponding eigenvalue of \(V_{\Phi}\) can be nonzero. #### 4.3.4 Approximation of \(V_{\Phi}\) in RKHS For numerical computations, to construct the subspace \(\mathcal{V}_{j}(y)\), we need to approximate \(V_{\Phi}\) using RKHSs. Approximating the generator of the Koopman operator in RKHSs was proposed by Das et al. [22]. Here, we apply a similar technique to approximating \(V_{\Phi}\). In this subsection, we assume \(\mathcal{Y}=\mathbb{T}\). We also assume \(\mathcal{Z}\) is compact and \(\nu\) is a Borel probability measure satisfying \(\mathrm{supp}(\nu)=\mathcal{Z}\). Let \(\phi_{i}(l)=\mathrm{e}^{\sqrt{-1}il}\) and \(\lambda_{i}=\mathrm{e}^{-|i|}\) for \(i\in\mathbb{Z}\) and \(l\in\mathbb{T}\). Let \(p_{1}:\mathbb{T}\times\mathbb{T}\to\mathbb{C}\) be the positive definite kernel defined as \(p_{1}(l_{1},l_{2})=\sum_{i\in\mathbb{Z}}\lambda_{i}\phi_{i}(l_{1})\overline{ \phi_{i}(l_{2})}\). In addition, let \(p_{2}:\mathcal{Z}\times\mathcal{Z}\to\mathbb{C}\) be a positive definite kernel, and let \(\tilde{P}:L^{2}(\mathcal{Z})\to L^{2}(\mathcal{Z})\) be the integral operator with respect to \(p_{2}\). Let \(\tilde{\lambda}_{1}\geq\tilde{\lambda}_{2}\geq\ldots>0\) and \(\tilde{\phi}_{1},\tilde{\phi}_{2},\ldots\) be eigenvalues and the corresponding orthonormal eigenvectors of \(\tilde{P}\), respectively. By Mercer's theorem, \(p_{2}(z_{1},z_{2})=\sum_{i=1}^{\infty}\tilde{\lambda}_{i}\tilde{\phi}_{i}(z_ {1})\overline{\tilde{\phi}_{i}(z_{2})}\), where the sum converges uniformly on \(\mathcal{Z}\times\mathcal{Z}\). Let \(\tau>0\) and let \(\lambda_{\tau,i,j}=\mathrm{e}^{\tau(1-\lambda_{i}^{-1}\tilde{\lambda}_{j}^{-1})}\) for \(i\in\mathbb{Z}\) and \(j=1,2,\ldots\). 
Let \[p_{\tau}((l_{1},z_{1}),(l_{2},z_{2}))=\sum_{i\in\mathbb{Z}}\sum_{j=1}^{\infty} \lambda_{\tau,i,j}\phi_{i}(l_{1})\tilde{\phi}_{j}(z_{1})\overline{\tilde{\phi }_{j}(z_{2})\overline{\phi_{i}(l_{2})}}\] and let \(\mathcal{H}_{\tau}\) be the RKHS associated with \(p_{\tau}\). In addition, let \(P_{\tau}\) be the integral operator with respect to \(p_{\tau}\). **Proposition 26**.: _Let \(\iota_{\tau}:\mathcal{H}_{\tau}\to\mathcal{N}\) be the inclusion map, where \(\mathcal{N}\) is defined as Eq. (8). Then, for any \(v\in\mathcal{N}\), \(\|\iota_{\tau}P_{\tau}v-v\|_{\mathcal{N}}\) converges to \(0\) as \(\tau\to 0\)._ Proof.: Let \(\psi_{i,j}=\phi_{i}\otimes\tilde{\phi}_{j}\). Since \(\{\tilde{\phi}_{i}\}_{i=1}^{\infty}\) is an orthonormal basis in \(L^{2}(\mathcal{Z})\), the subspace \(\{\sum_{i=-n}^{n}\sum_{j=1}^{m}a_{i,j}\psi_{i,j}\mid n,m\in\mathbb{N},a_{i,j} \in\mathbb{C}\}\) is dense in \(\mathcal{N}\) with \(\|\psi_{i,j}\|_{\mathcal{N}}=1\). In addition, since we have \(\iota_{\tau}P_{\tau}\psi_{i,j}=\lambda_{\tau,i,j}\psi_{i,j}\) and \(0\leq\lambda_{\tau,i,j}\leq 1\), we obtain \(\|\iota_{\tau}P_{\tau}\|_{\mathcal{N}}\leq 1\). For any \(\epsilon>0\) and \(v\in\mathcal{N}\), there exist \(n,m\in\mathbb{N}\), \(a_{i,j}\in\mathbb{C}\), and \(\tau_{0}>0\) such that \(\|\sum_{i=-n}^{n}\sum_{j=1}^{m}a_{i,j}\psi_{i,j}-v\|_{\mathcal{N}}\leq\epsilon\) and \((1-\lambda_{\tau,i,j})(\sum_{i=-n}^{n}\sum_{j=1}^{m}|a_{i,j}|)\leq\epsilon\) for \(i=-n,\ldots,n\), \(j=1,\ldots,m\), and \(\tau\leq\tau_{0}\). Thus, for \(\tau\leq\tau_{0}\), we have \[\|\iota_{\tau}P_{\tau}v-v\|_{\mathcal{N}}\] \[\leq\left\|\iota_{\tau}P_{\tau}v-\iota_{\tau}P_{\tau}\sum_{i=-n}^ {n}\sum_{j=1}^{m}a_{i,j}\psi_{i,j}\right\|_{\mathcal{N}}+\left\|\iota_{\tau}P_{ \tau}\sum_{i=-n}^{n}\sum_{j=1}^{m}a_{i,j}\psi_{i,j}-\sum_{i=-n}^{n}\sum_{j=1}^{m }a_{i,j}\psi_{i,j}\right\|_{\mathcal{N}}\] \[\qquad\qquad+\left\|\sum_{i=-n}^{n}\sum_{j=1}^{m}a_{i,j}\psi_{i,j }-v\right\|_{\mathcal{N}}\] \[\leq\epsilon+\sum_{i=-n}^{n}\sum_{j=1}^{m}(1-\lambda_{\tau,i,j})|a_{i,j}|+ \epsilon=3\epsilon.\] By Proposition 26, we can see that \(V_{\Phi}\) can be approximated by \(P_{\tau}V_{\Phi^{L}\tau}\) in the following sense. **Corollary 27**.: _Let \(\tau_{0}>0\). For any \(v\in\mathcal{H}_{\tau_{0}}\), \(\|\iota_{\tau}P_{\tau}V_{\Phi^{L}\tau}v-V_{\Phi^{L}\tau}v\|_{\mathcal{N}}\) converges to \(0\) as \(\tau\to 0\)._ ## 5 Numerical examples We numerically investigate the eigenoperator decomposition. ### Moving Gaussian vortex We first visualize \(\hat{w}_{s,j}\), the eigenvector in Theorem 21. Let \(\mathcal{Y}=\mathbb{T}\) and \(\mathcal{Z}=\mathbb{T}^{2}\). Consider the dynamical system \[\bigg{(}\frac{\mathrm{d}y(t)}{\mathrm{d}t},\bigg{(}\frac{\mathrm{d}z_{1}(t)}{ \mathrm{d}t},\frac{\mathrm{d}z_{2}(t)}{\mathrm{d}t}\bigg{)}\bigg{)}=\bigg{(}1, \bigg{(}-\frac{\partial\zeta}{\partial z_{2}}(y(t),z(t)),\frac{\partial\zeta} {\partial z_{1}}(y(t),z(t))\bigg{)}\bigg{)}, \tag{10}\] where \(\zeta(y,z)=\mathrm{e}^{\kappa(\cos(z_{1}-y)+\cos z_{2})}\). This problem is also studied by Giannakis and Das [28]. In this case, \(\frac{\partial g}{\partial t}(0,y,z)=(-\frac{\partial\zeta}{\partial z_{2}}(y,z),\frac{\partial\zeta}{\partial z_{1}}(y,z))\) and \(V_{\Phi}=\frac{\partial}{\partial y}-\frac{\partial\zeta}{\partial z_{2}}(y,z)\frac{\partial}{\partial z_{1}}+\frac{\partial\zeta}{\partial z_{1}}(y,z) \frac{\partial}{\partial z_{2}}\). 
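Since the velocity field in (10) is given explicitly by the stream function \(\zeta\), the system is straightforward to simulate directly. The following Python sketch is ours and purely illustrative (the paper does not provide code); the integrator, time horizon, initial condition, and the value \(\kappa=0.5\) are arbitrary choices made only for illustration. It integrates one trajectory of (10), which is useful for visualizing the flow that the eigenoperator decomposition describes.

```python
# Minimal sketch (not from the paper): integrate the flow (10) with
# zeta(y, z) = exp(kappa * (cos(z1 - y) + cos(z2))) on T x T^2.
import numpy as np
from scipy.integrate import solve_ivp

kappa = 0.5  # example value, chosen only for illustration

def velocity(t, state):
    y, z1, z2 = state
    zeta = np.exp(kappa * (np.cos(z1 - y) + np.cos(z2)))
    dzeta_dz1 = -kappa * np.sin(z1 - y) * zeta
    dzeta_dz2 = -kappa * np.sin(z2) * zeta
    # (dy/dt, dz1/dt, dz2/dt) = (1, -dzeta/dz2, +dzeta/dz1), as in (10)
    return [1.0, -dzeta_dz2, dzeta_dz1]

# one trajectory started at (y, z1, z2) = (0, 1, 2), sampled on [0, 20]
sol = solve_ivp(velocity, (0.0, 20.0), [0.0, 1.0, 2.0],
                t_eval=np.linspace(0.0, 20.0, 401), rtol=1e-9, atol=1e-9)
y_t, z1_t, z2_t = sol.y % (2 * np.pi)  # wrap back onto the torus
print(y_t[-1], z1_t[-1], z2_t[-1])
```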
To construct the subspace \(\mathcal{V}_{j}(y)\) approximately, we set \(\kappa=0.5\) and approximated \(V_{\Phi}\) in the RKHS \(\mathcal{H}_{\tau}\otimes\mathcal{H}_{\tau}\otimes\mathcal{H}_{\tau}\) with \(\tau=0.1\). Here, \(\mathcal{H}_{\tau}\) is the RKHS associated with the positive definite kernel \(p_{\tau}:\mathbb{T}\times\mathbb{T}\rightarrow\mathbb{C}\) defined as \(p_{\tau}(y_{1},y_{2})=\sum_{i\in\mathbb{Z}}\lambda_{\tau,i}\phi_{i}(y_{1})\overline{\phi_{i}(y_{2})}\). In addition, \(\phi_{i}(y)=\mathrm{e}^{\sqrt{-1}iy}\) and \(\lambda_{\tau,i}=\mathrm{e}^{-\tau|i|^{p}}\) for \(i\in\mathbb{Z}\). We set \(p=0.1\). We computed eigenvectors \(\tilde{v}_{1},\tilde{v}_{2},\ldots\) of the approximated operator in the RKHS, ordered so that \(\tilde{v}_{1}\) corresponds to the eigenvalue closest to \(10^{-10}\), \(\tilde{v}_{2}\) to the next closest, and so on. We set \(\mathcal{V}_{1}(y)=\mathrm{Span}\{\tilde{v}_{1}(y,\cdot),\ldots,\tilde{v}_{d}(y,\cdot)\}\). Since the eigenvector \(\hat{w}_{s,j}(y)\) is an operator, it is not easy to visualize directly. Thus, we fix a test vector \(q_{y,d}\in\mathcal{V}_{1}(y)\) and visualize \(\hat{w}_{s,j}(y)q_{y,d}\) instead of \(\hat{w}_{s,j}\) itself. As an example, we take \(q_{y,d}=\frac{1}{d}\sum_{i=1}^{d}\tilde{v}_{i}(y,\cdot)\). Figure 2 shows the eigenvector \(\hat{w}_{s,1}\) acting on the vector \(q_{y,d}\). We can see that the pattern becomes clearer as \(d\) becomes large, which suggests that using a higher-dimensional subspace \(\mathcal{V}_{1}(y)\), rather than a one-dimensional one, better captures the features of the dynamical system in this case.

### Idealized stratospheric flow

Next, we observe the eigenoperator \(\hat{N}_{s,j}\) and study what information the eigenoperators capture. Let \(\mathcal{Y}=\mathbb{T}\) and \(\mathcal{Z}=\mathbb{T}\times[-\pi,\pi]\). Consider the same dynamical system as Eq. (10) with \(\zeta(y,z)=c_{3}z_{2}-U_{0}L\tanh(z_{2}/L)+\sum_{i=1}^{3}A_{i}U_{0}L\operatorname{sech}^{2}(z_{2}/L)\cos(k_{i}z_{1}-\sigma_{i}y)\), where \(L=0.1\), \(A_{1}=0.075\), \(A_{2}=0.4\), \(A_{3}=0.2\), \(k_{1}=1\), \(k_{2}=2k_{1}\), \(k_{3}=3k_{1}\), \(U_{0}=62.66\), \(c_{3}=0.7U_{0}\), \(\sigma_{2}=-1\), and \(\sigma_{1}=2\sigma_{2}\). A similar problem is also studied by Froyland et al. [26]. We approximated \(V_{\Phi}\) in the RKHS \(\mathcal{H}_{\tau}\otimes\mathcal{H}_{\tau}\otimes\mathcal{H}_{\tau}\) with \(\tau=0.1\). We computed eigenvectors \(\tilde{v}_{1},\tilde{v}_{2},\ldots\) of the approximated operator in the RKHS and set \(\mathcal{V}_{j}(y)=\mathrm{Span}\{\tilde{v}_{j}(y)\}\) for \(j=1,2,\ldots\). We study \(\hat{N}_{s,j}\) for different \(j\) and ask what information can be extracted from \(\hat{N}_{s,j}\). Figure 3 shows the heatmap of the function \(\hat{w}_{0,j}\tilde{v}_{j}(0,\cdot)\) for different values of \(j\). Since we have \(\hat{w}_{0,j}\tilde{v}_{j}(y,\cdot)=\tilde{v}_{j}(y,\cdot)\), it provides us with coherent patterns. We computed the spectrum of \(\hat{N}_{s,j}\) for the corresponding \(j\); the spectrum of the eigenoperator \(\hat{N}_{s,j}\) is different from the spectrum of \(V_{\Phi}\), the generator of the Koopman operator, which we also computed. We can see that the pattern becomes complicated as the magnitude of the spectrum \(\sigma(\hat{N}_{0,j})\) of the eigenoperator becomes large. On the other hand, the spectrum of \(V_{\Phi}\) does not provide such an observation.
## 6 Conclusion and discussion

In this paper, we considered a skew product dynamical system on \(\mathcal{Y}\times\mathcal{Z}\) and defined a linear operator on a Hilbert \(C^{*}\)-module related to the Koopman operator. We proposed the eigenoperator decomposition as a generalization of the eigenvalue decomposition. The eigenvectors are constructed using a cocycle. The eigenoperators reconstruct the Koopman operator projected on generalized Oseledets subspaces. Thus, if the Oseledets subspaces are infinite-dimensional spaces, the eigenoperators can have continuous spectra related to the Koopman operator. Our approach is different from existing approaches to dealing with continuous and residual spectra of Koopman operators, such as focusing on the spectral measure [19] and approximating Koopman operators using compact operators on a different space from the one where the Koopman operators are defined [20, 28]. In addition, the proposed decomposition gives us information about the behavior of coherent patterns on \(\mathcal{Z}\). Extracting coherent structures of skew product dynamical systems has been investigated [22, 26, 35]. The proposed decomposition will allow us to classify these coherent patterns. For future work, studying data-driven approaches to obtaining the decomposition is an important direction of research. Investigating practical and computationally efficient ways to approximate operators on Hilbert \(C^{*}\)-modules would be essential in that direction. Another interesting direction is applying the proposed decomposition to quantum computation. A decomposition of Koopman operators for quantum computation was proposed in [43]; it would be interesting to generalize that decomposition using the one proposed here.

## Acknowledgments

We thank Suddhasattwa Das for many constructive discussions on this work. DG acknowledges support from the US National Science Foundation under grant DMS-1854383, the US Office of Naval Research under MURI grant N00014-19-1-242, and the US Department of Defense, Basic Research Office under Vannevar Bush Faculty Fellowship grant N00014-21-1-2946. MI and II acknowledge support from the Japan Science and Technology Agency under CREST grant JPMJCR1913. II acknowledges support from the Japan Science and Technology Agency under ACT-X grant JPMJAX2004.

## Appendix A Proof of Proposition 1

To show Proposition 1, we use the following lemmas [44]. **Lemma 28**.: _We have \(\mathcal{M}\subseteq\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\)._ Proof.: Let \(\iota:L^{2}(\mathcal{Y})\otimes_{\mathrm{alg}}\mathcal{A}\to\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\) be defined as \((\iota(v\otimes a)u)(y,z)=v(y)(au)(z)\). Then, \(\iota\) is an injection. In addition, we have \[\|\iota(v\otimes a)\|_{\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))}=\sup_{\|u\|_{L^{2}(\mathcal{Z})}=1}\|\iota(v\otimes a)u\|_{L^{2}(\mathcal{X})}=\|v\|_{L^{2}(\mathcal{Y})}\sup_{\|u\|_{L^{2}(\mathcal{Z})}=1}\|au\|_{L^{2}(\mathcal{Z})}=\|v\otimes a\|_{\mathcal{M}}.\] Therefore, we have \(\mathcal{M}\subseteq\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\). We recall that a Hilbert \(C^{*}\)-module \(\mathcal{M}\) is referred to as self-dual if for any bounded \(\mathcal{A}\)-linear map \(b:\mathcal{M}\to\mathcal{A}\), there exists a unique \(\hat{b}\in\mathcal{M}\) such that \(b(w)=\langle\hat{b},w\rangle_{\mathcal{M}}\). **Lemma 29**.: _Let \(\{w_{i}\}_{i=1}^{\infty}\) be a sequence in \(\mathcal{M}\).
Assume there exists \(w\in\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\) such that for any \(u\in L^{2}(\mathcal{Z})\), \(\iota(w_{i})u\to wu\) in \(L^{2}(\mathcal{X})\), where \(\iota\) is defined in the proof of Lemma 28. Then, \(w\in\mathcal{M}\)._ Proof.: For \(\tilde{w}\in\mathcal{M}\) and \(u_{1},u_{2}\in L^{2}(\mathcal{Z})\), we have \[\langle\iota(\tilde{w})u_{1},wu_{2}\rangle=\left\langle\iota(\tilde{ w})u_{1},\lim_{i\to\infty}\iota(w_{i})u_{2}\right\rangle =\lim_{i\to\infty}\int_{z\in\mathcal{Z}}\int_{y\in\mathcal{Y}} \overline{u_{1}(z)}\tilde{w}(y)^{*}w_{i}(y)u_{2}(z)\mathrm{d}\mu(y)\mathrm{d} \nu(z)\] \[=\lim_{i\to\infty}\left\langle u_{1},\left\langle\tilde{w},w_{i} \right\rangle_{\mathcal{M}}u_{2}\right\rangle_{L^{2}(\mathcal{Z})}.\] Thus, by the Riesz representation theorem, there exists \(a\in\mathcal{A}\) such that \(\langle\iota(\tilde{w})u_{1},wu_{2}\rangle=\langle u_{1},au_{2}\rangle.\) The map \(\tilde{w}\mapsto a^{*}\) is a bounded \(\mathcal{A}\)-linear map from \(\mathcal{M}\) to \(\mathcal{A}.\) Since \(\mathcal{A}\) is self-dual, \(\mathcal{M}\) is also self-dual. As a result, there exists \(\hat{w}\in\mathcal{M}\) such that \(\langle\hat{w},\tilde{w}\rangle_{\mathcal{M}}=a^{*}\) and \(w=\iota(\hat{w}).\) **Lemma 30**.: _Let \(w:\mathcal{Y}\to\mathcal{A}.\) Assume for any \(u\in L^{2}(\mathcal{Z})\), the map \((y,z)\mapsto(w(y)u)(z)\) is contained in \(L^{2}(\mathcal{X}).\) Then, \(w\in\mathcal{M}.\)_ Proof.: Let \(\{\gamma_{i}\}_{i=1}^{\infty}\) be an orthonormal basis of \(L^{2}(\mathcal{Z}).\) For \(u\in L^{2}(\mathcal{Y}),\) we have \[w(y)u=w(y)\sum_{i=1}^{\infty}\gamma_{i}\gamma_{i}^{\prime}u=\sum_{i=1}^{ \infty}w(y)\gamma_{i}\gamma_{i}^{\prime}u.\] Here, \(\gamma_{i}^{\prime}\) denotes the dual of \(\gamma_{i}\in L^{2}(\mathcal{Z}).\) Since the map \((y,z)\mapsto(w(y)\gamma_{i})(z)\) is in \(L^{2}(\mathcal{X}),\) we obtain \(w(y)\gamma_{i}\gamma_{i}^{\prime}\in\mathcal{M}.\) Thus, by regarding \(w\) as an element in \(\mathcal{B}(L^{2}(\mathcal{Z}),L^{2}(\mathcal{X}))\) defined as \(u\mapsto((y,z)\mapsto(w(y)u)(z)),\) by Lemma 29, \(w\in\mathcal{M}\) holds. Proof of Proposition 1.: Since \((y,z)\mapsto(K_{T}(v\otimes a)(y)u)(z)=v(h(y))(au)(g(y,z))\) is in \(L^{2}(\mathcal{X}),\) by Lemma 30, \(K_{T}(v\otimes a)\in\mathcal{M}\) holds. Regarding the unitarity of \(K_{T},\) let \(L_{T}:\mathcal{M}\to\mathcal{M}\) be a right \(\mathcal{A}\)-linear operator defined as \(L_{T}(v\otimes a)=v(h^{-1}(y))U_{g(h^{-1}(y),\cdot)}^{*}a.\) Then, \(L_{T}\) is the inverse of \(K_{T}\) and for \(v_{1},v_{2}\in L^{2}(\mathcal{Y})\) and \(a_{1},a_{2}\in\mathcal{A},\) we have \[\left\langle K_{T}(v_{1}\otimes a_{1}),K_{T}(v_{2}\otimes a_{2}) \right\rangle_{\mathcal{M}}=\int_{y\in\mathcal{Y}}\overline{v_{1}(h(y))}a_{1} ^{*}U_{g(y,\cdot)}^{*}U_{g(y,\cdot)}a_{2}v_{2}(h(y))\mathrm{d}\mu(y)\] \[=\left\langle v_{1},v_{2}\right\rangle_{L^{2}(\mathcal{Y})}a_{1 }^{*}a_{2}=\left\langle v_{1}\otimes a_{1},v_{2}\otimes a_{2}\right\rangle_{ \mathcal{M}}.\]
2307.06790
Random surfaces and lattice Yang-Mills
We study Wilson loop expectations in lattice Yang-Mills models with a compact Lie group $G$. Using tools recently introduced in a companion paper, we provide alternate derivations, interpretations, and generalizations of several recent theorems about Brownian motion limits (Dahlqvist), lattice string trajectories (Chatterjee and Jafarov) and surface sums (Magee and Puder). We show further that one can express Wilson loop expectations as sums over embedded planar maps in a manner that applies to any matrix dimension $N \geq 1$, any inverse temperature $\beta>0$, and any lattice dimension $d \geq 2$. When $G=\mathrm{U}(N)$, the embedded maps we consider are pairs $(\mathcal M, \phi)$ where $\mathcal M$ is a planar (or higher genus) map and $\phi$ is a graph homomorphism from $\mathcal M$ to a lattice such as $\mathbb Z^d$. The faces of $\mathcal M$ come in two partite classes: $\textit{edge-faces}$ (each mapped by $\phi$ onto a single edge) and $\textit{plaquette-faces}$ (each mapped by $\phi$ onto a single plaquette). The weight of a lattice edge $e$ is the Weingarten function applied to the partition whose parts are given by half the boundary lengths of the faces in $\phi^{-1}(e)$. (The Weingarten function becomes quite simple in the $N\to \infty$ limit.) The overall weight of an embedded map is proportional to $N^\chi$ (where $\chi$ is the Euler characteristic) times the product of the edge weights. We establish analogous results for $\mathrm{SU}(N)$, $\mathrm{O}(N)$, $\mathrm{SO}(N)$, and $\mathrm{Sp}(N/2)$, where the embedded surfaces and weights take a different form. There are several variants of these constructions. In this context, we present a list of relevant open problems spanning several disciplines: random matrix theory, representation theory, statistical physics, and the theory of random surfaces, including random planar maps and Liouville quantum gravity.
Sky Cao, Minjae Park, Scott Sheffield
2023-07-13T14:59:44Z
http://arxiv.org/abs/2307.06790v3
# Random surfaces and lattice Yang-Mills ###### Abstract We study Wilson loop expectations in lattice Yang-Mills models with a compact Lie group \(G\). Using tools recently introduced in a companion paper [10], we provide alternate derivations, interpretations, and generalizations of several recent theorems about Brownian motion limits (Dahlqvist), lattice string trajectories (Chatterjee and Jafarov) and surface sums (Magee and Puder). We show further that one can express Wilson loop expectations as sums over embedded planar maps in a manner that applies to any matrix dimension \(N\geq 1\), any inverse temperature \(\beta>0\), and any lattice dimension \(d\geq 2\). When \(G=\mathrm{U}(N)\), the embedded maps we consider are pairs \((\mathcal{M},\phi)\) where \(\mathcal{M}\) is a planar (or higher genus) map and \(\phi\) is a graph homomorphism from \(\mathcal{M}\) to a lattice such as \(\mathbb{Z}^{d}\). The faces of \(\mathcal{M}\) come in two partite classes: _edge-faces_ (each mapped by \(\phi\) onto a single edge) and _plaquette-faces_ (each mapped by \(\phi\) onto a single plaquette). The weight of a lattice edge \(e\) is the Weingarten function applied to the partition whose parts are given by half the boundary lengths of the faces in \(\phi^{-1}(e)\). (The Weingarten function becomes quite simple in the \(N\to\infty\) limit.) The overall weight of an embedded map is proportional to \(N^{\chi}\) (where \(\chi\) is the Euler characteristic) times the product of the edge weights. We establish analogous results for \(\mathrm{SU}(N)\), \(\mathrm{O}(N)\), \(\mathrm{SO}(N)\), and \(\mathrm{Sp}(N/2)\), where the embedded surfaces and weights take a different form. There are several variants of these constructions. In this context, we present a list of relevant open problems spanning several disciplines: random matrix theory, representation theory, statistical physics, and the theory of random surfaces, including random planar maps and Liouville quantum gravity. ###### Contents * 1 Introduction * 1.1 Overview * 1.2 Random matrices * 1.3 Continuum Yang-Mills * 1.4 Lattice models and planar maps * 1.5 Main results * 1.6 Summary of paper and reading guide * 2 Notation and background * 2.1 Poisson point process on strand diagrams * 2.2 Representation theory and other preliminaries * 3 Surface-sum representation of Wilson loop expectations * 4 Brownian motion and Poisson process exploration * 4.1 Strand-by-strand exploration * 4.2 Extension to general values of \(N\) * 5 Makeenko-Migdal/Master loop/Schwinger-Dyson equations * 6 Other groups * 6.1 Orthogonal and Symplectic * 6.2 Special Unitary and Special Orthogonal * 7 Open problems * A Properties of the Orthogonal Weingarten function ## 1 Introduction ### Overview On a heuristic level, Euclidean Yang-Mills theory is a "probability measure" defined by \[d\mu_{\rm YM}(\omega) = \frac{1}{Z}e^{-\frac{1}{2g^{2}}S_{\rm YM}(\omega)}d\omega\] where \(\omega\) ranges over a space \(\mathcal{A}\) of Lie-algebra-valued connection forms on some Riemannian manifold, the Yang-Mills action \(S_{\rm YM}\) is the \(L^{2}\)-norm of the curvature of \(\omega\), \(g\) is a coupling constant, and \(d\omega\) is a "Lebesgue measure" on \(\mathcal{A}\). Making precise sense of the heuristic definition above is a famous open problem that we will not solve here [10]. Instead, we will study _lattice Yang-Mills theory_ (a.k.a. 
_lattice gauge theory_), an approximation to the continuum theory introduced in 1974 by Wilson [11] who also credits Polyakov and Smit for similar ideas [11]. An online search for scholarly work on "lattice gauge theory" turns up tens of thousands of articles in physics and mathematics, and we cannot cover all of the variants and applications here. Wilson's memoir and Chatterjee's recent survey for probabilists are good places to start [11, 2]. See also Yang's account of his early work with Mills in 1954 [12]. Lattice Yang-Mills assigns a random \(N\)-by-\(N\) matrix from some compact Lie group \(G\) -- usually \({\rm U}(N)\), \({\rm O}(N)\), \({\rm SU}(N)\), \({\rm SO}(N)\), or \({\rm Sp}(N/2)\) -- to each directed edge of a graph \(\Lambda\), which is usually \(\mathbb{Z}^{d}\) or a finite induced subgraph of \(\mathbb{Z}^{d}\). We require this assignment to have an edge-reversal symmetry: if \(Q_{e}\) is the matrix assigned to a directed edge \(e=(v,w)\), then \(Q_{(w,v)}=Q_{(v,w)}^{-1}\). If \(p=(e_{1},e_{2},\ldots,e_{k})\) is a directed path, then we write \(Q_{p}=Q_{e_{1}}Q_{e_{2}}\ldots Q_{e_{k}}\). A _loop_ is a directed cycle \(\ell\) defined modulo cyclical reordering (which amounts to repositioning the starting point of the loop). We define a set \(\mathcal{P}\) of directed loops in \(\Lambda\) that we call _plaquettes_. Usually \(\mathcal{P}\) is the set of directed unit squares in \(\Lambda\) (i.e., directed cycles with four distinct vertices), but in principle \(\mathcal{P}\) can be any collection of loops that is closed under reversal (i.e. \(p\in\mathcal{P}\) implies that the orientation reversal of \(p\) is in \(\mathcal{P}\)). Let \(M\) be one of the aforementioned classical Lie groups. Define the **normalized trace** by \(\operatorname{tr}(M):=\frac{1}{N}\mathrm{Tr}(M)=\frac{1}{N}\Bigl{(}\sum_{j=1} ^{N}M_{j,j}\Bigr{)}\) and write \(\mathrm{Re}(z)\) for the real part of \(z\). Note that if \(M\) is the identity, then \(\mathrm{Re}\bigl{(}\operatorname{tr}(M)\bigr{)}=1\) and \(\mathrm{Re}\bigl{(}\operatorname{tr}(-M)\bigr{)}=-1\). In some sense \(\mathrm{Re}\bigl{(}\operatorname{tr}(M)\bigr{)}\in[-1,1]\) is a measure of how close \(M\) is to the identity matrix. It is large (close to \(1\)) if \(M\) is near the identity. If \(\ell\) is a loop then \(\operatorname{tr}(Q_{\ell})\) is well-defined because the conjugacy class of \(Q_{e_{1}}Q_{e_{2}}\ldots Q_{e_{k}}\) (and hence the trace) does not change if we cyclically reorder the \(e_{i}\). If \(\ell^{-1}\) is the orientation reversal of \(\ell\) then \(\operatorname{tr}(Q_{\ell})=\overline{\operatorname{tr}(Q_{\ell^{-1}})}\). This is because inverting a matrix inverts its eigenvalues, and (for matrices in compact Lie groups) each eigenvalue \(z\) satisfies \(|z|^{2}=z\overline{z}=1\) so that \(1/z=\overline{z}\). This also implies that for the matrices \(M\) in our compact Lie groups, we can write \(\frac{1}{2}\bigl{(}\operatorname{tr}(M)+\operatorname{tr}(M^{-1})\bigr{)}= \mathrm{Re}\bigl{(}\operatorname{tr}(M)\bigr{)}\). The lattice Yang-Mills measure is the probability measure \[Z^{-1}\prod_{p\in\mathcal{P}}\exp\bigl{(}N\beta\mathrm{Tr}(Q_{p})\bigr{)} \prod_{e\in E_{\Lambda}^{+}}dQ_{e} \tag{1.1}\] where \(\beta>0\) is an inverse temperature, \(Z\) is a normalizing constant, each \(dQ_{e}\) is Haar measure on the compact Lie group \(G\), and (to avoid counting an undirected edge twice) \(E_{\Lambda}^{+}\) is the set of oriented edges of \(\Lambda\) for which the endpoint is lexicographically after the starting point. 
This is a positive measure because \(\mathcal{P}\) is closed under direction-reversal -- this direction-reversal property implies that we can define a set \(\mathcal{P}^{+}\) of "positively oriented plaquettes" containing exactly one element of \(\{\ell,\ell^{-1}\}\) for each \(\ell\in\mathcal{P}\), and then rewrite (1.1) as \[Z^{-1}\prod_{p\in\mathcal{P}^{+}}\exp\bigl{(}2N\beta\,\mathrm{Re}(\mathrm{Tr}( Q_{p}))\bigr{)}\prod_{e\in E_{\Lambda}^{+}}dQ_{e}. \tag{1.2}\] _Remark 1.1_.: We note here that the above action differs from some previous work [11, 12, 13] by a factor of \(2\): where we have \(2\beta\) the previous works have just \(\beta\). This slightly simplifies many of our formulas later on, where \(\beta\) appears instead of \(\frac{\beta}{2}\). Informally, the Yang-Mills measure on \((Q_{e})\) configurations corresponds to i.i.d. _Haar measure_ (one instance of Haar measure for each positively directed edge of \(\Lambda\)) modified by a _weighting_ that favors configurations for which \(Q_{p}\) is close to the identity whenever \(p\in\mathcal{P}\). A _Wilson loop observable_ is a quantity of the form \[W_{\mathcal{L}}(Q):=\prod_{\ell\in\mathcal{L}}\operatorname{tr}(Q_{\ell}),\] where \(\mathcal{L}\) is some finite collection of loops in \(\Lambda\). A _Wilson loop expectation_ is a quantity of the form \[\mathbb{E}\bigl{[}W_{\mathcal{L}}(Q)\bigr{]}.\] _Remark 1.2_.: In contrast to some previous works [11, 12, 13], our Wilson loops are defined with the normalized trace rather than the trace. Thus, our Wilson loop expectations are \(N^{-|{\cal L}|}\) (here \(|{\cal L}|\) denotes the number of loops in \({\cal L}\)) times the Wilson loop expectations that appear in the works mentioned above. This is a cosmetic distinction; the scaling we use is natural when taking large \(N\) limits. The fundamental goal of lattice Yang-Mills theory is to understand these quantities. That is, one seeks to compute \[\int\prod_{\ell\in{\cal L}}{\rm tr}(Q_{\ell})\,Z^{-1}\prod_{p\in{\cal P}}\exp \bigl{(}N\beta{\rm Tr}(Q_{p})\bigr{)}\prod_{e\in E_{\Lambda}^{+}}dQ_{e} \tag{1.3}\] which we can Taylor expand and write as \[Z^{-1}\int\prod_{\ell\in{\cal L}}{\rm tr}(Q_{\ell})\ \prod_{p\in{\cal P}}\Bigl{(} \sum_{k=0}^{\infty}\frac{(N\beta)^{k}}{k!}{\rm Tr}(Q_{p})\Bigr{)}\prod_{e\in E_ {\Lambda}^{+}}dQ_{e}. \tag{1.4}\] Given \(K:{\cal P}\to{\mathbb{Z}}_{+}\), write \(K!=\prod_{p\in{\cal P}}K(p)!\) and \(\beta^{K}=\prod_{p\in{\cal P}}\beta^{K(\rho)}\). Using this notation, write (1.4) as \[Z^{-1}\sum_{K:{\cal P}\to{\mathbb{Z}}_{+}}\frac{(N\beta)^{K}}{K!}\int\prod_{ \ell\in{\cal L}}{\rm tr}(Q_{\ell})\prod_{p\in{\cal P}}\Bigl{(}{\rm Tr}(Q_{p}) \Bigr{)}^{K(p)}\prod_{e\in E_{\Lambda}^{+}}dQ_{e}. \tag{1.5}\] This leads to a classical problem in random matrix theory, which is somehow at the heart of this subject. How can we best compute and understand the individual summands in (1.5), which can be described in words as "expected products of traces of products of matrices--each of which comes from a set of i.i.d. Haar-distributed matrices and their inverses"? This question is expressed more carefully in Section 1.2. 
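To make this question concrete, here is a small Monte Carlo sketch (ours, not from the paper; the particular word, the matrix size \(N\), and the sample count are arbitrary choices) that numerically estimates one such expected product of traces for i.i.d. Haar-distributed unitary matrices. The exact values of these integrals are what the Weingarten calculus discussed below provides.

```python
# Monte Carlo estimate of E[ tr(M1 M2 M1^{-1} M2^{-3}) * tr(M1^3 M2 M1^{-1} M2^{-1}) ]
# for i.i.d. Haar matrices M1, M2 in U(N), with tr the normalized trace.
import numpy as np
from scipy.stats import unitary_group

np.random.seed(0)
N, samples = 4, 20000

def ntr(A):
    return np.trace(A) / N        # normalized trace

acc = 0.0 + 0.0j
for _ in range(samples):
    M1 = unitary_group.rvs(N)     # Haar-distributed unitary
    M2 = unitary_group.rvs(N)
    M1i, M2i = M1.conj().T, M2.conj().T
    W1 = M1 @ M2 @ M1i @ np.linalg.matrix_power(M2i, 3)
    W2 = np.linalg.matrix_power(M1, 3) @ M2 @ M1i @ M2i
    acc += ntr(W1) * ntr(W2)

print(acc / samples)              # compare with the exact Weingarten-calculus value
```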
Variants of this question have a long history, beginning with the foundational work of 't Hooft and of Brezin, Itzykson, Parisi, and Zuber from the 1970's [19, 10] and expanding greatly over subsequent decades, encompassing various types of random matrices, including Gaussian ensembles (such as GUE or GOE) as well as Haar measure on compact Lie groups, and others [11, 12]. Further analysis on this theme appears in recent work by Buc-d'Alche, which in particular describes the \(N\to\infty\) asymptotic behavior of Wilson loop expectations in terms of so-called _unitary maps_ [1] while also considering generalizations to mixtures of deterministic and unitary matrices.
A series of groundbreaking papers by Chatterjee and/or Jafarov has provided a different approach to identities involving (1.5) including the _Makeenko-Migdal/Master Loop/Schwinger-Dyson equations_. This approach enables them to describe the \(N\to\infty\) behavior of (1.5) in terms of so-called _lattice string trajectories_[13, 14, 15, 16], see also another recent derivation by [17] and several generalizations due to Diez and Miaskiwskyi [15]. These works build on a vast literature in this area, including early works of Makeenko and Migdal [12] (see also the recent physics paper [18] which combines these equations with the bootstrap method in order to numerically compute Wilson loop expectations). Although they work in the setting where \(\Lambda\) is an induced subgraph of \(\mathbb{Z}^{d}\), one may also recall a standard "gauge fixing" argument that allows one to reduce to the case that \(Q_{e}\) is fixed to be the identity for all \(e\) within some spanning tree of \(\Lambda\). This is equivalent to identifying that entire tree with a single vertex, which reduces \(\Lambda\) to a blossom graph that (in the case \(\beta=0\)) agrees exactly with the setting discussed by Magee and Puder. We will provide an alternate derivation of some of the blossom graph results of Magee and Puder [19, 19] as well as the master field and string trajectory results of Chatterjee and Jafarov [13, 14, 15]. Our statements along these lines will be in several ways more general than those in previous works. 1. **General \(N\):** We allow for any matrix dimension \(N\geq 1\) (the results in [19, 19] are stated for \(N\) sufficiently large; one has to use a slightly different definition of the Weingarten function for smaller \(N\)). 2. **General graphs:** We consider general \(\Lambda\) and \(\mathcal{P}\) in our derivation of the Makeenko-Migdal/Master loop/Schwinger-Dyson relations (in [13, 14, 15, 16] the plaquettes \(\mathcal{P}\) are taken to be squares, though this is not fundamental to the argument). 3. **More general recurrence formula:** We also derive a more general form of the above-mentioned relations. To roughly explain the distinction, recall that the Makeenko-Migdal/Master loop/Schwinger-Dyson relation in [13, Theorem 3.6] expresses the Wilson loop expectation of a string \(s\) in terms of strings \(s^{\prime}\) obtained by applying local moves to \(s\). A stronger result [13, Theorem 8.1] uses only the \(s^{\prime}\) obtained from local moves involving a _single fixed edge_\(e\in\Lambda\). Our slightly stronger result uses only the \(s^{\prime}\) obtained from local moves involving a _single fixed edge_ of \(s\). The distinction is that there may be many edges in \(s\) that correspond to the same \(e\in\Lambda\). We refer to this as the _single-location_ Makeenko-Migdal/Master loop/Schwinger-Dyson equation. 4. **General matrix families:** We also include analogs of our result for the most fundamental Lie group families (namely \(\mathrm{U}(N)\), \(\mathrm{O}(N)\), \(\mathrm{SU}(N)\), \(\mathrm{SO}(N)\), \(\mathrm{Sp}(N/2)\)) while some of the earlier papers focused on one or two such groups. While [13, 14, 15] first frame their results in terms of \(\mathrm{SO}(N)\) and \(\mathrm{SU}(N)\) we will frame our results and discussion in terms of \(\mathrm{U}(N)\), which from our point of view is the simplest case. We then extend the theory to \(\mathrm{O}(N)\), \(\mathrm{SU}(N)\), \(\mathrm{SO}(N)\), and \(\mathrm{Sp}(N/2)\) in Section 6. 
This is the longest and most technically challenging part of the paper, as each group family comes with its own interesting set of challenges. Another straightforward generalization of our result would be to include some deterministic matrices in the words; this type of generalization is considered e.g. in [1]. We expect that this should be possible in our setting as well, but we will not discuss this here. In all of settings described above, we will explain how to express Wilson loop expectations in terms of random lattice-embedded planar maps, which give rise to convergent sums for any \(\Lambda\), any \(N\geq 1\), and any \(\beta\). These are closely related to both the topological surface sums in [13, 13] and the string trajectories in [11, 12, 13, 14], but our derivation and planar map interpretation will be rather different. The main point we want to stress in this paper is that there are powerful ways to express Wilson loop expectations as sums over embedded planar maps. Some settings are more challenging than others (certain symmetries that apply in one setting may not apply in all settings) but we will nonetheless develop a framework that is very general, and that we hope will lead to progress on some of the open problems listed in Section 7. _Remark 1.3_.: One of the long-term goals of this theory is to construct and understand a continuum scaling limit of quantities like (1.5) as \(\beta\to\infty\) and the lattice mesh size simultaneously goes to zero at an appropriate rate. Thus, ideally one desires an understanding of the terms of (1.5) that is sufficiently robust that it allows one to make predictions about these limits. _Remark 1.4_.: When \(\beta\) is large, the function \(x\to\exp(2\beta x)\), defined for \(x\in[-1,1]\), is largest for \(x\) near \(1\) and _much smaller_ in the rest of \([-1,1]\). In principle, one could replace the \(\exp\) in (1.3) by a different function with this property: say \(x\to\frac{1}{2}(x^{b}+x^{b+1})\) for some large \(b\). If we took this approach, then the analog of (1.5) would have only finitely many summands, but we would still expect it to have a similar scaling limit behavior as \(b\to\infty\) and the lattice mesh size simultaneously goes to zero. Somehow \(b\) is playing the role of \(\beta\) here: instead of taking the number of plaquettes of a given type to be _a priori_ Poisson with parameter \(\beta\) we can take the number to be either \(b\) or \(b+1\) (each with probability \(1/2\)). Alternatively, one can replace (1.1) with \[Z^{-1}\Big{[}\Big{(}|\mathcal{P}|^{-1}\sum_{p\in\mathcal{P}}\operatorname{tr} (Q_{p})\Big{)}^{b}+\Big{(}|\mathcal{P}|^{-1}\sum_{p\in\mathcal{P}} \operatorname{tr}(Q_{p})\Big{)}^{b+1}\Big{]}\prod_{e\in E_{\Lambda}^{+}}dQ_{e} \tag{1.6}\] which somehow fixes the _total_ number of plaquettes to be \(b\) or \(b+1\). This approach might also have a similar scaling limit if \(b\to\infty\) at the right rate. If one is working toward the goal of "constructing a candidate continuum theory" one is allowed to use whatever approach turns out to be most computationally tractable. _Acknowledgements._ We thank Bjoern Bringmann, Sourav Chatterjee, Hao Shen and Tom Spencer for helpful conversations. We thank the Institute for Advanced Study for hosting us while this work was completed. The first author was supported by the Minerva Research Foundation at IAS, as well as by NSF Award: DMS 2303165. The third author is supported by NSF Award: DMS 2153742. 
### Random matrices At the heart of our analysis are two classical questions about the traces of random matrices. The first is the one we discussed in Section 1.1 and the second is a close variant. 1. Suppose \(M_{1},M_{2},\ldots M_{k}\) are i.i.d. samples from Haar measure on \(\mathrm{U}(N)\) (or a similar Lie group) and that \(W_{1},\ldots,W_{m}\) are words in the \(M_{i}\) and \(M_{i}^{-1}\). Can we compute the expectation \[\mathbb{E}\Bigl{[}\prod_{i=1}^{k}\mathrm{tr}(W_{i})\Bigr{]}\] in a "nice" way? For example, can we express \(\mathbb{E}\Bigl{[}\mathrm{tr}\bigl{(}M_{1}M_{2}M_{1}^{-1}M_{2}^{-3}\bigr{)} \mathrm{tr}(M_{1}^{3}M_{2}M_{1}^{-1}M_{2}^{-1})\Bigr{]}\) as a simple function of \(N\)? 2. How does the answer to the previous question change if instead of sampling from Haar measure, we obtain each \(M_{i}\) by running a Brownian motion on the Lie group for \(t_{i}\) units of time, starting with the identity? The second question can be understood as an "external field" version of the first question. This is because when the \(t_{i}\) are small, the \(M_{i}\) are more likely to be close to the identity, and this "bias toward the identity" is similar in spirit to the "bias toward positive spin" imposed in e.g. an Ising model with an external field. The second question also arises naturally in _two-dimensional_ Yang-Mills theory and has been heavily studied in that context [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. In two dimensions, the fine-mesh scaling limit of Yang-Mills theory is well understood, but if one attempts to compute the Wilson loop expectation for a complicated collection of loops (perhaps with many intersections and self-intersections) one obtains precisely an instance of the second problem above--indeed, the problems are equivalent since one can obtain _any_ instance of the second problem for _some_ two-dimensional loop. In a recent companion paper [23] (including some of the authors of this paper), it was shown that the answer to the second question can be expressed as an expectation w.r.t. a certain Poisson point process. In this paper, we will explain how the answer to the first question can be derived directly from the analysis in [23] by taking \(t_{i}\to\infty\). This is our first main result, which we state informally as follows. For a precise version, see Theorem 2.5. **Theorem 1.5** (Recovery of Weingarten calculus via Brownian motion).: _The expectations of traces of words of Unitary Brownian motion converge as the time parameter goes to infinity to an explicit limit given in terms of the Weingarten function. Similar results hold for the other classical Lie groups._ _Remark 1.6_.: We note that Theorem 2.5 has previously appeared in [1], albeit stated in slightly different (but equivalent) terms - see Sections 4 and 5 of the paper. Dahlqvist's proof relies heavily on representation theory. On the other hand, we believe that our proof may be easier to read for those who have a probability background but perhaps are not as familiar with representation theory. Additionally, our proof technique differs from Dahlqvist's in an essential way, which allows us to obtain the more general version of the Makeenko-Migdal/Master loop/Schwinger-Dyson equation for lattice Yang-Mills that we previously alluded to (Theorem 1.12). See Remark 4.13 for more discussion on the differences between the two arguments. Our approach to this result is in some sense very straightforward. 
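Before turning to that approach, here is a rough numerical illustration of Theorem 1.5 (our own sketch, not the construction used in the proofs). It simulates Brownian motion on \(\mathrm{U}(N)\) by multiplying exponentials of small skew-Hermitian Gaussian increments; the step-size and normalization below are one common convention and should be read as an assumption, since only the time scale, not the \(t\to\infty\) limit, depends on them.

```python
# Sketch: Brownian motion on U(N) via products of exponentials of small
# skew-Hermitian Gaussian increments (one common normalization; treat the
# exact scaling as an assumption).  As t grows, word expectations approach
# their Haar-measure values, illustrating Theorem 1.5 numerically.
import numpy as np
from scipy.linalg import expm

N, dt = 3, 0.01
rng = np.random.default_rng(1)

def bm_step(U):
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    H = (A + A.conj().T) / np.sqrt(2 * N)      # Hermitian increment, entries of order 1/sqrt(N)
    return U @ expm(1j * H * np.sqrt(dt))

def estimate(t, samples=400):                  # slow but simple
    vals = []
    steps = int(t / dt)
    for _ in range(samples):
        M1 = np.eye(N, dtype=complex)
        M2 = np.eye(N, dtype=complex)
        for _ in range(steps):
            M1, M2 = bm_step(M1), bm_step(M2)
        W = M1 @ M2 @ M1.conj().T @ M2.conj().T
        vals.append(np.trace(W) / N)
    return np.mean(vals)

for t in [0.5, 2.0, 8.0]:
    print(t, estimate(t))   # settles toward the Haar value as t increases
```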
The analysis in [10] notes that when all of the \(t_{i}\) are less than infinity, the noise generating the Lie group Brownian motion is a Gaussian white noise on a Lie algebra; because all randomness is Gaussian, all of the relevant quantities can be easily deduced from Wick's formula and planar maps (see the overview of these techniques [11]) which leads to a Poisson point process formulation of the theory. The analysis in this paper begins with the Poisson point process formulation obtained in [10] and shows that geometric cancellations simplify in the \(t_{i}\to\infty\) limit, so that the _Weingarten function_ (as originally introduced in [20]) appears naturally without any difficult computation. This approach also provides other insights - for instance, certain single-edge analogs of the string-exploration steps in [12] can be interpreted in terms of the so-called Jucys-Murphy elements [22, 23, 24, 25, 26]. ### Continuum Yang-Mills The famous continuum _Yang-Mills problem_[25] is (roughly speaking) to construct and understand the basic properties of a continuum analog of the lattice mode described above, which should somehow make rigorous sense of the measure in 1.1. This problem remains open for \(d\geq 3\) and its solution for \(d=4\) would in some sense also yield a solution to the quantum Minkowski version of Yang-Mills that forms the basis of the standard model in physics, see the Millennium Prize description [25]. This paper is focused on understanding a _lattice_ version of Yang-Mills theory in terms of sums over surfaces, with the aim of gaining insight into a possible continuum theory. It is not clear what kind of fine-mesh scaling limit one should expect the lattice models to have, but our hope is that the lattice analysis presented here will provide some clues, and we present several open problems along these lines in Section 7. We remark that a number of purely continuum approaches to this problem are also being actively pursued. For example, there is an _SPDE-based approach_ which aims to construct a dynamical version of continuum Yang-Mills (on a torus, say) and show that it converges to a stationary law in the large time limit. One can take as the initial value a "Lie-algebra-valued Gaussian free field connection" that one expects to approximate the correct continuum theory at small scales and try to argue that the behavior at large scales converges to a limit over time. See e.g. [1, 1, 2, 1, 2, 3]. There has been some significant recent progress in this area, especially in two and three dimensions. Alternatively, one can also work directly in the continuum _without_ attempting to understand a dynamical process. One might regularize the continuum model in some other way--perhaps starting with a continuum Gaussian. Some form of this was implemented by Magnen, Rivasseau, and Seneor [27]. Some approaches along these lines might also be amenable to the type of random surface analysis discussed in this paper; see Section 7. ### Lattice models and planar maps Consider a pair \((\mathcal{M},\phi)\) where \(\mathcal{M}\) is a planar (or higher genus) map and \(\phi:\mathcal{M}\to\Lambda\) is a graph homomorphism.1 We call this pair a **semi-folded map** or **edge-plaquette embedding** if the following hold: Footnote 1: In other words, if two vertices \(v,w\in V(\mathcal{M})\) are adjacent in the graph \(\mathcal{M}\), then \(\phi(v),\phi(w)\) are adjacent in the graph \(\phi(\mathcal{M})\subset\Lambda\). 1. The dual graph of \(\mathcal{M}\) is bipartite. 
The faces of \(\mathcal{M}\) in one partite class are designated as "edge-faces" (shown blue in figures) and those in the other class are called "plaquette-faces" (shown yellow in figures). 2. \(\phi\) maps each plaquette-face of \(\mathcal{M}\) isometrically _onto_ a plaquette in \(\mathcal{P}\). 3. \(\phi\) maps each edge-face of \(\mathcal{M}\) onto a single edge of \(\Lambda\). See Figures 1-4 for examples and intuition.

Figure 1: In an edge-plaquette embedding, we can imagine that each blue face is "twisted and collapsed" onto a single edge, see Figure 2. In the sequence above, we first twist, then collapse matching vertices, then collapse edges.

Figure 2: **Edge-plaquette embedding example:** Each of the 16 blue faces on the upper left gets mapped to a single vertical edge in the upper right, while each yellow face on the upper left gets mapped to a vertical yellow face on the upper right—the edge colored red is the one mapped to the top. On the lower left, additional yellow faces are added; their images on the right alternate between upper and lower layers in checkerboard fashion. Going from left to right requires "folding up" the blue squares and collapsing the blue 2-gons.

Figure 3: **Edge-plaquette embedding example:** If the blue face is an octagon, then there will be 8 yellow plaquettes meeting at the corresponding edge. In the example shown, the pre-image of each yellow face on the right may consist of two yellow faces on the left. In other words, there are two "copies" of each of the four plaquettes shown on the right.

In order to construct a model of random edge-plaquette embeddings that is useful in Yang-Mills theory, we will need to assign a "weight" to every face of \(\Lambda\) (depending on the number of plaquettes there) and every edge (depending on the number and type of blue faces there). This weight is closely related to the so-called Weingarten function, which we discuss next.

#### 1.4.1 Weingarten function

Note that a complex-valued function on \(\mathrm{S}_{n}\) can be identified as an element in the group algebra \(\mathbb{C}[S_{n}]\), that is, \(\sigma\mapsto f(\sigma)\) is identified with \(\sum_{\sigma\in\mathrm{S}_{n}}f(\sigma)\sigma\). Let \(\mathbb{Q}(N)\subset\mathbb{C}(N)\) be the field of rational functions with rational coefficients in the variable \(N\). When \(N\geqslant n\) the **Weingarten function** \(\mathrm{Wg}_{N}\) can be defined as the inverse, in the group ring \(\mathbb{Q}(N)[S_{n}]\), of the function \(\sigma\to N^{\#\text{cycles}(\sigma)}\). (There is a slightly different definition for \(N<n\), see Section 2.) Note that \(\text{Wg}_{N}(\sigma)\) depends only on the conjugacy class of \(\sigma\)--i.e. on the cycle structure of \(\sigma\). We can order cycles from biggest to smallest, represent this by a Young diagram, and interpret \(\text{Wg}_{N}\) as a function on Young diagrams. It is not the simplest function, and one explicit formula (see the overview and additional references in [11]) is as follows: \[\mathrm{Wg}_{N}(\sigma)=\frac{1}{(n!)}\sum_{\lambda\vdash n}\Bigl{[}\chi_{ \lambda}(\mathrm{id})\chi_{\lambda}(\sigma)\prod_{(i,j)\in\lambda}(N+j-i)^{-1} \Bigr{]} \tag{1.7}\] where \(\mathrm{id}\) is the identity permutation, \(\lambda\vdash n\) denotes that \(\lambda\) is a partition of \(n\), and \(\chi_{\lambda}(\sigma)\) is the character (trace of \(\sigma\) in the irreducible representation indexed by \(\lambda\)).
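For small \(n\), the defining property that \(\mathrm{Wg}_{N}\) inverts \(\sigma\mapsto N^{\#\mathrm{cycles}(\sigma)}\) in the group ring can be checked by direct linear algebra. The sketch below is ours and purely illustrative: it builds the \(n!\times n!\) matrix with entries \(N^{\#\mathrm{cycles}(\sigma^{-1}\tau)}\), inverts it numerically, and reads off \(\mathrm{Wg}_{N}\) from the row indexed by the identity. For \(n=2\) this reproduces the standard values \(1/(N^{2}-1)\) and \(-1/(N(N^{2}-1))\).

```python
# Sketch (illustrative only, valid for N >= n as in the definition above):
# compute Wg_N on S_n by inverting F[pi, sigma] = N^{#cycles(pi^{-1} sigma)};
# then Wg_N(sigma) = (F^{-1})[id, sigma].
import numpy as np
from itertools import permutations

def num_cycles(perm):
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def weingarten(n, N):
    perms = list(permutations(range(n)))
    index = {p: i for i, p in enumerate(perms)}
    F = np.array([[float(N) ** num_cycles(compose(inverse(p), q))
                   for q in perms] for p in perms])
    Finv = np.linalg.inv(F)
    ident = index[tuple(range(n))]
    return {p: Finv[ident, index[p]] for p in perms}

wg = weingarten(2, N=5)
print(wg[(0, 1)], 1 / (5**2 - 1))           # identity:      1/(N^2-1)
print(wg[(1, 0)], -1 / (5 * (5**2 - 1)))    # transposition: -1/(N(N^2-1))
```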
Alternatively, when \(N\geq n\) the Weingarten function is the group ring inverse of \(f(\sigma)=\sigma\to N^{\#\mathrm{cycles}(\sigma)}\) and can hence be formally expanded as \[I+(I-f)+(I-f)^{2}+\ldots. \tag{1.8}\] The latter observation plays a key role in the derivation of [11, Theorem 2.9]. As will be explained in Section 3, we interpret \(\sigma\) as a collection of blue faces (one blue face of length \(2k\) for each cycle of \(\sigma\) of length \(k\)). Then \(\mathrm{Wg}_{N}(\sigma)\) is essentially the _weight_ associated to given collection of blue faces at an edge. Actually, as we detail in Section 3, the edge weights are given by the _normalized Weingarten function_, which we define as \[\overline{\mathrm{Wg}}_{N}(\sigma):=N^{2n-\#\mathrm{cycles}(\sigma)}\mathrm{ Wg}_{N}(\sigma).\] This is the normalization which leads to a nontrivial \(N\to\infty\) limit (see Remark 3.2). Figure 4: **Edge-embedding example showing orientations:** (1) Three oriented plaquette images in \(\phi(\mathcal{M})\). (2) The blue faces connecting them have different types. (3) “Untwist” by flipping the lower-left plaquette across its red-red diagonal so that the three red and three blue faces are orientably embedded in the plane. (4) Add some new faces (three yellow squares and five blue 2-gons) to fill in the hole. Interpret the resulting colored map as a portion of \(\mathcal{M}\) orientably embedded in the plane. (5) Map this portion back into the lattice. Not all six yellow plaquettes are visible on the right because some overlap each other. Given an edge-plaquette embedding \((\mathcal{M},\psi)\) and an edge \(e\) of our lattice \(\Lambda\), we will write \(\overline{\mathrm{Wg}}_{N}(e)\) as shorthand for \(\overline{\mathrm{Wg}}_{N}(\mu_{e}(\mathcal{M},\psi))\), where \(\mu_{e}(\mathcal{M},\psi)\) is the partition given by half the degrees of the blue faces mapped to \(e\). ### Main results We already informally stated the first of our main results - recall Theorem 1.5. In this subsection, we proceed to state the remaining main results of this paper. First, when computing Wilson loop expectations, we imagine the simplest setting in which we fix the number of yellow faces of each type (i.e. assign weight \(1\) to that number and \(0\) to all others). This corresponds to focusing on a single summand in (1.5), or equivalently to taking \(\beta=0\) in (1.5). In this case we have the following: **Theorem 1.7** (Surface-sum representation for word expectations).: _When the gauge group is \(\mathrm{U}(N)\), the expected trace product is proportional to \(\sum\big{(}\prod_{e}\overline{\mathrm{Wg}}_{N}(e)\big{)}\cdot N^{\chi-2k}\) where the sum is over spanning edge-plaquette embeddings with given plaquette numbers, \(\chi\) is the Euler characteristic and \(k=|\mathcal{L}|\) is the number of loops._ _Remark 1.8_.: We regard Theorem 1.7 and the upcoming Corollary 1.10 as the main conceptual contribution of this paper. These results introduce the new concept of an edge-plaquette embedding, and give a fundamentally new description of Wilson loop expectations in terms of random planar maps3, thereby connecting two very different areas of research. Ultimately, we hope to prove new results about lattice gauge theories via analysis of these random planar maps, in particular building on the many advances in their understanding - see Section 7 for some open problems. See also Remark 1.11. Footnote 3: This is rather loose terminology, as our surface sums are signed, and in general higher genus surfaces may appear. 
_Remark 1.9_.: The results of Magee and Puder [19, 22] could also be applied here to give a surface sum representation of the terms in (1.5). However, the relation to random planar maps is not as clear in their formulation. As mentioned in Remark 1.8, this is the main point of our result. For a more detailed comparison with Magee and Puder, see Section 1.5.1.

For a precise statement of this theorem, see Theorem 3.8. Even in the \(\mathrm{U}(N)\) case there are several variants of this result. The various "string trajectory moves" in [13] can be interpreted in terms of the exploration of a surface built out of blue \(2\)-gons and \(4\)-gons and yellow squares. One can also interpret the individual Jucys-Murphy elements in these terms.

By applying Theorem 1.7 to every term in the series appearing in equation (1.5), we obtain that Wilson loop expectations may be expressed as a weighted sum over edge-plaquette embeddings. We state this informally as the following corollary.

**Corollary 1.10** (Surface-sum representation of Wilson loop expectations).: When the gauge group is \(\mathrm{U}(N)\), the Wilson loop expectation \(\mathbb{E}\big{[}W_{\mathcal{L}}(Q)\big{]}\) is proportional to \(\sum\frac{\beta^{\mathrm{area}}}{K!}\big{(}\prod_{e}\overline{\mathrm{Wg}}_{N}(e)\big{)}\cdot N^{\chi-2k}\), where the sum is over spanning edge-plaquette embeddings with arbitrary plaquette numbers, area is the total number of plaquettes in the edge-plaquette embedding, \(K!\) is a combinatorial factor depending only on the plaquette counts, \(\chi\) is the Euler characteristic, and \(k=|\mathcal{L}|\) is the number of loops.

For a precise statement of this corollary, see Corollary 3.11.

_Remark 1.11_.: Recently, Taggi and coauthors [11, 11, 12] have succeeded in proving various results about spin \(O(n)\) and related models by analyzing a certain related random path (or random loop) model. Starting from the spin \(O(n)\) model, they arrive at their random path model in a manner exactly analogous to how we arrive at Corollary 1.10. Namely, starting from the action for the spin \(O(n)\) model, which at a single edge is of the form \(\exp(\beta\sigma_{x}\cdot\sigma_{y})\), where \(\sigma_{x},\sigma_{y}\in S^{n}\), they expand \(\exp(\beta(\sigma_{x}\cdot\sigma_{y}))=\sum_{k}\frac{\beta^{k}}{k!}(\sigma_{x}\cdot\sigma_{y})^{k}\) for each edge \((x,y)\), and then compute the resulting \(S^{n}\)-integrals. The \(S^{n}\)-integrals may be easily computed, with the resulting expressions only involving very explicit quantities such as factorials and Gamma functions (see [11, equation (2.12)]). This is one simplification compared to our setting, where the \(\mathrm{U}(N)\)-integrals lead to the appearance of the Weingarten function, which is much more complicated to understand. Another key difference is that while the \(S^{n}\)-integrals are always positive, the \(\mathrm{U}(N)\)-integrals may be both positive and negative. Thus the random path model of Taggi et al. may be interpreted as a genuine probability measure, while our surface sums may only be interpreted as signed measures.

Next, we give an informal statement of the Makeenko-Migdal/Master loop/Schwinger-Dyson equations satisfied by Wilson loop expectations. The corresponding precise statement is Theorem 5.6.
**Theorem 1.12** (Single-location Makeenko-Migdal/Master loop/Schwinger-Dyson equation).: _Wilson loop expectations satisfy the following recursion:_ \[\mathbb{E}\big{[}W_{\mathcal{L}}(Q)\big{]}=\mathrm{splitting}+\mathrm{ merger}+\mathrm{deformation}\] Here, splitting, merger, and deformation correspond to certain types of operations we may apply to a given collection of loops \(\mathcal{L}\) to obtain a new collection of loops. They will be precisely defined in Section 5. _Remark 1.13_.: As previously mentioned, versions of this recursion for various Lie groups have previously appeared [1, 1, 2, 3, 4]. We note that the precise form of our recursion is slightly different from (and more general than) the existing literature - see Remarks 5.4 and 5.7. Ultimately, the reason for this difference is due to our proof method. Whereas previous approaches are based on integration-by-parts4, our approach is essentially equivalent to applying a certain recursion that is satisfied by the Weingarten function (see e.g. [1, Proposition 2.2]), although we don't phrase our argument in this way - we prefer to proceed more probabilistically via our aforementioned Poisson point process formulation. Footnote 4: The argument in [22] essentially reduces to integration by parts, as explained in [1, Appendix A.2]. To sketch the argument, the proof uses the fact that if the lattice Langevin dynamics is started at stationarity, then the expectation of any observable must be constant in time. Then, applying Ito’s formula to Wilson loop observables, one obtains an identity saying that the drift term must have expectation zero. This identity is precisely integration by parts. #### 1.5.1 Discussion of Magee and Puder The vocabulary in [12, 13] is somewhat different from ours, but the results can be expressed in similar terms. We won't give a detailed account of those results, but let us briefly outline a couple of key ideas to assist readers trying to compare their approach to ours. The approach in [12, 13] makes heavy use of commutator words. Suppose a loop \(\ell\) in \(\mathcal{L}\) corresponds to a commutator word \(ABA^{-1}B^{-1}\) (where \(A\) and \(B\) could in principle describe paths of length longer than one). Imagine then that we have a surface \(S\) with a single boundary loop, whose boundary is mapped to \(\ell\). We can turn this surface into a closed surface in two ways. First, we can identify the boundary of an ordinary disk (with circular boundary) with the boundary of \(S\), thereby gluing a circular disk onto \(S\). Second we can glue the boundary of \(S\) to itself by first gluing the pre-images of the \(A\) and \(A^{-1}\) segments to each other and then gluing the pre-images of the \(B\) and \(B^{-1}\) segments to each other--which somehow turns the disk bounded by \(\ell\) into a torus. It is not hard to see that the second approach produces a surface whose genus is \(1\) higher than the surface produced by the first approach: it effectively "adds a handle" to the surface. If we write a long loop \(\ell\) as a product of \(n\) commutator words, then those words provide us a recipe for turning a disk bounded by \(\ell\) into an \(n\)-holed torus (by performing gluings of the type mentioned above for each commutator). Theorem 1.7 is closely related to [12, Theorem 2.8]. We remark that one could also interpret [12, Theorem 2.9] in terms of embedded maps (somehow involving multiple layers of blue faces). 
We note that [12, Theorem 2.9] is in some ways simpler than [12, Theorem 2.8] (it does not involve the Weingarten function) and in other ways more complicated (it involves another quantity called the \(L^{2}\) Euler characteristic, which is in general not so trivial). We note that [12, Theorem 2.9] is derived from [12, Theorem 2.8]. We will not give an alternate derivation of this step, aside from remarking that the expansion in (1.8) plays a role. ### Summary of paper and reading guide We close this section off with a summary of the rest of the paper. In Section 2, we introduce the notation and background material that will be needed in the rest of the paper. In Section 3, we derive our surface-sum representation of Wilson loop expectations. In Section 4, we show how to recover the Weingarten calculus by taking limits of Unitary Brownian motion, using a certain strand-by-strand exploration that we introduce in the section. In Section 5, we apply our strand-by-strand exploration to obtain the single-location Makeenko-Migdal/Master loop/Schwinger-Dyson equation for Wilson loop expectations. Finally, in Section 6, we adapt our results to the cases of \(G=\mathrm{O}(N),\mathrm{Sp}(N/2),\mathrm{SU}(N),\mathrm{SO}(N)\). To the reader who wants to understand our surface-sum representation of Wilson loop expectations as quickly as possible, we recommend the following expedited reading strategy. First, read enough of Section 2 to understand the statement of Corollary 2.7. Then, proceed directly to Section 3 to see how this corollary is applied to obtain the surface-sum representation. This is roughly ten pages of material. ## 2 Notation and background In this section, we introduce some basic notation and background that will be needed throughout this paper. * For \(n\in\mathbb{N}\), we denote the set \([1,n]\cap\mathbb{Z}=\{1,\ldots,n\}\) by \([n]\). * For \(a,b\in\mathbb{Z}\), \(a<b\), we denote \((a:b]:=\{a+1,\ldots,b\}\). So \([n]=(0:n]\). * For a set \(A\), we let \(\binom{A}{2}\) denote the unordered set of ordered pairs of elements of \(A\). ### Poisson point process on strand diagrams In this section, we review a result in the companion paper [20] that is necessary for this paper. In particular, we express the expected trace of unitary Brownian motions in terms of a certain Poisson point process, which we encode in a strand diagram (Definition 2.1). Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{k})\) be an (ordered) collection of words \(\Gamma_{i}\) on letters \(\{\lambda_{1},\cdots,\lambda_{L}\}\) where \[\Gamma_{i}=\lambda_{c_{i}(1)}^{\varepsilon_{i}(1)}\cdots\lambda_{c_{i}(M_{i})} ^{\varepsilon_{i}(M_{i})}\] for some \(c_{i}:[M_{i}]\to[L]\) and \(\varepsilon_{i}:[M_{i}]\to\{-1,1\}\). By letting \(M=M_{1}+\cdots+M_{k}\) and concatenating \(c_{i}\)'s and \(\varepsilon_{i}\)'s, we may define \(c:[M]\to[L]\) and \(\varepsilon:[M]\to\{-1,1\}\). Our goal is to compute \[\mathbb{E}[\operatorname{Tr}\bigl{(}B(\boldsymbol{\Gamma})\bigr{)}],\] where \[\operatorname{Tr}\bigl{(}B(\boldsymbol{\Gamma})\bigr{)}:=\operatorname{Tr}( B(\Gamma_{1}))\cdots\operatorname{Tr}(B(\Gamma_{k})),\text{ and }B(\Gamma_{i})=B_{T}^{\varepsilon_{i}(1)}(\lambda_{c_{i}(1)})\cdots B_{T}^{ \varepsilon_{i}(M_{i})}(\lambda_{c_{i}(M_{i})}),\] and where \(\{B_{T}(\lambda_{\ell})\}_{\ell\in[L]}\) is a collection of independent Brownian motions on \(\operatorname{U}(N)\) started at the identity and run for time \(T>0\). 
We also define
\[\mathcal{C}=\bigcup_{\ell\in[L]}\binom{c^{-1}(\ell)}{2}=\{(m,m^{*}):m<m^{*}\text{ and }c(m)=c(m^{*})\},\]
and
\[\mathcal{D}_{T}=\bigsqcup_{(m,m^{*})\in\mathcal{C}}[0,T],\]
equipped with some parametrizing bijection \(\eta:\mathcal{C}\times[0,1]\to\mathcal{D}_{T}\).5 Given a point \(x\in\mathcal{D}_{T}\), let \(\mathfrak{l}(x)\in\mathcal{C}\) be the index of the interval which contains \(x\). We now consider the Poisson point process \(\Sigma\) on \(\mathcal{D}_{T}\) with intensity given by the Lebesgue measure. Equivalently, \(\Sigma\) has the same law as \(\Sigma_{\infty}\cap\mathcal{D}_{T}\), where \(\Sigma_{\infty}\) is the Poisson point process on
\[\mathcal{D}_{\infty}=\bigsqcup_{(m,m^{*})\in\mathcal{C}}[0,\infty)\]
with intensity given by the Lebesgue measure. In other words, \(\Sigma_{\infty}\) is a disjoint union of i.i.d. rate 1 Poisson processes on \([0,\infty)\).

Footnote 5: The bijection \(\eta\) is only to record the location of points. In [20], the interval \([0,T_{c(m)}]\) is identified as the interior of each loop (so that \(\eta\) is a space-filling curve) for more geometric interpretation, but the Lebesgue measures are identical in the end.

As we previously alluded to, expectations of Unitary Brownian motion may be represented by a certain diagram which is obtained from a Poisson process on \(\mathcal{D}_{T}\). To begin to make this statement precise, in the following definition, we describe how to associate a diagram to a given collection of points of \(\mathcal{D}_{T}\).

**Definition 2.1** (Strand diagram).: Let \(\Gamma=\lambda_{c(1)}^{\varepsilon(1)}\cdots\lambda_{c(M)}^{\varepsilon(M)}\) be a word on \(\{\lambda_{1},\cdots,\lambda_{L}\}\) and \(\Sigma\) be a collection of points in \(\mathcal{D}_{T}\). Then \(\eta^{-1}(\Sigma)\) is a collection of points \(((m,m^{*}),t)\in\mathcal{C}\times[0,1]\) for \((m,m^{*})\in\mathcal{C}\) and \(t\in[0,1]\). Let \(n_{\ell}=|c^{-1}(\ell)|\) for each \(\ell\in[L]\). The **strand diagram of \((\Gamma,\Sigma)\)** is an array of right- or left-directed arrows, each of which is identified with the unit interval \([0,1]\), placed as follows.

* There are \(L\) columns and each column is labelled by \(\lambda_{\ell}\) for \(\ell\in[L]\);
* The column labeled by \(\lambda_{\ell}\) consists of a stack of \(n_{\ell}\) unit-length arrows, each of which corresponds to an element of \(c^{-1}(\ell)\);
* If an arrow corresponds to \(m\in c^{-1}(\ell)\), it is right-directed (resp. left-directed) if \(\varepsilon(m)=1\) (resp. \(\varepsilon(m)=-1\));
* The end of the arrow corresponding to \(m\) is connected to the origin of the arrow corresponding to \(m+1\), modulo \(M\);
* For each point \(((m,m^{*}),t)\in\eta^{-1}(\Sigma)\), if \(\varepsilon(m)\varepsilon(m^{*})=1\), we insert a green crossing (called the "same-direction swap") on the two arrows corresponding to \(m\) and \(m^{*}\) at location \(t\in[0,1]\). Otherwise, we put a blue double bar (called the "opposite-direction swap") on the two arrows corresponding to \(m\) and \(m^{*}\) at location \(t\in[0,1]\).

In general, if \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) is a collection of words \(\Gamma_{i}\) on \(\{\lambda_{1},\cdots,\lambda_{L}\}\), we define the strand diagram of \((\boldsymbol{\Gamma},\Sigma)\) as a collection of strand diagrams of \((\Gamma_{1},\Sigma_{1}),\cdots,(\Gamma_{k},\Sigma_{k})\) where \(\Sigma=\bigsqcup_{i=1}^{k}\Sigma_{i}\), with the same labelled columns. See Figure 5 for an example.
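To make the sampling step concrete, the following minimal Python sketch (our own illustration; the function names are ad hoc and not taken from [20]) draws the Poisson data \(\Sigma\) for a single word and records, for each sampled point, whether it gives a same-direction or an opposite-direction swap.

```python
import random

def poisson(lam, rng=random):
    """Sample Poisson(lam) by counting rate-1 exponential inter-arrival times."""
    total, k = 0.0, 0
    while True:
        total += rng.expovariate(1.0)
        if total > lam:
            return k
        k += 1

def sample_strand_diagram_data(c, eps, T, rng=random):
    """Sample the swap data of Definition 2.1 for a single word.

    c[m]   : letter index of the m-th symbol of the word
    eps[m] : +1 or -1, the exponent of the m-th symbol
    T      : time horizon

    Returns a list of triples ((m, m_star), t, kind), one per sampled point,
    with 0 <= t <= T and kind in {"same", "opposite"}.
    """
    M = len(c)
    # The index set C: pairs m < m* of positions carrying the same letter.
    pairs = [(m, ms) for m in range(M) for ms in range(m + 1, M) if c[m] == c[ms]]
    points = []
    for (m, ms) in pairs:
        for _ in range(poisson(T, rng)):          # rate-1 Poisson process on [0, T]
            t = rng.uniform(0.0, T)
            kind = "same" if eps[m] * eps[ms] == 1 else "opposite"
            points.append(((m, ms), t, kind))
    return points

# Example: the word a b a^{-1} b^{-1} on two letters, with T = 1.5.
print(sample_strand_diagram_data([0, 1, 0, 1], [1, 1, -1, -1], 1.5))
```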
We define a CW-complex from a strand diagram as in Figure 5. Each word \(\Gamma_{i}\) can be represented by a regular polygon with unit-length arrows (preserving the orientation), and each same-direction swap or opposite-direction swap corresponds to a path connecting two arrows at the specified location in the strand diagram. As a result, we have \(k\) 2-cells (one for each polygon), \(M+3|\Sigma|\) 1-cells, and \(M+2|\Sigma|\) 0-cells, where \(M=n_{1}+\cdots+n_{L}\). Then there exists a _closed_ surface with the minimum genus constructed by adding \(F\) extra 2-cells, that is, by following the 1-cells and adding a 2-cell whenever they form a cycle. (Equivalently, it can be viewed as a _ribbon graph_.) By Euler's formula, the number \(F\) of extra 2-cells determines the minimum genus, that is \(\chi=(M+2|\Sigma|)-(M+3|\Sigma|)+(k+F)=k+F-|\Sigma|\). We define the Euler characteristic \(\chi\) of the strand diagram as the Euler characteristic of this surface with the minimum genus.

We now give a precise statement of how Unitary Brownian motion expectations reduce to certain diagrammatic sums. We quote the following result from [12].

**Lemma 2.2** (Expected trace as Poisson sums [12]).: Let \(\mathbf{\Gamma}\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\) and \(T>0\). Let \(\Sigma\) be the Poisson point process on \(\mathcal{D}_{T}\). Consider the strand diagram \(S\) for \((\mathbf{\Gamma},\eta^{-1}(\Sigma))\). Then
\[\mathbb{E}\big{[}\mathrm{Tr}(B(\mathbf{\Gamma}))\big{]}=\exp\left(-\frac{1}{2}\sum_{m=1}^{M}T+\sum_{(m,m^{*})\in\mathcal{C}}T\right)\mathbb{E}\Big{[}\varepsilon(\Sigma)(-1)^{|\Sigma|}N^{-k+\chi(S)}\Big{]}.\]

Lemma 2.2 may be interpreted as follows. First, observe that for each individual letter \(\lambda_{\ell}\), the portion of the strand diagram corresponding to \(\lambda_{\ell}\) may be thought of as a matching on \([2n_{\ell}]\), see Figure 6. In order to compute the Euler characteristic \(\chi(S)\) of a given strand diagram \(S\), it suffices to give the partitions \(\pi_{1},\ldots,\pi_{L}\) of \([2n_{1}],\ldots,[2n_{L}]\), respectively. In particular, the number of vertices \(V(S)\) is precisely the number of components of the diagram given by combining the exterior connections (which we have been drawing as dashed red lines) with the interior connections specified by \(\pi=(\pi_{1},\ldots,\pi_{L})\). Let \(\#\mathrm{comp}(\mathbf{\Gamma},\pi)\) be the number of components of the diagram arising from \(\mathbf{\Gamma},\pi\). Define also
\[w_{T}(\pi):=\exp\bigg{(}\sum_{\ell\in[L]}\bigg{(}\binom{n_{\ell}}{2}-n_{\ell}\bigg{)}T\bigg{)}\mathbb{E}\big{[}\varepsilon(\Sigma)(-1/N)^{|\Sigma|}1(\pi(S)=\pi)\big{]}, \tag{2.1}\]
which is interpreted as the partition function of all point configurations which result in the collection of partitions \(\pi\).

Figure 6: **Left:** The same strand diagram as Figure 5 but with ends of arrows labelled. By following all swaps, each row of the strand diagram defines a matching on \([2n_{\ell}]\) for \(\ell=1,2,3\). **Right:** The corresponding CW complex picture with labels. It is straightforward that the number of components in the left picture is exactly the number of faces in this picture.

From these considerations, combined with Lemma 2.2, we have the following.

**Lemma 2.3**.: Let \(\mathbf{\Gamma}\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Let \(\Sigma\) be the Poisson point process on \(\mathcal{D}_{T}\). Consider the strand diagram \(S\) for \((\mathbf{\Gamma},\eta^{-1}(\Sigma))\).
Then
\[\mathbb{E}\big{[}\mathrm{Tr}(B_{T}(\mathbf{\Gamma}))\big{]}=\sum_{\pi=(\pi_{1},\ldots,\pi_{L})}w_{T}(\pi)N^{\#\mathrm{comp}(\mathbf{\Gamma},\pi)}. \tag{2.2}\]

Lemma 2.3 says that in order to compute the expectations of traces of words of Unitary Brownian motion, we may perform a weighted sum over all partitions of the corresponding strand diagram, where the weights are given by \(w_{T}(\pi)\), and the statistic we are averaging over is \(N\) raised to the number of components of the diagram made from the exterior connections specified by the collection of words \(\mathbf{\Gamma}\) and the interior connections specified by the collection of partitions \(\pi\). See also Figure 7 for a visualization.

We proceed to give a precise statement of Theorem 1.5, which says that we are able to obtain the \(T\to\infty\) limit of the right-hand side of (2.2). First, we make the following definition.

**Definition 2.4** (Balanced collection of words).: A collection of words \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\) is balanced if for each letter \(\lambda_{i}\), the number of times that \(\lambda_{i}\) appears in \(\mathbf{\Gamma}\) is equal to the number of times \(\lambda_{i}^{-1}\) appears in \(\mathbf{\Gamma}\).

Next, we describe a certain special set of partitions that plays a key role in the limiting formula. Suppose that \(\mathbf{\Gamma}\) is balanced. Then \(n_{\ell}\) is even for all \(\ell\in[L]\). Given a pair of bijections \(\sigma,\tau:[n_{\ell}/2]\to(n_{\ell}/2:n_{\ell}]\), we may obtain a partition \([\sigma\ \tau]\) of \([2n_{\ell}]\) as in Figure 8. Clearly, the set of all partitions that arise this way is a strict subset of the set of all partitions of \([2n_{\ell}]\). Observe that \(\sigma\tau^{-1}:[n_{\ell}/2]\to[n_{\ell}/2]\) is a bijection, and thus we may view \(\sigma\tau^{-1}\in\mathrm{S}_{n_{\ell}/2}\). It turns out that as \(T\to\infty\), these are the only partitions that have a non-vanishing weight. This is the content of the following theorem. Its proof is the subject of Section 4.

**Theorem 2.5**.: _Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{M})\) be a balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Then_
\[\lim_{T\to\infty}\mathbb{E}\big{[}\mathrm{Tr}(B_{T}(\mathbf{\Gamma}))\big{]}=\sum_{\pi=([\sigma_{\ell}\ \tau_{\ell}],\ \ell\in[L])}\bigg{(}\prod_{\ell\in[L]}\mathrm{Wg}_{N}(\sigma_{\ell}\tau_{\ell}^{-1})\bigg{)}N^{\#\mathrm{comp}(\mathbf{\Gamma},\pi)}.\]
_Here, the sum on the right-hand side is over \(\pi\) which can be obtained from pairs of bijections \(\sigma_{\ell},\tau_{\ell}:[n_{\ell}/2]\to(n_{\ell}/2:n_{\ell}]\), \(\ell\in[L]\)._

Recall we defined the Weingarten function \(\mathrm{Wg}_{N}\) when \(N\) is large in Section 1.4.1 (in particular, see (1.7)). We will give the definition for general values of \(N\) in Section 2.2 - see Definition 2.21.

_Remark 2.6_.: In Theorem 2.5, it suffices to only look at balanced \(\mathbf{\Gamma}\), because if \(\mathbf{\Gamma}\) is not balanced, then \(\lim_{T\to\infty}\mathbb{E}[\mathrm{Tr}(B_{T}(\mathbf{\Gamma}))]=0\), due to known properties of Haar integration.

Since Unitary Brownian motion converges in distribution to the normalized Haar measure in the large-time limit, the combination of Lemma 2.3 and Theorem 2.5 allows us to obtain the Weingarten calculus as a corollary, which we state in the following form.
Similar to existing notation, we denote \(\mathrm{Tr}(U(\boldsymbol{\Gamma})):=\mathrm{Tr}(U(\Gamma_{1}))\cdots\mathrm{Tr} (U(\Gamma_{k}))\), where \(U(\Gamma_{i})\) is obtained by substituting an independent Haar-distributed Unitary matrix for each letter. **Corollary 2.7**.: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{M})\) be a balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Then \[\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}))]=\sum_{\pi=([\sigma_{\ell}\ \tau_{\ell}],\ell\in[L])}\bigg{(}\prod_{\ell\in L}\mathrm{Wg}_{N}(\sigma_{\ell }\tau_{\ell}^{-1})\bigg{)}N^{\#\mathrm{comp}(\boldsymbol{\Gamma},\pi)}.\] Here, the sum in the right hand side is over \(\pi\) which can be obtained from pairs of bijections \(\sigma_{\ell},\tau_{\ell}:[n_{\ell}/2]\to(n_{\ell}/2:n_{\ell}]\), \(\ell\in[L]\). This corollary has a similar interpretation as Lemma 2.3 in terms of a weighted sum, with the previous weights \(w_{T}(\pi_{\ell})\) replaced by Weingarten weights \(\operatorname{Wg}_{N}(\sigma_{\ell}\tau_{\ell}^{-1})\). _Remark 2.8_.: Having understood the statement of Corollary 2.7, the reader who wants to understand the surface-sum representation of Wilson loop expectations as quickly as possible may now skip ahead to Section 3. The remainder of the material in this section is only needed starting from Section 4. We finish off this subsection with an instructive example which illustrates how one may compute \(\#\mathrm{comp}(\mathbf{\Gamma},\pi)\) for a given \(\mathbf{\Gamma},\pi\). _Example 2.9_.: Suppose our letters are \(\{A,B\}\), and our words are \(\Gamma_{1}=\Gamma_{2}=ABA^{-1}B^{-1}\). Since each of \(A,B,A^{-1},B^{-1}\) appears twice total in \(\Gamma_{1},\Gamma_{2}\), we start with the diagram in the left of Figure 9. Notice that we have labeled the vertices of each strand by a number, which will come in handy later when we want to represent the number of connected components of the resulting diagram after including the interior and exterior connections. The choice of words affects the exterior connections of the strand diagram. For instance, in our current example, we would include the exterior connections as illustrated in the right of Figure 9. Now ignoring for the moment the exterior connections, suppose we have pairs of matchings of the strand diagrams as indicated in the left Figure 10. Now the specific statistic we need to compute is the number of connected components of the following diagram, which is essentially obtained by including both the interior (blue) and exterior (red) connections - see the right of Figure 10. We now make use of the vertex labels. In general, the various connected components of the diagram may be indexed by the cycles of a permutation. In this case, the permutation is on 16 elements. The cycles are obtained by starting at a given vertex and alternately following the dashed red lines and solid blue lines. For instance, in the above figure, we get a single cycle: (1 11 10 7 5 15 14 3 2 12 9 8 6 16 13 4), which implies that there is a single connected component. ### Representation theory and other preliminaries The strand diagrams of Section 2.1 may naturally be viewed as elements of the so-called Brauer algebra, which is a well-studied object in mathematics. We proceed to introduce the Brauer algebra because this will form a convenient language when phrasing our proofs. **Definition 2.10** (Brauer algebra).: For \(n\geqslant 1\), let \(\mathcal{M}(n)\) be the space of matchings of \([2n]\), i.e. 
partitions of \([2n]\) into two-element sets.

Figure 10: Left: blue interior connections. Right: combining the interior and exterior connections.

We will view matchings pictorially as in Figure 11. We refer to pairs that involve both a left and right element as "left-right pairings", and pairs that involve two left elements or two right elements as "same-side pairings". In the above picture, \(\{1,3\},\{6,8\}\) are same-side pairings, while \(\{2,9\},\{4,10\},\{5,7\}\) are left-right pairings.

Let \(\mathcal{B}_{n}\) be the vector space of \(\mathbb{C}\)-valued functions on \(\mathcal{M}(n)\). We will often view elements \(f\in\mathcal{B}_{n}\) as formal sums \(f=\sum_{\pi}f(\pi)\pi\), where \(\pi\) ranges over \(\mathcal{M}(n)\). Fix \(\zeta\in\mathbb{C}\). We may define a product of matchings \(\pi_{1},\pi_{2}\in\mathcal{M}(n)\) as in Figure 13. In words, we put \(\pi_{1},\pi_{2}\) together side-by-side, and then follow the lines to obtain a new matching. Any closed loops incur a factor of \(\zeta\). Observe that this product induces a product on \(\mathcal{B}_{n}\) which turns \(\mathcal{B}_{n}\) into an algebra. Explicitly, if we represent \(f,g\in\mathcal{B}_{n}\) by formal linear combinations \(f=\sum_{\pi_{1}\in\mathcal{M}(n)}f(\pi_{1})\pi_{1}\), \(g=\sum_{\pi_{2}\in\mathcal{M}(n)}g(\pi_{2})\pi_{2}\), then the product \(fg\) is given by:
\[fg=\sum_{\pi_{1},\pi_{2}\in\mathcal{M}(n)}f(\pi_{1})g(\pi_{2})\pi_{1}\pi_{2}.\]
We refer to \(\mathcal{B}_{n}\) as the Brauer algebra.

_Remark 2.11_.: Typically, elements of \(\mathcal{B}_{n}\) are drawn as top-bottom matchings, yet here we have chosen to draw them as left-right matchings.

In what follows, we always take \(\zeta=N\). This is the choice of \(\zeta\) which relates multiplication in the Brauer algebra with expectations of Unitary Brownian motion: note that the factor of \(N\) that we incur when we form a loop exactly matches the factor of \(N\) that we incur in the strand diagram when we add another connected component.

We specify a norm on \(\mathcal{B}_{n}\), which will enable us to later talk about convergence in \(\mathcal{B}_{n}\).

**Definition 2.12** (Norm on \(\mathcal{B}_{n}\)).: For \(f\in\mathcal{B}_{n}\), define \(\|f\|\) to be the \(L^{1}\) norm, i.e. \(\|f\|:=\sum_{\pi\in\mathcal{M}(n)}|f(\pi)|\).

Next, we define a certain sub-algebra of the Brauer algebra, called the walled Brauer algebra. This arises naturally in computing expectations of Unitary Brownian motion, as it turns out that the strand diagrams of Section 2.1 are not only elements of the Brauer algebra, but even more they are elements of the walled Brauer algebra.

Figure 13: Example of multiplication in the Brauer algebra.

**Definition 2.13** (Walled Brauer algebra).: Let \(n,m\geq 1\). Let \(\mathcal{M}(n,m)\subseteq\mathcal{M}(n+m)\) be the subset of matchings of \([2(n+m)]\) such that every same-side pairing is between a top \(n\) element and bottom \(m\) element, while every left-right pairing is between two top \(n\) elements or two bottom \(m\) elements. Pictorially, one imagines a dashed line separating the top \(n\) elements from the bottom \(m\) elements, and the only pairings which can cross this dashed line are same-side pairings. See Figure 12 for an example when \(n=m=3\). The walled Brauer algebra \(\mathcal{B}_{n,m}\) is the sub-algebra of \(\mathcal{B}_{n+m}\) consisting of functions \(f\in\mathcal{B}_{n+m}\) which are supported on the matchings \(\mathcal{M}(n,m)\).
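As an illustration of the product just defined, here is a minimal Python sketch (our own; the encoding of matchings and all names are ad hoc, not from the paper). It places two matchings side by side, follows the lines, and returns the loop factor \(\zeta^{\#\mathrm{loops}}\) together with the resulting matching.

```python
def brauer_multiply(pi1, pi2, zeta):
    """Multiply two matchings in the Brauer algebra (cf. Definition 2.10).

    A matching of [2n] is given as a list of pairs of labels ("L", i) / ("R", i),
    i = 1..n.  The right column of pi1 is identified with the left column of pi2,
    lines are followed, and each closed loop contributes a factor of zeta.
    """
    p1, p2 = {}, {}
    for (s, i), (t, j) in pi1:        # pi1 lives on layers A (outer-left) and B (middle)
        u, v = ("A" if s == "L" else "B", i), ("A" if t == "L" else "B", j)
        p1[u], p1[v] = v, u
    for (s, i), (t, j) in pi2:        # pi2 lives on layers B (middle) and C (outer-right)
        u, v = ("B" if s == "L" else "C", i), ("B" if t == "L" else "C", j)
        p2[u], p2[v] = v, u

    step = lambda v, use_p1: (p1 if use_p1 else p2)[v]
    relabel = lambda u: ("L" if u[0] == "A" else "R", u[1])
    visited, loops, result = set(), 0, []

    # Open strands: walk from one outer endpoint to the other, alternating pi1/pi2 edges.
    for start in [v for v in list(p1) + list(p2) if v[0] in ("A", "C")]:
        if start in visited:
            continue
        v, use_p1 = start, start in p1
        visited.add(v)
        while True:
            v = step(v, use_p1)
            visited.add(v)
            if v[0] in ("A", "C"):
                break
            use_p1 = not use_p1
        result.append((relabel(start), relabel(v)))

    # Middle vertices not yet visited lie on closed loops.
    for v in [u for u in p1 if u[0] == "B"]:
        if v in visited:
            continue
        loops += 1
        w, use_p1 = v, True
        while True:
            visited.add(w)
            w, use_p1 = step(w, use_p1), not use_p1
            if w == v and use_p1:
                break
    return zeta ** loops, result

# A matching with two same-side pairings, squared: the closed middle loop gives a factor zeta.
e = [(("L", 1), ("L", 2)), (("R", 1), ("R", 2))]
print(brauer_multiply(e, e, 3))   # (3, [(('L', 1), ('L', 2)), (('R', 1), ('R', 2))])
```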
One may check that given two matchings \(\pi_{1},\pi_{2}\in\mathcal{M}(n,m)\), their product \(\pi_{1}\pi_{2}\) is proportional to a matching in \(\mathcal{M}(n,m)\). This implies that the product on \(\mathcal{B}_{n+m}\) descends to a product on \(\mathcal{B}_{n,m}\). Observe that \(\mathrm{S}_{n}\) can be embedded in \(\mathcal{M}(n)\subseteq\mathcal{B}_{n}\) as follows. Given \(\sigma\in\mathrm{S}_{n}\), we can view it as an element of \(\mathcal{M}(n)\) as in Figure 14. We may also embed \(\mathrm{S}_{n}\) into \(\mathcal{M}(n,n)\subseteq\mathcal{B}_{n,n}\) as follows. by connecting the top \(n\) vertices on the left and right as we did to embed \(\mathrm{S}_{n}\) into \(\mathcal{B}_{n}\), and then connecting the bottom \(n\) vertices on the left and right by straight lines. Next, we define the following notation for certain special elements of the walled Brauer algebra \(\mathcal{B}_{n,m}\). These correspond to the same-direction and opposite-direction swaps introduced in Section 2.1. **Definition 2.14**.: Given \(1\leq i<j\leq n\) or \(n+1\leq i<j\leq n+m\), define \((i\ j)\in\mathcal{B}_{n,m}\) to be the pairing of \([2(n+m)]\) which swaps the \(i,j\) vertices with their corresponding versions on the right, while keeping the other vertices fixed. This is best explained by the example in Figure 15 when \(n=m=3\). Given \(1\leq i\leq n\) and \(n+1\leq j\leq n+m\), let \(\langle i\ j\rangle\) be the pairing which has a same-side pairing between \(i,j\) on the left, as well as their corresponding versions on the right, while keeping the other vertices fixed. See Figure 16 for an example when \(n=m=3\). Next, we define the following notation for another set of special elements of the walled Brauer algebra \(\mathcal{B}_{n,n}\). These elements are matchings which have no left-right pairings. **Definition 2.15**.: Let \(\sigma,\tau:[n]\rightarrow(n:2n]\) be bijections. Define \([\sigma\ \tau]\in\mathcal{B}_{n,n}\) to be the element of the walled Brauer algebra which is given by \(\sigma\) on the left and \(\tau\) on the right. See the Figure 17 for an example when \(n=3\). Note the particular way we have chosen to label the vertices in Figure 17. From now on, this is how we will label vertices when working with the walled Brauer algebra \(\mathcal{B}_{n,n}\). Ultimately the labeling will not matter, but we have chosen to label in this way to better relate to the Jucys-Murphy elements, which we next define. **Definition 2.16** (Jucys-Murphy elements).: For \(n\geq 2\), define the Jucys-Murphy element \(J_{n}:=(1\ n)+\cdots+(n-1\ n)\in\mathbb{C}[S_{n}]\). We also view \(J_{n}\in\mathbb{C}[S_{m}]\) for any \(n\leq m\). Define \(J_{1}:=0\). _Remark 2.17_.: One may show that the Jucys-Murphy elements commute with each other. In the following, we will also view \(J_{1},\ldots,J_{n}\) as elements of \(\mathcal{B}_{n,n}\), by using the previously mentioned embedding of \(\mathrm{S}_{n}\subseteq\mathcal{B}_{n,n}\). We will also need to refer to Jucys-Murphy elements which act on bottom elements rather than top elements. We define this next. **Definition 2.18**.: Let \(n,m\geq 1\). Define \(J^{\prime}_{1},\ldots,J^{\prime}_{m}\in\mathcal{B}_{n,m}\) by \[J^{\prime}_{k}:=(n+1\ n+k)+\cdots+(n+k-1\ n+k),\ \ k\in[m].\] **Definition 2.19** (Norm on group algebra).: For \(f\in\mathbb{C}[\mathrm{S}_{n}]\), we define \(\|f\|\) to be the \(L^{1}\) norm, i.e. \(\|f\|:=\sum_{\pi\in\mathrm{S}_{n}}|f(\pi)|\). 
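As a quick sanity check of Remark 2.17 (an illustrative Python sketch of our own; the helper names are ad hoc), one can verify directly in \(\mathbb{C}[\mathrm{S}_{4}]\) that the Jucys-Murphy elements commute.

```python
def compose(p, q):
    """Group product in S_n: (p*q)(i) = p(q(i)); a permutation is a tuple with p[i-1] = p(i)."""
    return tuple(p[q[i - 1] - 1] for i in range(1, len(p) + 1))

def transposition(i, j, n):
    perm = list(range(1, n + 1))
    perm[i - 1], perm[j - 1] = j, i
    return tuple(perm)

def algebra_product(f, g):
    """Product in C[S_n] of f, g given as dicts {permutation tuple: coefficient}."""
    h = {}
    for p, a in f.items():
        for q, b in g.items():
            r = compose(p, q)
            h[r] = h.get(r, 0) + a * b
    return h

def jucys_murphy(k, n):
    """J_k = (1 k) + ... + (k-1 k) as an element of C[S_n] (Definition 2.16)."""
    return {transposition(i, k, n): 1 for i in range(1, k)}

n = 4
J = {k: jucys_murphy(k, n) for k in range(2, n + 1)}
for a in range(2, n + 1):
    for b in range(a + 1, n + 1):
        assert algebra_product(J[a], J[b]) == algebra_product(J[b], J[a])
print("J_2, ..., J_%d commute in C[S_%d]" % (n, n))
```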
Note that our norms on \(\mathbb{C}[\mathrm{S}_{n}]\) and on \(\mathcal{B}_{n}\) are compatible with our embeddings \(\mathbb{C}[\mathrm{S}_{n}]\subseteq\mathcal{B}_{n}\) and \(\mathbb{C}[\mathrm{S}_{n}]\subseteq\mathcal{B}_{n,n}\subseteq\mathcal{B}_{2n}\).

_Remark 2.20_.: With this definition of the norm, we have that \(\|fg\|\leq\|f\|\cdot\|g\|\) for \(f,g\in\mathbb{C}[\mathrm{S}_{n}]\). This implies that \(\|e^{f}\|\leq e^{\|f\|}\), which further implies that if \(\|f\|<1\), then
\[\int_{0}^{\infty}e^{-u(\mathrm{id}+f)}du\in\mathbb{C}[\mathrm{S}_{n}]\]
converges absolutely. Moreover, one has that
\[\int_{0}^{\infty}e^{-u(\mathrm{id}+f)}du=(\mathrm{id}+f)^{-1}.\]

Next, we discuss an alternate form of the Weingarten function \(\operatorname{Wg}_{N}\) which arises naturally in the proof of Theorem 2.5. First, suppose \(N\geq n\). The case of general \(N\) will be addressed a bit later. Then \(\operatorname{Wg}_{N}\in\mathbb{C}[\mathrm{S}_{n}]\) is the following inverse:
\[\operatorname{Wg}_{N}:=\bigg{(}\sum_{\sigma\in\mathrm{S}_{n}}N^{\#\text{cycles}(\sigma)}\sigma\bigg{)}^{-1}.\]
Jucys [10] proved the following identity:
\[\sum_{\sigma\in\mathrm{S}_{n}}N^{\#\text{cycles}(\sigma)}\sigma=(N+J_{n})\cdots(N+J_{1}). \tag{2.3}\]
Note that when \(N\geq n\), each \(N+J_{k}\) for \(k\in[n]\) is invertible because then \(\|J_{k}\|=k-1<N\) (recall Remark 2.20), with inverse given by:
\[(N+J_{k})^{-1}=\frac{1}{N}(\operatorname{id}+J_{k}/N)^{-1}=\frac{1}{N}\int_{0}^{\infty}e^{-u(\operatorname{id}+J_{k}/N)}du.\]
Since the \(J_{1},\dots,J_{n}\) commute with each other, we have that (as observed by [11])
\[\operatorname{Wg}_{N}=(N+J_{n})^{-1}\cdots(N+J_{1})^{-1},\quad\text{when $N\geq n$}. \tag{2.4}\]
The reason why we introduce this formula for the Weingarten function is because the terms \((N+J_{k})^{-1}\), \(k\in[n]\) will appear naturally in our argument.

Next, we discuss the definition of the Weingarten function in case of general \(N\). We follow [12].

**Definition 2.21** (Weingarten function).: Let \(N,n\geq 1\). Define \(\operatorname{Wg}_{N}\in\mathbb{C}[\mathrm{S}_{n}]\) (as usual, we omit the dependence on \(n\)) by
\[\operatorname{Wg}_{N}(\sigma):=\frac{1}{n!}\sum_{\begin{subarray}{c}\lambda\vdash n\\ \ell(\lambda)\leq N\end{subarray}}\Big{[}\chi_{\lambda}(\operatorname{id})\chi_{\lambda}(\sigma)\prod_{(i,j)\in\lambda}(N+j-i)^{-1}\Big{]}. \tag{2.5}\]
Here, \(\ell(\lambda)\) is the number of rows of \(\lambda\), i.e. the number of parts in the partition of \(n\) given by \(\lambda\). Compared with the formula (1.7) for \(N\geq n\), the only difference is in the restriction \(\ell(\lambda)\leq N\) when summing over Young diagrams \(\lambda\). Note that when \(N\geq n\), every Young diagram with \(n\) boxes has at most \(n\leq N\) rows, and thus the definition (2.5) reduces to (1.7) if \(N\geq n\).

#### 2.2.1 Additional technicalities for the small \(N\) case

The following material is only needed to prove Theorem 2.5 in the case \(N<2\max_{\ell\in[L]}n_{\ell}\). We encourage the reader on a first reading to skip this subsection and continue on to Section 4 to first read over the proof in the case \(N\geq 2\max_{\ell\in[L]}n_{\ell}\), which already contains the main probabilistic ideas. The reader may come back to this subsection once they are ready to read Section 4.2, where the results introduced here will be needed.

Let \(e_{1},\ldots,e_{N}\) denote the standard basis of \(\mathbb{C}^{N}\).
The tensor space \((\mathbb{C}^{N})^{\otimes n}\) has a basis given by \((e_{i},i=(i_{1},\ldots,i_{n})\in[\![N]\!]^{n})\), where \(e_{i}:=e_{i_{1}}\otimes\cdots\otimes e_{i_{n}}\). The space \((\mathbb{C}^{N})^{\otimes n}\) has a natural inner product which when restricted to basis elements is given by
\[\langle e_{i},e_{j}\rangle=\delta_{ij}=\delta_{i_{1}j_{1}}\cdots\delta_{i_{n}j_{n}}.\]
Let \(M\in\operatorname{End}((\mathbb{C}^{N})^{\otimes n})\). One may think of \(M\) as an \(N^{n}\times N^{n}\) matrix, whose matrix entries are given by:
\[M_{ij}=\langle e_{i},Me_{j}\rangle,\ \ i,j\in[\![N]\!]^{n}.\]
In particular, if \(M_{1},\ldots,M_{n}\in\operatorname{End}(\mathbb{C}^{N})\), then the matrix entries of the tensor product \(M=M_{1}\otimes\cdots\otimes M_{n}\) are given by
\[M_{ij}=\langle e_{i},(M_{1}\otimes\cdots\otimes M_{n})e_{j}\rangle=\langle e_{i_{1}}\otimes\cdots\otimes e_{i_{n}},(M_{1}e_{j_{1}})\otimes\cdots\otimes(M_{n}e_{j_{n}})\rangle=\langle e_{i_{1}},M_{1}e_{j_{1}}\rangle\cdots\langle e_{i_{n}},M_{n}e_{j_{n}}\rangle=(M_{1})_{i_{1}j_{1}}\cdots(M_{n})_{i_{n}j_{n}},\]
i.e. the product of the corresponding matrix entries of \(M_{1},\ldots,M_{n}\).

**Definition 2.22**.: Let \(N,n\geq 1\). We define a representation \(\rho_{+}\) of \(\mathcal{B}_{n}\) as follows. Given a pairing \(\pi\) of \([2n]\), define \(\rho_{+}(\pi)\) to be the linear map in \(\operatorname{End}((\mathbb{C}^{N})^{\otimes n})\) whose matrix entries are given by:
\[(\rho_{+}(\pi))_{(i_{1},\ldots,i_{n}),(i_{2n},\ldots,i_{n+1})}:=\prod_{\{a,b\}\in\pi}\delta^{i_{a}i_{b}}.\]
For notational brevity, we omit the dependence of \(\rho_{+}\) on \(N,n\). In the following, we mostly apply \(\rho_{+}\) to elements of \(\mathcal{B}_{n,n}\subseteq\mathcal{B}_{2n}\).

The way one visualizes this definition is as follows. Suppose \(n=5\) and we are given the pairing displayed in Figure 18. Then the matrix entry corresponding to indices \((i_{1},\ldots,i_{5}),(j_{1},\ldots,j_{5})\) is simply \(1\) if all constraints indicated by the pairing are satisfied (in this case, \(i_{1}=j_{2}\), \(i_{2}=j_{3}\), \(i_{3}=i_{5}\), \(i_{4}=j_{4}\), and \(j_{1}=j_{5}\)), and \(0\) otherwise.

Figure 18: Visualization of matrix entries of \(\rho_{+}(\pi)\).

_Remark 2.23_.: There is an alternative definition of \(\rho_{+}\) that one typically sees (e.g. [1]). First, let \(E_{ij}\in\operatorname{End}(\mathbb{C}^{N})\) be the elementary matrix which has a \(1\) in its \((i,j)\) entry and zeros everywhere else. We may write \(E_{ij}=e_{i}e_{j}^{T}\). Then
\[\rho_{+}(\pi)=\prod_{\{a,b\}\in\pi}\delta^{i_{a}i_{b}}E_{i_{1}i_{2n}}\otimes E_{i_{2}i_{2n-1}}\otimes\cdots\otimes E_{i_{n}i_{n+1}}.\]
Here and in the following, repeated indices are implicitly summed over.
To see why this definition is equivalent, we may compute an arbitrary matrix entry: \[\big{(}\rho_{+}(\pi)\big{)}_{(i_{1},\ldots,i_{n}),(i_{2n},\ldots, i_{n+1})} =\prod_{\{a,b\}\in\pi}\delta^{j_{a}j_{b}}\langle e_{i_{1}}\otimes \cdots e_{i_{n}},\big{(}E_{j_{1}j_{2n}}\otimes\cdots\otimes E_{j_{n}j_{n+1}} \big{)}\big{(}e_{i_{2n}}\otimes\cdots e_{i_{n+1}}\big{)}\rangle\] \[=\prod_{\{a,b\}\in\pi}\delta^{j_{a}j_{b}}\langle e_{i_{1}},e_{j_ {1}}e_{j_{2n}}^{T}e_{i_{2n}}\rangle\cdots\langle e_{i_{n}},e_{j_{n}}e_{j_{n+1} }^{T}e_{i_{n+1}}\rangle\] \[=\prod_{\{a,b\}\in\pi}\delta^{j_{a}j_{b}}\delta_{i_{1}j_{1}} \delta_{j_{2n}i_{2n}}\cdots\delta_{i_{n}j_{n}}\delta_{j_{n+1}i_{n+1}}\] \[=\prod_{\{a,b\}\in\pi}\delta^{i_{a}i_{b}}.\] Recalling that we may view \(\mathrm{S}_{n}\) as embedded in \(\mathcal{B}_{n}\), the restriction of the representation \(\rho_{+}\) to \(\mathrm{S}_{n}\) defines a representation of \(\mathrm{S}_{n}\). **Definition 2.24**.: Let \(N,n\geq 1\). Define the representation \(\rho:\mathbb{C}[\mathrm{S}_{n}]\to\operatorname{End}((\mathbb{C}^{N})^{ \otimes n})\) to be the restriction of \(\rho_{+}:\mathcal{B}_{n}\to\operatorname{End}((\mathbb{C}^{N})^{\otimes n})\) to \(\mathbb{C}[\mathrm{S}_{n}]\subseteq\mathcal{B}_{n}\). Again, we omit the dependence of \(\rho\) on \(N,n\) for notational brevity. _Remark 2.25_.: One may verify that \(\rho\) has the following explicit form on pure tensors: \[\rho(\sigma)(v_{1}\otimes\cdots\otimes v_{n})=v_{\sigma(1)}\otimes\cdots \otimes v_{\sigma(n)},\ \ \sigma\in\mathrm{S}_{n},\ \ v_{1},\ldots,v_{n}\in\mathbb{C}^{N}.\] In words, \(\rho(\sigma)\) acts by permutation of tensors. Next, we discuss how the formula (2.4) needs to be modified when \(N\) is general. First, recall that when \(N\geq n\), the Weingarten function may also be defined as the inverse of \(\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma\) in \(\mathbb{C}[\mathrm{S}_{n}]\). For general \(N\), this inverse may not exist. However, we quote the following result from [13], which says that the Weingarten function can still be interpreted as an inverse, in a suitable sense. **Lemma 2.26** (Section 2 of [13]).: Let \(N,n\geq 1\). We have that \(\rho\big{(}\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma\big{)}\) is invertible, with inverse given by \(\rho(\mathrm{Wg}_{N})\). _Remark 2.27_.: This is the whole point of introducing the representation \(\rho\), in that \(\rho(\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma)\) is always invertible as a matrix, even though \(\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma\) may not be invertible in \(\mathbb{C}[S_{n}]\). The simplest example of this difference is when \(N=1\) and \(n=2\), in which case \(\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma=\mathrm{id}+(1\ 2)\). Now clearly, \(\mathrm{id}+(1\ 2)\) is not invertible in \(\mathbb{C}[\mathrm{S}_{2}]\), because the inverse would be given in general by \(a\cdot\mathrm{id}+b\cdot(1\ 2)\), where \(a,b\in\mathbb{C}\) solve the following system of equations: \[a+b=1,\] \[a+b=0.\] On the other hand, when \(N=1\), the space \((\mathbb{C}^{N})^{\otimes n}\) is one-dimensional no matter the value of \(n\). On this space, both \(\rho(\mathrm{id})\) and \(\rho((1\ 2))\) are the identity operator. (Recall that \(\rho((1\ 2))(u\otimes v)=v\otimes u\). If \(u,v\in\mathbb{C}\), then \(v\otimes u=u\otimes v\), so that \(\rho((1,2))(u\otimes v)=u\otimes v\).) Thus \(\rho(\mathrm{id}+(1\ 2))\) acts as multiplication by \(2\), and thus \(\rho(\mathrm{id}+(1\ 2))^{-1}\) is multiplication by \(1/2\). 
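To make Lemma 2.26 and Remark 2.27 concrete, here is a small numerical sketch (our own illustration; all names are ad hoc). It builds \(\rho(\sigma)\) as an \(N^{n}\times N^{n}\) permutation-of-factors matrix, forms \(\rho\big{(}\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma\big{)}\), and inverts it; for \(N=3\), \(n=2\) the inverse reproduces \(\mathrm{Wg}_{N}(\mathrm{id})=1/(N^{2}-1)\) and \(\mathrm{Wg}_{N}((1\ 2))=-1/(N(N^{2}-1))\).

```python
import numpy as np
from itertools import permutations, product

def flat(idx, N):
    """Flatten a multi-index (i_1, ..., i_n) with entries in {0, ..., N-1}."""
    out = 0
    for i in idx:
        out = out * N + i
    return out

def rho(sigma, N):
    """Matrix of rho(sigma) on (C^N)^{(x)n}: permutation of tensor factors (Remark 2.25)."""
    n = len(sigma)
    mat = np.zeros((N ** n, N ** n))
    for idx in product(range(N), repeat=n):
        permuted = tuple(idx[sigma[a] - 1] for a in range(n))
        mat[flat(permuted, N), flat(idx, N)] = 1.0
    return mat

def num_cycles(sigma):
    seen, count = set(), 0
    for i in range(1, len(sigma) + 1):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = sigma[j - 1]
    return count

def gram(N, n):
    """rho(sum_sigma N^{#cycles(sigma)} sigma), invertible by Lemma 2.26."""
    total = np.zeros((N ** n, N ** n))
    for sigma in permutations(range(1, n + 1)):
        total += N ** num_cycles(sigma) * rho(sigma, N)
    return total

# Remark 2.27: for N = 1, n = 2 the operator is multiplication by 2 on a 1-dimensional space.
print(gram(1, 2))                                  # [[2.]]

# For N = 3 >= n = 2 the inverse recovers the Weingarten values.
G_inv = np.linalg.inv(gram(3, 2))
print(G_inv[flat((0, 1), 3), flat((0, 1), 3)])     # Wg_N(id)    = 1/(N^2-1)     = 0.125
print(G_inv[flat((0, 1), 3), flat((1, 0), 3)])     # Wg_N((1 2)) = -1/(N(N^2-1)) = -0.041666...
```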
Similarly, we next show that the elements \(N+J_{k}\), \(k\in[n]\) are always invertible, if we apply the representation \(\rho_{+}\).

**Lemma 2.28**.: Let \(N,n\geq 1\). Let \(\rho_{+}:\mathcal{B}_{n,n}\to\mathrm{End}((\mathbb{C}^{N})^{\otimes 2n})\) be the representation6 from Definition 2.22. For all \(k\in[n]\), all eigenvalues of \(\rho_{+}(J_{k})\) are at least \(-N+1\).

Footnote 6: Recall that \(\mathcal{B}_{n,n}\subseteq\mathcal{B}_{2n}\). The representation \(\rho_{+}\) is originally defined on \(\mathcal{B}_{2n}\), here we restrict it to \(\mathcal{B}_{n,n}\).

Proof.: Due to our embedding of \(\mathrm{S}_{n}\) into \(\mathcal{B}_{n,n}\), \(\rho_{+}(J_{k})\) acts as the identity on the last \(n\) coordinates of \((\mathbb{C}^{N})^{\otimes 2n}\). On the first \(n\) coordinates, \(\rho_{+}(J_{k})\) acts as \(\rho(J_{k})\) (as defined in Definition 2.24). Thus, it suffices to show that all eigenvalues of \(\rho(J_{k})\) are at least \(-N+1\). This follows from the combination of two classic results in the representation theory of the symmetric group:

1. By Schur-Weyl duality (see e.g. [1, Theorem 2.1]), we have that in the decomposition of \(\rho\) into irreps, only those irreps corresponding to Young diagrams \(\lambda\) with at most \(N\) rows (i.e. \(\ell(\lambda)\leq N\)) appear.
2. Let \(\rho^{\lambda}\) be the irrep corresponding to \(\lambda\). The eigenvalues of \(\rho^{\lambda}(J_{k})\) are explicitly known: for each standard Young tableau with shape \(\lambda\), let \((i,j)\) be the coordinates of the box which contains the integer \(k\). Here, \(i\) is the row index and \(j\) the column index. Then \(\rho^{\lambda}(J_{k})\) has an eigenvalue equal to \(j-i\). Moreover, all eigenvalues of \(\rho^{\lambda}(J_{k})\) arise this way. This result was proven by Jucys [13] and independently later by Murphy [14].

The second fact implies that every eigenvalue of \(\rho^{\lambda}(J_{k})\) is at least \(-\ell(\lambda)+1\), since the box with the most negative value of \(j-i\) is \((\ell(\lambda),1)\). Combining this with the first fact, the desired result now follows.

This lemma shows that for all \(k\in[n]\), all eigenvalues of \(\rho_{+}(N+J_{k})\) are at least \(1\), and thus \(\rho_{+}(N+J_{k})\) is invertible. Moreover, we have the following lemma, which generalizes (2.4) to the case of general \(N\).

**Lemma 2.29**.: Let \(N,n\geq 1\). We have that
\[\rho_{+}(\mathrm{Wg}_{N})=\rho_{+}(N+J_{n})^{-1}\cdots\rho_{+}(N+J_{1})^{-1}.\]

Proof.: Due to our embedding of \(\mathrm{S}_{n}\) into \(\mathcal{B}_{n,n}\), for any element \(f\in\mathbb{C}[\mathrm{S}_{n}]\), the matrix \(\rho_{+}(f)\in\mathrm{End}((\mathbb{C}^{N})^{\otimes 2n})\) acts as the identity on the last \(n\) coordinates of \((\mathbb{C}^{N})^{\otimes 2n}\). On the first \(n\) coordinates, \(\rho_{+}(f)\) acts as \(\rho(f)\in\mathrm{End}((\mathbb{C}^{N})^{\otimes n})\). Thus it suffices to prove the claimed identity with \(\rho_{+}\) replaced by \(\rho\). Since \(\rho\) is a representation, we have that (using that the Jucys-Murphy elements commute with each other and applying (2.3) in the final identity)
\[\rho(N+J_{n})^{-1}\cdots\rho(N+J_{1})^{-1}=\rho((N+J_{1})\cdots(N+J_{n}))^{-1}=\rho\bigg{(}\sum_{\sigma}N^{\#\mathrm{cycles}(\sigma)}\sigma\bigg{)}^{-1}.\]
The desired result now follows by Lemma 2.26.

In the course of proving Theorem 2.5 for general values of \(N\), we will also need the following technical lemma.

**Lemma 2.30**.: Let \(N,n\geq 1\).
All eigenvalues of \[\frac{1}{N}\rho(J_{n}+\cdots+J_{1})\in\operatorname{End}((\mathbb{C}^{N})^{ \otimes n})\] are at least \(-\frac{n}{2}+\frac{1}{2}\). More precisely, if \(n=mN+r\) with \(0\leq r\leq N-1\), then all eigenvalues are at least \[-\frac{n}{2}+\frac{m^{2}}{2}+\frac{r}{2}-\frac{1}{2}\frac{r(r-1)}{N}+\frac{mr} {N}.\] Proof.: As noted in the proof of Lemma 2.28, by Schur-Weyl duality (see e.g. [10, Theorem 2.1]), we have that in the decomposition of \(\rho\) into irreps, only those irreps corresponding to Young diagrams \(\lambda\) with at most \(N\) rows (i.e. \(\ell(\lambda)\leq N\)) appear. Thus letting \(\rho^{\lambda}\) be the irrep corresponding to \(\lambda\), it suffices to show the claim with \(\rho\) replaced by \(\rho^{\lambda}\), for any Young diagram \(\lambda\) with at most \(N\) rows. Towards this end, let \(\lambda\) be a Young diagram, for example as in Figure 19. As discussed in the proof of Lemma 2.28, for \(k\in[n]\) the eigenvalues of \(\rho^{\lambda}(J_{k})\) are given by the content of the \(k\)th box when we range over standard Young tableaux with shape \(\lambda\). Even more, [15, 16] show that the \((\rho^{\lambda}(J_{k}),k\in[n])\) have a joint eigenbasis indexed by standard Young tableaux with shape \(\lambda\), where the eigenvalues corresponding to a given standard Young tableaux are the contents of the boxes of the Young diagram. This discussion shows that on each eigenbasis element, \(\rho^{\lambda}(J_{n}+\cdots+J_{1})\) acts in the same manner, that is as a whole \(\rho^{\lambda}(J_{1}+\cdots+J_{n})\) acts as a multiple \(c_{\lambda}\) of the identity, where \(c_{\lambda}\) is the sum of contents of all the boxes in \(\lambda\). To envision the computation of \(c_{\lambda}\), we label each box of \(\lambda\) with its content, i.e. the number \(j-i\), where \((i,j)\) is the row-column coordinate of the box. For the Young diagram in Figure 19, we have the labeling in Figure 20. The constant \(c_{\lambda}\) is then the sum of all box labels. For example, for the Young diagram in Figures 19 and 20, \(c_{\lambda}=6\). Now, fix \(n,N\geq 1\). To prove the lemma, we need to understand how negative the content sum \(c_{\lambda}\) may be for a Young diagram with \(n\) boxes and at most \(N\) rows. Clearly, to minimize \(c_{\lambda}\), we want a Young diagram with as many columns of size \(N\) as possible. Thus, if \(n=mN+r\) with \(0\leqslant r\leqslant N-1\), then the Young diagram in Figure 21 minimizes \(c_{\lambda}\). The content sum \(c_{\lambda}\) of such a diagram (by first summing the contents along each column) is \[c_{\lambda} =-\binom{N}{2}+\bigg{(}-\binom{N}{2}+N\bigg{)}+\cdots+\bigg{(}- \binom{N}{2}+(m-1)N\bigg{)}+\bigg{(}-\binom{r}{2}+mr\bigg{)}\] \[=-m\binom{N}{2}+\binom{m}{2}N-\binom{r}{2}+mr.\] From this, we may obtain \[\frac{1}{N}c_{\lambda} =-\frac{1}{2}m(N-1)+\frac{1}{2}m(m-1)-\frac{1}{2}\frac{r(r-1)}{N} +\frac{mr}{N}\] \[=-\frac{1}{2}(mN+r)+\frac{1}{2}m^{2}+\frac{1}{2}r-\frac{1}{2} \frac{r(r-1)}{N}+\frac{mr}{N}.\] Since \(n=mN+r\), this proves the second claim. Moreover, we see that if \(m\geqslant 1\), then the above is at least \(-\frac{1}{2}n+\frac{1}{2}m^{2}\geqslant-\frac{1}{2}n+\frac{1}{2}\), as desired. Now, suppose that \(m=0\), so that \(n=r\). Then the above is equal to \[-\frac{1}{2}n+\frac{1}{2}r\bigg{(}1+\frac{1}{N}-\frac{r}{N}\bigg{)}.\] One may check that under the restriction \(1\leqslant r\leqslant N-1\), the above is minimized at \(r=1\) with a value of \(-\frac{1}{2}n+\frac{1}{2}\), as desired. 
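The closed form for \(c_{\lambda}\) just derived is easy to check by brute force. The following short Python sketch (our own, purely for verification; the names are ad hoc) sums the contents of the extremal diagram column by column and compares with the formula from the proof.

```python
def worst_case_content_sum(N, n):
    """Content sum of the Young diagram with n boxes and at most N rows, built greedily
    from full columns of height N (the diagram minimizing the content sum)."""
    m, r = divmod(n, N)
    total = 0
    for col in range(m):                       # m full columns of height N
        total += sum(col - row for row in range(N))
    total += sum(m - row for row in range(r))  # one final column of height r
    return total

def closed_form(N, n):
    """-m*binom(N,2) + binom(m,2)*N - binom(r,2) + m*r, as in the proof of Lemma 2.30."""
    m, r = divmod(n, N)
    return -m * N * (N - 1) // 2 + m * (m - 1) // 2 * N - r * (r - 1) // 2 + m * r

for N in range(1, 6):
    for n in range(1, 20):
        assert worst_case_content_sum(N, n) == closed_form(N, n)
print("closed form for the minimal content sum verified")
```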
Recall the Jucys-Murphy elements acting on bottom vertices defined in Definition 2.18.

**Corollary 2.31**.: Let \(N,n\geqslant 1\). Let \(\rho_{+}:\mathcal{B}_{n,n+1}\to\operatorname{End}((\mathbb{C}^{N})^{\otimes 2n+1})\). All eigenvalues of
\[\frac{1}{N}\rho_{+}(J_{n}+\cdots+J_{1}+J^{\prime}_{n+1}+\cdots+J^{\prime}_{1})\]
are strictly greater than \(-n\).

Figure 21: When \(n=mN+r\), the "worst case" Young diagram in terms of smallest content sum.

Proof.: Observe that \(\rho_{+}(J_{n}+\cdots+J_{1})\) acts as the identity on the last \(n+1\) coordinates, and on the first \(n\) coordinates, \(\rho_{+}(J_{n}+\cdots+J_{1})\) acts as \(\rho(J_{n}+\cdots+J_{1})\). In other words, \(\rho_{+}(J_{n}+\cdots+J_{1})=\rho(J_{n}+\cdots+J_{1})\otimes I_{n+1}\), where \(I_{n+1}\in\operatorname{End}((\mathbb{C}^{N})^{\otimes(n+1)})\) is the identity. Similarly, we have that \(\rho_{+}(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})=I_{n}\otimes\rho(J_{n+1}+\cdots+J_{1})\), where \(I_{n}\in\operatorname{End}((\mathbb{C}^{N})^{\otimes n})\) is the identity. In general, given two matrices \(M_{1},M_{2}\), the eigenvalues of \(M_{1}\otimes M_{2}\) are the products of the eigenvalues of \(M_{1}\) and the eigenvalues of \(M_{2}\). Combining this fact with Lemma 2.30, we obtain that all eigenvalues of \(\frac{1}{N}\rho(J_{n}+\cdots+J_{1})\otimes I_{n+1}\) are at least \(-\frac{1}{2}n+\frac{1}{2}\), and all eigenvalues of \(I_{n}\otimes\frac{1}{N}\rho(J_{n+1}+\cdots+J_{1})\) are at least \(-\frac{1}{2}(n+1)+\frac{1}{2}\). The desired result now follows.

## 3 Surface-sum representation of Wilson loop expectations

In this section, we show how to apply Corollary 2.7 to express Wilson loop expectations as sums over edge-plaquette embeddings (which were introduced in Section 1.4). We first prove a more abstract result about expectations of traces of words of Haar-distributed Unitary matrices (Theorem 3.8) which has no reference to a lattice, and then apply this result to Wilson loop expectations to obtain Corollary 3.11.

**Definition 3.1**.: Define the normalized Weingarten function \(\overline{\mathrm{Wg}}_{N}\) by:
\[\overline{\mathrm{Wg}}_{N}(\pi):=N^{n+|\pi|}\mathrm{Wg}_{N}(\pi),\ \ \pi\in\mathrm{S}_{n}.\]
Here, \(|\pi|:=n-\#\mathrm{cycles}(\pi)\).

_Remark 3.2_.: We will see later on that the normalized Weingarten function is the more natural quantity to work with, as it leads to nicer statements of our formulas. Another nice thing about \(\overline{\mathrm{Wg}}_{N}\) is that with this choice of normalization, the limit as \(N\to\infty\) exists and depends on \(\pi\). Indeed, we in fact have (see e.g. [10, Corollary 2.7])
\[\overline{\mathrm{Wg}}_{N}(\pi)=\mathrm{M\ddot{ob}}(\pi)+O(N^{-2})\ \mathrm{as}\ N\to\infty,\]
where if \(\pi\) is decomposed into cycles of lengths \(C_{1},\ldots,C_{k}\), then
\[\mathrm{M\ddot{ob}}(\pi):=\prod_{i\in[k]}c_{C_{i}-1}(-1)^{C_{i}-1},\ \ \ \text{where }c_{k}:=\frac{(2k)!}{k!(k+1)!}\text{ is the $k$th Catalan number}.\]

Recall from Corollary 2.7 that expectations of traces of words with respect to Haar measure may be expressed in terms of sums over pairs of matchings of strand diagrams, with matchings weighted by the Weingarten function. In this section, we will use this to express Wilson loop expectations in lattice gauge theories as weighted sums over edge-plaquette embeddings. The main step is to describe how to obtain a map from a given balanced collection of words \(\mathbf{\Gamma}\) along with a collection of matchings of strand diagrams.
Let us start with some examples. We begin with a collection of faces corresponding to the words of \(\mathbf{\Gamma}\). Each word \(\Gamma_{i}\) gives a face whose degree (i.e. number of boundary edges) is the length of \(\Gamma_{i}\). The boundary edges of each such face are naturally labeled by letters in \(\{\lambda_{1},\ldots,\lambda_{L}\}\). These faces can be obtained by adding the exterior connections specified by \(\mathbf{\Gamma}\) to the strand diagram, as in Figure 9 (think of the red exterior connections as being shrunk down to a single vertex).

Next, consider a pair of left-right matchings of a strand diagram as displayed in the left of Figure 22. Think of this as the portion of the diagram corresponding to some letter \(\lambda\) in \(\{\lambda_{1},\ldots,\lambda_{L}\}\). One can imagine that the two endpoints of each blue line are identified (this corresponds to "shrinking" each blue line away). In this case, one is then left with a collection of faces as in the right of Figure 22. In Figure 23, we give another example of a pair of matchings of the strand diagram, and the corresponding collection of faces.

By specifying a pair of left-right matchings \((\sigma_{\ell},\tau_{\ell})\) for each letter \(\lambda_{\ell}\), we can obtain another collection of faces, in the manner described in Figures 22 and 23. We thus naturally have two collections of faces: the set of faces which correspond to words in \(\mathbf{\Gamma}\), and the set of faces obtained as above for every strand diagram.

**Convention 3.3**.: We refer to the faces which correspond to words in \(\mathbf{\Gamma}\) as "plaquette-faces", or "yellow faces". We refer to the faces which are obtained from the interior of the strand diagram as "edge-faces", or "blue faces".

Observe that every edge is incident to exactly two faces - one blue face and one yellow face. This naturally induces a gluing of the faces, and so we obtain a map whose dual is bipartite from the data \((\mathbf{\Gamma},((\sigma_{\ell},\tau_{\ell}),\ell\in[L]))\). Now the point is that the number of vertices of this map is precisely the number of components of the strand diagram. This relation is captured in the following lemma.

Figure 23: Left: a pair of left and right matchings. Right: the faces obtained by “shrinking away” the blue matching edges.

Figure 22: Left: a pair of left and right matchings. Right: the faces obtained by “shrinking away” the blue matching edges.

**Lemma 3.4**.: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Suppose that for each \(\ell\in[L]\), the number of occurrences of \(\lambda_{\ell}\) is \(n_{\ell}\). For \(\ell\in[L]\), let \(\sigma_{\ell},\tau_{\ell}:[n_{\ell}]\to(n_{\ell}:2n_{\ell}]\) be a pair of matchings of the portion of the strand diagram corresponding to \(\lambda_{\ell}\). Then \(\#\mathrm{comp}(\mathbf{\Gamma},\ ((\sigma_{\ell},\tau_{\ell}),\ \ell\in[L]))\) is equal to the number of vertices in the corresponding map.

Proof.: To compute the number of vertices in the map, we can proceed as follows. Recalling that the map arises from combining an interior connection with an exterior connection of the strand diagrams, we may begin by giving the vertices of the strands separate labels. For the portion of the strand diagram corresponding to \(\lambda_{\ell}\), we give a total of \(4n_{\ell}\) labels, since there are \(2n_{\ell}\) strands. Each connection (be it interior or exterior) results in the identification of two labels.
In terms of the map, labels which have been identified are in fact the same vertex. Therefore the number of vertices in the map corresponds to the number of different equivalence classes of labels, after performing all label identifications indicated by the connections. The equivalence class of a given label may be obtained by starting at the label, and alternately following the exterior and interior connections, until we arrive back at the initial label. Recalling Example 2.9, observe that this is precisely the same method for computing the number of connected components of a given strand diagram with interior and exterior connections. Thus the connected components of the strand diagram are in bijection (moreover, there is a canonical identification) with the vertices of the map.

**Definition 3.5**.: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Define \(\mathrm{DBM}(\mathbf{\Gamma})\) (short for "dual bipartite map") to be the set of all possible maps which can be obtained by adding interior matchings to the strand diagram corresponding to \(\mathbf{\Gamma}\). For a given map \(\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})\), and \(\ell\in[L]\), let \(\mu_{\ell}(\mathcal{M})\) be the partition of \(n_{\ell}\) (the total number of occurrences of \(\lambda_{\ell}\)) given by \(1/2\) times the degrees of the blue faces which are glued into the strand diagram of \(\lambda_{\ell}\).

_Remark 3.6_.: Note that all maps in \(\mathrm{DBM}(\mathbf{\Gamma})\) are orientable. The faces corresponding to a word in \(\mathbf{\Gamma}\) are endowed with a natural orientation (given by traversing the word). The faces coming from interior matchings can then always be endowed with a consistent orientation. For instance, in Figures 22 and 23, the orientations of these faces should be the reverse of what is drawn.

_Remark 3.7_.: Observe that for any \(\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})\), we have that
\[E(\mathcal{M})=2\sum_{i\in[L]}n_{i},\ \ F(\mathcal{M})=k+\sum_{i\in[L]}\ell(\mu_{i}(\mathcal{M})). \tag{3.2}\]
Here, \(\ell(\mu_{i}(\mathcal{M}))\) is the number of parts of the partition \(\mu_{i}(\mathcal{M})\). The first identity says that the number of edges is equal to the total number of strands in the strand diagrams, and the second identity says that the total number of faces is equal to the number of words plus the total number of cycles of the interior matching of the strand diagram.

**Theorem 3.8**.: _Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). We have that_
\[\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}))]=\sum_{\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})}\bigg{(}\prod_{\ell\in[L]}\overline{\mathrm{Wg}}_{N}(\mu_{\ell}(\mathcal{M}))\bigg{)}N^{\chi(\mathcal{M})-k}.\]

Proof.: By Corollary 2.7, the definition of \(\mathrm{DBM}(\mathbf{\Gamma})\), and Lemma 3.4, we have that
\[\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}))]=\sum_{\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})}\bigg{(}\prod_{\ell\in[L]}\mathrm{Wg}_{N}(\mu_{\ell}(\mathcal{M}))\bigg{)}N^{V(\mathcal{M})}.\]
Applying the identities (3.2), we further obtain
\[\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}))]=\sum_{\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})}\bigg{(}\prod_{i\in[L]}N^{n_{i}+|\mu_{i}(\mathcal{M})|}\mathrm{Wg}_{N}(\mu_{i}(\mathcal{M}))\bigg{)}N^{V(\mathcal{M})-E(\mathcal{M})+F(\mathcal{M})-k}.\]
The desired result now follows.
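As a quick sanity check of Theorem 3.8 (a worked example of our own), take the single word \(\Gamma_{1}=\lambda\lambda^{-1}\), so \(k=1\) and the collection is balanced. On the one hand, \(\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}))]=\mathbb{E}[\mathrm{Tr}(UU^{-1})]=\mathrm{Tr}(I)=N\). On the other hand, the strand diagram for \(\lambda\) has two strands and admits a unique interior matching, so \(\mathrm{DBM}(\mathbf{\Gamma})\) contains a single map: one yellow \(2\)-gon (the word) glued to one blue \(2\)-gon along both edges, a sphere with \(V=2\), \(E=2\), \(F=2\) and hence \(\chi(\mathcal{M})=2\). The single blue face gives \(\mu_{\lambda}(\mathcal{M})=(1)\), and \(\overline{\mathrm{Wg}}_{N}((1))=N^{1+0}\,\mathrm{Wg}_{N}(\mathrm{id}_{\mathrm{S}_{1}})=N\cdot\tfrac{1}{N}=1\), so the right-hand side of Theorem 3.8 equals \(1\cdot N^{\chi(\mathcal{M})-k}=N^{2-1}=N\), in agreement.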
Next, suppose that the letters \(\{\lambda_{1},\dots,\lambda_{L}\}\) are edges of the lattice \(\Lambda\). In this case, a map \(\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})\) exactly corresponds to an edge-plaquette embedding \((\mathcal{M},\psi)\), where the function \(\psi\) is determined by the requirement that it maps each edge of \(\mathcal{M}\) (which is canonically labeled by a letter in \(\{\lambda_{1},\dots,\lambda_{L}\}\)) to the corresponding edge of \(\Lambda\). We now apply these considerations to lattice Yang-Mills. Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string. Recall equation (1.5), which we reproduce here: \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{K:\mathcal{P} \rightarrow\mathbb{N}}\frac{(N\beta)^{K}}{K!}\int W_{s}(Q)\prod_{p\in\mathcal{ P}}\mathrm{Tr}(Q_{p})^{K(p)}\prod_{e\in E_{\Lambda}}dQ_{e}. \tag{3.3}\] For each fixed \(K:\mathcal{P}\rightarrow\mathbb{N}\), we may apply Theorem 3.8 to obtain an expression for the integral above in terms of a sum over edge-plaquette embeddings. We first set some notation. **Definition 3.9**.: Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string, and let \(K:\mathcal{P}\rightarrow\mathbb{N}\). Define the set \(\mathrm{EPE}(s,K)\) of edge-plaquette embeddings associated to \(s,K\) to as follows. If \(s,K\) is unbalanced, then \(\mathrm{EPE}(s,K):=\varnothing\). If \(s,K\) is balanced, let \(\mathbf{\Gamma}\) be the collection of words consisting of \(s\) and \(K(p)\) copies of the plaquette \(p\) for each \(p\in\mathcal{P}\). We define \(\mathrm{EPE}(s,K)\) to be the set of edge-plaquette embeddings \((\mathcal{M},\psi)\) obtained from maps \(\mathcal{M}\in\mathrm{DBM}(\mathbf{\Gamma})\). In words, \(\mathrm{EPE}(s,K)\) is the set of edge-plaquette embeddings with plaquette counts specified by \(K\). Next, define \[\mathrm{EPE}(s):=\bigsqcup_{K:\mathcal{P}\rightarrow\mathbb{N}} \mathrm{EPE}(s,K).\] For \((\mathcal{M},\psi)\in\mathrm{EPE}(s)\), and \(e\in E_{\Lambda}\), let \(\mu_{e}(\psi)\) be the partition of \(|\psi^{-1}(e)|/2\) induced by \(1/2\) times the degrees of the faces of \(\psi^{-1}(e)\). Define \[\mathrm{area}(\mathcal{M},\psi) :=\sum_{p\in\mathcal{P}}|\psi^{-1}(p)|,\] \[(\psi^{-1})! :=\prod_{p\in\mathcal{P}}|\psi^{-1}(p)|!.\] Note that if \((\mathcal{M},\psi)\in\mathrm{EPE}(s,K)\), then \(\mathrm{area}(\mathcal{M},\psi)=\sum_{p}K(p)\) and \((\psi^{-1})!=K!\). _Remark 3.10_ (Boundaries of edge-plaquette embeddings).: One can also think of \(\mathrm{EPE}(s)\) as the set of edge-plaquette embeddings which have "boundary" \(s\), by deleting the plaquette-faces (see Convention 3.3) of \(\mathcal{M}\) which correspond to \(\ell_{1},\dots,\ell_{n}\) (i.e. the plaquette-faces whose boundary is mapped by \(\psi\) to one of edges of \(\ell_{1},\dots,\ell_{n}\)). Denote by \((\overline{\mathcal{M}},\psi)\) the map obtained in this way from a given \((\mathcal{M},\psi)\). Then \(\overline{\mathcal{M}}\) is a map with \(n\) boundary components (whose duals consist of neighboring edge-faces), and the boundary components are mapped by the embedding \(\psi\) to \(\ell_{1},\dots,\ell_{n}\). This is the sense in which \((\overline{\mathcal{M}},\psi)\) has "boundary" given by \(s\). See Figure 24 for an illustration. Also, note that \(\chi(\overline{\mathcal{M}})=\chi(\mathcal{M})-n\). Thus if one wants to sum over maps \((\overline{\mathcal{M}},\psi)\) with boundary \(s\), then the term \(\chi(\mathcal{M})-2n\) which appears in what follows should be replaced by \(\chi(\overline{\mathcal{M}})-n\). 
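To make the notation of Definition 3.9 concrete (an illustrative reading on our part, not tied to a particular figure): if an edge \(e\in E_{\Lambda}\) has \(|\psi^{-1}(e)|=6\), with the blue faces glued along these six edges having degrees \(4\) and \(2\), then \(\mu_{e}(\psi)=(2,1)\), a partition of \(|\psi^{-1}(e)|/2=3\). Likewise, if \(K\) assigns the value \(2\) to a single plaquette \(p_{0}\) and \(0\) to every other plaquette, then any \((\mathcal{M},\psi)\in\mathrm{EPE}(s,K)\) has \(\mathrm{area}(\mathcal{M},\psi)=2\) and \((\psi^{-1})!=2!=2\).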
**Corollary 3.11**.: Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string. We have that \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{(\mathcal{M},\psi)\in\mathrm{EPE}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M},\psi)}}{(\psi^{-1})!}\bigg{(}\prod_{e\in E_{\Lambda}}\overline{\mathrm{Wg}}_{N}(\mu_{e}(\psi))\bigg{)}N^{\chi(\mathcal{M})-2n}.\]

Proof.: Combining equation (3.3) with Theorem 3.8, we have that (recall that our Wilson loops are defined using the normalized trace) \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{K:\mathcal{P}\to\mathbb{N}}\frac{(N\beta)^{K}}{K!}\sum_{(\mathcal{M},\psi)\in\mathrm{EPE}(s,K)}\bigg{(}\prod_{e\in E_{\Lambda}}\overline{\mathrm{Wg}}_{N}(\mu_{e}(\psi))\bigg{)}N^{\chi(\mathcal{M})-2n}N^{-K}.\] Recalling that \(\mathrm{area}(\mathcal{M},\psi)=\sum_{p}K(p)\) and \((\psi^{-1})!=K!\) for \((\mathcal{M},\psi)\in\mathrm{EPE}(s,K)\), the desired result now follows.

Figure 24: An example of an edge-plaquette embedding with boundary when \(\ell_{1}=ABCD\) and \(\ell_{2}=D^{-1}C^{-1}B^{-1}A^{-1}\). The top left sphere is a map \(\mathcal{M}\) whose dual is bipartite and the bottom left map \(\overline{\mathcal{M}}\) is obtained by removing two yellow faces corresponding to \(\ell_{1}\) and \(\ell_{2}\). The Euler characteristic changes from \(2\) to \(0\) after removal of the two faces. The boundary of \(\overline{\mathcal{M}}\) maps onto the edges of \(\ell_{1}\) and \(\ell_{2}\), and thus we can interpret \(\overline{\mathcal{M}}\) as having boundary given by the union of \(\ell_{1}\) and \(\ell_{2}\).

We close this section with some heuristic discussion of the large-\(N\) limit, where the surface sums are expected to simplify greatly. Recalling Remark 3.2, the large-\(N\) limit of the normalized Weingarten function factors into a product of Catalan numbers. This implies a nice factorization of the surface-sum weights according to connected components. For brevity, given a surface \((\mathcal{M},\psi)\), let
\[w_{\infty}(\mathcal{M},\psi):=\lim_{N\to\infty}\prod_{e\in E_{\Lambda}}\overline{\mathrm{Wg}}_{N}(\mu_{e}(\psi)).\]
If \((\mathcal{M},\psi)\) splits into connected components \(((\mathcal{M}_{i},\psi_{i}),i\in[k])\), then
\[w_{\infty}(\mathcal{M},\psi)=\prod_{i\in[k]}w_{\infty}(\mathcal{M}_{i},\psi_{i}). \tag{3.4}\]
Now given a general \((\mathcal{M},\psi)\in\mathrm{EPE}(s)\), we can split \((\mathcal{M},\psi)\) into the union of \((\mathcal{M}_{0},\psi_{0})\) and \((\mathcal{M}^{\prime},\psi^{\prime})\), where \(\mathcal{M}_{0}\) contains all components of \(\mathcal{M}\) which are connected to \(s\), and \(\mathcal{M}^{\prime}\) contains everything else. Then by the factorization (3.4), we would expect that when \(N\) is large, we can factor out a copy of the partition function \(Z_{\Lambda,\beta}\) from the numerator of \(\langle W_{s}\rangle_{\Lambda,\beta}\), and then write
\[\langle W_{s}\rangle_{\Lambda,\beta}\approx\sum_{(\mathcal{M}_{0},\psi_{0})\in\mathrm{EPE}_{0}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M}_{0},\psi_{0})}}{(\psi_{0}^{-1})!}w_{\infty}(\mathcal{M}_{0},\psi_{0})N^{\chi(\mathcal{M}_{0})-2n},\]
where \(\mathrm{EPE}_{0}(s)\) is the subset of \(\mathrm{EPE}(s)\) consisting of the \((\mathcal{M}_{0},\psi_{0})\) such that all components are connected to \(s\).
Next, note that for \((\mathcal{M}_{0},\psi_{0})\in\mathrm{EPE}_{0}(s)\), we have that \(\lim_{N\to\infty}N^{\chi(\mathcal{M}_{0})-2n}\in\{0,1\}\), and moreover, the limit is \(1\) if and only if each of the \(n\) strings \(s_{1},\ldots,s_{n}\) is part of a separate component of \(\mathcal{M}_{0}\), and all components have the topology of the sphere (this is the only situation which gives the maximal Euler characteristic \(\chi(\mathcal{M}_{0})=2n\)). We thus obtain that (using the factorization property of \(w_{\infty}\))
\[\lim_{N\to\infty}\sum_{(\mathcal{M}_{0},\psi_{0})\in\mathrm{EPE}_{0}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M}_{0},\psi_{0})}}{(\psi_{0}^{-1})!}w_{\infty}(\mathcal{M}_{0},\psi_{0})N^{\chi(\mathcal{M}_{0})-2n}=\prod_{i\in[n]}\sum_{(\mathcal{M}_{0,i},\psi_{0,i})\in\mathrm{EPE}_{0}(s_{i})}\frac{\beta^{\mathrm{area}(\mathcal{M}_{0,i},\psi_{0,i})}}{(\psi_{0,i}^{-1})!}w_{\infty}(\mathcal{M}_{0,i},\psi_{0,i}).\]
Combining the previous few displays, we thus heuristically see the factorization of Wilson loop expectations in the large-\(N\) limit:
\[\lim_{N\to\infty}\langle W_{s}\rangle_{\Lambda,\beta}=\prod_{i\in[n]}\lim_{N\to\infty}\langle W_{s_{i}}\rangle_{\Lambda,\beta}.\]
In summary, the large-\(N\) factorization of Wilson loop expectations (proven in [1, 16, 17]) can be seen from our surface-sum picture as follows: (1) by using that the weights factor according to connectivity, we can obtain a sum over surfaces which are connected to \(s\) (rather than a ratio of sums over surfaces), (2) we can further restrict to those surfaces which are made of \(n\) disjoint spheres, where each \(s_{i}\) is in a distinct sphere. Of course, some work is required to make this picture rigorous. In finite-volume, it might be possible to prove the factorization for any \(\beta\). However in infinite-volume, we likely need a small \(\beta\) condition (as in previous works) in order to deal with absolute convergence issues.

## 4. Brownian motion and Poisson process exploration

In this section, we prove Theorem 2.5. First, in Section 4.1, we define and analyze a particular exploration process that is central to our proof. We then give the proof of the theorem in the case where \(N\) is large, where it is easier to focus on the main ideas. In Section 4.2, we extend the argument to the case of general \(N\).

### 4.1. Strand-by-strand exploration

We begin towards the proof of Theorem 2.5. The main difficulty is that the weights \(w_{T}(\pi)\) appearing in Theorem 2.5, when expressed as a series in \(T\), do not converge absolutely when \(T\to\infty\). In fact, the series is of the schematic form \(\sum_{k}\frac{(-T)^{k}}{k!}c_{k}\), for some coefficients \(c_{k}\). Clearly, to show convergence as \(T\to\infty\), we need to take advantage of delicate cancellations which occur, rather than any sort of absolute summability. Uncovering these cancellations is the main technical part in the argument. This will be achieved via a certain exploration of the Poisson point process introduced in Section 2.1, which provides an alternate form for the weights \(w_{T}(\pi)\) that makes taking the \(T\to\infty\) limit trivial.

**Notation 4.1**.: We will often refer to the opposite-direction swaps introduced in Section 2.1 as "turnarounds". Also, we will often use the terms "matching" and "partition" interchangeably.

In the following, recall the Poisson process and strand diagram material introduced in Section 2.1.
Because the Poisson processes corresponding to different letters are independent, it will suffice to just analyze the portion of the strand diagram corresponding to a single letter \(\lambda\).

**Notation 4.2**.: In this section, it will be notationally convenient for us to assume that the strand diagram corresponding to \(\lambda\) has \(2n\) total strands, with \(n\) right-directed strands and \(n\) left-directed strands. This corresponds to the case that \(\lambda\) and \(\lambda^{-1}\) each appear a total of \(n\) times in the given collection of words \(\mathbf{\Gamma}\). This is in contrast to the notation of Definition 2.1, where \(n_{\ell}\) is the total number of occurrences of a given letter \(\lambda_{\ell}\) and its inverse \(\lambda_{\ell}^{-1}\). To be consistent with this previous notation, we could perhaps introduce \(n_{+}=n_{-}=n/2\) and work with \(n_{+}\). However, the parameter \(n\) often appears in subscripts or superscripts, and adding a subscript "\(+\)" to \(n\) would result in iterated subscripts, which would complicate many expressions. Therefore, we decide just to use \(n\) to denote the number of right-directed (and left-directed) strands.

Having restricted to a single letter \(\lambda\), let \(\mathcal{C},\mathcal{D}_{T},\Sigma\) (introduced in Section 2.1) correspond to the single letter \(\lambda\), with a total of \(n\) positive occurrences and \(n\) negative occurrences. Let \(\Sigma(T)=\Sigma_{\infty}\cap\mathcal{D}_{T}\). By Lemma 2.3, expectations of words of unitary Brownian motion may be expressed in terms of \(w_{T}(\pi)\) (which is defined in (2.1)), for pairings \(\pi\in\mathcal{M}(4n)\). Now that we have introduced the Brauer algebra \(\mathcal{B}_{n}\) in Definition 2.10, one may in fact view the pairings \(\pi\) as elements of \(\mathcal{B}_{2n}\). Even more, we may restrict to the walled Brauer algebra \(\mathcal{B}_{n,n}\subseteq\mathcal{B}_{2n}\), for reasons we next describe.

The weights \(w_{T}(\pi)\) are naturally expressed in terms of a random walk on the walled Brauer algebra \(\mathcal{B}_{n,n}\), as follows. We may visualize \(\Sigma(T)\) as in the following picture (where \(n=3\)). The green lines represent same-direction swaps, the blue lines represent turnarounds, and the locations of the green/blue lines correspond to the points of \(\Sigma(T)\). Recalling Definition 2.14, we have that each green line corresponds to an element of the form \((i\ j)\), and each blue line corresponds to an element of the form \(\langle i\ j\rangle\). We can read off the element of \(\mathcal{B}_{n,n}\) from the above strand diagram by exploring along each strand. For example, in Figure 26, we explore from the left of strand 3, as well as the right of strand 6. We see that 3 gets matched to 6 on the left, while 6 gets matched to 2 on the right. Clearly, if we do this for all other strands (not drawn), we can obtain the element of \(\mathcal{B}_{n,n}\) that the above diagram corresponds to. On the algebraic side, this exploration amounts to multiplying together all increments \((i\ j)\) or \(\langle i\ j\rangle\) corresponding to the points of \(\Sigma\), in the order that they appear. Due to the Poissonian nature of the points, each possible increment \((i\ j)\), \(\langle i\ j\rangle\) is equally likely, and thus one may interpret \(\Sigma\) as giving a random walk on \(\mathcal{B}_{n,n}\), as previously mentioned.
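To see what this produces in the very simplest case (a sketch on our part, anticipating the factors of \(\pm\frac{1}{N}\) and the notation \(F(\cdot)\) introduced just below), take \(n=1\). Then there is a single pair of strands and hence a single rate-one Poisson process, all of whose points are turnarounds \(\langle 1\ 2\rangle\), each carrying a factor \(\frac{1}{N}\). Since \(\langle 1\ 2\rangle^{k}=N^{k-1}\langle 1\ 2\rangle\), any positive number of points contributes the same element \(\frac{1}{N}\langle 1\ 2\rangle\), so the expected contribution of the diagram on \([0,T]\) is
\[e^{-T}\,\mathrm{id}+(1-e^{-T})\frac{1}{N}\langle 1\ 2\rangle\ \xrightarrow[T\to\infty]{}\ \frac{1}{N}\langle 1\ 2\rangle,\]
which is precisely the \(n=1\) Weingarten calculus (\(\mathrm{Wg}_{N}\) is identically \(1/N\) on \(\mathrm{S}_{1}\)), in agreement with Proposition 4.12 below. The delicate cancellations only appear for \(n\geq 2\), where same-direction swaps, with their factors of \(-\frac{1}{N}\), enter the picture.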
However, there is an additional wrinkle, in that we need to keep track of more than just the final pairing, since each green line contributes a factor of \(-\frac{1}{N}\) while each blue line contributes a factor of \(\frac{1}{N}\). With this in mind, we make the following definition. **Definition 4.3**.: Given a finite collection of points \(P\subseteq\mathcal{D}_{T}\), let \(F(P)=F_{N}(P)\) be the element of \(\mathcal{B}_{n,n}\) that \(P\) corresponds to, with the additional factors of \(-\frac{1}{N}\) for same-direction swaps and \(\frac{1}{N}\) for turnarounds. Let \(M(P)\) be the pairing corresponding to \(P\), i.e. the element of \(\mathcal{B}_{n,n}\) obtained when ignoring the additional factors. With this definition, observe that we may express \[w_{T}(\pi)=e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))\mathbb{1}(M(\Sigma(T)) =\pi)]\] Or, as elements of the walled Brauer algebra \(\mathcal{B}_{n,n}\), we have the equality \[\sum_{\pi}w_{T}(\pi)\pi=e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]. \tag{4.1}\] In the following, we will mainly focus on understanding \(\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]\). Clearly, once we understand this, we will also know \(\lim_{T\to\infty}w_{T}(\pi)\) for any \(\pi\). In terms of the strands, this condition says that once a blue turnaround appears, the only points which can thereafter appear that touch either of the matched strands must be the blue turnaround between the same two strands. **Lemma 4.4**.: Let \(i_{0}\in[\![n]\!]\), \(j_{0}\in(n:2n]\). Let \(\Sigma_{-\{i_{0},j_{0}\}}(T)\) be the Poisson point process obtained by deleting all points of \(\Sigma(T)\) touching the \(i_{0}\)th or \(j_{0}\)th strands. Then \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma(T))]\!] =e^{-4(n-1)T}\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma_{- \{i_{0},j_{0}\}}(T))]\!],\] \[\mathbb{E}[\![F(\Sigma(T))]\!\langle i_{0}\ j_{0}\rangle =e^{-4(n-1)T}\mathbb{E}[\![F(\Sigma_{-\{i_{0},j_{0}\}}(T))]\! \langle i_{0}\ j_{0}\rangle.\] Proof.: We only show the first identity as the second follows similarly. Let \(A_{T}\) be the event that the process \(\Sigma(T)\) contains no points touching the \(i_{0}\)th or \(j_{0}\)th strand, besides those which give the turnaround \(\langle i_{0}\ j_{0}\rangle\). Since each strand is involved in \(2n-1\) total Poisson processes, the number of Poisson processes that involve the \(i\)th or \(j\)th strand is \(2(2n-1)-1=4n-3\). On the event \(A_{T}\), all but one of these processes must have zero points, and thus \(\mathbb{P}(A_{T})=e^{-4(n-1)T}\). Let \(\Sigma_{\langle i_{0}\ j_{0}\rangle}(T)\) be the process obtained by keeping only those points which give the turnaround \(\langle i_{0}\ j_{0}\rangle\). 
On \(A_{T}\), we may split \[\Sigma(T)=\Sigma_{-\{i_{0},j_{0}\}}(T)\sqcup\Sigma_{\{i_{0},j_{0}\}}(T),\] and moreover \[F(\Sigma(T))=F(\Sigma_{\{i_{0},j_{0}\}}(T))F(\Sigma_{-\{i_{0},j_{0}\}}(T)).\] We thus have that \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma(T))\mathbb{1}_{ A_{T}}] =\mathbb{P}(A_{T})\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F( \Sigma_{\{i_{0},j_{0}\}}(T))]\mathbb{E}[\![F(\Sigma_{-\{i_{0},j_{0}\}}(T))] \!]\] \[=e^{-4(n-1)T}\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma_{ \{i_{0},j_{0}\}}(T))]\mathbb{E}[\![F(\Sigma_{-\{i_{0},j_{0}\}}(T))]\!].\] By explicit calculation, using that \(\langle i_{0}\ j_{0}\rangle^{k}=N^{k-1}\langle i_{0}\ j_{0}\rangle\), we have that \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma_{\{i_{0},j_{0}\}}(T))]=e^{- T}\langle i_{0}\ j_{0}\rangle\sum_{k=0}^{\infty}\frac{T^{k}}{k!}\bigg{(}\frac{ \langle i_{0}\ j_{0}\rangle}{N}\bigg{)}^{k}=\langle i_{0}\ j_{0}\rangle e^{-T }\sum_{k=0}^{\infty}\frac{T^{k}}{k!}=\langle i_{0}\ j_{0}\rangle.\] To finish, it suffices to show that \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma(T))]\!]=\langle i_{0}\ j_{0} \rangle\mathbb{E}[\![F(\Sigma(T))\mathbb{1}_{A_{T}}]\!],\] or in other words, \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma(T))\mathbb{1}_{A_{T}^{c}}] =0.\] We show that for each \(k\geq 1\), we have that \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[\![F(\Sigma(T))\mathbb{1}_{A_{T}^{c}} \ |\ |\Sigma(T)|=k]=0.\] (If \(k=0\) then \(A_{T}^{c}\) cannot occur.) Let \(\Omega_{k}\) be the set of length-\(k\) sequences of elements of the set \[\{(i\ j):1\leq i<j\leq n\}\cup\{(i\ j):n+1\leq i<j\leq 2n\}\cup\{\langle i\ j \rangle:i\in[\![n]\!],j\in(n:2n]\},\] such that there exists some element not equal to \(\langle i_{0}\ j_{0}\rangle\) that involves either \(i_{0}\) or \(j_{0}\). For each \((x_{1},\ldots,x_{k})\in\Omega_{k}\), let \(n_{T}(x_{1},\ldots,x_{k})\) be the number of transpositions (i.e. elements of the form \((i\ j)\)) in the sequence. Observe that \[\langle i_{0}\ j_{0}\rangle\mathbb{E}[F(\Sigma(T))\mathbb{1}_{A_{T}^{c}}\ |\ | \Sigma(T)|=k]=\langle i_{0}\ j_{0}\rangle\frac{1}{{2n\choose 2}^{k}N^{k}}\sum_{(x_{1}, \ldots,x_{k})\in\Omega_{k}}(-1)^{n_{T}(x_{1},\ldots,x_{k})}x_{1}\cdots x_{k}.\] We now define a bijection \(h:\Omega_{k}\to\Omega_{k}\) such that if \(h(x_{1},\ldots,x_{k})=(y_{1},\ldots,y_{k})\), then \[\langle i_{0}\ j_{0}\rangle(-1)^{n_{T}(y_{1},\ldots,y_{k})}y_{1}\cdots y_{k}= -\langle i_{0}\ j_{0}\rangle(-1)^{n_{T}(x_{1},\ldots,x_{k})}x_{1}\cdots x_{k}.\] Note that this immediately implies that \[\langle i_{0}\ j_{0}\rangle\sum_{(x_{1},\ldots,x_{k})\in\Omega_{k}}(-1)^{n_{ T}(x_{1},\ldots,x_{k})}x_{1}\cdots x_{k}=-\langle i_{0}\ j_{0}\rangle\sum_{(x_{1},\ldots,x_{k})\in \Omega_{k}}(-1)^{n_{T}(x_{1},\ldots,x_{k})}x_{1}\cdots x_{k},\] which implies that the above is zero, which would give the desired result. To define \(h\), given a sequence \((x_{1},\ldots,x_{k})\), let \(1\leq r\leq k\) be index of the first element \(x_{r}\) which causes the sequence \((x_{1},\ldots,x_{n})\) to be in \(\Omega_{k}\). Then either \(x_{r}\) is a transposition of the form \((i_{0}\ k)\) or \((k\ j_{0})\), or \(x_{r}\) is a turnaround of the form \(\langle i_{0}\ k\rangle\) or \(\langle k\ j_{0}\rangle\). 
If \(x_{r}\) is a transposition, we set \[h(x_{1},\ldots,x_{k}):=\begin{cases}(x_{1},\ldots,x_{r-1},\langle k\ j_{0}\rangle,x_{r+1},\ldots,x_{k})&x_{r}=(i_{0}\ k)\\ (x_{1},\ldots,x_{r-1},\langle i_{0}\ k\rangle,x_{r+1},\ldots,x_{k})&x_{r}=(k\ j_{0}).\end{cases}\] and if \(x_{r}\) is a turnaround, we set \[h(x_{1},\ldots,x_{k}):=\begin{cases}(x_{1},\ldots,x_{r-1},(k\ j_{0}),x_{r+1},\ldots,x_{k})&x_{r}=\langle i_{0}\ k\rangle\\ (x_{1},\ldots,x_{r-1},(i_{0}\ k),x_{r+1},\ldots,x_{k})&x_{r}=\langle k\ j_{0}\rangle.\end{cases}\] In words, if \(x_{r}\) is a transposition involving \(i_{0}\) (resp. \(j_{0}\)), then \(h\) switches \(x_{r}\) to a turnaround involving \(j_{0}\) (resp. \(i_{0}\)). Similarly, if \(x_{r}\) is a turnaround involving \(i_{0}\) (resp. \(j_{0}\)), then \(h\) switches \(x_{r}\) to a transposition involving \(j_{0}\) (resp. \(i_{0}\)). Note that \(h\) is an involution, and thus a bijection. Also, we clearly have by construction that \[(-1)^{n_{T}(x_{1},\ldots,x_{k})}=-(-1)^{n_{T}(h(x_{1},\ldots,x_{k}))}.\] Thus to finish, it suffices to show that with \(h(x_{1},\ldots,x_{k})=(y_{1},\ldots,y_{k})\), we have that \(\langle i_{0}\ j_{0}\rangle x_{1}\cdots x_{k}=\langle i_{0}\ j_{0}\rangle y_{1}\cdots y_{k}\). By construction of \(h\), it just suffices to show that \(\langle i_{0}\ j_{0}\rangle x_{1}\cdots x_{r}=\langle i_{0}\ j_{0}\rangle x_{1}\cdots x_{r-1}y_{r}\). By the assumption on \(r\), we have that \(x_{1}\cdots x_{r-1}\) commutes with \(\langle i_{0}\ j_{0}\rangle\), and so \[\langle i_{0}\ j_{0}\rangle x_{1}\cdots x_{r}=x_{1}\cdots x_{r-1}\langle i_{0}\ j_{0}\rangle x_{r},\ \ \langle i_{0}\ j_{0}\rangle x_{1}\cdots x_{r-1}y_{r}=x_{1}\cdots x_{r-1}\langle i_{0}\ j_{0}\rangle y_{r}.\] To finish, we claim that \(\langle i_{0}\ j_{0}\rangle x_{r}=\langle i_{0}\ j_{0}\rangle y_{r}\), i.e. the switching procedure used to define \(h\) does not change the overall matching. This follows by the identities \(\langle i\ k\rangle(i\ j)=\langle i\ k\rangle\langle j\ k\rangle\) and \(\langle i\ k\rangle\langle i\ j\rangle=\langle i\ k\rangle(j\ k)\). For the first identity, observe that the two products of matchings in Figure 27 are equal. The second identity follows similarly.

We now finally describe our exploration of the strand diagram corresponding to \(\Sigma\). The exploration proceeds strand-by-strand. We first give an informal description with accompanying figures before proceeding to the formal mathematical definition. The main feature of the exploration is that we explore only a single strand at a time, rather than all strands at once. That is, we start at (say) the top strand, and explore left-to-right until we see a swap or a turnaround involving this strand. If we see a swap between the top strand and another strand, then we begin exploring the other strand. If we see a turnaround, then the current exploration era ends, and we begin to explore the next strand. To visualize this exploration, suppose we want to explore the diagram in Figure 28. Our exploration proceeds in three separate eras, drawn as in Figure 29. Note that at the start of the second era, we begin exploring the top strand instead of the second-to-top strand, because of the previous swap between these two strands. Likewise, at the start of the third era, we also begin exploring from the top strand, because this is effectively the bottom strand due to the previously seen swaps.
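To make the bookkeeping of this exploration fully concrete, here is a minimal Python sketch of the procedure just described (it follows the formal definition given further below; the encoding of points as time-stamped pairs of strand indices, and the function name, are ours and purely illustrative).

```python
from typing import Dict, List, Set, Tuple

def explore(points: List[Tuple[float, frozenset]], n: int):
    """Strand-by-strand exploration of a finite point configuration.

    Each element of `points` is (time, {a, b}) with 1 <= a < b <= 2n:
    a pair inside {1,...,n} is a same-direction swap, and a pair straddling
    the wall is a turnaround.  Returns the turnarounds that end the
    successive eras and the final permutation pi (as a dict on {1,...,n}).
    """
    pts = sorted(points, key=lambda p: p[0])              # explore forward in time
    pi: Dict[int, int] = {i: i for i in range(1, n + 1)}  # pi_0 = id
    dead: Set[int] = set()        # strands removed after a turnaround
    matched: List[Tuple[int, int]] = []
    era, clock = 1, 0.0
    while era <= n:
        cur = pi[era]             # strand currently being explored
        nxt = next(((t, e) for (t, e) in pts
                    if t > clock and cur in e and not (e & dead)), None)
        if nxt is None:           # era did not end within the given points
            break
        clock, e = nxt
        j = next(iter(e - {cur}))
        if j <= n:                # same-direction swap: jump to strand j
            swap = {cur: j, j: cur}
            pi = {k: swap.get(v, v) for k, v in pi.items()}
        else:                     # turnaround: the current era ends
            matched.append((cur, j))
            dead |= {cur, j}      # ignore all later points touching cur or j
            era += 1
    return matched, pi
```

The sketch only processes a fixed finite configuration of points; the probabilistic statements that follow of course concern this exploration applied to the Poisson process \(\Sigma\).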
Another thing to note is that in principle, during the first exploration era, it is certainly possible for the point process to have swaps that involve two non-top strands. However, our exploration process does not see these swaps. It turns out that by exploring the random environment in the manner we described, we can in fact assume that in every exploration era, every swap in the point process involves the current exploration strand, so that we don't need to worry about such "unseen swaps". This property is due to certain cancellations that we may take advantage of, which are very similar in spirit to the cancellations observed in the proof of Lemma 4.4.

Figure 28: Before the strand-by-strand exploration.

Figure 27: The above two products of matchings are equal.

At the end of the last exploration era, we have built up an element of \(\mathcal{B}_{n,n}\) (we have omitted the additional factors of \(\pm\frac{1}{N}\) and only drawn the left and right matchings), as displayed in Figure 30. Here, the colors are for visual purposes and don't affect the end element of \(\mathcal{B}_{n,n}\): we have colored the matching edges to denote the exploration era in which these pairings were discovered. Now here is why we chose to explore as we did: conditioned on everything we have seen up to the end of the last exploration era, the expectation of \(F(\Sigma(T))\) _is essentially given7 by the matching in Figure 30_. This property is intimately related to our previous comment that we can assume that there are no unseen swaps, i.e. swaps which do not involve the current exploration strand. This key property of our exploration enables us to give a rather explicit closed-form expression for the overall expectation of \(F(\Sigma(T))\). Even more, it is almost trivial to take the \(T\to\infty\) limit of the closed-form expression, and this allows us to recover the Weingarten calculus.

Footnote 7: Technically, this is only true up to some explicit factors, but this is more of a technical detail.

Figure 29: The successive eras of our strand-by-strand exploration.

We now proceed to the precise definition of the exploration. First, for \(i\in[2n]\), define \[\Sigma_{i}:=\bigcup_{j\in[2n]\backslash\{i\}}\Sigma_{\{i,j\}},\] i.e. \(\Sigma_{i}\) collects all Poisson processes with which \(i\) is involved. In terms of the strands, \(\Sigma_{i}\) collects all swaps and turnarounds touching the \(i\)th strand. The exploration is described by two processes \((E_{t})_{t\geqslant 0}\), \((\pi_{t})_{t\geqslant 0}\), the first of which takes values in \([n]\), and the second of which takes values in \(\mathrm{S}_{n}\) (which we view as the set of bijections of \([n]\)). One should think of \(E_{t}\) as tracking the current exploration era, and \(\pi_{t}\) as tracking the current strand of exploration. We start with \(E_{0}:=1\), \(\pi_{0}:=\mathrm{id}\). We begin exploring \(\Sigma_{\pi_{0}(E_{0})}=\Sigma_{1}\) until we see the first point, which we denote by \(U_{1}\). At time \(U_{1}\), we update \(E\) and \(\pi\) as follows. There is some \(j\in[2n]\backslash\{E_{0}\}\) such that \(\mathfrak{l}(U_{1})=\{\pi_{0}(E_{0}),j\}\). For \(t\in(0,U_{1})\), we set \(E_{t}:=E_{0}\), \(\pi_{t}:=\pi_{0}\). Now if \(j\in[n]\), then we set \(E_{U_{1}}:=E_{0}\) and \(\pi_{U_{1}}:=(\pi_{0}(1)\ j)\pi_{0}\). We then continue exploring \(\Sigma_{\pi_{U_{1}}(E_{U_{1}})}\) from time \(U_{1}\). Otherwise, if \(j\in(n:2n]\), then we set \(E_{U_{1}}:=E_{0}+1\) (i.e. a new exploration era begins) and \(\pi_{U_{1}}:=\pi_{0}\).
Additionally, we remove all points of \(\Sigma_{\pi_{0}(E_{0})}\cup\Sigma_{j}\) from \(\Sigma\). The exploration then continues on this reduced point process. In terms of the strands, the removal of points corresponds to only looking at those swaps or turnarounds which do not involve \(\pi_{0}(E_{0})\) or \(j\). The exploration stops once all exploration eras have ended, i.e. once we have explored all strands up to their first time of turnaround. This is the first time \(t\) such that \(E_{t}=n+1\). For \(i\in[n]\), let \(T_{i}:=\inf\{t\geqslant 0:E_{t}=i+1\}\), i.e. the time at which the \(i\)th exploration era ends. Let \(\mathcal{Q}_{t}\) be the set of points that the exploration has seen up to time \(t\). Let \((\mathcal{F}_{t},t\geqslant 0)\) be the filtration generated by the processes \(E,\pi\). The following proposition makes precise the key property of our exploration described earlier.

**Proposition 4.5**.: We have that \[e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))\mathbbm{1}(T_{n}\leqslant T)]=\\ \mathbb{E}\big{[}F(\mathcal{Q}(T_{n}))\mathbbm{1}(T_{n}\leqslant T)e^{2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}.\]

Figure 30: The matching discovered by our strand-by-strand exploration.

Proof.: Fix \(N\). We proceed by induction on \(n\). When \(n=1\), the result is true for all \(T\geqslant 0\), because then \(A_{T}\) always occurs, and furthermore when \(T_{1}\leqslant T\), we have that \(F(\Sigma(T))=F(\mathcal{Q}(T_{1}))\). Now suppose that for some general \(n\geqslant 1\), the result is true for all \(T\geqslant 0\). We proceed to show that the case \(n+1\) also holds. We start by conditioning on \(\mathcal{F}_{T_{1}}\). Pictorially, this corresponds to exploring until the end of the first era, see the left of Figure 31. One should think of the two parallel vertical red lines as occurring at the same time (namely \(T_{1}\)), although for visual purposes we have drawn them to be slightly separated. Next, naturally, we may split the diagram in Figure 31 into two parts: the part to the left of \(T_{1}\), and the part to the right of \(T_{1}\). This corresponds to splitting \[\Sigma(T)=\Sigma(T_{1})\cup(\Sigma(T)\backslash\Sigma(T_{1})).\] Since the Poisson processes before \(T_{1}\) and after \(T_{1}\) are conditionally independent, we have that \[\mathbb{E}[F(\Sigma(T))\mathbbm{1}(T_{n+1}\leqslant T)\ |\ \mathcal{F}_{T_{1}}]=\] \[\mathbb{E}[F(\Sigma(T_{1}))\ |\ \mathcal{F}_{T_{1}}]\mathbb{E}[F(\Sigma(T)\backslash\Sigma(T_{1}))\mathbbm{1}(T_{n+1}-T_{1}\leqslant T-T_{1})\ |\ \mathcal{F}_{T_{1}}].\] We first use our inductive assumption to rewrite the second conditional expectation on the right hand side above. By our cancellation lemma (Lemma 4.4), we may assume that there are no swaps or turnarounds which involve either of the two matched strands after \(T_{1}\), as long as we multiply by the explicit exponential factor \(e^{-4((n+1)-1)(T-T_{1})}=e^{-4n(T-T_{1})}\). Pictorially, after \(T_{1}\), the two segments which are colored bright green in the right of Figure 31 are no longer connected to the other strands in the diagram. The point now is that after having taken out the two green strands, the expectation of the remainder of the diagram after \(T_{1}\) is exactly given by our inductive assumption.
Thus, we have the identity \[e^{\binom{2n}{2}(T-T_{1})-n(T-T_{1})}\mathbb{E}\big{[}F(\Sigma(T)\backslash\Sigma(T_{1}))\mathbbm{1}(T_{n+1}-T_{1}\leqslant T-T_{1})\ |\ \mathcal{F}_{T_{1}}]=\] \[e^{-4n(T-T_{1})}\mathbb{E}\big{[}F(\mathcal{Q}_{T}\backslash\mathcal{Q}_{T_{1}})\mathbbm{1}(T_{n+1}-T_{1}\leqslant T-T_{1})e^{2(n-1)(T_{2}-T_{1})}e^{2(n-2)(T_{3}-T_{2})}\cdots e^{2(n-n)(T_{n+1}-T_{n})}\big{]}.\] Applying this identity, as well as the identity \(\binom{2(n+1)}{2}-(n+1)=4n+\binom{2n}{2}-n\), we obtain \[e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}[F(\Sigma(T))\mathbbm{1}(T_{n+1}\leqslant T)]=\mathbb{E}\big{[}F(\Sigma(T_{1}))F(\mathcal{Q}_{T_{n+1}}\backslash\mathcal{Q}_{T_{1}})\mathbb{1}\left(T_{n+1}\leqslant T\right)e^{(4n+\binom{2n}{2}-n)T_{1}}e^{2(n+1-2)(T_{2}-T_{1})}\cdots e^{2(n+1-(n+1))(T_{n+1}-T_{n})}\big{]}.\]

Figure 31: Left: We start at the first strand and explore until we see a turnaround. Right: Once we have seen a turnaround, we may treat the two strands involved in the turnaround as “out of the game”.

To finish, we now argue that \[e^{(4n+\binom{2n}{2}-n)T_{1}}\mathbb{E}\big{[}F(\Sigma(T_{1}))F(\mathcal{Q}_{T_{n+1}}\backslash\mathcal{Q}_{T_{1}})\ |\ \mathcal{F}_{T_{n+1}}\big{]} =e^{2nT_{1}}F(\mathcal{Q}_{T_{1}})F(\mathcal{Q}_{T_{n+1}}\backslash\mathcal{Q}_{T_{1}}) \tag{4.2}\] \[=e^{2nT_{1}}F(\mathcal{Q}_{T_{n+1}}).\] Note that this would complete the proof of the inductive step. For a picture of what we have in mind when conditioning on \(\mathcal{F}_{T_{n+1}}\), see Figure 32. In the left of Figure 32, we treat the portion of the diagram to the right of \(T_{1}\) as fixed, whereas the portions of the strands before \(T_{1}\) which are black have not been fully explored. The identity (4.2) says that after averaging over this randomness, we may simply assume that there are no additional swaps or turnarounds in \([0,T_{1}]\), so that the expectation is given by the right of Figure 32 (which corresponds to the right hand side of the identity). The identity (4.2) follows by cancellations similar to those exploited in the proof of the cancellation lemma (Lemma 4.4). Indeed, observe that the two diagrams in Figure 33 are equal, in the sense that the final matching is the same (the red and orange strands are unchanged, so one only needs to track the brown and purple strands). Note however that the left diagram will have an opposite sign compared to the right diagram, because swaps incur a factor of \(-1\) while turnarounds do not. This gives the desired cancellation between swaps and turnarounds which do not connect two strands which have been matched by the portion of the diagram after \(T_{1}\). Thus the total number of Poisson processes which must have zero points is \(\binom{n}{2}+\binom{n+1}{2}+n^{2}\). Here, \(\binom{n}{2}\) counts the possible swaps between two top strands, \(\binom{n+1}{2}\) counts the possible swaps between two bottom strands, and \(n^{2}=n(n+1)-n\) counts the turnarounds which connect a top and bottom strand which are not already connected by the diagram to the right of \(T_{1}\).

Figure 33: The above two diagrams are equal as elements of \(\mathcal{B}_{n,n}\)

Figure 32: Left: we can assume that our exploration process looks like this after applying the inductive assumption. Right: completing the inductive step by arguing that after cancellation, we may assume that there are no other points before \(T_{1}\), besides the previously seen red swaps.
We now finish by noting the identity \[4n+\binom{2n}{2}-n-\binom{n}{2}-\binom{n+1}{2}-n^{2}=2n.\qed\] Next, to extract the Jucys-Murphy elements, it is helpful to think of all the swaps in the \(i\)th exploration era as involving the \(i\)th strand. Towards this end, we show that the expectation of \(F(\mathcal{Q}(T_{n}))\) appearing in Proposition 4.5 may be computed by following a slightly different exploration, one in which each exploration era stays on a single strand, and in each era, we keep track of all swaps that touch the strand we are currently exploring. First, we define processes \((\bar{E}_{t})_{t\geqslant 0}\) and \((\bar{\pi}_{t})_{t\geqslant 0}\) as follows. As before, we start with \(\bar{E}_{0}=1\) and \(\bar{\pi}_{0}=\text{id}\). We proceed to explore \(\mathcal{P}_{\bar{E}_{0}}\) (in contrast to before, where we explored \(\mathcal{P}_{\pi_{0}(E_{0})}\)). When we see a swap of the form \(\{1,j\}\), \(j\in[n]\), we update \(\bar{\pi}\mapsto\bar{\pi}(1\ j)\). When we see a turnaround \(\langle 1\ j\rangle\), \(j\in(n:2n]\), the first exploration era ends, we update \(\bar{E}\) to be \(2\), and we remove from \(\mathcal{P}\) all points in \(\mathcal{P}_{E_{0}}\). We then continue until the end of the \(n\)th exploration era. See Figure 34 for how one may visually compare this alternative exploration with our original exploration. Formally, we may define a bijection on sets of points \(P\mapsto P^{\prime}\), which preserves the Poisson measure, and moreover if we follow our original exploration process on the set \(P\), then that amounts to following the alternative exploration on the set \(P^{\prime}\). Under this bijection, the Figure 34: If, every time we see a swap, we imagine we “cut and swap” the two strands which were involved, we go from the left picture the to the right picture. Note that the left matching is unchanged. The original right matching can be reconstructed from the left matching and the swaps. In the right picture, all swaps in the first era involve the top strand, all swaps in the second era involve the second-top strand, etc. left matching found by the original exploration is equal to the left matching found by the alternative exploration, whereas the right matchings of the two explorations differ in a precise way, which is exactly encoded in the process \(\bar{\pi}\). As an example, observe that in the previous picture, just before time \(T_{1}\), we have that \(\bar{\pi}_{t}=(4\ 3)(4\ 2)=(2\ 3\ 4)\). Observe that \(\bar{\pi}_{t}(4)=2\), and on the right hand side of the original exploration, \(2\) is matched to \(6\). More generally, the rule is as follows. Let \(\sigma(\mathcal{Q})\) be the left matching found by the alternative exploration process. Then the right matching \(\tau(\mathcal{Q})\) is given by \(\sigma(Q)\bar{\pi}_{t}\). Finally, because the bijection preserves the Poisson measure, when we apply the two explorations to a Poisson process, then they have the same law. We have thus arrived at the following result. 
**Lemma 4.6**.: We have that \[\mathbb{E}\big{[}F(\mathcal{Q}(T_{n}))\mathbb{1}\,(T_{n}\leq T)e ^{2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}=\] \[\frac{1}{N^{n}n!}\sum_{\sigma:[n]\to(n:2n]}\left[\sigma\ \sigma\mathbb{E}\big{[}\bar{\bar{ \pi}}_{T_{n}}e^{2(n-1)T_{1}}\mathbb{1}\,(T_{n}\leq T)e^{2(n-2)(T_{2}-T_{1})} \cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}\right]\] _Remark 4.7_.: The factor of \(\frac{1}{N^{n}}\) arises because each turnaround incurs factor of \(\frac{1}{N}\), and there are \(n\) total turnarounds on the event \(T_{n}\leq T\). The factor \(\frac{1}{n!}\) arises because the first turnaround is equally likely to touch any of the \(n\) bottom strands, the second turnaround is equally likely to touch any of the \(n-1\) remaining bottom strands, etc. **Lemma 4.8**.: Let \(U_{1},\ldots,U_{n}\stackrel{{ i.i.d.}}{{\sim}}\mathrm{Exp}(1)\). We have that \[\mathbb{E}\big{[}\bar{\pi}_{T_{n}}e^{2(n-1)T_{1}}\mathbb{1}\,(T_ {n}\leq T)e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}=\] \[n!\mathbb{E}[\exp(-U_{n}J_{n}/N)\cdots\exp(-U_{1}J_{1}/N)\mathbb{ 1}\,(U_{1}+\cdots+U_{n}\leq T)].\] Proof.: Note that the duration \(T_{k}-T_{k-1}\) of the \(k\)th exploration process is an exponential random variable with rate \(n-k+1\). We thus have the explicit formula \[\mathbb{E}\big{[}\bar{\pi}_{T_{n}} \mathbb{1}\,(T_{n}\leq T)e^{2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})} \cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}=\] \[\int_{0}^{T}dt_{1}\int_{t_{1}}^{T}dt_{2}\cdots\int_{t_{n-1}}^{T} dt_{n}\big{(}ne^{-nt_{1}}\big{)}\big{(}(n-1)e^{-(n-1)(t_{2}-t_{1})} \big{)}\cdots e^{-(t_{n}-t_{n-1})}\ \times\] \[f_{n}(t_{1})f_{n-1}(t_{2}-t_{1})\cdots f_{1}(t_{n}-t_{n-1})\ \times\] \[e^{2(n-1)t_{1}}e^{2(n-2)(t_{2}-t_{1})}\cdots e^{2(n-n)(t_{n}-t_{ n-1})}.\] Here, \(f_{n}(t_{1})\) is the expected contribution of all same-direction swaps in the first exploration era, conditioned on \(T_{1}=t_{1}\), \(f_{n-1}(t_{2}-t_{1})\) is the expected contribution of all same-direction swaps in the second exploration era, conditioned on \(T_{2}-T_{1}=t_{2}-t_{1}\), etc. Conditioned on \(T_{1}=t_{1}\), the number of total same-direction swaps is \(\mathrm{Poi}((n-1)t_{1})\), and conditional on the total number of same-direction swaps being equal to \(k\), the expected contribution is uniformly distributed on all possible sequences of \(k\) swaps, i.e. \((-J_{n}/N(n-1))^{k}\). We thus have the explicit formula \[f_{n}(t_{1})=e^{-(n-1)t_{1}}\sum_{k=0}^{\infty}\frac{((n-1)t_{1})^{k}}{k!} \bigg{(}\frac{-J_{n}}{N(n-1)}\bigg{)}^{k}\] \[=e^{-(n-1)t_{1}}\sum_{k=0}^{\infty}\frac{(-t_{1}J_{n}/N)^{k}}{k!}=e^{-(n-1)t_{1}} e^{-t_{1}J_{n}/N}.\] More generally, we have the formula \[f_{n-k+1}(t_{k}-t_{k-1})=e^{-(n-k)(t_{k}-t_{k-1})}e^{-(t_{k}-t_{k-1})J_{n-k+1}/N },\ \ k\in[n].\] Inserting this into our first display, we obtain \[\mathbb{E}\big{[}\bar{\pi}_{T_{n}}\mathbb{1}\left(T_{n}\leq T \right)e^{2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})} \big{]}=\\ n!\int_{0}^{T}dt_{1}\int_{t_{1}}^{T}dt_{2}\cdots\int_{t_{n-1}}^{T} dt_{n}\ e^{-t_{1}}\cdots e^{-(t_{n}-t_{n-1})}e^{-t_{1}J_{n}/N}e^{-(t_{2}-t_{1})J_{ n-1}/N}\cdots e^{-(t_{n}-t_{n-1})J_{1}/N}.\] To finish, observe that the right hand side above is precisely the right hand side of the claimed identity. Up to now, we did not need to make any assumption on the size of \(N\). We begin to do so here. 
Later, in Section 4.2, we will show how to remove these assumptions, but for now we prefer to work in a simplified setting where the main ideas are more transparent. **Lemma 4.9**.: Suppose that \(N\geq n\). Then \[\lim_{T\to\infty}\mathbb{E}\big{[}\bar{\pi}_{T_{n}}\mathbb{1}(T_{n}\leq T)e^{ 2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}=n!N ^{n}(N+J_{n})^{-1}\cdots(N+J_{1})^{-1}.\] Proof.: By Lemma 4.8, we may compute \[\mathbb{E}\big{[}\bar{\pi}_{T_{n}}\mathbb{1}(T_{n}\leq T)e^{2(n- 1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}=\\ n!\int_{0}^{T}du_{n}\int_{0}^{T-u_{n}}du_{n_{1}}\cdots\int_{0}^{T -(u_{n}+\cdots+u_{2})}du_{1}\big{(}e^{-u_{n}}e^{-u_{n}J_{n}/N}\big{)}\cdots \big{(}e^{-u_{1}}e^{-u_{1}J_{1}/N}\big{)}.\] Since \(N\geq n\), we have that \(\|J_{k}/N\|<1\) for all \(k\in[n]\). This implies that the following integral is absolutely convergent (recall Remark 2.20): \[\int_{0}^{\infty}du_{n}\int_{0}^{\infty}du_{n-1}\cdots\int_{0}^{\infty}du_{1} \big{(}e^{-u_{n}}e^{-u_{n}J_{n}/N}\big{)}\cdots\big{(}e^{-u_{1}}e^{-u_{1}J_{1 }/N}\big{)},\] and moreover, the limit in question is equal to \(n!\) times the above. To finish, simply observe that the above splits into a product of \(n\) integrals, where the \(k\)th integral may be evaluated: \[\int_{0}^{\infty}du_{k}e^{-u_{k}}e^{-u_{k}J_{k}/N}=\int_{0}^{\infty}du_{k}e^{- u_{k}(\mathrm{id}+J_{k}/N)}=\left(\mathrm{id}+\frac{J_{k}}{N}\right)^{-1}=N(N+J_{k} )^{-1}.\] The desired result follows. Next, we argue why the contribution to the partition function \(e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]\) coming from the event \(\{T_{n}>T\}\) vanishes in the \(T\to\infty\) limit (that is, as \(T\) becomes large, we can assume that all exploration eras have finished by time \(T\)). We first show that when the numbers of top strands and bottom strands are mismatched, the expectation vanishes as \(T\to\infty\). This will be needed in the proof of Proposition 4.19 later. **Lemma 4.10**.: Suppose that \(n\geqslant 2\) and \(N\geqslant 2n\). Suppose that \(\Sigma\) is a Poisson process arising from having \(n-1\) top strands and \(n\) bottom strands. Then \[\sup_{T\geqslant 0}e^{\binom{2n-1}{2}T-(n-1)T}\|\mathbb{E}[F(\Sigma(T))]\|<\infty.\] Proof.: We proceed by induction. First, consider the base case \(n=2\). In this case, by conditioning on the first time of turnaround, we can explicitly compute \[e^{2T}\mathbb{E}[F(\Sigma(T))]=\ e^{2T}\int_{0}^{T}2e^{-2u}X(u)YZ(u)du+e^{2T}e^ {-2T}e^{-T}e^{-TJ_{2}^{\prime}/N},\] where \(X(u)\) is the expected contribution of all swaps up to time \(u\), and \(Z(u)\) is the expected contribution of all points after time \(u\), where both are conditioned on the first turnaround happening at time \(u\). Also, \(Y=\frac{1}{2}\big{(}\langle 1\ 2\rangle+\langle 1\ 3\rangle\big{)}\) is the expected contribution of the turnaround, since each of the two turnarounds is equally likely. Note that the time of first turnaround is exponential of rate \(2\), which explains the presence of the \(2e^{-2u}\) term. The second term above corresponds to the case where the first turnaround happens after time \(T\). We have the explicit formulas \[X(u)=e^{-u}e^{-uJ_{2}^{\prime}/N},\ \ Z(u)=e^{-2(T-u)},\] where \(J_{2}^{\prime}\) is the Jucys-Murphy element which we view as acting on the bottom two strands (recall Definition 2.18). This formula follows because the number of swaps up to time \(u\) is \(\mathrm{Poisson}(u)\), and each swap incurs a factor \(-J_{2}^{\prime}/N\). 
The fact that \(Z(u)=e^{-2(T-u)}\) follows because once a turnaround occurs, we can argue via cancellation as in the proof of Lemma 4.4 that the only points which can occur thereafter are turnarounds between the same two strands. Plugging in the formulas for \(X(u),Z(u)\), we may obtain the expression \[\int_{0}^{T}e^{-u(\mathrm{id}+J_{2}^{\prime}/N)}du\big{(}\langle 1\ 2\rangle+ \langle 1\ 3\rangle\big{)}+e^{-T(\mathrm{id}+J_{2}^{\prime}/N)}.\] Since \(N\geqslant 2n\) is sufficiently large, as \(T\to\infty\) the above stays bounded (in fact, it converges to some explicit expression involving \((\mathrm{id}+J_{2}^{\prime}/N)^{-1}\), as in the proof of Lemma 4.8). This shows the case \(n=2\). Now suppose the claim is true for some \(n\). Suppose also that \(N\geqslant 2(n+1)\). We show that the claim is true for \(n+1\). As in the base case, by conditioning on the first time of turnaround, we may express (note that \(\binom{2n+1}{2}-n=2n^{2}\)) \[e^{\binom{2n+1}{2}T-nT}\mathbb{E}[F(\Sigma(T))]=e^{2n^{2}T}\int _{0}^{T}n(n+1)e^{-n(n+1)u}X_{n}(u)Y_{n}Z_{n}(u)du\ + \tag{4.3}\] \[e^{2n^{2}T}e^{-n(n+1)T}e^{-\binom{n}{2}T}e^{-T(J_{n}+\cdots+J_{ 1})/N}e^{-\binom{n+1}{2}T}e^{-T(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N},\] where \(X_{n}(u)\) is the expected contribution of all swaps up to time \(u\) and \(Z_{n}(u)\) is the expected contribution of all points after time \(u\), where both are conditioned on the first turnaround happening at time \(u\). Also, \(Y_{n}=\frac{1}{n(n+1)}\sum_{i\in[n],j\in(n:2n+1]}\langle i\ j\rangle\) is the expectation of the first turnaround. Let \(J_{1}^{\prime},\ldots,J_{n+1}^{\prime}\) be the Jucys-Murphy elements which act on the bottom \(n+1\) strands, as in Definition 2.18. Similar to before, we may explicitly compute \[X_{n}(u) =e^{-\binom{n}{2}u}e^{-\binom{n+1}{2}u}e^{-u(J_{n}+\cdots+J_{1})/N }e^{-u(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N},\] \[Z_{n}(u) =e^{-2(2n-1)(T-u)}f_{n}(T-u),\] where \(f_{n}(T-u)\) is the expected contribution of the points involving the remaining \(n-1\) top and \(n\) bottom strands after time \(u\), conditioned on the first turnaround happening at time \(u\). Observe that the \(e^{-2(2n-1)(T-u)}\) factor in \(Z_{n}(u)\) arises due to similar cancellations as in the proof of Lemma 4.4, which allows us to restrict to the event that after the first turnaround \(\langle i\ j\rangle\), the only points which can involve either of the two matched strands are exactly the turnarounds of the form \(\langle i\ j\rangle\). This means that a total of \(2(2n-1)\) rate-\(1\) Poisson processes must have zero points on the interval \([u,T]\). Plugging in our formulas for \(X_{n}(u),Z_{n}(u)\), and using the identities \(2n^{2}-n(n+1)-\binom{n}{2}-\binom{n+1}{2}=-n\), \(2n^{2}-2(2n-1)=2(n-1)^{2}\), we have that the first term on the right hand side of (4.3) is equal to \[n(n+1)\int_{0}^{T}e^{-nu}e^{-u(J_{n}+\cdots+J_{1})/N}e^{-u(J_{n+1}^{\prime}+ \cdots+J_{1}^{\prime})/N}Ye^{2(n-1)^{2}(T-u)}f_{n}(T-u)du.\] By the inductive assumption, we have that \(\sup_{S\geqslant 0}e^{2(n-1)^{2}S}\|f_{n}(S)\|<\infty\). Also, since \(N\geqslant 2(n+1)\), we have that \(\|(J_{n}+\cdots+J_{1})/N\|<n/2\) and \(\|(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N\|<n/2\), which implies \[\int_{0}^{\infty}e^{-nu}\big{\|}e^{-u(J_{n}+\cdots+J_{1})/N}e^{-u(J_{n+1}^{ \prime}+\cdots+J_{1}^{\prime})/N}\big{\|}du<\infty.\] Combining the two facts, we obtain that the first term on the right hand side of (4.3) is uniformly bounded in \(T\). 
The second term in the right hand side of (4.3) may be expressed \[e^{-nT}e^{-T(J_{n}+\cdots+J_{1})/N}e^{-T(J_{n+1}^{\prime}+\cdots+J_{1}^{ \prime})/N}.\] By arguing as before, we may show that this stays bounded as \(T\to\infty\) (in fact, it converges to zero). This completes the proof of the inductive step. Combining this lemma with an inductive argument, we can obtain the following. **Proposition 4.11**.: Suppose that \(N\geqslant 2n\). We have that \[\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))1(T_{n}>T)]=0.\] Proof.: Fix \(N\). First, when \(n=1\), we have that \[\mathbb{E}[F(\Sigma(T))1\,(T_{1}>T)]=e^{-T}\mathrm{id},\] where \(\mathrm{id}\) here denotes the identity element of \(\mathcal{B}_{n,n}\). The right hand side above clearly goes to zero as \(T\to\infty\). This shows the base case \(n=1\). Now suppose the result is true for some general \(n\geqslant 1\). Suppose also that \(N\geqslant 2(n+1)\). We proceed to show that the \(n+1\) case is true. Towards this end, observe that we may decompose \[\mathbbm{1}(T_{n+1}>T)=\mathbbm{1}(T_{1}>T)+\mathbbm{1}(T_{1}\leqslant T<T_{n+1}).\] We split into the two cases indicated above. In the first case, we condition on the exploration at time \(T\): \[e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}\big{[}\mathbb{E}[F(\Sigma(T))\ |\ \mathcal{F}_{T}] \mathbbm{1}(T_{1}>T)]\big{]}.\] To help visualize, imagine we have the situation in Figure 35, where we explore the first strand until time \(T\), and we have not yet seen a turnaround. Conditioned on this picture, the expectation of the diagram can be computed as follows. First, since we have already explored one strand, the remaining points effectively form a Poisson process corresponding to \(n\) top strands and \(n+1\) bottom strands. Call this modified process \(\bar{\Sigma}(T)\). We visualize this in Figure 36, where the top-most strand in the left diagram is dashed, to signify that there are no points touching this strand. Having computed the expectation of the modified diagram in the left of Figure 36, to obtain the conditional expectation of \(F(\Sigma(T))\) we simply need to multiply by the right diagram in the figure, which captures the effect of all swaps seen by our exploration up to time \(T\). This discussion corresponds to the following identity for the conditional expectation: \[\mathbb{E}[F(\Sigma(T))\ |\ \mathcal{F}_{T}]=\mathbb{E}[F(\bar{\Sigma}(T))]F( \mathcal{Q}_{T}).\] We may then compute \[e^{\binom{2(n+1)}{2}T-(n+1)T} \mathbb{E}\big{[}\mathbb{E}[F(\Sigma(T))\ |\ \mathcal{F}_{T}] \mathbbm{1}(T_{1}>T)]\big{]}\] \[=e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}[F(\bar{\Sigma}(T))] \mathbb{E}[F(\mathcal{Q}_{T})\mathbbm{1}(T_{1}>T)]\] \[=e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}[F(\bar{\Sigma}(T))]e^{-( n+1)T}e^{-nT}e^{-TJ_{n+1}/N}\] Figure 35: We explore the top strand, and do not see a turnaround before time \(T\). \[=\big{(}e^{\binom{2n+1}{2}T-nT}\mathbb{E}[F(\bar{\Sigma}(T))]\big{)}e^{-T}e^{-TJ_{n +1}/N}.\] As \(T\to\infty\), the right hand side above goes to zero, since by Lemma 4.10 (and our assumption that \(N\geq 2(n+1)\)), the term in the parentheses above is \(O(1)\), and since \(N\geq n+1\), we have that \(\|J_{n+1}/N\|<1\), so that \(e^{-T}e^{-TJ_{n+1}/N}\to 0\). This shows the inductive step in the first case. Next, we consider the case corresponding to \(\mathbb{1}\,(T_{1}\leq T<T_{n+1})\). We condition on the exploration at time \(T_{1}\). Consider the diagram in Figure 37 which corresponds to \(n+1=4\). 
On the event that \(T_{1}\leq T<T_{n+1}\), the portion of the diagram to the right of \(T_{1}\) can be treated as having \(n\) top strands and \(n\) bottom strands. By arguing similarly to the previous case, i.e. by splitting our strand diagrams into the portion before \(T_{1}\) and the portion after Figure 37: We explore the top strand and see a turnaround at time \(T_{1}\). \(T_{1}\), we may compute the conditional expectation: \[\mathbb{E}\big{[}F(\Sigma(T))\mathbbm{1}(T_{1}\leqslant T<T_{n+1})\ |\ \mathcal{F}_{T_{1}}=u \big{]}=\mathbb{E}\big{[}F(\bar{\Sigma}(u))\big{]}e^{-4n(T-u)}F(\mathcal{Q}_{T_{ 1}})f_{n}(T-u),\] where \(\bar{\Sigma}\) is a Poisson process corresponding to having \(n\) top strands and \(n+1\) bottom strands, and \(f_{n}(T-u)\) is the expectation of the remaining \(n\) top and \(n\) bottom strands after time \(u\), on the event that not all \(n\) exploration eras end before time is up. Observe that by our inductive assumption, we have that for any \(u\geqslant 0\), \[\lim_{T\to\infty}e^{\binom{2n}{2}(T-u)-n(T-u)}f_{n}(T-u)=0. \tag{4.4}\] Since \(T_{1}\) is an exponential random variable of rate \(n+1\), we may compute the expectation: \[e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}\big{[}F(\Sigma(T)) \mathbbm{1}(T_{1}\leqslant T<T_{n+1})\big{]}=\] \[e^{\binom{2(n+1)}{2}T-(n+1)T}\int_{0}^{T}du\ (n+1)e^{-(n+1)u}\big{(} \mathbb{E}\big{[}F(\bar{\Sigma}(u))\big{]}\big{)}\big{(}e^{-nu}e^{-uJ_{n+1}/N }\big{)}Y\big{(}e^{-4n(T-u)}f_{n}(T-u)\big{)}\] Here, the term \(e^{-nu}e^{-uJ_{n+1}/N}\) arises from taking the expectation of all swaps in the first exploration era (i.e. \(F(\mathcal{Q}(T_{1}))\)), conditioned on \(T_{1}=u\), and \(Y\) is the expectation of the first turnaround. Since \[\binom{2(n+1)}{2}-(n+1) =2n^{2}+2n\] \[\binom{2(n+1)}{2}-(n+1)-4n =\binom{2n}{2}-n,\] we have that the above is further equal to \[(n+1)\int_{0}^{T}du\ \big{(}e^{2n^{2}u}\mathbb{E}[F(\bar{\Sigma}(u))] \big{)}\big{(}e^{-u}e^{-uJ_{n+1}/N}\big{)}Ye^{-4n(T-u)}f_{n}(T-u).\] Now since \(N\geqslant 2n\), by Lemma 4.10, we have that \(e^{2n^{2}u}\mathbb{E}[F(\bar{\Sigma}(u))]=O(1)\) and \(e^{-u}e^{-uJ_{n+1}/N}\) is integrable. Combining this with (4.4) and dominated convergence, we finally obtain \[\lim_{T\to\infty}e^{\binom{2(n+1)}{2}T-(n+1)T}\mathbb{E}\big{[}F( \Sigma(T))\mathbbm{1}(T_{1}\leqslant T<T_{n+1})\big{]}=0.\] This completes the proof of the inductive step, and thus the desired result now follows. We can now finally take the \(T\to\infty\) limit. **Proposition 4.12**.: Suppose that \(N\geqslant 2n\). Then as \(T\to\infty\), we have that \[\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]= \sum_{\sigma,\tau:[n]\to(n:2n]}\mathrm{Wg}_{N}(\sigma\tau^{-1})[\sigma\ \tau].\] Proof.: By combining Proposition 4.5, Lemmas 4.6 and 4.9, and Proposition 4.11, we obtain \[\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))] =\sum_{\sigma:[n]\to(n:2n]}\left[\sigma\ \sigma\mathrm{Wg}_{N}\right]\] \[=\sum_{\sigma:[n]\to(n:2n]}\sum_{\pi\in\mathrm{S}_{n}}[\sigma\ \sigma\pi] \mathrm{Wg}_{N}(\pi)\] \[=\sum_{\sigma,\tau:[n]\to(n:2n]}[\sigma\ \tau]\mathrm{Wg}_{N}( \sigma^{-1}\tau).\] To finish, recall that \(\mathrm{Wg}_{N}(\sigma^{-1}\tau)=\mathrm{Wg}_{N}(\sigma\tau^{-1})\), because \(\mathrm{Wg}_{N}\) is a class function. We can now prove Theorem 2.5 in the case \(N\geq 2n\). 
Proof of Theorem 2.5 when \(N\geq 2n\).: Recall from (4.1) that \[e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]=\sum_{\pi}w_{T}(\pi)\pi.\] By Proposition 4.12, we obtain \[\lim_{T\to\infty}w_{T}(\pi)=\mathbb{1}(\pi=[\sigma\ \tau]\text{ for some }\sigma,\tau:[n]\to(n:2n])\mathrm{Wg}_{N}(\sigma\tau^{-1}).\] Since \(w_{T}(\pi_{1},\dots,\pi_{L})=w_{T}(\pi_{1})\cdots w_{T}(\pi_{L})\), the desired result now follows.

_Remark 4.13_ (Comparison to [1]).: If one translates Dahlqvist's proof to the language of Poisson point processes, then his strategy amounts to an exploration of the Poisson process which simultaneously explores all strands. This is certainly a natural exploration to try. [1, Lemma 5.1] amounts to the statement that the main contribution comes from the event that all exploration eras end for this "simultaneous exploration". [1, Lemma 5.2] gives a formula for the limiting contribution on this main event. He then extracts the Weingarten function from this formula by [1, Lemma 5.3]. We believe that our proof technique via strand-by-strand exploration is intrinsically interesting, because first of all it is rather surprising that such an exploration actually works. Recall that this was Proposition 4.5, whose proof rested on certain cancellations that could be uncovered (Lemma 4.4). Moreover, the strand-by-strand exploration naturally uncovers the Jucys-Murphy elements, thus giving an alternative perspective on the appearance of the Weingarten function. Finally, the strand-by-strand exploration naturally leads to a single-strand recursion that results in a slightly more general version of the Makeenko-Migdal/Master loop/Schwinger-Dyson equation - see Remarks 5.4 and 5.7 for more discussion.

### 4.2. Extension to general values of \(N\)

Recall that in the proof of Proposition 4.12, we deduced the existence of \(\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[F(\Sigma(T))]\) from the existence of \(\lim_{T\to\infty}\mathbb{E}[\bar{\pi}_{T_{n}}\mathbb{1}(T_{n}\leq T)]\in\mathbb{C}[\mathrm{S}_{n}]\). However, when \(N\leq n\), the trouble is that the latter limit no longer exists. Thus to prove Theorem 2.5 in the case where \(N\) is small, we need some alternative argument which does not rely on convergence in the group algebra. Indeed, we will show that although \(\lim_{T\to\infty}\mathbb{E}[\bar{\pi}_{T_{n}}\mathbbm{1}(T_{n}\leq T)]\) does not necessarily exist in \(\mathbb{C}[\mathrm{S}_{n}]\), once we apply the representation \(\rho_{+}\) (Definition 2.22), the limit _does exist_. Moreover, the limit \(\lim_{T\to\infty}\rho_{+}\big{(}\mathbb{E}[\bar{\pi}_{T_{n}}\mathbbm{1}(T_{n}\leq T)]\big{)}\) already contains enough information in order to compute expectations of traces of words. Once we have built up enough background, the actual proof of Theorem 2.5 for general values of \(N\) will be a small variation of the proof for large \(N\), as the major technical steps were already covered in Section 4.1 (and any additional background covered in Section 2.2).

Towards this end, it will be useful to recall why expectations of traces of words may be reduced to weighted sums over the Brauer algebra, i.e. why Lemma 2.3 is true. Let \(\Gamma\) be a word on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\). We may assume \(\Gamma=\lambda_{c(1)}^{\varepsilon(1)}\cdots\lambda_{c(n)}^{\varepsilon(n)}\), where \(\varepsilon:[n]\to\{\pm 1\}\) and \(c:[n]\to[L]\). Let \(M=(M_{1},\ldots,M_{L})\) be a given collection of \(N\times N\) unitary matrices.
The computation of \(\mathrm{Tr}(M(\Gamma))=\mathrm{Tr}\big{(}M_{c(1)}^{\varepsilon(1)}\cdots M_{ c(n)}^{\varepsilon(n)}\big{)}\) may be visualized in terms of the strand diagram as in Figure 38, where we consider the concrete case \(\Gamma=\lambda_{1}^{2}\lambda_{2}\lambda_{1}^{-2}\lambda_{2}^{-1}\). In Figure 38, we can imagine we are traversing the strand diagram. Every black strand contributes a matrix element, and every dashed red strand enforces an identification of indices. In the end we sum over all indices which appear. Of course, we could have written the trace more succinctly as \[\mathrm{Tr}(M_{1}^{2}M_{2}M_{1}^{-2}M_{2}^{-1})=(M_{1})_{i_{1}i_{2}}(M_{1})_{i _{2}i_{3}}(M_{2})_{i_{3}i_{4}}(\overline{M}_{1})_{i_{5}i_{4}}(\overline{M}_{1} )_{i_{6}i_{5}}(\overline{M}_{2})_{i_{1}i_{6}},\] but we prefer to keep the \(\delta\) functions because they correspond to the dashed red lines. We now want to give an expression as above for general words and strand diagrams. Given the strand diagram of a word \(\Gamma\), note that the diagram has a single component, with a unique ordering of its vertices \(x_{1},\ldots,x_{V}\) up to cyclic equivalence. This ordering is such that the edges alternate between black strands and dashed red lines. Let \(B(\Gamma)\) be the set of black strands, and \(R(\Gamma)\) be the set of dashed red lines. Further split \(B(\Gamma)=B_{+}(\Gamma)\cup B_{-}(\Gamma)\), where \(B_{+}(\Gamma),B_{-}(\Gamma)\) are the set of positive (i.e. right) and negative (i.e. left)-oriented black strands. In the previous example, \[B_{+}(\Gamma)=\{(x_{1},x_{2}),(x_{3},x_{4}),(x_{5},x_{6})\},\ \ B_{-}(\Gamma)=\{(x_{7},x_{8}),(x_{9},x_{10}),(x_{11},x_{12})\},\] \[R(\Gamma)=\{(x_{2},x_{3}),(x_{4},x_{5}),(x_{6},x_{7}),(x_{8},x_{9}),(x_{10},x_{11})\}.\] Given a collection of indices \(i=(i_{1},\ldots,i_{V})\in[\![N]\!]^{V}\), and an edge \(e=(x_{j},x_{j+1})\), let \(i_{e}=(i_{j},i_{j+1})\), \(i_{-e}=(i_{j+1},i_{j})\). Let \(r(e)\in[\![L]\!]\) be the index of the letter that \(e\) corresponds to. Then the general formula for \(\operatorname{Tr}(M(\Gamma))\) in terms of the strand diagram is: \[\operatorname{Tr}(M(\Gamma))=\prod_{e\in B_{+}(\Gamma)}(M_{r(e)})_{i_{e}}\prod _{e\in B_{-}(\Gamma)}(\overline{M}_{r(e)})_{i_{-e}}\prod_{e\in R(\Gamma)} \delta^{i_{e}},\] where we implicitly sum over \(i=(i_{1},\ldots,i_{V})\in[\![N]\!]^{V}\). Now the point is as follows. If \(M_{1},\ldots,M_{L}\) are independent \(\operatorname{U}(N)\)-valued Brownian motions, then upon taking expectations of the above, we may obtain that \(\mathbb{E}[\operatorname{Tr}(M(\Gamma))]\) is equal to a weighted sum of diagrams as follows. First, for \(\ell\in[\![L]\!]\), let \(B_{+}(\Gamma,\ell)\), \(B_{-}(\Gamma,\ell)\) be the sets of positively and negatively oriented edges corresponding to the letter \(\lambda_{\ell}\). Since the \(M_{1},\ldots,M_{L}\) are independent, we have that \[\mathbb{E}[\operatorname{Tr}(M(\Gamma))]=\prod_{\ell\in[\![L]\!]}\mathbb{E} \bigg{[}\prod_{e\in B_{+}(\Gamma,\ell)}(M_{r(e)})_{i_{e}}\prod_{e\in B_{-}( \Gamma,\ell)}(\overline{M_{r(e)}})_{i_{-e}}\bigg{]}\prod_{e\in R(\Gamma)} \delta^{i_{e}}.\] We recall the following lemma from [10, Appendix A] (see also (2.1)) which gives a formula for each of the expectations appearing in the right hand side above. **Proposition 4.14**.: Let \(i_{1},\ldots,i_{n}\), \(i^{\prime}_{1},\ldots,i^{\prime}_{n}\), \(j_{1},\ldots,j_{n}\), \(j^{\prime}_{1},\ldots,j^{\prime}_{n}\in[\![N]\!]\). 
We have that \[\mathbb{E}\big{[}(B_{T})_{i_{1}j_{1}}\cdots(B_{T})_{i_{n}j_{n}}(\overline{B}_ {T})_{i^{\prime}_{1}j^{\prime}_{1}}\cdots(\overline{B}_{T})_{i^{\prime}_{n}j^ {\prime}_{n}}\big{]}=\sum_{\pi}w_{T}(\pi)\mathbb{1}(\text{indices match with $\pi$}).\] Here, the sum is over walled pairings \(\pi\in\mathcal{M}(n,n)\) (recall Definition 2.13). Using this, we may write \[\mathbb{E}[\operatorname{Tr}(M(\Gamma))]=\sum_{\pi=(\pi_{1},\ldots,\pi_{L})}w _{T}(\pi_{1})\cdots w_{T}(\pi_{L})\prod_{\ell\in L}\prod_{\{a,b\}\in\pi_{\ell }}\delta^{i_{a}i_{b}}\prod_{e\in R(\Gamma)}\delta^{i_{e}}.\] Now, observe that \[\prod_{\ell\in L}\prod_{\{a,b\}\in\pi_{\ell}}\delta^{i_{a}i_{b}}\prod_{e\in R( \Gamma)}\delta^{i_{e}}=N^{\#\text{comp}(\Gamma,\pi)},\] where recall \(\#\text{comp}(\Gamma,\pi)\) is the number of components obtained by deleting all black strands but including all interior matchings specified by \(\pi_{1},\ldots,\pi_{L}\). For instance, in our previous example, suppose our matchings were as in Figure 39. Since each edge in Figure 39 (be it red or black) imposes a constraint on the indices, the total number of free summation indices is exactly equal to the number of connected components in the above diagram. Each free summation index may take one of \(N\) values, whence the term \(N^{\#\text{comp}(\Gamma,\pi)}\). Lemma 2.3 follows directly from these considerations8. Footnote 8: In our discussion, we have only considered a single word, but everything extends directly to the case of multiple words. Now recall from Definition 2.22 that the matrix elements of the representation \(\rho_{+}(\pi)\) are exactly given by \[(\rho_{+}(\pi))_{i\sqcup i^{\prime},j\sqcup j^{\prime}}=\mathbb{1}(\text{ indices match with }\pi).\] Here, \(i\sqcup i^{\prime}\) denotes the length-\(2n\) vector of indices given by concatenation: \((i_{1},\ldots,i_{n},i^{\prime}_{1},\ldots,i^{\prime}_{n})\), and similarly for \(j\sqcup j^{\prime}\). Combining this with the previous discussion, we have the following result. First, for some notation, let \(\mathbf{i}_{\ell},\mathbf{j}_{\ell}\) respectively collect all left and right indices which appear in the strand diagram corresponding to \(\lambda_{\ell}\). For the example in Figure 39, we have that \(\mathbf{i}_{1}=(i_{1},i_{3},i_{8},i_{7})\), \(\mathbf{j}_{1}=(i_{2},i_{4},i_{7},i_{9})\), \(\mathbf{i}_{2}=(i_{5},i_{12})\), \(\mathbf{j}_{2}=(i_{6},i_{11})\). **Lemma 4.15**.: Let \(\mathbf{\Gamma}\) be a balanced collection of words on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Let \(\pi=(\pi_{\ell},\ell\in[L])\). Then \[\prod_{\ell\in[L]}\rho_{+}(\pi_{\ell})_{\mathbf{i}_{\ell}\mathbf{j}_{\ell}} \prod_{e\in R(\mathbf{\Gamma})}\delta^{i_{e}}=N^{\#\mathrm{comp}(\mathbf{ \Gamma},\pi)}.\] Using this lemma and the previous discussion, we could have written \(\mathbb{E}[\mathrm{Tr}(M(\Gamma))]\) in terms of \(\rho_{+}(\pi_{1}),\ldots,\rho_{+}(\pi_{L})\), as follows. **Lemma 4.16**.: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{n})\) be a balanced collection of words with letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\). 
We have that \[\mathbb{E}[\mathrm{Tr}(B_{T}(\mathbf{\Gamma}))]=\bigg{(}\sum_{\pi_{1}\in \mathcal{B}_{n_{1},n_{1}}}w_{T}(\pi_{1})\rho_{+}(\pi_{1})\bigg{)}_{\mathbf{i}_ {1}\mathbf{j}_{1}}\cdots\bigg{(}\sum_{\pi_{L}\in\mathcal{B}_{n_{L},n_{L}}}w_{ T}(\pi_{L})\rho_{+}(\pi_{L})\bigg{)}_{\mathbf{i}_{L}\mathbf{j}_{L}}\prod_{e\in R (\mathbf{\Gamma})}\delta^{i_{e}},\] As mentioned at the beginning of this subsection, we have rewritten expectations of traces of words of Unitary Brownian motion in terms of some function (namely, \(\rho_{+}\)) of weighted sums over the Brauer algebra. The point now is that \(\lim_{T\to\infty}\rho_{+}\big{(}\mathbb{E}[\bar{\pi}_{T_{n}}\mathbb{1}(T_{n} \leqslant T)]\big{)}\) exists for all \(N\). Once we show this, the rest of the proof of Theorem 2.5 in the case of general \(N\) is exactly the same. **Lemma 4.17** (Analog of Lemma 4.9).: We have that \[\lim_{T\to\infty}\rho_{+}\big{(}\mathbb{E}\big{[}\bar{\pi}_{T_{n}}e^{2(n-1)T_{ 1}}\mathbb{1}(T_{n}\leqslant T)e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_ {n-1})}\big{]}\big{)}=n!N^{n}\rho_{+}(\mathrm{Wg}_{N}).\] Proof.: By Lemma 4.8, we may compute \[\rho_{+}\big{(}\mathbb{E}\big{[}\bar{\pi}_{T_{n}}e^{2(n-1)T_{1}} \mathbbm{1}(T_{n}\leq T)e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})} \big{]}\big{)}=\] \[\quad n!\int_{0}^{T}du_{n}\int_{0}^{T-u_{1}}du_{n-1}\cdots\int_{0 }^{T-(u_{1}+\cdots+u_{n-1})}du_{1}e^{-u_{n}\rho_{+}(\mathrm{id}+J_{n}/N)}\cdots e ^{-u_{1}\rho_{+}(\mathrm{id}+J_{1}/N)}.\] By Lemma 2.28, for all \(k\in[n]\), all eigenvalues of \(\rho_{+}(J_{k})\) are at least \(-N+1\), and thus all eigenvalues of \(\rho_{+}(\mathrm{id}+J_{k}/N)\) are at least \(1/N\), and in particular all eigenvalues are strictly positive. Thus as we send \(T\to\infty\) the above converges to (applying Lemma 2.29 in the final identity) \[n!\int_{0}^{\infty}e^{-u_{n}\rho_{+}(\mathrm{id}+J_{n}/N)}du_{n} \cdots\int_{0}^{\infty}e^{-u_{1}\rho_{+}(\mathrm{id}+J_{1}/N)}du_{1} =n!\rho_{+}(\mathrm{id}+J_{n}/N)^{-1}\cdots\rho_{+}(\mathrm{id}+J _{1}/N)^{-1}\] \[=n!N^{n}\rho_{+}(N+J_{n})^{-1}\cdots\rho_{+}(N+J_{1})^{-1}\] \[=n!N^{n}\rho_{+}(\mathrm{Wg}_{N}),\] as desired. We also have the following analogs of Lemma 4.10 and Proposition 4.11. **Lemma 4.18** (Analog of Lemma 4.10).: Suppose that \(\Sigma\) is a Poisson process arising from having \(n-1\) top strands and \(n\) bottom strands. Then \[\sup_{T\geq 0}e^{\binom{2n-1}{2}T-(n-1)T}\|\rho_{+}\big{(}\mathbb{E}[F( \Sigma(T))]\big{)}\|<\infty.\] Proof.: In the proof of Lemma 4.10, the condition on \(N\) was needed to show that \[\int_{0}^{\infty}e^{-nu}\big{\|}e^{-u(J_{n}+\cdots+J_{1})/N}e^{-u (J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N}\big{\|}du <\infty,\] \[\quad\sup_{T\geq 0}e^{-nT}\big{\|}e^{-T(J_{n}+\cdots+J_{1})/N}e^{- T(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N}\big{\|} <\infty.\] When we apply \(\rho_{+}\), we instead need to show that \[\int_{0}^{\infty}e^{-nu}\big{\|}e^{-u\rho_{+}(J_{n}+\cdots+J_{1} )/N}e^{-u\rho_{+}(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N}\big{\|}du <\infty,\] \[\quad\sup_{T\geq 0}e^{-nT}\big{\|}e^{-T\rho_{+}(J_{n}+\cdots+J_{1 })/N}e^{-T\rho_{+}(J_{n+1}^{\prime}+\cdots+J_{1}^{\prime})/N}\big{\|} <\infty.\] These claims both follow from Corollary 2.31, which gives that the eigenvalues of \(\frac{1}{N}\rho_{+}(J_{n}+\cdots+J_{1})+\frac{1}{N}\rho_{+}(J_{n+1}^{\prime}+ \cdots+J_{1}^{\prime})\) are all strictly greater than \(-n\). 
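The spectral input used in the last two proofs can be illustrated concretely. The sketch below is ours: since Definition 2.22 of \(\rho_{+}\) is not reproduced in this excerpt, it uses the standard action of the Brauer generators on \((\mathbb{C}^{N})^{\otimes 2}\), in which a swap permutes tensor factors and a turnaround \(\langle 1\ 2\rangle\) acts with entries \(\delta_{k_{1}k_{2}}\delta_{l_{1}l_{2}}\). It checks the relations \(\langle 1\ 2\rangle^{2}=N\langle 1\ 2\rangle\) and \((1\ 2)\langle 1\ 2\rangle=\langle 1\ 2\rangle\), and that the turnaround operator has nonnegative spectrum, which is the kind of spectral information that feeds into the positivity bounds behind Lemmas 4.17 and 4.18.

```python
# A minimal sketch (ours): the standard Brauer-generator action on (C^N) tensor (C^N).
# P is the tensor-factor swap; E is the turnaround <1 2>, with entries delta_{k1 k2} delta_{l1 l2}.
import numpy as np

N = 3
eye = np.eye(N)

P = np.einsum("il,jk->ijkl", eye, eye).reshape(N * N, N * N)  # swap (1 2)
E = np.einsum("ij,kl->ijkl", eye, eye).reshape(N * N, N * N)  # turnaround <1 2>

print(np.allclose(E @ E, N * E))   # True:  <1 2>^2 = N <1 2>
print(np.allclose(P @ E, E))       # True:  (1 2) <1 2> = <1 2>
print(np.unique(np.round(np.linalg.eigvalsh(E), 8)))  # [0. 3.]: nonnegative spectrum
```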
**Proposition 4.19** (Analog of Proposition 4.11).: We have that \[\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\rho_{+}\big{(}\mathbb{E}[F(\Sigma(T)) \mathbbm{1}(T_{n}>T)]\big{)}=0.\] Proof.: The points in the proof of Proposition 4.11 where we needed \(N\) to be large were in the application of Lemma 4.10 and in arguing that \(e^{-u}e^{-uJ_{n+1}/N}\) is integrable. For the present proposition, we may apply Lemma 4.18 which does not require \(N\) to be large. The fact that \(e^{-u}e^{-u\rho_{+}(J_{n+1})/N}\) is integrable follows from Lemma 2.28, as noted in the proof of Lemma 4.17. **Proposition 4.20** (Analog of Proposition 4.12).: We have that \[\lim_{T\to\infty}e^{\binom{2n}{2}T-nT}\mathbb{E}[\rho_{+}\big{(}F(\Sigma(T)) \big{)}]=\sum_{\sigma,\tau[n]\to(n:2n]}\rho_{+}([\sigma\ \tau])\mathrm{Wg}_{N}(\sigma\tau^{-1}).\] Proof.: We argue exactly as in the proof of Proposition 4.12, except we replace the applications of Lemma 4.9 and Proposition 4.11 with Lemma 4.17 and Proposition 4.19. Proof of Theorem 2.5.: Combining Lemma 4.16 and Proposition 4.20, we have that \[\lim_{T\to\infty}\mathbb{E}\big{[}\mathrm{Tr}(B_{T}(\boldsymbol{\Gamma})) \big{]}=\prod_{\ell\in L}\bigg{(}\sum_{\sigma_{\ell},\tau_{\ell}:[n]\to(n:2n]} \rho_{+}([\sigma_{\ell}\ \tau_{\ell}])\mathrm{Wg}_{N}(\sigma_{\ell}\tau_{\ell}^{-1})\bigg{)}_{\mathbf{ i}\mathbf{j}\mathbf{j}\mathbf{\ell}}\prod_{e\in R(\Gamma)}\delta^{i_{e}}.\] By Lemma 4.15, the right hand side above may be written \[\sum_{\pi=([\sigma_{\ell}\ \tau_{\ell}],\ell\in[L])}\bigg{(}\prod_{\ell\in L} \mathrm{Wg}_{N}(\sigma_{\ell}\tau_{\ell}^{-1})\bigg{)}N^{\#\mathrm{comp}( \Gamma,\pi)},\] as desired. ## 5 Makeenko-Migdal/Master loop/Schwinger-Dyson equations In this section, we utilize the Process process formulation described in Section 2.1 and analyzed in Section 4 to prove a recursion relation (Proposition 5.2) on expectations of products of traces of words in independent Haar-distributed Unitary matrices. We then apply this recursion to deduce the Makeenko-Migdal/Master loop/Schwinger-Dyson equations (Theorem 5.6) for Wilson loop expectations. First, we describe the terms which will appear in our recursion. Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). We will often refer to the edge at the \((i,j)\) location of \(\boldsymbol{\Gamma}\), which is meant to be the \(j\)th letter of \(\Gamma_{i}\). **Definition 5.1** (Splittings and mergers).: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Let \((i,j)\) be a location of \(\boldsymbol{\Gamma}\). Define the set of positive and negative splittings \(\mathbb{S}_{+}((i,j),\boldsymbol{\Gamma})\) and \(\mathbb{S}_{-}((i,j),\boldsymbol{\Gamma})\), as well as the set of positive and negative mergers \(\mathbb{M}_{+}^{U}((i,j),\Gamma)\) and \(\mathbb{M}_{-}^{U}((i,j),\Gamma)\), as follows. The set of positive splittings \(\mathbb{S}_{+}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by splitting \(\Gamma_{i}\) into two words as follows. Let \((i,k)\), \(k\neq j\) be another location of \(\Gamma_{i}\) which has the same letter as at location \((i,j)\). Suppose \(\Gamma_{i}\) is of the form \(A\lambda B\lambda C\), where \(\lambda\) is the letter at locations \((i,j)\) and \((i,k)\). We may split \(\Gamma_{i}\) into \(\Gamma_{i,1}=A\lambda C\) and \(\Gamma_{i,2}=B\lambda\). 
The set \(\mathbb{S}_{+}((i,j),\boldsymbol{\Gamma})\) is the set of all collections of words that may be obtained this way. Similarly, the set of negative splittings \(\mathbb{S}_{-}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by splitting \(\Gamma_{i}\) into two words as follows. Let \((i,k)\), \(k\neq j\) be a location of \(\Gamma_{i}\) which has inverse of the letter at location \((i,j)\). We may write \(\Gamma_{i}=A\lambda B\lambda^{-1}C\) or \(\Gamma_{i}=A\lambda^{-1}B\lambda C\). In either case, we split \(\Gamma_{i}\) into \(\Gamma_{i,1}=AC\) and \(\Gamma_{i,2}=B\). The set \(\mathbb{S}_{-}((i,j),\boldsymbol{\Gamma})\) is the set of all collections of words that may be obtained this way. The set of positive mergers \(\mathbb{M}_{+}^{U}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by merging \(\Gamma_{i}\) with some \(\Gamma_{\ell}\), \(\ell\neq i\), as follows. Let \((\ell,m)\) be a location which has the same letter as at location \((i,j)\). Suppose \(\Gamma_{i}=A\lambda B\) and \(\Gamma_{\ell}=C\lambda D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by their positive merger \(A\lambda DC\lambda B\). The set \(\mathbb{M}_{+}^{U}((i,j),\boldsymbol{\Gamma})\) is the set of all collections of words that may be obtained this way. Similarly, the set of negative mergers \(\mathbb{M}_{-}^{U}((i,j),\boldsymbol{\Gamma})\) is the set of collections of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by merging \(\Gamma_{i}\) with some \(\Gamma_{\ell}\), \(\ell\neq i\), as follows. Let \((\ell,m)\) be a location which has the inverse of the letter at location \((i,j)\). Suppose \(\Gamma_{i}=A\lambda B\) and \(\Gamma_{\ell}=C\lambda^{-1}D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by their negative merger \(ADCB\). The set \(\mathbb{M}_{-}^{U}((i,j),\boldsymbol{\Gamma})\) is the set of all collections of words that may be obtained this way. In the following, let \(\operatorname{tr}(U(\boldsymbol{\Gamma}))=\prod_{i\in[k]}\operatorname{tr}(U (\Gamma_{i}))\), where \(U(\Gamma_{i})\) is obtained by substituting into \(\Gamma_{i}\) an independent Haar-distributed Unitary matrix for each letter \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Note that in contrast to previous results, we are using the normalized trace here, which we find to be more natural for stating the recursion. **Proposition 5.2** (Single-location word recursion).: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). For any location \((i,j)\) of \(\Gamma\), we have that \[\mathbb{E}[\operatorname{tr}(U(\boldsymbol{\Gamma}))]= -\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j), \boldsymbol{\Gamma})}\mathbb{E}[\operatorname{tr}(U(\boldsymbol{\Gamma}^{ \prime}))]+\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{-}((i,j), \boldsymbol{\Gamma})}\mathbb{E}[\operatorname{tr}(U(\boldsymbol{\Gamma}^{ \prime}))]\] \[-\frac{1}{N^{2}}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{+ }^{U}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\operatorname{tr}(U(\boldsymbol{ \Gamma}^{\prime}))]+\frac{1}{N^{2}}\sum_{\boldsymbol{\Gamma}^{\prime}\in \mathbb{M}_{-}^{U}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\operatorname{tr}(U( \boldsymbol{\Gamma}^{\prime}))].\] Proof.: Without loss of generality, take \((i,j)=(1,1)\), so that we look at the first letter of \(\Gamma_{1}\). Let \(\lambda\in\{\lambda_{1},\ldots,\lambda_{L}\}\) be this letter. 
Recall from Corollary 2.7 that \(\mathbb{E}[\operatorname{Tr}(U(\boldsymbol{\Gamma}))]\) is equal to a sum over pairs of matchings of strand diagrams, weighted by the Weingarten function applied to each pair, as well as \(N\) raised to the number of components of the resulting strand diagram. For each strand diagram corresponding to a letter \(\lambda^{\prime}\neq\lambda\), fix a pair of matchings \(\sigma_{\lambda^{\prime}},\tau_{\lambda^{\prime}}\). We apply our strand-by-strand Poisson process exploration from Section 4 to the strand diagram corresponding to \(\lambda\), but stop at the first time we see any point in the first exploration era. This will result in the claimed recursion. Let \(n\) be the number of times that \(\lambda\) appears in \(\boldsymbol{\Gamma}\), so that the portion of the strand diagram corresponding to \(\lambda\) has \(n\) right-directed strands and \(n\) left-directed strands. Let all notation be as in Section 4. Now, suppose that \(N\geq 2n\). (As was the case for the proof of Theorem 2.5, the case of general \(N\) will follow by small modifications from the case of large \(N\), by applying the representation \(\rho_{+}\) and using the various general \(N\) results proven in Section 4.2.) By combining Lemma 4.4 and Propositions 4.5, 4.11, and Proposition 4.12, we have that \[\sum_{\sigma,\tau:[n]\rightarrow(n:2n]} \mathrm{Wg}_{N}(\sigma\tau^{-1})[\sigma\ \tau]=\] \[\lim_{T\rightarrow\infty}\mathbb{E}\big{[}F(\mathcal{Q}(T_{n})) \mathbb{1}(T_{n}\leq T)e^{2(n-1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)( T_{n}-T_{n-1})}\big{]}.\] We will derive a recursion for the left hand side above by looking at the first point seen by our exploration process \(\mathcal{Q}\). For brevity, let \[f_{n}(T):=\mathbb{E}\big{[}F(\mathcal{Q}(T_{n}))\mathbb{1}(T_{n}\leq T)e^{2(n- 1)T_{1}}e^{2(n-2)(T_{2}-T_{1})}\cdots e^{2(n-n)(T_{n}-T_{n-1})}\big{]}.\] Let \(U_{1}\) be the time of the first swap seen by \(\mathcal{Q}\). Note that \(U_{1}\) is an exponential random variable with rate \(2n-1\) (since there are \(n-1\) possible same-direction swaps and \(n\) possible opposite-direction swaps). By conditioning on this time, we may obtain a recursion like \[f_{n}(T) =-\frac{1}{N}\sum_{j=1}^{n-1}\int_{0}^{T}e^{-(2n-1)u}(n\ j)e^{2(n -1)u}f_{n}(T-u)du+\] \[\frac{1}{N}\sum_{j=n+1}^{2n}\int_{0}^{T}e^{-(2n-1)u}\langle n\ j \rangle e^{2(n-1)u}f_{n-1}(j,T-u)du.\] Note the factor \(e^{2(n-1)u}\) comes from the \(e^{2(n-1)T_{1}}\) term. The first sum corresponds to the case that we first see a same-direction swap, and the second sum corresponds to the case that we first see an opposite-direction swap. Here, \(f_{n-1}(j,T-u)\) denotes the corresponding expectation where we take out the top and bottom strand which are matched by the opposite-direction swap \(\langle n\ j\rangle\) and continue the exploration on the remaining strands. The point now is that when we send \(T\rightarrow\infty\), we obtain the recursion: \[\sum_{\sigma,\tau:[n]\rightarrow(n:2n]}\mathrm{Wg}_{N}(\sigma \tau^{-1})[\sigma\ \tau] =-\frac{1}{N}\sum_{j=1}^{n-1}\sum_{\sigma,\tau:[n]\rightarrow(n:2n]} \mathrm{Wg}_{N}(\sigma\tau^{-1})(n\ j)[\sigma\ \tau]\ +\] \[\frac{1}{N}\sum_{j=n+1}^{2n}\sum_{\begin{subarray}{c}\sigma, \tau:[n]\rightarrow(n:2n]\\ \sigma(n)=\tau(n)=j\end{subarray}}\mathrm{Wg}_{N}(\sigma\tau^{-1})[\sigma\ \tau].\] (In the case of general \(N\) the above is true after applying \(\rho_{+}\) to both sides.) 
We now claim that by inserting this equation into the sum over pairs of matchings in the portion of the strand diagram corresponding to \(\lambda\) (and then applying Corollary 2.7 to compute the expectation), we obtain the claimed recursion. To help visualize why, note that before having explored the Poisson process, the strand diagram looks as in Figure 40. Here, the top strand corresponds to the first letter of \(\Gamma_{1}\). The dashed red strands indicate the exterior connections which are determined by the words involving the letter \(\lambda\). When we follow our exploration process until the first point of any kind, there are several possibilities that can occur. The first point can be (1) a same-direction swap which connects the top-most strand with another right-directed strand (2) an opposite-direction swap (i.e. turnaround) which connects the top-most strand with a left-directed strand. Also, the two strands which are connected can (1) be in the same word (2) be in different words. Any combination of these two things can happen, and so all told there are four different scenarios to account for. These four scenarios correspond to the four different categories of strings appearing in the right hand side of the loop equation: positive/negative splittings and positive/negative mergers. We proceed on a case-by-case basis. Throughout, let \(U_{1}\) denote the time of the first point. Suppose that the first point we see is a swap which connects the top-most strand with another right-directed strand, and moreover the two strands are in the same word. See Figure 41. Since the top two strands belong in the same word \(\Gamma_{1}\), we can write \(\Gamma_{1}=\lambda\Gamma_{1,1}\lambda\Gamma_{1,2}\), where \(\Gamma_{1,1}\) collects all letters which appear in between the dashed red line labeled \(12\) and the dashed red line labeled \(2\), while \(\Gamma_{1,2}\) collects all letters which appear in between the dashed red line labeled \(11\) and the dashed red line labeled \(1\). After accounting for the same-direction swap we saw, we may treat the part of the diagram after time \(U_{1}\) as in Figure 42. Notice here that the dashed red lines labeled \(1\) and \(2\) have been swapped. The effect of this is that the top and second-top strands are now in different words: \(\lambda\Gamma_{1,1}\) and \(\lambda\Gamma_{1,2}\). Note that the resulting string \(s^{\prime}=(\lambda\Gamma_{1,1},\lambda\Gamma_{1,2},\Gamma_{2},\dots,\Gamma_ {k})\) is precisely a positive splitting of \(s\) at Figure 40: Strand diagram before exploration. \(\lambda\). We thus see that this case contributes the term \[-\frac{1}{N}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j),\mathbf{\Gamma })}\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}^{\prime}))].\] If the first point is a swap but the two matched strands are in different words (say \(\Gamma_{1}=\lambda\Gamma^{\prime}\) and \(\Gamma_{2}=\lambda\Gamma^{\prime}_{2}\)), then we would have the two diagrams in Figure 43. Note that the two matched strands are now effectively in the same word: \(\lambda\Gamma^{\prime}_{1}\lambda\Gamma^{\prime}_{2}\). Thus the case where the first point is a swap and the two matched strands are in different words contributes the positive merger term: \[-\frac{1}{N}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{M}^{\prime}_{+}(\lambda, \mathbf{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}^{\prime}))].\] Next, suppose that the first point is a turnaround and the two matched strands are in the same word. 
We then have the two diagrams in Figure 44. Originally, we have the word \(\Gamma=\lambda\Gamma_{1,1}\lambda^{-1}\Gamma_{1,2}\). After seeing the turnaround swap, we may treat the rest of the diagram after \(U_{1}\) as in the right figure, where the two matched strands have been deleted, and the word \(\Gamma\) has been replaced by two words \(\Gamma_{1,1}\) and \(\Gamma_{1,2}\). This case corresponds to a negative splitting, and contributes the term
\[\frac{1}{N}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{S}_{-}(\lambda,\mathbf{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\mathbf{\Gamma}^{\prime}))].\]

Figure 42: After accounting for the swap, we have the above effective strand diagram.

Figure 43: Left: the first point is a swap between strands in different words. Right: the resulting effective strand diagram.

The final case is when the first point is a turnaround and the two matched strands are in different words. The two pictures are as in Figure 45. Originally, the top strand is part of the word \(\Gamma_{1}=\lambda\Gamma_{1}^{\prime}\), while the bottom strand is part of the word \(\Gamma_{2}=\lambda\Gamma_{2}^{\prime}\). After the turnaround swap, the two words \(\Gamma_{1},\Gamma_{2}\) merge to form the word \(\Gamma_{1}^{\prime}\Gamma_{2}^{\prime}\). Thus, this case contributes the term
\[\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{-}^{U}(\lambda,\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}^{\prime}))].\]
In summary, we have obtained the following recursion (stated using the usual trace):
\[\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}))]=-\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}^{\prime}))]+\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{-}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}^{\prime}))]\]
\[-\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{+}^{U}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}^{\prime}))]+\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{-}^{U}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{Tr}(U(\boldsymbol{\Gamma}^{\prime}))].\]
To convert to the normalized trace, we need to multiply both sides by \(N^{-k}\), and then observe that in the splitting terms, a factor of \(\frac{1}{N}\) gets absorbed due to the fact that \(\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{\pm}((i,j),\boldsymbol{\Gamma})\) has one more word than \(\boldsymbol{\Gamma}\), and in the merger terms, a factor of \(\frac{1}{N}\) pops out because \(\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{\pm}^{U}((i,j),\boldsymbol{\Gamma})\) has one less word than \(\boldsymbol{\Gamma}\).

Figure 44: Left: the first point is a turnaround between two strands in the same word. Right: the resulting effective strand diagram.

Figure 45: Left: the first point is a turnaround between two strands in different words. Right: the resulting effective strand diagram.

Next, we apply the word recursion Proposition 5.2 to obtain a recursion for Wilson loop expectations. In contrast to the notation of Section 1, we denote collections of loops by \(s\) instead of \(\mathcal{L}\), and we refer to \(s\) as a string. Recall the notation that \(W_{s}(Q)=\prod_{k\in[n]}\operatorname{tr}(Q_{\ell_{k}})\).
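Before specializing to lattice strings, here is a minimal Monte Carlo sanity check of Proposition 5.2 (ours, not from the paper). Take \(\boldsymbol{\Gamma}=(\lambda_{1}\lambda_{2},\,\lambda_{2}^{-1}\lambda_{1}^{-1})\) and the location \((1,1)\): there are no splittings and no positive mergers, and the unique negative merger is the word \(\lambda_{2}^{-1}\lambda_{2}\), so the recursion asserts \(\mathbb{E}[\operatorname{tr}(U_{1}U_{2})\operatorname{tr}(U_{2}^{-1}U_{1}^{-1})]=\frac{1}{N^{2}}\mathbb{E}[\operatorname{tr}(U_{2}^{-1}U_{2})]=\frac{1}{N^{2}}\), which the sampling below reproduces.

```python
# Monte Carlo check (ours) of the single-location recursion for
# Gamma = (l1 l2, l2^{-1} l1^{-1}) at location (1,1):
#   E[ tr(U1 U2) tr(U2^* U1^*) ]  should equal  1/N^2,
# where tr denotes the normalized trace.  (Redefined helpers so the snippet is self-contained.)
import numpy as np

def haar_unitary(N, rng):
    z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def tr(A):
    return np.trace(A) / A.shape[0]

rng = np.random.default_rng(1)
N, samples = 5, 20000
acc = 0.0
for _ in range(samples):
    U1, U2 = haar_unitary(N, rng), haar_unitary(N, rng)
    acc += (tr(U1 @ U2) * tr(U2.conj().T @ U1.conj().T)).real
print(acc / samples, 1 / N**2)   # both close to 0.04
```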
**Notation 5.3**.: Given a string \(s=(\ell_{1},\dots,\ell_{n})\), let \(\phi(s):=\langle W_{s}\rangle_{\Lambda,\beta}\), where \(\langle\cdot\rangle_{\Lambda,\beta}\) denotes expectation with respect to the lattice Yang-Mills measure defined in (1.1). We omit the dependence of \(\phi\) on \(\Lambda,\beta,N\). Note that Definition 5.1 specializes to the case of loops on a lattice: given a string \(s\), we have the sets of positive/negative splittings/mergers \(\mathbb{S}_{\pm}((k,i),s)\) and \(\mathbb{M}_{\pm}^{U}((k,i),s)\). _Remark 5.4_.: We remark that our definition of the set of splittings and mergers is slightly different than what appears in [10, 2]. In our definition, we consider all possible splittings/mergers that involve the specific location \((k,i)\), whereas in the earlier works, the authors consider any splitting/merger that involves any two locations of the string which correspond to the same lattice edge. We need to define another type of string operation which appears for lattice Yang-Mills. **Definition 5.5** (Deformations).: Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string. Let \((k,i)\) be a location in \(s\). We define the sets of positive and negative deformations \(\mathbb{D}_{+}((k,i),s)\) and \(\mathbb{D}_{-}((k,i),s)\) as follows. The set of positive deformations \(\mathbb{D}_{+}((k,i),s)\) is the set of all possible strings which can be obtained by a positive merger between \(s\) at location \((k,i)\) and some oriented plaquette \(p\in\mathcal{P}\). The set of negative deformations \(\mathbb{D}_{-}((k,i),s)\) is the set of possible strings which can be obtained by a negative merger between \(s\) at location \((k,i)\) and some oriented plaquette \(p\in\mathcal{P}\). Let \(e\) be the oriented edge of \(\Lambda\) that is at location \((k,i)\) in \(s\). Let \(p\in\mathcal{P}\). In order for their to exist a positive merger between \(s\) and \(p\), note that \(p\) must contain \(e\). In this case, we denote by \(s\oplus_{(k,i)}p\) to be the positive merger of \(s\) and \(p\) at location \((k,i)\). Similarly, in order for their to exist a negative merger between \(s\) and \(p\), note that \(p\) must contain \(-e\). In this case, we denote by \(s\ominus_{(k,i)}p\) to be the negative merger of \(s\) and \(p\) at location \((k,i)\). Let \(p>e\) denote that the plaquette \(p\) contains the edge \(e\). Note then that (here \(e\) is the edge at location \((k,i)\) of \(s\)) \[\mathbb{D}_{+}((k,i),s) =\{s\oplus_{(k,i)}p:p\in\mathcal{P},p>e\} \tag{5.1}\] \[\mathbb{D}_{-}((k,i),s) =\{s\ominus_{(k,i)}p:p\in\mathcal{P},p>-e\}.\] **Theorem 5.6** (Single-location Makeenko-Migdal/Master loop/Schwinger-Dyson equation).: _Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string. Let \((k,i)\) be a location in \(s\). We have that_ \[\phi(s)= -\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}\phi(s^{\prime})+ \sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}\phi(s^{\prime})-\frac{1}{N^{2}} \sum_{s^{\prime}\in\mathbb{M}_{+}^{U}((k,i),s)}\phi(s^{\prime})+\frac{1}{N^{2} }\sum_{s^{\prime}\in\mathbb{M}_{-}^{U}((k,i),s)}\phi(s^{\prime})\] \[-\beta\sum_{s^{\prime}\in\mathbb{D}_{+}((k,i),s)}\phi(s^{\prime}) +\beta\sum_{s^{\prime}\in\mathbb{D}_{-}((k,i),s)}\phi(s^{\prime}).\] _Remark 5.7_.: We re-emphasize here that the above recursion is slightly more general than previous literature [10, 11, 12], because we defined the string operations appearing on the right hand side of the equation in a slightly more restrictive manner - recall Remark 5.4. 
In particular, the right hand side of our formula formally depends on \(i\) while the 'unsymmetrized' version stated in [10, Theorem 8.1] does not. The Makeenko-Migdal/Master loop/Schwinger-Dyson equation of the previous works may be recovered from our equation by summing over all locations of \(s\). Also, recall Remark 1.1 that our scaling is so that \(\beta\) in our paper corresponds to \(2\beta\) in previous papers. This explains why \(\beta\) appears in the above recursion, while \(\beta/2\) appears in [12, Equation (1.7)]. Proof.: Recall from equation (1.5) that \[\phi(s)=Z_{\Lambda,\beta}^{-1}\sum_{K:\mathcal{P}\to\mathbb{N}}\frac{(N\beta) ^{K}}{K!}\int W_{s}(Q)\prod_{p\in\mathcal{P}}\operatorname{Tr}(Q_{p})^{K(p)} \prod_{e\in E_{\Lambda}}dQ_{e}.\] For brevity, let \[I(s,K):=\int W_{s}(Q)\prod_{p\in\mathcal{P}}\operatorname{Tr}(Q_{p})^{K(p)} \prod_{e\in E_{\Lambda}}dQ_{e}.\] Fix \(K:\mathcal{P}\to N\). It may help to keep in mind that \(K(p)\) counts the number of copies of \(p\) that are present. Before we apply Proposition 5.2, let us set some notation. Let \(e\) be the oriented edge of \(\Lambda\) that is traversed at location \((k,i)\) in the string \(s\). Recall that \(p\succ e\) means that \(p\) contains \(e\), and \(p\succ-e\) means that \(p\) contains \(e\) with the opposite orientation. Recall also that if \(p\succ e\) or \(p\succ-e\), let \(s\oplus_{(k,i)}p\) and \(s\ominus_{(k,i)}p\) be the positive and negative deformations of \(s\) by \(p\) at location \((k,i)\). For \(p\in\mathcal{P}\), let \(\delta_{p}:\mathcal{P}\to\mathbb{N}\) be the delta function at \(p\). Now applying the word recursion Proposition 5.2, we have that \[I(s,K)= -\frac{1}{N}\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}I(s^{ \prime},K)+\frac{1}{N}\sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}I(s^{\prime},K)\] \[-\frac{1}{N}\sum_{s^{\prime}\in\mathbb{M}_{+}^{U}((k,i),s)}I(s^{ \prime},K)+\frac{1}{N}\sum_{s^{\prime}\in\mathbb{M}_{-}^{U}((k,i),s)}I(s^{ \prime},K)\] \[-\frac{1}{N}\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p>e\end{subarray}}K(p)I(s\oplus_{(k,i)}p,K-\delta_{p})+\frac{1}{N}\sum_{ \begin{subarray}{c}p\in\mathcal{P}\\ p>-e\end{subarray}}K(p)I(s\ominus_{(k,i)}p,K-\delta_{p}).\] (Here, the factor of \(K(p)\) arising in the last two terms arises because there are \(K(p)\) copies of the plaquette \(p\) which can possibly be used to deform \(s\).) 
From this, we obtain (note that in the splitting terms, the \(1/N\) factor gets absorbed due to the fact that \(s^{\prime}\) has one more loop than \(s\), while in the merging terms, there is an extra \(1/N\) factor because \(s^{\prime}\) has one less loop than \(s\))
\[\phi(s)=-\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}\phi(s^{\prime})+\sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}\phi(s^{\prime})-\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{+}^{U}((k,i),s)}\phi(s^{\prime})+\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{-}^{U}((k,i),s)}\phi(s^{\prime})+D_{1}+D_{2},\]
where
\[\begin{split} D_{1}&:=-Z_{\Lambda,\beta}^{-1}\frac{1}{N}\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p\succ e\end{subarray}}\sum_{\begin{subarray}{c}K:\mathcal{P}\to\mathbb{N}\\ K(p)\geq 1\end{subarray}}\frac{(N\beta)^{K}}{K!}K(p)I(s\oplus_{(k,i)}p,K-\delta_{p}),\\ D_{2}&:=Z_{\Lambda,\beta}^{-1}\frac{1}{N}\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p\succ-e\end{subarray}}\sum_{\begin{subarray}{c}K:\mathcal{P}\to\mathbb{N}\\ K(p)\geq 1\end{subarray}}\frac{(N\beta)^{K}}{K!}K(p)I(s\ominus_{(k,i)}p,K-\delta_{p}).\end{split}\]
Observe that we may write (by changing variables \(K\mapsto K-\delta_{p}\) and then recalling (5.1))
\[\begin{split} D_{1}&=-Z_{\Lambda,\beta}^{-1}\frac{1}{N}(N\beta)\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p\succ e\end{subarray}}\sum_{K:\mathcal{P}\to\mathbb{N}}\frac{(N\beta)^{K}}{K!}I(s\oplus_{(k,i)}p,K)\\ &=-\beta\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p\succ e\end{subarray}}\phi(s\oplus_{(k,i)}p)=-\beta\sum_{s^{\prime}\in\mathbb{D}_{+}((k,i),s)}\phi(s^{\prime}),\end{split}\]
and similarly
\[D_{2}=\beta\sum_{\begin{subarray}{c}p\in\mathcal{P}\\ p\succ-e\end{subarray}}\phi(s\ominus_{(k,i)}p)=\beta\sum_{s^{\prime}\in\mathbb{D}_{-}((k,i),s)}\phi(s^{\prime}).\]
The desired result now follows.

## 6 Other groups

In this section, we adapt our results to the cases \(G=\operatorname{O}(N),\operatorname{Sp}(N/2),\operatorname{SU}(N),\operatorname{SO}(N)\). In Section 6.1, we address the cases \(G=\operatorname{O}(N),\operatorname{Sp}(N/2)\), and in Section 6.2, we address the cases \(G=\operatorname{SU}(N),\operatorname{SO}(N)\). Define the matrix \(J\) by
\[J:=\begin{pmatrix}0&I_{N/2}\\ -I_{N/2}&0\end{pmatrix}. \tag{6.1}\]
We quickly recall the definitions of the various groups.
\[\begin{split}\operatorname{O}(N)&:=\{O\in\operatorname{GL}(N,\mathbb{R}):O^{T}O=I_{N}\}\\ \operatorname{Sp}(N/2)&:=\{S\in\operatorname{U}(N):S^{T}JS=J\}\\ \operatorname{SU}(N)&:=\{U\in\operatorname{U}(N):\det(U)=1\}\\ \operatorname{SO}(N)&:=\{O\in\operatorname{O}(N):\det(O)=1\}.\end{split}\]

**Notation 6.1**.: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\dots,\Gamma_{M})\) be a collection of words on \(\{\lambda_{1},\dots,\lambda_{L}\}\). Given a compact Lie group \(G\), we will denote \(\operatorname{Tr}(G(\boldsymbol{\Gamma}))=\operatorname{Tr}(G(\Gamma_{1}))\cdots\operatorname{Tr}(G(\Gamma_{M}))\), where \(G(\Gamma_{i})\) is obtained by substituting an independent Haar-distributed element of \(G\) for each of the letters \(\{\lambda_{1},\dots,\lambda_{L}\}\).

### Orthogonal and Symplectic

In this section, we adapt our previous results to \(G=\mathrm{O}(N),\mathrm{Sp}(N/2)\). These two cases are at times very similar, and thus we choose to place them in the same section.
However, they are also at times very different, which prevents us from handling the two cases completely simultaneously - there are certain parts which require special attention in the \(\mathrm{O}(N)\) case, and certain parts in the \(\mathrm{Sp}(N/2)\) case. **Notation 6.2**.: In this section, we will denote matchings on \([n]\) (i.e. partitions of \([n]\) into two-element sets) by \(\pi,\pi^{\prime},\pi^{\prime\prime}\), etc., and often write \(\pi:[n]\to[n]\). #### 6.1.1 Orthogonal surface sums First, we discuss the surface sums that arise in the \(\mathrm{O}(N)\) case. We begin by introducing the needed setup in order to state the analog of Corollary 2.7 (the Unitary Weingarten calculus) for \(\mathrm{O}(N)\). **Definition 6.3** (Unoriented-balanced collection of words).: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{M})\) be a collection of words on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\). For \(\ell\in[L]\), let \(n_{\ell}\) be the total number of times \(\lambda_{\ell}\) or \(\lambda_{\ell}^{-1}\) occurs in \(\mathbf{\Gamma}\). We say that \(\mathbf{\Gamma}\) is unoriented-balanced if \(n_{\ell}\) is even for each \(\ell\in[L]\). _Remark 6.4_.: By \(O\mapsto-O\) distributional symmetry of Haar-distributed \(\mathrm{O}(N)\) matrices, if \(\mathbf{\Gamma}\) is not unoriented-balanced then \(\mathbb{E}[\mathrm{Tr}(O(\mathbf{\Gamma}))]=0\). Thus when computing \(\mathbb{E}[\mathrm{Tr}(O(\mathbf{\Gamma}))]\), we may assume \(\mathbf{\Gamma}\) is unoriented-balanced. **Definition 6.5**.: Let \(n\geq 1\) be even. Let \(\pi,\pi^{\prime}:[n]\to[n]\) be matchings. Visually, we will think of \(\pi,\pi^{\prime}\) as giving left and right matchings, as in the Figure 46. This defines an element of the Brauer algebra \(\mathcal{B}_{n}\), which we denote by \([\pi\ \pi^{\prime}]\). Let \(\#\mathrm{cycles}(\pi,\pi^{\prime})\) be the number of connected components in the graph one obtains by adding in the strands connecting the left and right vertices - see Figure 47 for an example. **Definition 6.6**.: Let \(n\geq 1\) be even. Given left and right matchings \(\pi,\pi^{\prime}:[n]\to[n]\), the face profile \(\ell(\pi,\pi^{\prime})\) is the partition of \(n\) induced by the cycles of \(\pi\pi^{\prime}\). We note that all parts of \(\ell(\pi,\pi^{\prime})\) are even, and thus \(\frac{1}{2}\ell(\pi,\pi^{\prime})\) is a partition of \(\frac{n}{2}\). For matchings \(\pi,\pi^{\prime}\) in Figures 46 and 47, the face profile \(\ell(\pi,\pi^{\prime})=\{4,2\}\). Note also that \(\#\text{cycles}(\pi,\pi^{\prime})\) is exactly the number of parts of \(\ell(\pi,\pi^{\prime})\). **Definition 6.7** (Orthogonal Weingarten function).: Let \(\zeta\in\mathbb{C}\). Let \(n\geq 1\) be even. We define the Orthogonal Weingarten function \(\mathrm{Wg}^{\mathrm{O}}_{\zeta,n}\) as follows. The input is a pair of matchings \(\pi,\pi^{\prime}:[n]\to[n]\), and the output is a number \(\mathrm{Wg}^{\mathrm{O}}_{\zeta,n}(\pi,\pi^{\prime})\in\mathbb{C}\). First, define the Gram matrix \[\mathbf{G}^{\mathrm{O}}_{\zeta,n}(\pi,\pi^{\prime}):=\zeta^{\# \text{cycles}(\pi,\pi^{\prime})},\ \ \pi,\pi^{\prime}:[n]\to[n]\text{ matchings}.\] We define \(\mathrm{Wg}^{\mathrm{O}}_{\zeta,n}\) to be the pseudo-inverse of \(\mathbf{G}=\mathbf{G}^{\mathrm{O}}_{\zeta,n}(\pi,\pi^{\prime})\), that is the symmetric matrix \(W\) which satisfies \[W\mathbf{G}W=W\text{ and }\mathbf{G}W\mathbf{G}=\mathbf{G}.\] We typically omit the \(n\) variable and write \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}\). 
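As a concrete illustration of this pseudo-inverse definition (a sketch of ours, not from the paper), take \(n=4\): there are three matchings of \([4]\), and \(\#\mathrm{cycles}(\pi,\pi^{\prime})\) equals \(2\) if \(\pi=\pi^{\prime}\) and \(1\) otherwise, so the Gram matrix is the \(3\times 3\) matrix with diagonal \(\zeta^{2}\) and off-diagonal \(\zeta\). Its pseudo-inverse reproduces the familiar values \(\mathrm{Wg}^{\mathrm{O}}_{N}(\pi,\pi)=(N+1)/(N(N-1)(N+2))\) and \(\mathrm{Wg}^{\mathrm{O}}_{N}(\pi,\pi^{\prime})=-1/(N(N-1)(N+2))\) for \(\pi\neq\pi^{\prime}\), and the rescaled quantities \(\zeta^{n-\#\mathrm{cycles}}\,\mathrm{Wg}^{\mathrm{O}}_{\zeta}\) tend to \(+1\) and \(-1\) as \(\zeta\to\infty\), consistent with the Catalan-number asymptotics recorded in Remark 6.8 below.

```python
# A minimal sketch (ours) of the pseudo-inverse definition for n = 4: three matchings of [4],
# Gram matrix G(pi, pi') = zeta^{#cycles(pi, pi')} with #cycles = 2 on the diagonal, 1 off it.
import numpy as np

def wg_orthogonal_n4(zeta):
    G = np.full((3, 3), float(zeta))        # distinct matchings: one cycle
    np.fill_diagonal(G, float(zeta) ** 2)   # equal matchings: two cycles
    return np.linalg.pinv(G)

N = 5.0
W = wg_orthogonal_n4(N)
print(np.isclose(W[0, 0], (N + 1) / (N * (N - 1) * (N + 2))))  # True (same matching)
print(np.isclose(W[0, 1], -1 / (N * (N - 1) * (N + 2))))       # True (distinct matchings)

# Rescaled values zeta^{n - #cycles} * Wg^O_zeta approach +1 and -1 for large zeta.
z = 1e4
W = wg_orthogonal_n4(z)
print(z**2 * W[0, 0], z**3 * W[0, 1])  # approximately 1 and -1
```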
The normalized Orthogonal Weingarten function is defined to be \[\overline{\mathrm{Wg}^{\mathrm{O}}_{\zeta}}(\pi,\pi^{\prime})= \zeta^{n-\#\text{cycles}(\pi,\pi^{\prime})}\mathrm{Wg}^{\mathrm{O}}_{\zeta}( \pi,\pi^{\prime}).\] _Remark 6.8_.: From [13, Theorem 3.13], the normalized Orthogonal Weingarten function has the following large-\(N\) asymptotics: \[\lim_{N\to\infty}\overline{\mathrm{Wg}^{\mathrm{O}}_{N}}(\pi, \pi^{\prime})=\prod_{a\in\frac{1}{2}\ell(\pi,\pi^{\prime})}(-1)^{a-1}c_{a-1},\] where \(c_{k}\) is the \(k\)th Catalan number as in (3.1), and the product is over all parts in the face profile of \(\frac{1}{2}\ell(\pi,\pi^{\prime})\) (which recall is a partition of \(\frac{n}{2}\)). In fact, the proof of the cited theorem extends without change to a general complex parameter \(\zeta\to\infty\), and thus we have that \[\lim_{\zeta\to\infty}\overline{\mathrm{Wg}^{\mathrm{O}}_{\zeta}}( \pi,\pi^{\prime})=\prod_{a\in\frac{1}{2}\ell(\pi,\pi^{\prime})}(-1)^{a-1}c_{a -1}.\] Figure 47: For the left and right matchings \(\pi,\pi^{\prime}\) from Figure 46, there are two connected components in the graph obtained by adding the strands, and thus \(\#\text{cycles}(\pi,\pi^{\prime})=2\). We state the following lemma which says that the Orthogonal Weingarten function is a function of the face profile of \((\pi,\pi^{\prime})\). It essentially follows from [12], although we haven't found a precise statement in the literature. Thus for the reader's convenience, we give more detail as to why the lemma is true in Appendix A. **Lemma 6.9**.: The Orthogonal Weingarten function \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}(\pi,\pi^{\prime})\) is a function of the face profile \(\ell(\pi,\pi^{\prime})\) of \(\pi,\pi^{\prime}\). _Remark 6.10_.: We defined the Orthogonal Weingarten function in a slightly different manner than the Unitary Weingarten function (Definition 2.21). For an expression of \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}\) in terms of characters, see [10, Theorem 3.9] or [11, Proposition 5]. The interpretation of the Weingarten function which is most relevant for us is as a weight assigned to pairs of left and right matchings, and the most direct definition of \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}\) from this point of view is as the pseudo-inverse of the Gram matrix. Also, note that we defined the Orthogonal Weingarten function for a general complex parameter \(\zeta\in\mathbb{C}\). This did not require any extra considerations. For Orthogonal Haar integration, this level of generality is not needed and we could have restricted to \(\zeta=N\) a positive integer. However, it turns out that the Symplectic Weingarten function is related to the Orthogonal Weingarten function with \(\zeta=-N\) a negative integer - see Lemma 6.21. Moreover, it will be more convenient to work with \(\mathrm{Wg}^{\mathrm{O}}_{-N}\) rather than the Symplectic Weingarten function, due to a certain sign issue. See Remark 6.27 for more discussion. **Definition 6.11**.: Let \(n\geq 1\) be even. Let \(\pi_{0}:[n]\to[n]\) be the matching given by \(\{\{n,n-1\},\{n-2,n-3\},\ldots,\{2,1\}\}\). One may visualize \(\pi_{0}\) as in Figure 48. We omit the dependence of \(\pi_{0}\) on \(n\). **Definition 6.12**.: Let \(n\) be even. For each matching \(\pi:[n]\to[n]\), we define a permutation \(\sigma_{\pi}\in\mathrm{S}_{n}\) such that \(\sigma_{\pi}[\pi\ \pi]\sigma_{\pi}^{-1}=[\pi_{0}\ \pi_{0}]\) as follows. 
We may write \(\pi=\{\{\pi(1),\pi(2)\},\ldots,\{\pi(n-1),\pi(n)\}\}\), where \(1=\pi(1)<\pi(3)<\cdots<\pi(n-1)\), and \(\pi(2j-1)<\pi(2j)\) for \(j\in[n/2]\). We then define \(\sigma_{\pi}(j):=\pi(j)\). See Figure 49 for an example of \(\sigma_{\pi}\). Visually, \(\sigma_{\pi}\) can be thought of as a permutation of the vertices which takes \([\pi\ \pi]\) to the "standard form" \([\pi_{0}\ \pi_{0}]\). In general, there may be many such permutations; the definition of \(\sigma_{\pi}\) makes a particular choice for each \(\pi\). This particular way of choosing the permutation does not matter so much for O(\(N\)), however for Sp(\(N/2\)) it is important that \(\sigma_{\pi}\) be defined as it is, due to the fact that sgn(\(\sigma_{\pi}\)) appears in the definition of the Symplectic Gram matrix (see Definition 6.19), and thus also the Symplectic Weingarten function. (Different permutations which take [\(\pi\)\(\pi\)] to the standard form [\(\pi_{0}\)\(\pi_{0}\)] may have opposite signs.) Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{M})\) be an unoriented-balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Recall that in the Unitary case, the choice of \(\mathbf{\Gamma}\) specifies a choice of red exterior connections in our strand diagram. In the orthogonal case, the situation is similar, except now we specify that all strands point in the same direction (right). By doing so, the dashed red strands that we add may not have a consistent orientation with the black strands. This is a reflection of the fact that in the Orthogonal case, the surfaces we obtain may be unorientable. We explain through an example how to obtain the red exterior connections from \(\mathbf{\Gamma}\) - see Figure 50. For each \(\ell\in[L]\), let \(\pi_{\ell},\pi^{\prime}_{\ell}:[n_{\ell}]\rightarrow[n_{\ell}]\) be matchings. Similar to the Unitary case, we may form the diagram obtained by \(\boldsymbol{\Gamma}\) and \(\boldsymbol{\pi}=(\pi_{\ell},\pi^{\prime}_{\ell},\ell\in[L])\) by starting with the red exterior connections specified by \(\boldsymbol{\pi}\), and then adding in the blue interior connections specified by \(\boldsymbol{\pi}\). Let \(\#\mathrm{comp}(\boldsymbol{\Gamma},\pi)\) be the number of components of this diagram. See Figure 51 for an example. **Proposition 6.13** (Orthogonal Weingarten calculus).: Let \(G=\mathrm{O}(N)\). Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be an unoriented-balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Then \[\mathbb{E}[\mathrm{Tr}(G(\boldsymbol{\Gamma}))]=\sum_{\boldsymbol{\pi}=([\pi _{\ell},\pi^{\prime}_{\ell}],\ell\in[L])}\bigg{(}\prod_{\ell\in L}\mathrm{Wg}^ {\mathrm{O}}_{N}(\pi_{\ell},\pi^{\prime}_{\ell})\bigg{)}N^{\#\mathrm{comp}( \boldsymbol{\Gamma},\boldsymbol{\pi})}.\] Here, the sum in the right hand side is over \(\pi\) which is a collection of pairs of matchings \(\pi_{\ell},\pi^{\prime}_{\ell}:[n_{\ell}]\to[n_{\ell}]\), \(\ell\in[L]\). We proceed towards applying Proposition 6.13 to give expressions for Wilson loop expectations of \(\mathrm{O}(N)\) lattice gauge theories. First, we need some setup. Exactly as in the Unitary case, given \((\boldsymbol{\Gamma},\boldsymbol{\pi})\), we may obtain a map whose dual is bipartite as follows. We start with one yellow face for each word in \(\boldsymbol{\Gamma}\). 
For each letter \(\lambda_{\ell}\), the left and right matchings \(\pi_{\ell},\pi^{\prime}_{\ell}\) giving the interior connections in the portion of the diagram corresponding to \(\lambda_{\ell}\) then specify an additional collection of blue faces which are glued to the yellow faces which contain the letter \(\lambda_{\ell}\) or its inverse. **Definition 6.14**.: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be an unoriented-balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Define \(\mathrm{DBM}_{\mathrm{OS}}(\boldsymbol{\Gamma})\) to be the set of all possible maps which can be obtained from adding interior left and right matchings to the strand diagram corresponding to \(\boldsymbol{\Gamma}\). For a given map \(\mathcal{M}\in\mathrm{DBM}_{\mathrm{OS}}(\boldsymbol{\Gamma})\), and \(\ell\in[L]\), let \(\mu_{\ell}(\mathcal{M})\) be the partition of \(n_{\ell}\) (the total number of occurrences of \(\lambda_{\ell}\) and \(\lambda_{\ell}^{-1}\)) given by the degrees of the blue faces which are glued in to the strand diagram of \(\lambda_{\ell}\) (this is the same as the face profile of the left and right matchings \(\boldsymbol{\pi}=(\pi_{\ell},\pi^{\prime}_{\ell},\ell\in[L])\) used to construct \(\mathcal{M}\)). Figure 51: Let \(\boldsymbol{\Gamma}\) be the same as in Figure 50. For some particular choice of \(\boldsymbol{\pi}\), we may end up with the blue interior connections as displayed. In this case, \(\#\mathrm{comp}(\boldsymbol{\Gamma},\boldsymbol{\pi})=2\). Here, the subscript "OS" is short for Orthogonal and Symplectic, since \(\mathrm{DBM}_{\mathrm{OS}}\) is the set of maps that one obtains in these cases. _Remark 6.15_.: Unlike in the Unitary case, the maps in \(\mathrm{DBM}_{\mathrm{OS}}\) may be unorientable. **Proposition 6.16**.: Let \(G=\mathrm{O}(N)\). Let \(\mathbf{\Gamma}=(\Gamma_{1},\dots,\Gamma_{k})\) be an unoriented-balanced collection of words on \(\{\lambda_{1},\dots,\lambda_{L}\}\). We have that \[\mathbb{E}[\mathrm{Tr}(G(\mathbf{\Gamma}))]=\sum_{\mathcal{M}\in\mathrm{DBM}_{ \mathrm{OS}}(\mathbf{\Gamma})}\bigg{(}\prod_{\ell\in[L]}\overline{\mathrm{Wg} }^{\mathrm{O}}_{N}(\mu_{\ell}(\mathcal{M}))\bigg{)}N^{\chi(M)-k}.\] As in the Unitary case, when the letters \(\{\lambda_{1},\dots,\lambda_{L}\}\) are edges of the lattice \(\Lambda\), then any map \(\mathcal{M}\in\mathrm{DBM}_{\mathrm{OS}}(\mathbf{\Gamma})\) naturally gives an edge-plaquette embedding \((\mathcal{M},\psi)\), where \(\psi\) is determined by the requirement that it maps edges of \(\mathcal{M}\) to the corresponding edges of the lattice. **Definition 6.17**.: Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string, and let \(K:\mathcal{P}\to\mathbb{N}\). Define the set \(\mathrm{EPE}_{\mathrm{OS}}(s,K)\) of edge-plaquette embeddings associated to \(s,K\) to as follows. If \(s,K\) is not unoriented-balanced, then \(\mathrm{EPE}(s,K):=\varnothing\). If \(s,K\) is unoriented-balanced, let \(\mathbf{\Gamma}\) be the collection of words consisting of \(s\) and \(K(p)\) copies of the plaquette \(p\) for each \(p\in\mathcal{P}\). We define \(\mathrm{EPE}_{\mathrm{OS}}(s,K)\) to be the set of edge-plaquette embedding \((\mathcal{M},\psi)\) obtained from maps \(\mathcal{M}\in\mathrm{DBM}_{\mathrm{OS}}(\mathbf{\Gamma})\). 
Next, define \[\mathrm{EPE}_{\mathrm{OS}}(s):=\bigsqcup_{K:\mathcal{P}\to\mathbb{N}}\mathrm{ EPE}_{\mathrm{OS}}(s,K).\] For \((\mathcal{M},\psi)\in\mathrm{EPE}_{\mathrm{OS}}(s)\), and \(e\in E_{\Lambda}\), let \(\mu_{e}(\psi)\) be the partition of \(|\psi^{-1}(e)|/2\) induced by \(1/2\) times the degrees of the faces of \(\psi^{-1}(e)\). Define \[\mathrm{area}(\mathcal{M},\psi) :=\sum_{p\in\mathcal{P}}|\psi^{-1}(p)|,\] \[(\psi^{-1})! :=\prod_{p\in\mathcal{P}}|\psi^{-1}(p)|!.\] Note that if \((\mathcal{M},\psi)\in\mathrm{EPE}_{\mathrm{OS}}(s,K)\), then \(\mathrm{area}(\mathcal{M},\psi)=\sum_{p}K(p)\) and \((\psi^{-1})!=K!\). We now arrive at the following theorem, which is the analog of Corollary 3.11. Since the proof is very similar to the proof of the corollary, it is omitted. **Theorem 6.18**.: _Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string. For \(\mathrm{O}(N)\) lattice gauge theory, we have that_ \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{(\mathcal{M},\psi)\in\mathrm{EPE}_{\mathrm{OS}}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M}, \psi)}}{(\psi^{-1})!}\bigg{(}\prod_{e\in E_{\Lambda}}\overline{\mathrm{Wg}}^{ \mathrm{O}}_{N}(\mu_{e}(\psi))\bigg{)}N^{\chi(\mathcal{M})-2n}.\] #### 6.1.2 Symplectic surface sums Next, we discuss the surface sums in the Symplectic case. This case is more complicated than before due to a certain sign issue. We start by working towards the definition of the Symplectic Weingarten function. **Definition 6.19** (Symplectic Weingarten function).: Define the Symplectic Weingarten function \(\mathrm{Wg}^{\mathrm{Sp}}_{N,n}\) as follows. First, define the Gram matrix \[\mathbf{G}^{\mathrm{Sp}}_{N,n}(\pi,\pi^{\prime}):=(-1)^{n/2}\mathrm{sgn}( \sigma_{\pi})\mathrm{sgn}(\sigma_{\pi^{\prime}})(-N)^{\#\mathrm{cycles}(\pi, \pi^{\prime})},\ \ \pi,\pi^{\prime}:[n]\to[n]\ \mathrm{matchings}.\] We define \(\mathrm{Wg}^{\mathrm{Sp}}_{N,n}\) to be the pseudo-inverse of \(\mathbf{G}^{\mathrm{Sp}}_{N,n}\). We typically omit the dependence on \(n\) and write \(\mathrm{Wg}^{\mathrm{Sp}}_{N}\). _Remark 6.20_.: This definition of the Symplectic Weingarten function is not so easy to find in the literature. For instance, the first paper on the topic [10] does not give an explicit formula for the Symplectic Weingarten function, nor does the recent survey [13]. The paper [14] which applies the Symplectic Weingarten calculus only posits the existence of some function which can be used to compute Symplectic matrix integrals (see [14, Theorem 3.1]). The paper [11] defines the Symplectic Weingarten function as a certain element \(W\) of the group algebra \(\mathbb{C}[\mathrm{S}_{n}]\) (Matsumoto denotes this element by \(\mathrm{Wg}^{\mathrm{Sp}}\)). The relation between Matsumoto's definition and our definition via pseudo-inverses is precisely stated in [11, Lemma 2.5], which says that the Weingarten weight \(\mathrm{Wg}^{\mathrm{Sp}}_{N}(\pi,\pi^{\prime})\) assigned to a pair of matchings \(\pi,\pi^{\prime}\) is precisely \(W(\sigma_{\pi}^{-1}\sigma_{\pi^{\prime}})\). We prefer to give the pseudo-inverse definition in the present paper, because it is the most easy to state and understand. This way, the reader who only wishes to be able to understand the weights that appear in our surface sums can do so without having to spend too much time on background material. By comparing the definitions of the Orthogonal (Definition 6.7) and Symplectic Weingarten functions, the next lemma follows immediately. (Here we also use the uniqueness of the pseudo-inverse of a matrix.) 
**Lemma 6.21**.: We have that \[\mathrm{Wg}^{\mathrm{Sp}}_{N}(\pi,\pi^{\prime})=(-1)^{n/2}\mathrm{sgn}(\sigma _{\pi})\mathrm{sgn}(\sigma_{\pi^{\prime}})\mathrm{Wg}^{\mathrm{O}}_{-N}(\pi, \pi^{\prime}),\ \ \pi,\pi^{\prime}:[n]\to[n].\] _Remark 6.22_.: This relation between the Orthogonal and Symplectic Weingarten functions has previously been observed, see for instance the end of [11, Section 2.3.2]. When \(N\geq n\), this identity is also stated as [14, Lemma 3.2]. We note that by defining Weingarten functions as pseudo-inverses of the appropriate Gram matrices, it is trivial to see that the relation holds for general \(N\) (indeed, even general \(\zeta\in\mathbb{C}\)). _Remark 6.23_.: Recall that \(\mathrm{Wg}^{\mathrm{O}}_{-N}(\pi,\pi^{\prime})\) is a function of the face profile \(\ell(\pi,\pi^{\prime})\). Lemma 6.21 shows that \(\mathrm{Wg}^{\mathrm{Sp}}_{N}(\pi,\pi^{\prime})\) is _not_ a function of the face profile \(\ell(\pi,\pi^{\prime})\), because \(\mathrm{sgn}(\sigma_{\pi})\mathrm{sgn}(\sigma_{\pi^{\prime}})\) is not determined by \(\ell(\pi,\pi^{\prime})\). For a simple example, see Figure 52. Thus to obtain weighted sums over surfaces in the Symplectic case, we will use Lemma 6.21 to replace \(\mathrm{Wg}^{\mathrm{Sp}}_{N}\) by \(\mathrm{Wg}^{\mathrm{O}}_{-N}\), which will allow us to express our weights purely in terms of the surfaces. We note that this was also done in [14] - see Theorem 1.2 and Appendix A of the paper. Recall the matrix \(J=\begin{pmatrix}0&I_{N/2}\\ -I_{N/2}&0\end{pmatrix}\) from the definition of \(\mathrm{Sp}(N/2)\). **Definition 6.24**.: For indices \(i_{1},i_{2}\in[N]\), define \(\langle i_{1},i_{2}\rangle_{J}:=J_{i_{1}i_{2}}\). For \(n\geq 1\) even and a permutation \(\sigma\in\mathrm{S}_{n}\), define \[\Delta^{\prime}_{\sigma}(\mathbf{i}):=\prod_{k=1}^{n/2}\langle i_{\sigma(2k-1 )},i_{\sigma(2k)}\rangle_{J}=\langle i_{\sigma(1)},i_{\sigma(2)}\rangle_{J} \cdots\langle i_{\sigma(n-1)},i_{\sigma(n)}\rangle_{J},\ \ \mathbf{i}=(i_{1},\ldots,i_{n})\in[N]^{n}.\] For a matching \(\pi:[n]\to[n]\), we abuse notation and write \(\Delta^{\prime}_{\pi}\) for \(\Delta^{\prime}_{\sigma_{\pi}}\). We next state the matrix-entry version of the Symplectic Weingarten calculus. This is essentially [13, Theorem 2.4] (see also [13, Lemma 2.5]). **Proposition 6.25** (Symplectic Weingarten calculus).: Let \(G=\mathrm{Sp}(N/2)\). Let \(n\geq 1\) be even. For any \(\mathbf{i}=(i_{1},\ldots,i_{n}),\mathbf{j}=(j_{1},\ldots,j_{n})\in[N]^{n}\), we have that \[\mathbb{E}[G_{i_{1}j_{1}}\cdots G_{i_{n}j_{n}}]=\sum_{\pi,\pi^{\prime}:[n]\to[ n]}\Delta^{\prime}_{\pi}(\mathbf{i})\Delta^{\prime}_{\pi^{\prime}}(\mathbf{j}) \mathrm{Wg}^{\mathrm{Sp}}_{N}(\pi,\pi^{\prime}).\] By applying Proposition 6.25 and Lemma 6.21, one can obtain the following word-expectation version of the Symplectic Weingarten calculus. We remark that going from the matrix-entry version to the word-expectation version of Weingarten calculus is not as simple as in the Unitary or Orthogonal cases (where one may use the argument described in Section 4.2), and one has to carefully handle signs. The proof is omitted - see [11, Appendix A] for the relevant details. **Proposition 6.26**.: Let \(G=\mathrm{Sp}(N/2)\). Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be an unoriented-balanced collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). 
We have that \[\mathbb{E}[\mathrm{Tr}(G(\mathbf{\Gamma}))]=(-1)^{k}\sum_{\pi=([\pi_{\ell}, \pi^{\prime}_{\ell}],\ell\in[L])}\prod_{\ell\in[L]}\mathrm{Wg}^{\mathrm{O}}_{ -N}(\pi_{\ell},\pi^{\prime}_{\ell})(-N)^{\#\mathrm{comp}(\mathbf{\Gamma},\pi)}.\] Consequently, we have that \[\mathbb{E}[\mathrm{Tr}(G(\mathbf{\Gamma}))]=(-1)^{k}\sum_{\mathcal{M}\in\mathrm{ DBM}_{OS}(\mathbf{\Gamma})}\bigg{(}\prod_{\ell\in[L]}\overline{\mathrm{Wg}}_{-N}^{ \mathrm{O}}(\mu_{\ell}(\mathcal{M}))\bigg{)}(-N)^{\chi(M)-k}.\] _Remark 6.27_.: To obtain the second claim in Proposition 6.26, it was crucial that we used the Orthogonal Weingarten function rather than the Symplectic Weingarten function, since the former is a function of the face profile but the latter is not (recall Remark 6.23). In other words, \(\mathrm{Wg}_{N}^{\mathrm{Sp}}\) is not a function of \(\mu_{\ell}(\mathcal{M})\), so we could not have replaced \(\mathrm{Wg}_{N}^{\mathrm{Sp}}(\pi_{\ell},\pi_{\ell^{\prime}})\) with \(\mathrm{Wg}_{N}^{\mathrm{Sp}}(\mu_{\ell}(\mathcal{M}))\). We can now give a representation of Wilson loop expectations in \(\mathrm{Sp}(N/2)\) lattice gauge theories as Weingarten-weighted surface sums. **Theorem 6.28**.: _Let \(s=(\ell_{1},\ldots,\ell_{n})\) be a string. For \(\mathrm{Sp}(N/2)\) lattice gauge theory, we have that_ \[\langle W_{s}\rangle_{\Lambda,\beta}=(-1)^{n}Z_{\Lambda,\beta}^{-1}\sum_{( \mathcal{M},\psi)\in\mathrm{EPE}_{\mathrm{OS}}(s)}\frac{\beta^{\mathrm{area} (\mathcal{M},\psi)}}{(\psi^{-1})!}\bigg{(}\prod_{e\in E_{\Lambda}}\overline{ \mathrm{Wg}}_{-N}^{\mathrm{O}}(\mu_{e}(\psi))\bigg{)}(-N)^{\chi(M)-2n}.\] #### 6.1.3 Exploration process In this subsection, we detail how to obtain the Orthogonal and Symplectic Weingarten calculus from taking limits of Brownian motion, much as we did in Section 4 for the Unitary case. The key is to use a variant of the exploration process we defined in Section 4.1. This will again allow us to extract the Jucys-Murphy elements, which as before will relate to the Weingarten function. First, we work towards describing the analog of Proposition 4.14 for \(\mathrm{O}(N),\mathrm{Sp}(N/2)\). _Remark 6.29_.: Note that \(\mathrm{SO}(N)\) is connected while \(\mathrm{O}(N)\) is not. Thus an \(\mathrm{O}(N)\) Brownian motion started at the identity (or more generally, any element of \(\mathrm{SO}(N)\)) is exactly the same as an \(\mathrm{SO}(N)\) Brownian motion. Thus to reprove the Orthogonal Weingarten calculus, we will need to take the initial value \(O_{0}\) of the \(\mathrm{O}(N)\) Brownian motion to lie in the two connected components of \(\mathrm{O}(N)\) with equal probability. This amounts to multiplying an \(\mathrm{SO}(N)\) Brownian motion started at the identity by \(O_{0}\). **Notation 6.30**.: In the following, to discuss the cases \(G=\mathrm{O}(N),\mathrm{Sp}(N/2)\) simultaneously, we set the notation \(\varepsilon=1\) when \(G=\mathrm{O}(N)\) and \(\varepsilon=-1\) when \(G=\mathrm{Sp}(N/2)\). We found this useful notation from [1]. Next, we define a representation \(\rho_{-}:\mathcal{B}_{n}(-N)\to\mathrm{End}((\mathbb{C}^{N})^{\otimes n})\) which is needed to relate expectations of Symplectic Brownian motion to weighted sums over Brauer algebra elements. Recall the definition of \(\langle\cdot,\cdot\rangle_{J}\) and \(\Delta^{\prime}_{\pi}\) from Definition 6.24. **Definition 6.31**.: Define the representation \(\rho_{-}:\mathcal{B}_{n}(-N)\to\mathrm{End}((\mathbb{C}^{N})^{\otimes n})\) as follows. 
It suffices to define \(\rho_{-}\) on the generating set \(\{(i\ j),\langle i\ j\rangle,1\leqslant i<j<n\}\) of \(\mathcal{B}_{n}\). We let \(\rho_{-}((i\ j)):=-\rho_{+}((i\ j))\). We let \(\rho_{-}(\langle i\ j\rangle)\) be the matrix whose \((\mathbf{k},\mathbf{l})\) (with \(\mathbf{k}=(k_{1},\ldots,k_{n}),\mathbf{l}=(l_{1},\ldots,l_{n})\in[N]^{n}\)) matrix entry is \[(\rho_{-}(\langle i\ j\rangle))_{\mathbf{kl}}:=-\langle k_{i},k_{j}\rangle_{J} \langle l_{i},l_{j}\rangle_{J}\prod_{r\neq i,j}\delta_{k_{r}l_{r}}.\] _Remark 6.32_.: The minus sign in the definition of \(\rho_{-}\) is crucial, since \(\rho_{-}\) is supposed to be a representation of \(\mathcal{B}_{n}(-N)\), which implies that \(\rho_{-}(\langle i\ j\rangle)^{2}=(-N)\rho_{-}(\langle i\ j\rangle)\) (since \(\langle i\ j\rangle^{2}=(-N)\langle i\ j\rangle\)). The minus sign ensures that this is the case. _Remark 6.33_.: For a matching \(\pi:[n]\to[n]\), observe that \[[\pi\ \pi]=\langle\pi(1)\ \pi(2)\rangle\cdots\langle\pi(2n-1)\ \pi(2n)\rangle.\] This implies that \[\rho_{-}([\pi\ \pi])_{\mathbf{i}\mathbf{j}}=(-1)^{n/2}\Delta^{\prime}_{\pi} (\mathbf{i})\Delta^{\prime}_{\pi}(\mathbf{j}).\] More generally, we have that \[\rho_{-}([\pi\ \pi^{\prime}])_{\mathbf{i}\mathbf{j}}=(-1)^{n/2}\text{sgn}( \sigma_{\pi})\text{sgn}(\sigma_{\pi^{\prime}})\Delta^{\prime}_{\pi}(\mathbf{i })\Delta^{\prime}_{\pi^{\prime}}(\mathbf{j}). \tag{6.2}\] **Notation 6.34**.: We will take \(\rho_{1}\) to mean \(\rho_{+}\) and \(\rho_{-1}\) to mean \(\rho_{-}\). This way we can write \(\rho_{\varepsilon}\). Let \(n\geqslant 1\). Consider a strand diagram with \(n\) total strands. We define a Poisson point process \(\Sigma_{\text{OS}}\) on the strand diagram as follows. We imagine we have \(n\) right-directed strands homeomorphic to \([0,\infty)\). Between any pair of strands, there are two independent rate-1 Poisson processes: one which gives the swaps between the two strands, and one which gives the turnarounds between the two strands. The Poisson processes corresponding to different pairs of strands are also independent. Let \(\Sigma_{\text{OS}}(T)\) be process obtained by keeping only those points of \(\Sigma_{\text{OS}}\) which occur before time \(T\). See Figure 53 for an example realization of \(\Sigma_{\text{OS}}(T)\). _Remark 6.35_ (Comparison with the Unitary case).: In the Orthogonal and Symplectic cases, all strands point the same direction, and there may be turnarounds between same-direction strands. Whereas in the Unitary case, there were only swaps between same-direction strands, and turnarounds between opposite-direction strands. **Definition 6.36**.: Define \(F_{\pm}=F_{\pm 1}\) as follows. Set \(F_{+}:=F_{1}:=F\) (where \(F\) is as defined in Definition 4.3). Define \(F_{-}\) to be a map which takes point process realizations to elements of \(\mathcal{B}_{n}(-N)\) as follows. Define \(F_{-}\) almost exactly as \(F\), except same-direction swaps incur a \(\frac{1}{N}\) factor and turnarounds incur a \(-\frac{1}{N}\) factor (note that this is the reverse of how \(F\) was defined). Figure 53: An example realization of \(\Sigma_{\text{OS}}(T)\) for some finite \(T\). The green lines represent swaps and blue lines represent turnarounds. The following proposition is the analog of Proposition 4.14. It says that for \(G=\mathrm{SO}(N),\mathrm{Sp}(N/2)\), expectations of \(G\)-valued Brownian motion may be expressed in terms of the Poisson point process \(\Sigma_{\mathrm{OS}}\). See [13, Appendix A] for the proof. 
**Proposition 6.37**.: Let \(n\geq 1\). Let \(G=\mathrm{SO}(N),\mathrm{Sp}(N/2)\). Let \(B_{T}\) be a \(G\)-valued Brownian motion at time \(T\). We have that Equivalently, for \(\mathbf{i}=(i_{1},\ldots,i_{n}),\mathbf{j}=(j_{1},\ldots,j_{n})\in[N]^{n}\), we have that \[\mathbb{E}[(B_{T})_{i_{1}j_{1}}\cdots(B_{T})_{i_{n}j_{n}}]=e^{2\binom{n}{2}T- \frac{n}{2}(1-\frac{\varepsilon}{N})}\rho_{\varepsilon}\big{(}\mathbb{E}[F_{ \varepsilon}(\Sigma_{\mathrm{OS}}(T))]\big{)}_{\mathbf{i}\mathbf{j}}.\] Next, we state the following lemma which relates the Orthogonal Weingarten fuction to Jucys-Murphy elements. This may essentially be found in [15], however it may not be so clear why this is the case without actually reading the paper. Thus, for the reader's convenience, we provide some discussion in Appendix A of why the following lemma follows from the results of [15]. **Lemma 6.38** (Relation of Weingarten function to Jucys-Murphy elements).: Let \(\pi:[n]\to[n]\) be a matching. We have that \[\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{\varepsilon}((\varepsilon N+J_{n-1})( \varepsilon N+J_{n-3})\cdots(\varepsilon N+J_{1}))^{-1}=\sum_{\pi^{\prime}:[n ]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\pi_{0},\pi^{\prime})\rho_{ \varepsilon}([\pi\ \pi^{\prime}]).\] As in the Unitary case, the main theorem that we will prove using our exploration process is the recovery of the Weingarten calculus, stated as follows. **Theorem 6.39** (Weingarten recovery).: _Let \(n\geq 1\) be even. Let \(O_{0}\in\mathrm{O}(N)\) be a random matrix which has equal probability of being in the two connected components of \(\mathrm{O}(N)\), or equivalently \(\mathbb{E}[\mathrm{det}(O_{0})]=0\). For \(G=\mathrm{O}(N)\), we have that_ \[\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]\mathbb{E}[O_{0}^{\otimes n}]= \sum_{\pi,\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{N}(\pi,\pi^{\prime} )\rho_{+}([\pi\ \pi^{\prime}]).\] _For \(G=\mathrm{Sp}(N/2)\), we have that_ \[\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]=\sum_{\pi,\pi^{\prime}:[n]\to[ n]}\mathrm{Wg}^{\mathrm{O}}_{-N}(\pi,\pi^{\prime})\rho_{-}([\pi\ \pi^{\prime}]). \tag{6.3}\] _Remark 6.40_.: To see why equation (6.3) is equivalent to the matrix-entry version of the Symplectic Weingarten calculus (Proposition 6.25), recall by Remark 6.33 that \[\big{(}\rho_{-}([\pi\ \pi^{\prime}])\big{)}_{\mathbf{i}\mathbf{j}}=(-1)^{n/2} \mathrm{sgn}(\sigma_{\pi})\mathrm{sgn}(\sigma_{\pi^{\prime}})\Delta^{\prime}_ {\pi}(\mathbf{i})\Delta^{\prime}_{\pi^{\prime}}(\mathbf{j}).\] Then by the relation between the Orthogonal and Symplectic Weingarten functions (Lemma 6.21), it follows that \[\mathrm{Wg}^{\mathrm{O}}_{-N}(\pi,\pi^{\prime})\big{(}\rho_{-}([ \pi\ \pi^{\prime}])\big{)}_{\mathbf{i}\mathbf{j}} =\big{(}(-1)^{n/2}\mathrm{sgn}(\sigma_{\pi})\mathrm{sgn}(\sigma_ {\pi^{\prime}})\mathrm{Wg}^{\mathrm{O}}_{-N}(\pi,\pi^{\prime})\big{)}\Delta^{ \prime}_{\pi}(\mathbf{i})\Delta^{\prime}_{\pi^{\prime}}(\mathbf{j})\] \[=\mathrm{Wg}^{\mathrm{Sp}}_{N}(\pi,\pi)\Delta^{\prime}_{\pi}( \mathbf{i})\Delta^{\prime}_{\pi^{\prime}}(\mathbf{j}).\] Next, we define the analog \(\mathcal{Q}_{T}^{\mathrm{OS}}\) of the strand-by-strand exploration process \(\mathcal{Q}_{T}\) from Section 4.1. As before, the exploration is encoded by two processes \((E_{t})_{t\geqslant 0}\), \((\pi_{t})_{t\geqslant 0}\). Here, \(E\) takes values in \([n/2]\) and tracks the current exploration era, and \(\pi\) takes values in \(\mathrm{S}_{n}\). 
In words, the exploration starts at the top strand, and follows swaps until the first turnaround. At this time, the exploration proceeds to the next-highest strand which hasn't been matched. The exploration continues until all strands have been matched (i.e. until the end of the \(n/2\)th exploration era). **Notation 6.41**.: For notational brevity in what follows, define \[h_{n}(t_{1},\dots,t_{n/2}):=e^{(2(n-1)-(1-\frac{\varepsilon}{N}))t_{1}}e^{(2(n -3)-(1-\frac{\varepsilon}{N}))(t_{2}-t_{1})}\cdots e^{(2(1)-(1-\frac{ \varepsilon}{N}))(t_{n/2}-t_{n/2-1})}\] The following is the analog of Proposition 4.5 which recall encoded the key cancellation property of our strand-by-strand exploration process. **Proposition 6.42**.: Let \(n\geqslant 1\) be even. Let \(G=\mathrm{SO}(N),\mathrm{Sp}(N/2)\). We have that \[e^{2^{\binom{n}{2}T-\frac{n}{2}(1-\frac{\varepsilon}{N})T}}\mathbb{E}[F_{ \varepsilon}(\Sigma_{\mathrm{OS}}(T))\mathbb{1}\left(T_{n/2}\leqslant T\right) ]=\mathbb{E}\big{[}F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}^{\mathrm{OS}}) \mathbb{1}\left(T_{n/2}\leqslant T\right)h_{n}(T_{1},\dots,T_{n/2})\big{]}.\] Proof.: The proof is very similar to the proof of Proposition 4.5, in that we proceed by induction, except now the combinatorics is slightly different. Throughout, we write \(\Sigma\) and \(\mathcal{Q}\) instead of \(\Sigma_{\mathrm{OS}}\) and \(\mathcal{Q}^{\mathrm{OS}}\) for brevity. The base case \(n=2\) may be handled by direct calculation, which we omit. Suppose that the proposition is true for \(n-2\geqslant 2\) and any \(T\geqslant 0\). As before, we condition on the time \(T_{1}\), which is the time of first turnaround, which results in two strands being matched. After this time, we may assume that any swaps or turnarounds involving either of the matched strands must involve precisely the two matched strands (by essentially the same argument as in the proof of the cancellation lemma, Lemma 4.4). Each strand is involved in \(2(n-1)\) independent Poisson processes, and thus the number of independent Poisson processes which must have zero points on the interval \([T_{1},T]\) is \(2(2(n-1))-4=4n-8\). The Poisson process which gives the turnarounds between the two matched strands contributes a factor \(1\), and the Poisson process which gives the swaps between the two matched strands contributes a factor \(e^{-(T-T_{1})}e^{-\varepsilon(T-T_{1})/N}\) (when we condition on \(T_{1}\)). 
We thus obtain \[\mathbb{E}[F_{\varepsilon}(\Sigma(T))\mathbb{1}\left(T_{n} \leqslant T\right)\ |\ \mathcal{F}_{T_{1}}]=\\ \mathbb{E}[F_{\varepsilon}(\Sigma(T_{1}))\ |\ \mathcal{F}_{T_{1}}] \mathbb{E}[F_{\varepsilon}(\Sigma(T)/\Sigma(T_{1}))\mathbb{1}\left(T_{n} \leqslant T\right)\ |\ \mathcal{F}_{T_{1}}]e^{-(4n-8)(T-T_{1})}e^{-(T-T_{1})}e^{- \varepsilon(T-T_{1})/N}.\] Now observe that \[2\binom{n}{2}-\frac{n}{2}\bigg{(}1-\frac{\varepsilon}{N}\bigg{)} -(4n-7)-\frac{\varepsilon}{N} =n^{2}-5n+6-\frac{n}{2}\bigg{(}1-\frac{\varepsilon}{N}\bigg{)}+ 1-\frac{\varepsilon}{N}\] \[=2\binom{n-2}{2}-\frac{n-2}{2}\bigg{(}1-\frac{\varepsilon}{N} \bigg{)}.\] From this, we obtain \[e^{2\binom{n}{2}T-\frac{n}{2}(1-\frac{1}{N})T}\mathbb{E}[F_{ \varepsilon}(\Sigma(T))\mathbb{1}\left(T_{n/2}\leqslant T\right)\ |\ \mathcal{F}_{T_{1}}]=\bigg{(}e^{2\binom{n}{2}T_{1}-\frac{n}{2}(1-\frac{1}{N})T_{1 }}\mathbb{E}[F_{\varepsilon}(\Sigma(T_{1}))\ |\ \mathcal{F}_{T_{1}}]\bigg{)}\ \times\] \[\bigg{(}e^{2\binom{n-2}{2}(T-T_{1})-\frac{n-2}{2}(1-\frac{\varepsilon}{N})(T-T_{1}) }\mathbb{E}[F_{\varepsilon}(\Sigma(T)\backslash\Sigma(T_{1}))\mathbb{1}\left(T_{n /2}\leqslant T\right)\ |\ \mathcal{F}_{T_{1}}]\bigg{)}.\] At this point, we recognize that the second factor is exactly given by the inductive assumption: \[e^{2\binom{n-2}{2}(T-T_{1})-\frac{n-2}{2}(1-\frac{\varepsilon}{N })(T-T_{1})}\mathbb{E}[F_{\varepsilon}(\Sigma(T)\backslash\Sigma(T_{1})) \mathbb{1}_{A_{T}}\mathbb{1}\left(T_{n/2}\leqslant T\right)\ |\ \mathcal{F}_{T_{1}}]=\] \[\mathbb{E}[F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}\backslash \mathcal{Q}_{T_{1}})\mathbb{1}\left(T_{n/2}\leqslant T\right)h_{n-2}(T_{2}, \ldots,T_{n/2})\ |\ \mathcal{F}_{T_{1}}]\] Thus, to finish the induction, we just need to show that \[e^{2\binom{n}{2}T_{1}-\frac{n}{2}(1-\frac{\varepsilon}{N})T_{1}}\mathbb{E}[F_ {\varepsilon}(\Sigma(T_{1}))F(\mathcal{Q}_{T_{n/2}}\backslash\mathcal{Q}_{T_ {1}})\ |\ \mathcal{F}_{T_{n/2}}]=e^{(2(n-1)-(1-\frac{\varepsilon}{N}))T_{1}}F_{ \varepsilon}(\mathcal{Q}_{T_{1}})F_{\varepsilon}(\mathcal{Q}_{T_{n/2}} \backslash\mathcal{Q}_{T_{1}}).\] Again, this follows by accounting for the contributions before time \(T_{1}\) of all the swaps and turnarounds not involving the top strand. There are a total of \(2\binom{n-1}{2}\) such processes. Out of these, there are \(\frac{n-2}{2}\) processes which contribute \(1\) (the turnarounds between two strands which are matched on the right), there are \(\frac{n-2}{2}\) processes which contribute \(e^{-T_{1}}e^{-\varepsilon T_{1}/N}\) (the swaps between two strands which are matched on the right), and every other process must have zero points on \([0,T_{1}]\), and thus contributes \(e^{-T_{1}}\). In total, we get \[\mathbb{E}[F_{\varepsilon}(\Sigma(T_{1}))F_{\varepsilon}(\mathcal{Q}_{T_{n/2} }\backslash\mathcal{Q}_{T_{1}})\ |\ \mathcal{F}_{T_{n/2}}]=e^{-(2\binom{n-1}{2}-(n-2))T_{1}}e^{-\frac{n-2}{2}T_{1}} e^{-\varepsilon\frac{n-2}{2}T_{1}/N}F_{\varepsilon}(\mathcal{Q}_{T_{1}})F_{ \varepsilon}(\mathcal{Q}_{T_{n/2}}\backslash\mathcal{Q}_{T_{1}}).\] To finish, note that (using that \(2\binom{n}{2}-2\binom{n-1}{2}=2(n-1)\)) \[2\binom{n}{2}-\frac{n}{2}\bigg{(}1-\frac{\varepsilon}{N}\bigg{)}-\bigg{(}2 \binom{n-1}{2}-(n-2)\bigg{)}-\frac{n-2}{2}-\frac{n-2}{2}\frac{\varepsilon}{N} =2(n-1)-\bigg{(}1-\frac{\varepsilon}{N}\bigg{)}.\qed\] Next, we give an explicit expression for the right hand side of Proposition 6.42. 
First, let \(E_{\pi_{0}}\) be the event that in the exploration process, \(n\) gets matched to \(n-1\), \(n-2\) gets matched to \(n-3\), \(\ldots\), \(2\) gets matched to \(1\). In other words, \(E_{\pi_{0}}\) is the event that the left matching discovered by our exploration is \(\pi_{0}\) (which was defined in Definition 6.11, see also Figure 48). For notational brevity, we make the following definition. **Notation 6.43**.: Define \[I(T,n):=\int_{0}^{T}du_{1}\int_{0}^{T-u_{1}}du_{2}\cdots\int_{0 }^{T-(u_{1}+\cdots+u_{n/2-1})}du_{n/2}\big{(}e^{-u_{1}}e^{-\varepsilon u_{1} J_{n-1}/N}\big{)}\times\big{(}e^{-u_{2}}e^{-\varepsilon u_{2}J_{n-3}/N}\big{)}\] \[\times\cdots\times\big{(}e^{-u_{n/2}}e^{-\varepsilon u_{n/2}J_{1 }/N}\big{)}.\] We will need the following lemma. **Lemma 6.44**.: We have that \[[\pi_{0}\ \pi_{0}]J_{n}=[\pi_{0}\ \pi_{0}](1+J_{n-1}).\] Proof.: In the case \(n=4\), Figure 54 contains the proof by explicitly identifying the terms which appear on the left and right hand sides of the claimed identity. The proof when \(n\geqslant 6\) is essentially same as the case \(n=4\). The case \(n=2\) is trivial. **Lemma 6.45**.: We have that \[\mathbb{E}\big{[}F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}^{\mathrm{OS}})\mathbb{1} \,(T_{n/2}\leqslant T)h_{n}(T_{1},\ldots,T_{n/2})\mathbb{1}_{E_{\pi_{0}}}\big{]} =(\varepsilon N)^{-n/2}[\pi_{0}\ \pi_{0}]I(T,n).\] Proof.: By considering an alternative exploration as in Section 4.1, we may explicitly compute \[\mathbb{E}\big{[}F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}^{\mathrm{OS}})\mathbb{1 }\,(T_{n/2}\leqslant T)h_{n}(T_{1},\ldots,T_{n/2})\mathbb{1}_{E_{\pi_{0}}} \big{]}=(\varepsilon N)^{-n/2}[\pi_{0}\ \pi_{0}]I^{\prime},\] where \[I^{\prime} =\int_{0}^{T}\!du_{1}\int_{0}^{T-u_{1}}du_{2}\cdots\int_{0}^{T-( u_{1}+\cdots+u_{n/2-1})}du_{n/2}\big{(}e^{-2(n-1)u}e^{-\varepsilon u_{1}J_{n}/N} \big{)}\ \times\] \[\qquad\big{(}e^{-2(n-3)u_{2}}e^{-\varepsilon u_{2}J_{n-2}/N} \big{)}\times\cdots\times\big{(}e^{-2u_{n/2}}e^{-\varepsilon u_{n/2}J_{2}/N} \big{)}h_{n}(u_{1},\ldots,u_{n/2}).\] We explain how \(e^{-2(n-1)u}e^{-\varepsilon u_{1}J_{n}/N}\) arises. One factor of \(e^{-(n-1)u}\) comes from the density of \(T_{1}\), which is an exponential random variable with rate \(n-1\). Conditioned on \(T_{1}=u_{1}\), we need to average over all swaps involving the top strand before time \(u\), which contributes \(e^{-(n-1)u}e^{-u_{1}\varepsilon J_{n}/N}\). Next, observe that the formula for \(I^{\prime}\) may be simplified to \[I^{\prime} =\int_{0}^{T}du_{1}\int_{0}^{T-u_{1}}du_{2}\cdots\int_{0}^{T-(u_{ 1}+\cdots+u_{n/2-1})}du_{n/2}\big{(}e^{-(1-\frac{\varepsilon}{N})u_{1}}e^{- \varepsilon u_{1}J_{n}/N}\big{)}\ \times\] \[\qquad\qquad\big{(}e^{-(1-\frac{\varepsilon}{N})u_{2}}e^{- \varepsilon u_{2}J_{n-2}/N}\big{)}\cdots\times\big{(}e^{-(1-\frac{\varepsilon }{N})u_{n/2}}e^{-\varepsilon u_{n/2}J_{2}/N}\big{)}.\] Next, by Lemma 6.44 we have that \([\pi_{0}\ \pi_{0}]J_{n}=[\pi_{0}\ \pi_{0}](1+J_{n-1})\). Since the Jucys-Murphy elements commute, we may then obtain that \[[\pi_{0}\ \pi_{0}]J_{n}^{k}=[\pi_{0}\ \pi_{0}](1+J_{n-1})^{k}\text{ for all }k\geqslant 0.\] This implies that \([\pi_{0}\ \pi_{0}]e^{uJ_{n}}=e^{u}[\pi_{0}\ \pi_{0}]e^{uJ_{n-1}}\) for any \(u\in\mathbb{R}\). More generally, by the same argument as in the proof of Lemma 6.44, we may obtain that \([\pi_{0}\ \pi_{0}]e^{uJ_{2r}}=e^{u}[\pi_{0}\ \pi_{0}]e^{uJ_{2r-1}}\) for \(1\leqslant r\leqslant n/2\). 
By applying these identities, we obtain \([\pi_{0}\ \pi_{0}]I^{\prime}=[\pi_{0}\ \pi_{0}]I(T,n)\), and the desired result follows. Now suppose that the left matching discovered by our strand-by-strand exploration is some arbitrary \(\pi\). Then we may first permute the strands so that \(\pi\) becomes \(\pi_{0}\), apply Figure 54: Term-by-term identification of the three terms in each of \(\pi_{0}J_{4}\) and \(\pi_{0}(1+J_{3})\). Lemma 6.45 to compute the expectation of our strand-by-strand-exploration in the case where the left matching is \(\pi_{0}\), and then permute the strands back. This gives a formula for the expectation of our strand-by-strand exploration in the case where the left matching is \(\pi\), which then leads to the following lemma. Recall from Definition 6.12 that for each matching \(\pi:[n]\to[n]\), we fixed a permutation \(\sigma_{\pi}\in\mathrm{S}_{n}\) such that \(\sigma_{\pi}[\pi\ \pi]\sigma_{\pi}^{-1}=[\pi_{0}\ \pi_{0}]\). **Lemma 6.46**.: We have that \[\mathbb{E}\big{[}F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}^{\mathrm{OS}}) \mathbbm{1}(T_{n/2}\leq T)h_{n}(T_{1},\ldots,T_{n/2})\big{]}=(\varepsilon N)^{ -n/2}\sum_{\pi:[n]\to[n]}[\pi\ \pi_{0}]I(T,n)\sigma_{\pi}.\] Proof.: By summing over all possible left machings \(\pi\), we obtain \[\mathbb{E}\big{[}F_{\varepsilon}(\mathcal{Q}_{T_{n/2}}^{\mathrm{OS}}) \mathbbm{1}(T_{n/2}\leq T)h_{n}(T_{1},\ldots,T_{n/2})\big{]}=(\varepsilon N)^ {-n/2}\sum_{\pi:[n]\to[n]}[\pi\ \pi]I(T,n,\pi),\] where \(I(T,n,\pi)\) is the analog of \(I(T,n)\) defined for a general \(\pi\). Writing \([\pi\ \pi]=\sigma_{\pi}^{-1}[\pi_{0}\ \pi_{0}]\sigma_{\pi}\), we may write \[[\pi\ \pi]I(T,n,\pi)=\sigma_{\pi}^{-1}[\pi_{0}\ \pi_{0}](\sigma_{\pi}I(T,n, \pi)\sigma_{\pi}^{-1})\sigma_{\pi}.\] Since conjugating by \(\sigma_{\pi}\) corresponds to permuting the strands according to \(\sigma_{\pi}\), we have that \(\sigma_{\pi}I(T,n,\pi)\sigma_{\pi}^{-1}=I(T,n)\). I.e., after permuting the strands according to \(\sigma_{\pi}\), \(I(T,n,\pi)\) gets taken to \(I(T,n)\). To finish, note that \(\sigma_{\pi}^{-1}[\pi_{0}\ \pi_{0}]=[\pi\ \pi_{0}]\) (since if we only permute the labels on the left, then only the left matching gets changed). **Lemma 6.47**.: We have that \[\lim_{T\to\infty}\rho_{\varepsilon}(I(T,n))=\varepsilon^{n/2}N^{n/2}\rho_{ \varepsilon}\big{(}(\varepsilon N+J_{n-1})(\varepsilon N+J_{n-3})\cdots( \varepsilon N+J_{1})\big{)}^{-1}.\] Proof.: We have that \[\lim_{T\to\infty}\rho_{\varepsilon}(I(T,n)) =\int_{0}^{\infty}du_{1}e^{-u_{1}}e^{-\varepsilon u_{1}\rho_{ \varepsilon}(J_{n-1})/N}\cdots\int_{0}^{\infty}du_{n/2}e^{-u_{n/2}}e^{- \varepsilon u_{n/2}\rho_{\varepsilon}(J_{1})/N}\] \[=\rho_{\varepsilon}\bigg{(}\mathrm{id}+\varepsilon\frac{J_{n-1}} {N}\bigg{)}^{-1}\rho_{\varepsilon}\bigg{(}\mathrm{id}+\varepsilon\frac{J_{n-3 }}{N}\bigg{)}^{-1}\cdots\rho_{\varepsilon}\bigg{(}\mathrm{id}+\varepsilon\frac {J_{1}}{N}\bigg{)}^{-1}\] \[=\varepsilon^{n/2}N^{n/2}\rho_{\varepsilon}(\varepsilon N+J_{n-1 })^{-1}\rho_{\varepsilon}(\varepsilon N+J_{n-3})^{-1}\cdots\rho_{\varepsilon}( \varepsilon N+J_{1})^{-1}.\] Here, the operators \(\rho_{\varepsilon}(\mathrm{id}+\varepsilon J_{n-2k}/N)\), \(1\leq k<n/2\), have strictly positive eigenvalues (and thus are invertible) by Lemma 2.28 (recall that \(\rho_{-}((i\ j))=-\rho((i\ j))\) for any transposition \((i\ j)\)). 
**Lemma 6.48**.: For any matching \(\pi:[n]\to[n]\), we have that \[\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{\varepsilon}\big{(}(\varepsilon N+J_{n-1})( \varepsilon N+J_{n-3})\cdots(\varepsilon N+J_{1})\big{)}^{-1}\rho_{\varepsilon} (\sigma_{\pi})=\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{ \varepsilon N}(\pi,\pi^{\prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}]).\] Proof.: Applying Lemma 6.38, we have that \[\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{\varepsilon}\big{(}(\varepsilon N+J_{n-1})( \varepsilon N+J_{n-3})\cdots(\varepsilon N+J_{1})\big{)}^{-1}\rho_{\varepsilon}( \sigma_{\pi})=\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N }(\pi_{0},\pi^{\prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}]\sigma_{\pi}).\] Since \([\pi\ \pi]=\sigma_{\pi}^{-1}[\pi_{0}\ \pi_{0}]\sigma_{\pi}\), we have that \([\pi\ \pi^{\prime}]\sigma_{\pi}=\sigma_{\pi}^{-1}[\pi_{0}\ \pi^{\prime}]\sigma_{\pi}\). Changing variables \(\pi^{\prime}=\pi^{\prime}\sigma_{\pi}\), we obtain that the above is further equal to \[\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\pi_{0},\pi^{\prime}\sigma_{\pi}^{-1})\rho_{\varepsilon}([\pi\ \pi^{\prime}]).\] To finish, observe that \(\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\pi_{0},\pi^{\prime}\sigma_{\pi}^{-1 })=\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\sigma_{\pi}^{-1}\pi_{0},\pi^{ \prime})=\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\pi,\pi^{\prime})\). The first identity follows since \(\sigma_{\pi}^{-1}[\pi_{0}\ \pi^{\prime}\sigma_{\pi}^{-1}]\sigma_{\pi}=[\sigma_{\pi}^{-1} \pi_{0}\ \pi^{\prime}]\), which implies that \((\pi_{0},\pi^{\prime}\sigma_{\pi}^{-1})\) has the same face profile as \((\sigma_{\pi}^{-1}\pi_{0},\pi^{\prime})\), and the second identity follows since \(\sigma_{\pi}^{-1}\pi_{0}=\pi\). Combining Proposition 6.42 and Lemmas 6.46, 6.47, and 6.48, we obtain the following result. **Proposition 6.49**.: We have that \[\lim_{T\to\infty}e^{2^{\binom{n}{2}}T-\frac{n}{2}(1-\frac{\varepsilon}{N})T} \rho_{\varepsilon}\big{(}\mathbb{E}[F_{\varepsilon}(\Sigma_{\mathrm{O}S}(T)) \mathbbm{1}(T_{n/2}\leq T)]\big{)}=\sum_{\pi,\pi^{\prime}:[n]\to[n]}\mathrm{Wg }^{\mathrm{O}}_{\varepsilon N}(\pi,\pi^{\prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}]).\] To complete the proof of Theorem 6.39, we need to show that it suffices to restrict to the event that all exploration eras have finished before time \(T\). This turns out to be much harder to show for \(\mathrm{O}(N)\) than for \(\mathrm{Sp}(N/2)\) - see Remarks 6.53 and 6.55 for some discussion as to why. We begin by introducing some concepts which are needed to handle the \(\mathrm{O}(N)\) case. **Definition 6.50**.: Let \(G=\mathrm{O}(N)\). Define \(\mathrm{P}_{n}:=\mathbb{E}[G^{\otimes n}]\in\mathrm{End}((\mathbb{C}^{N})^{ \otimes n})\). By properties of Haar integration, we have that \(\mathrm{P}_{n}\) is symmetric and \(\mathrm{P}_{n}^{2}=\mathrm{P}_{n}\). Thus, \(\mathrm{P}_{n}\) is the orthogonal projection onto its image, which is precisely the subspace of \(\mathrm{O}(N)\)-invariant vectors \(\{v\in(\mathbb{C}^{N})^{\otimes n}:O^{\otimes n}v=v\}\). 
Observe moreover that with \(O_{0}\) as in Theorem 6.39, we have that \[\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]\mathbb{E}[O_{0}^{\otimes n}]= \mathbb{E}[G^{\otimes n}]=\mathrm{P}_{n}.\] We thus have that \[\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]\mathbb{E}[O_{0}^{\otimes n}] \mathrm{P}_{n}=\mathrm{P}_{n}^{2}=\mathrm{P}_{n}.\] This discussion shows that when taking limits of \(\mathrm{SO}(N)\) Brownian motion to recover results about \(\mathrm{O}(N)\) Haar integration, we may first project to the space of invariant vectors, and this does not change the limit. This projection is a technical convenience that will make it easier to argue why the contribution from the case where not all eras end by time \(T\) goes to zero as \(T\to\infty\). With this discussion in mind, we state the following proposition. **Proposition 6.51**.: We have that \[\lim_{T\to\infty}\Big{\|}e^{2{n\choose 2}T-\frac{n}{2}(1-\frac{1}{N})T} \rho_{+}\big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}(T_{n/2}>T)]\big{)} \mathrm{P}_{n}\Big{\|}_{op}=0,\ \ G=\mathrm{O}(N)\] \[\lim_{T\to\infty}\Big{\|}e^{2{n\choose 2}T-\frac{n}{2}(1+\frac{1}{N} )T}\rho_{-}\big{(}\mathbb{E}[F_{-}(\Sigma_{\mathrm{OS}}(T))\mathbb{1}(T_{n/2}> T)]\big{)}\Big{\|}_{op}=0,\ \ G=\mathrm{Sp}(N/2).\] We will prove this proposition by an inductive argument, which rests on the following technical lemmas. The proofs are deferred to Section 6.1.4. **Lemma 6.52**.: Let \(n\) be even. For any \(u\geq 0\), we have that \[\|e^{-u\rho(J_{n})}\|_{op} \leq e^{(N-1)u},\] \[\|e^{-u\rho(J_{n})}\mathrm{P}_{n}\|_{op} \leq e^{(N-2)u}.\] _Remark 6.53_.: The first estimate of Lemma 6.52 immediately follows from Lemma 2.28, which says that all eigenvalues of \(\rho_{+}(J_{n})=\rho(J_{n})\) are at least \(-N+1\). However, this estimate is not good enough for the proof of Proposition 6.51 when \(G=\mathrm{O}(N)\). The point of the second estimate of Lemma 6.52 is that if we restrict to the subspace of \(\mathrm{O}(N)\)-invariant vectors (which is the effect of adding the \(\mathrm{P}_{n}\) term), then we can in fact obtain a better estimate for \(\|e^{-u\rho(J_{n})}\|_{op}\). **Lemma 6.54**.: Let \(n\) be even. For any \(T\geq 0\), we have that \[\big{\|}\big{(}I\otimes\mathbb{E}[B_{T}^{\otimes(n-1)}]\big{)} \mathrm{P}_{n}\big{\|}_{op} \lesssim_{N,n}T^{\frac{n}{2}-1}e^{-\frac{1}{2}(1-\frac{1}{N})u}, \ G=\mathrm{O}(N),\] \[\big{\|}I\otimes\mathbb{E}[B_{T}^{\otimes(n-1)}]\big{\|}_{op} \lesssim_{N,n}T^{\frac{n}{2}-1}e^{-\frac{1}{2}(1+\frac{1}{N})u}, \ G=\mathrm{Sp}(N/2).\] _Remark 6.55_.: Another reason why \(\mathrm{O}(N)\) is more delicate than \(\mathrm{Sp}(N/2)\) may be seen in the statement of Lemma 6.54. For \(\mathrm{O}(N)\), we need to add in the additional projection \(\mathrm{P}_{n}\) in order to obtain the stated estimate. Indeed, in certain cases \(\lim_{T\to\infty}I\otimes\mathbb{E}[B_{T}^{\otimes(n-1)}]\) is not even zero - note that this limit is equal to \(I\otimes\mathbb{E}[S^{\otimes(n-1)}]\), where \(S\) is a Haar-distributed \(\mathrm{SO}(N)\) random matrix. If \(N\) is odd and \(n-1\geq N\) is also odd, then \(\mathbb{E}[S^{\otimes(n-1)}]\) may be nonzero. 
The most direct example of this is when \(n-1=N\), because if \(\mathbb{E}[S^{\otimes N}]\) were equal to zero, then this would imply that any matrix entry has expectation zero: \[\big{(}\mathbb{E}[S^{\otimes N}]\big{)}_{\mathbf{j}}=\mathbb{E}[S_{i_{1}j_{1}} \cdots S_{i_{n}j_{n}}]=0,\ \ \mathbf{i}=(i_{1},\ldots,i_{N}),\mathbf{j}=(j_{1},\ldots,j_{N})\in[N]^{N}.\] This would further imply that \(\mathbb{E}[\det(S)]=0\). On the other hand, \(\det(S)=1\) deterministically. Thus \(\mathbb{E}[S^{\otimes N}]\neq 0\). Thus to prove Lemma 6.54, we will need to argue why we still have convergence to zero at an exponential rate, if we restrict to the subspace of \(\mathrm{O}(N)\)-invariant vectors. On the other hand, since \(-I\in\mathrm{Sp}(N/2)\), we have by parity that \(I\otimes\mathbb{E}[S^{\otimes(n-1)}]=0\) if \(S\) is a Haar-distributed \(\mathrm{Sp}(N/2)\) random matrix (and \(n\) is even). It then isn't too hard to further prove that the convergence of \(I\otimes\mathbb{E}[B_{T}^{\otimes(n-1)}]\) to zero happens at an exponential rate - one can argue similar to the Unitary case. **Lemma 6.56**.: Let \(n\) be even. We have that \[\rho_{+}(\langle n\ n-1\rangle)\mathrm{P}_{n}=I^{\otimes 2}\otimes\mathrm{P}_{n-2}.\] In the following, we will also use without explicit reference the fact that for any \(O\in\mathrm{O}(N)\), \(O^{\otimes n}\) commutes with any element of \(\rho_{+}(\mathcal{B}_{n})\). As a consequence, \(\mathrm{P}_{n}\) also commutes with any element of \(\rho_{+}(\mathcal{B}_{n})\). Proof of Proposition 6.51.: First, assume \(G=\mathrm{O}(N)\). We proceed by induction. First, in the base case \(n=2\), we may obtain by explicit calculation \[e^{2T-(1-\frac{1}{N})T}\rho_{+}\big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T)) \mathbb{1}\,(T_{1}>T)]\big{)}=e^{-(1-\frac{1}{N})T}e^{-T\rho(J_{2})/N}.\] Now by Lemma 6.52, we have that \(\|e^{-T\rho(J_{2})/N}\mathrm{P}_{2}\|_{op}\leqslant e^{(1-\frac{2}{N})T}\). The desired result when \(n=2\) then follows by combining the two estimates. Next, suppose the result is true for some even \(n\geqslant 2\). Consider the case \(n+2\). We first show that there is no contribution when the first era doesn't end, that is \[\lim_{T\to\infty}\Big{\|}e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T} \rho_{+}\big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}\,(T_{1}>T)] \big{)}\mathrm{P}_{n+2}\Big{\|}_{op}=0. \tag{6.4}\] Towards this end, consider a realization of \(\Sigma_{\mathrm{OS}}(T)\) on the event \(T_{1}>T\), as in the left of Figure 55. By imagining that every time we see a swap involving the current strand of exploration, we "cut and swap" the current strand and the other strand involved in the swap, we obtain a map on point configurations which preserves the law of \(\Sigma_{\mathrm{OS}}(T)\). After applying this map (see the right of Figure 55), we obtain another Poisson point process \(\tilde{\Sigma}_{\mathrm{OS}}(T)\), which has the property that all swaps which involve the first strand of exploration touch the top strand. To determine \(F(\Sigma_{\mathrm{OS}}(T))\) from \(\tilde{\Sigma}_{\mathrm{OS}}(T)\), we split \(\tilde{\Sigma}_{\mathrm{OS}}(T)\) into two parts: all points not involving the top strand, and all points involving the top strand - see Figure 56. Here, the points involving the top strand must be read in reverse order. 
If we now multiply together the two matchings in Figure 56, we obtain the matching in Figure 57, which is precisely the same matching one obtains by following all the swaps/turnaround in the original points process \(\Sigma_{\mathrm{OS}}(T)\) (recall the left of Figure 55). Let \(\Sigma_{\mathrm{OS}}^{\mathrm{top}}(T)\) be the process obtained by keeping only those points of \(\Sigma_{\mathrm{OS}}(T)\) which involve the top strand. Let \(\Sigma^{\mathrm{rest}}(T)\) be the process made of all other points, i.e. \(\Sigma^{\mathrm{rest}}(T)=\Sigma_{\mathrm{OS}}(T)/\Sigma_{\mathrm{OS}}^{ \mathrm{top}}(T)\). The preceding discussion shows that \[\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}\,(T_{1}>T)]=e^{-(n+1)T} \mathbb{E}[F(\Sigma_{\mathrm{OS}}^{\mathrm{rest}}(T))]\mathbb{E}[F(\Sigma_{ \mathrm{OS}}^{\mathrm{top}}(T))],\] Figure 55: The green lines represent swaps, and the blue lines represent turnarounds. On the event \(\{T_{1}>T\}\), the exploration of the first strand makes it all the way to the right (see left). We may map the left point process into the right point process, which has the property that during the first exploration era, all swaps which are seen by the exploration involve the top strand. By an explicit calculation, we have that \[e^{(n+1)T-\frac{1}{2}(1-\frac{1}{N})T}\rho_{+}\big{(}\mathbb{E}[F(\Sigma^{\rm top }_{\rm OS}(T))]\big{)}=e^{-\frac{1}{2}(1-\frac{1}{N})T}e^{-TJ_{n+2}/N}.\] We also have that \[e^{(2\binom{n+2}{2}-2(n+1))T-\frac{n+1}{2}(1-\frac{1}{N})T}\rho_{+}\big{(} \mathbb{E}[F(\Sigma^{\rm rest}_{\rm OS}(T))]\big{)}=I\otimes\mathbb{E}[B^{ \otimes(n+1)}_{T}].\] Combining, we thus obtain \[e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}\rho_{+}\big{(} \mathbb{E}[F(\Sigma_{\rm OS}(T))1(T_{1}>T)]\big{)}{\rm P}_{n+2}\] \[=\big{(}I\otimes\mathbb{E}[B^{\otimes(n+1)}_{T}]\big{)}e^{-\frac{ 1}{2}(1-\frac{1}{N})T}e^{-T\rho(J_{n+2})/N}{\rm P}_{n+2}\] \[=\big{(}\big{(}I\otimes\mathbb{E}[B^{\otimes(n+1)}_{T}]\big{)}{ \rm P}_{n+2}\big{)}\big{(}e^{-\frac{1}{2}(1-\frac{1}{N})T}e^{-T\rho(J_{n+2})/ N}{\rm P}_{n+2}\big{)}.\] The second identity follows since \({\rm P}^{2}_{n+2}={\rm P}_{n+2}\), and \({\rm P}_{n+2}\) commutes with \(\rho(J_{n+2})\). By applying Lemmas 6.52 and 6.54, the last term above has operator norm which is bounded by \[\big{\|}\big{(}I\otimes\mathbb{E}[B^{\otimes(n+1)}_{T}]\big{)}{ \rm P}_{n+2}\big{\|}_{op}\big{\|}e^{-\frac{1}{2}(1-\frac{1}{N})T}e^{-T\rho(J_ {n+2})/N}{\rm P}_{n+2}\big{\|}_{op} \lesssim T^{n+2}e^{-\frac{1}{2}(1-\frac{1}{N})T}e^{-\frac{1}{2}(1- \frac{1}{N})T}e^{(1-\frac{2}{N})T}\] \[\lesssim T^{n+2}e^{-\frac{1}{N}T},\] which converges to zero as \(T\to\infty\). This shows the claim (6.4). Figure 56: Left: the points of \(\tilde{\Sigma}_{\rm OS}(T)\) not involving the top strand. Right: the points of \(\tilde{\Sigma}_{\rm OS}(T)\) involving the top strand, arranged in reverse order. Figure 57: The matching one obtains from following all the swaps/turnarounds in \(\Sigma_{\rm OS}(T)\), or equivalently by first following all swaps/turnarounds not involving the top strand in \(\tilde{\Sigma}_{\rm OS}(T)\), and then following in reverse order all swaps involving the top strand in \(\tilde{\Sigma}_{\rm OS}(T)\). 
Thus to finish, it suffices to show that \[\lim_{T\to\infty}\Big{\|}e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}\rho_{+ }\big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}(T_{1}\leq T,T_{(n+2)/2}> T)]\big{)}\mathrm{P}_{n+2}\Big{\|}_{op}=0.\] On the event \(T_{1}\leq T\), we may follow the exploration until the end of the first era. Let \(E\) be the event that the first era ends with the turnaround \(\langle n+2\ n+1\rangle\). We will focus on this case, as the case of a general turnaround may either be reduced to the case by permuting the strands, or may be similarly argued, just with more notation. By a discussion similar to that outlined in Figures 55 - 57, we may compute \[e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}\rho_{+}\big{(} \mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}(T_{1}\leq T,T_{(n+2)/2}>T) \mathbb{1}_{E}]\big{)}\mathrm{P}_{n+2}\] \[=\int_{0}^{T}du\ (I\otimes\mathbb{E}[B_{u}^{\otimes(n+1)}])\rho_{+ }(\langle n+2\ n+1\rangle)(I^{\otimes 2}\otimes f_{n}(T-u))e^{-u\rho(J_{n+2})/N} \mathrm{P}_{n+2},\] where here \(f_{n}(T-u)\), is the total partition function for a system with \(n\) strands, not all exploration eras end by time \(T-u\). For brevity, let \(I(T)\) denote the term on the right hand side above. Observe that our inductive assumption implies that for any \(u\geq 0\), \[\lim_{T\to\infty}\|f_{n}(T-u)\mathrm{P}_{n}\|_{op}=0.\] To insert \(\mathrm{P}_{n}\), note by Lemma 6.56 that \(\rho_{+}(\langle n+2\ n+1\rangle)\mathrm{P}_{n+2}=I^{\otimes 2}\otimes \mathrm{P}_{n}\). Using this and the fact that \(\mathrm{P}_{n+2}^{2}=\mathrm{P}_{n+2}\), we have that \[I(T)=\int_{0}^{T}du\big{(}(I\otimes\mathbb{E}[B_{u}^{\otimes(n+1)}])\mathrm{P }_{n+2}\big{)}\rho_{+}(\langle n+2\ n+1\rangle)(I^{\otimes 2}\otimes(f_{n}(T-u) \mathrm{P}_{n}))e^{-u\rho(J_{n+2})/N}\mathrm{P}_{n+2}.\] By the inductive assumption, the operator norm of the integrand above converges pointwise to zero as \(T\to\infty\). Recall also the previously obtained bound (via Lemmas 6.52 and 6.54) \[\big{\|}(I\otimes\mathbb{E}[B_{u}^{\otimes(n+1)}])\mathrm{P}_{n+2}\big{\|}_{ op}\big{\|}e^{-u\rho(J_{n+2})/N}\mathrm{P}_{n+2}\big{\|}_{op}\lesssim u^{n+2}e^{- \frac{1}{N}u}.\] We may thus apply dominated convergence to conclude that \(\lim_{T\to\infty}\|I(T)\|_{op}=0\). This finishes the proof of the inductive step. Thus the case \(G=\mathrm{O}(N)\) is proven. The case \(G=\mathrm{Sp}(N/2)\) follows in a similar (and indeed, simpler) fashion. By a similar discussion, in the inductive step we may obtain the following identity when the first exploration era does not end: \[e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1+\frac{1}{N})T}\rho_{-}\big{(} \mathbb{E}[F_{-}(\Sigma_{\mathrm{OS}}(T))\mathbb{1}(T_{1}>T)]\big{)} =\big{(}I\otimes\mathbb{E}[B_{T}^{\otimes(n+1)}]\big{)}e^{-\frac {1}{2}(1+\frac{1}{N})T}e^{T\rho_{-}(J_{n+2})/N}\] \[=\big{(}I\otimes\mathbb{E}[B_{T}^{\otimes(n+1)}]\big{)}e^{-\frac {1}{2}(1+\frac{1}{N})T}e^{-T\rho(J_{n+2})/N},\] where in the second identity we used that (by definition) \(\rho_{-}((i\ j))=-\rho((i\ j))\) for transpositions \((i\ j)\). Then applying Lemmas 6.52 and 6.54, we may bound \[\Big{\|}\big{(}I\otimes\mathbb{E}[B_{T}^{\otimes(n+1)}]\big{)}e^{-\frac{1}{2 }(1+\frac{1}{N})T}e^{-T\rho(J_{n+2})/N}\Big{\|}_{op}\lesssim T^{\frac{n}{2}}e^ {-(1+\frac{1}{N})T}e^{(1-\frac{1}{N})T}\to 0\ \text{as}\ T\to\infty.\] Thus as before, we may work on the event \(\{T_{1}\leq T,T_{(n+2)/2}>T\}\). The contribution from this event may be bounded similar to before. We omit the details. 
Before we combine everything and prove Theorem 6.39, we state the following lemma which is needed for the case \(G=\mathrm{O}(N)\), whose proof is deferred to Section 6.1.4. **Lemma 6.57**.: For every pair of matchings \(\pi,\pi^{\prime}:[n]\to[n]\), \(\rho_{+}([\pi\ \pi^{\prime}])\) maps into the subspace of \(\mathrm{O}(N)\)-invariant vectors, i.e. \(\mathrm{Im}(\rho_{+}([\pi\ \pi^{\prime}]))\subseteq\mathrm{Im}(\mathrm{P}_{n})\). Proof of Theorem 6.39.: First, consider the case \(G=\mathrm{O}(N)\). By combining Propositions 6.49 and 6.51, we obtain \[\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]\mathbb{E}[O_{0}^{ \otimes n}] =\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]\mathbb{E}[O_{0}^{ \otimes n}]\mathrm{P}_{n}\] \[=\lim_{T\to\infty}e^{2\binom{n}{2}T-\frac{n}{2}(1-\frac{1}{N})T} \rho_{\varepsilon}\big{(}\mathbb{E}[F(\Sigma_{\mathrm{O}S}(T))\mathds{1}(T_{n/ 2}\leqslant T)]\big{)}\mathbb{E}[O_{0}^{\otimes n}]\mathrm{P}_{n}\] \[=\sum_{\pi,\pi^{\prime}:[n]\to[n]}\mathrm{Wg}_{N}^{G}(\pi,\pi^{ \prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}])\mathrm{P}_{n}.\] Let \(A=\sum_{\pi,\pi^{\prime}:[n]}\mathrm{Wg}_{N}^{\mathrm{O}}(\pi,\pi^{\prime}) \rho_{\varepsilon}([\pi\ \pi^{\prime}])\). To conclude that \(\lim_{T\to\infty}\mathbb{E}[B_{T}^{\otimes n}]=A\), use that (by Lemma 6.57) \(\mathrm{Im}(A)\subseteq\mathrm{Im}(\mathrm{P}_{n})\), and \(A\mathrm{P}_{n}=\mathrm{P}_{n}A\). This implies \(A\mathrm{P}_{n}=\mathrm{P}_{n}A=A\). Thus the case \(G=\mathrm{O}(N)\) is proven. The case \(G=\mathrm{Sp}(N/2)\) follows similarly (without the extra considerations involving \(\mathrm{P}_{n}\)). #### 6.1.4 Technical proofs In this section, we prove Lemmas 6.52 and 6.54. The main difficulty is in proving the estimates that involve the projection \(\mathrm{P}_{n}\), because as mentioned in Remarks 6.53 and 6.55, the addition of the \(\mathrm{P}_{n}\) term leads to better estimates. We proceed to introduce the additional representation theory elements that are needed to see why these improved estimates hold. We note that everything we introduce is classical. We first describe a spanning set for the space of \(\mathrm{O}(N)\)-invariant vectors. From classical representation theory (see e.g. [1, Section 3]), when \(n\) is even, the space of \(\mathrm{O}(N)\)-invariants \(\{v\in(\mathbb{C}^{N})^{\otimes n}:O^{\otimes n}v=v\}\) is spanned by a family of vectors \(\{u_{\pi},\pi:[n]\to[n]\}\) which are indexed by matchings \(\pi\). The vector \(u_{\pi}\in(\mathbb{C}^{N})^{\otimes n}\) is given by (with implicit summation over repeated indices) \[u_{\pi}:=\prod_{\{a,b\}\in\pi}\delta^{i_{a}i_{b}}e_{i_{1}}\otimes\cdots\otimes e _{i_{n}}.\] _Remark 6.58_.: Dahqivst [1] uses Brownian motion to prove this fact that \(\{u_{\pi},\pi:[n]\to[n]\}\) is a spanning set for the space of \(\mathrm{O}(N)\)-invariants (i.e. the First Fundamental Theorem of invariant theory). Thus one may wonder if we are cheating a bit in using this explicit knowledge of \(\mathrm{O}(N)\)-invariants in order to Proposition 6.51. We don't think our argument is circular, because our focus is not to re-prove representation theory results using Brownian motion, but rather to show that our particular strand-by-strand exploration process indeed suffices to recover the Weingarten calculus. Moreover, we find our strand-by-strand exploration intrinsically interesting, for the reasons given in Remark 4.13. Observe that for any \(\pi\), there exists a permutation \(\sigma\in\mathrm{S}_{n}\) such that \(\rho(\sigma)u_{\pi_{0}}=u_{\pi}\). 
Indeed, recall that we previously fixed \(\sigma_{\pi}\) such that \(\sigma_{\pi}[\pi\ \pi]\sigma_{\pi}^{-1}=[\pi_{0}\ \pi_{0}]\), and that visually, this had the interpretation that \([\pi\ \pi]\) may be taken to \([\pi_{0}\ \pi_{0}]\) by permuting the left labels according to \(\sigma_{\pi}\) and the right labels by \(\sigma_{\pi}^{-1}\) - recall Figure 49. From this, we can obtain that \[\rho(\sigma_{\pi})u_{\pi}=u_{\pi_{0}},\ \text{or}\ \rho(\sigma_{\pi}^{-1})u_{\pi_{0}}=u_ {\pi}. \tag{6.5}\] For matchings \(\pi,\pi^{\prime}:[n]\to[n]\), the matrix elements of \(\rho_{+}([\pi\ \pi^{\prime}])\) are given by \[\rho_{+}([\pi\ \pi^{\prime}])_{\mathbf{ij}}=\prod_{\{a,b\}\in\pi}\delta_{ia_{ i}b}\prod_{\{a,b\}\in\pi^{\prime}}\delta_{ja_{j}b},\ \ \mathbf{i}=(i_{k},k\in[n]),\mathbf{j}=(j_{k},k\in[n])\in[N]^{n}.\] The right hand side above is precisely \(\langle u_{\pi},e_{\mathbf{i}}\rangle\langle u_{\pi^{\prime}},e_{\mathbf{j}}\rangle\). In other words, we have that \(\rho_{+}([\pi\ \pi^{\prime}])\) is the rank-one matrix given by \[\rho_{+}([\pi\ \pi^{\prime}])=u_{\pi}u_{\pi^{\prime}}^{T}\] Proof of Lemma 6.57.: The preceding discussion shows \(\operatorname{Im}(\rho_{+}([\pi\ \pi^{\prime}]))\subseteq\operatorname{span}(u_{\pi}) \subseteq\operatorname{Im}(\operatorname{P}_{n})\). **Definition 6.59**.: Let \(\mathcal{H}_{n}\) be the subgroup of \(\operatorname{S}_{n}\) such that \(\sigma[\pi_{0}\ \pi_{0}]\sigma^{-1}=[\pi_{0}\ \pi_{0}]\). In words, \(\mathcal{H}_{n}\) is the subgroup of \(\operatorname{S}_{n}\) which leaves \(\pi_{0}\) fixed upon permutation of the vertices. Let \(P_{\mathcal{H}_{n}}:=\frac{1}{|\mathcal{H}_{n}|}\sum_{h\in\mathcal{H}_{n}}h \in\mathbb{C}[\operatorname{S}_{n}]\). Next, we recall the following classic results from the representation theory of the symmetric group. We closely follow the discussion from [2, Section 1.3]. There is a family of group algebra elements \(e_{T}\) indexed by standard Young tableau \(T\) with \(n\) boxes such that \[e_{T}e_{T^{\prime}}=\delta_{TT^{\prime}}e_{T},\ \ \sum_{T:|T|=n}e_{T}=1.\] The \(e_{T}\) are known as Young's orthogonal idempotents. These elements have the additional property that they diagonalize the Jucys-Murphy elements. That is, \[J_{k}e_{T}=e_{T}J_{k}=c(T,k)e_{T},\ \ k\in[n],\] where \(c(T,k)\) is the content of box \(k\) in \(T\), i.e. \(c(T,k)=j-i\) if box \(k\) has coordinates \((i,j)\) in \(T\). For a Young diagram \(\lambda\), let \(\operatorname{SYT}(\lambda)\) be the set of all standard Young tableau with shape \(\lambda\). Define \[P_{\lambda}:=\sum_{T\in\operatorname{SYT}(\lambda)}e_{T}\in\mathbb{C}[ \operatorname{S}_{n}].\] From the given properties of \(e_{T}\), \(P_{\lambda}\) acts on \(\mathbb{C}[\operatorname{S}_{n}]\) as the projection onto the subspace \(V_{\lambda}\) corresponding to the irrep \(\lambda\). An explicit formula for this projection is given by \[P_{\lambda}=\frac{\chi_{\lambda}(\operatorname{id})}{n!}\sum_{\sigma\in \operatorname{S}_{n}}\chi_{\lambda}(\sigma)\sigma. \tag{6.6}\] Since \(\chi_{\lambda}\) is constant on conjugacy classes, \(P_{\lambda}\) is central, i.e. it commutes with all elements of \(\mathbb{C}[\operatorname{S}_{n}]\). We note that for any Young diagram \(\lambda\vdash n\), the matrix \(\rho(P_{\lambda})\in\operatorname{End}((\mathbb{C}^{N})^{\otimes n})\) is the orthogonal projection onto its image. Similarly, for any Young tableau with \(n\) boxes, \(\rho(e_{T})\) is the orthogonal projection onto its image. 
Moreover, the subspaces \(\operatorname{Im}(\rho(P_{\lambda}))\) and \(\operatorname{Im}(\rho(P_{\lambda^{\prime}}))\) are orthogonal for \(\lambda\neq\lambda^{\prime}\). Similarly, the subspaces \(\operatorname{Im}(\rho(e_{T})),\operatorname{Im}(\rho(e_{T^{\prime}}))\) are orthogonal for \(T\neq T^{\prime}\). **Notation 6.60**.: Given \(\lambda\), let \(2\lambda\) be the Young tableau obtained by "doubling", i.e. by multiplying each part in the partition by \(2\). The following lemma is the key observation which leads to improved estimates for \(e^{-u\rho(J_{n})}\mathrm{P}_{n}\). **Lemma 6.61** (Proposition 4 of [11]).: In order for \(e_{T}P_{\mathcal{H}_{n}}\neq 0\), \(T\) must have shape \(2\lambda\) for some \(\lambda\vdash\frac{n}{2}\). **Lemma 6.62**.: For all \(\mathrm{O}(N)\)-invariant vectors \(v\in(\mathbb{C}^{N})^{\otimes n}\), we have that \[v=\sum_{\lambda\vdash\frac{n}{2}}\rho(P_{2\lambda})v.\] Proof.: In general, we may write (recall (6.5)) \[v=\sum_{\pi}\alpha_{\pi}u_{\pi}=\sum_{\pi}\alpha_{\pi}\rho(\sigma_{\pi}^{-1})u _{\pi_{0}}=\rho\bigg{(}\sum_{\pi}\alpha_{\pi}\sigma_{\pi}^{-1}\bigg{)}u_{\pi_{ 0}}.\] For brevity, let \(X:=\sum_{\pi}\alpha_{\pi}\sigma_{\pi}^{-1}\in\mathbb{C}[\mathrm{S}_{n}]\). Now, since \(\mathcal{H}_{n}\) stabilizes \(u_{\pi_{0}}\), we have that \[\rho(X)u_{\pi_{0}}=\rho(X)\rho(P_{\mathcal{H}_{n}})u_{\pi_{0}} =\rho(X)\sum_{T}\rho(e_{T}P_{\mathcal{H}_{n}})u_{\pi_{0}}=\rho(X) \sum_{\lambda\vdash\frac{n}{2}}\sum_{\lambda\in\mathrm{SYT}(2\lambda)}\rho(e_{ T}P_{\mathcal{H}_{n}})u_{\pi_{0}}\] \[=\rho(X)\rho\bigg{(}\sum_{\lambda\vdash\frac{n}{2}}\sum_{T\in \mathrm{SYT}(2\lambda)}e_{T}\bigg{)}\rho(P_{\mathcal{H}_{n}})u_{\pi_{0}}\] \[=\rho(X)\rho\bigg{(}\sum_{\lambda\vdash\frac{n}{2}}P_{2\lambda} \bigg{)}u_{\pi_{0}}=\rho\bigg{(}\sum_{\lambda\vdash\frac{n}{2}}P_{2\lambda} \bigg{)}\rho(X)u_{\pi_{0}}.\] In the second identity, we used Lemma 6.61, and in the last identity, we used that \(P_{2\lambda}\) is central. Proof of Lemma 6.52.: The first estimate follows immediately from the fact that all eigenvalues of \(\rho(J_{n})\) are at least \(-N+1\) (by Lemma 2.28). We proceed to prove the second estimate. By Lemma 6.62, we have that \[e^{-u\rho(J_{n})}\mathrm{P}_{n}=e^{-u\rho(J_{n})}\sum_{\lambda\vdash\frac{n} {2}}\rho(P_{2\lambda})\mathrm{P}_{n}.\] It suffices to show that \[\left\|e^{-u\rho(J_{n})}\sum_{\lambda\vdash\frac{n}{2}}\rho(P_{2\lambda}) \right\|_{op}\leqslant e^{(N-2)u}.\] Since \((\rho(P_{2\lambda}),\lambda\vdash\frac{n}{2})\) is a family of projections onto orthogonal subspaces, it suffices to show that for each \(\lambda\vdash\frac{n}{2}\), we have that \[\left\|e^{-u\rho(J_{n})}\rho(P_{2\lambda})\right\|_{op}\leqslant e^{(N-2)u}.\] To see this, first note that in order for \(\rho(P_{2\lambda})\neq 0\), \(2\lambda\) must have at most \(N\) rows. Thus, we will assume that this is the case. Recalling that \(P_{2\lambda}=\sum_{T\in\operatorname{SYT}(2\lambda)}e_{T}\), and \((\rho(e_{T}),T\in\operatorname{SYT}(2\lambda))\) is a family of projections onto orthogonal subspaces, it suffices to show that for each \(T\in\operatorname{SYT}(2\lambda)\), we have that \[\left\|e^{-u\rho(J_{n})}\rho(e_{T})\right\|_{op}\leqslant e^{(N-2)u}.\] Since \(J_{n}e_{T}=c(T,n)e_{T}\), we have that \[e^{-u\rho(J_{n})}\rho(e_{T})=e^{-uc(T,n)}\rho(e_{T}),\] and so it is enough to argue that \(c(T,n)\geqslant-(N-2)\). This follows because \(T\) has shape \(2\lambda\), and \(2\lambda\) has at most \(N\) rows, which implies that the location of \(n\) in \(T\) cannot be \((N,1)\). 
Any other location in \(T\) must have content at least \(-(N-2)\). Proof of Lemma 6.56.: Since \(\rho_{+}(\langle n\;n-1\rangle)\) acts as the identity on the last \(n-2\) tensor coordinates, it is enough to assume \(n=2\) and prove \(\rho_{+}(\langle 2\;1\rangle)O^{\otimes 2}=\rho_{+}(\langle 2\;1\rangle)\). Since \(O^{\otimes 2}\) commutes with \(\rho_{+}(\langle 2\;1\rangle)\), we have that \(\rho_{+}(\langle 2\;1\rangle)O^{\otimes 2}=O^{\otimes 2}\rho_{+}( \langle 2\;1\rangle)\). Now observe that when \(n=2\), we have that \(\langle 2\;1\rangle=[\pi_{0}\;\pi_{0}]\). Since \(\rho_{+}([\pi_{0}\;\pi_{0}])\) maps into the subspace of \(\operatorname{O}(N)\)-invariants (by Lemma 6.57), it follows that \(O^{\otimes 2}\rho_{+}([\pi_{0}\;\pi_{0}])=\rho_{+}([\pi_{0}\;\pi_{0}])\). **Definition 6.63**.: Following the notation of [16], let \(\varepsilon_{N}\in\mathbb{C}[\operatorname{S}_{N}]\) be given by \[\varepsilon_{N}:=\frac{1}{N!}\sum_{\sigma\in\operatorname{S}_{N}}\operatorname {sgn}(\sigma)\sigma.\] _Remark 6.64_.: Observe that \(\varepsilon_{N}\) is precisely \(P_{\lambda_{\min}}\), where \(\lambda_{\min}=(1,\ldots,1)\) is the Young tableau corresponding to the sign representation of \(\operatorname{S}_{N}\). **Lemma 6.65**.: Suppose \(N\geqslant 3\) is odd. We have that \((I\otimes\rho(\varepsilon_{N}))\mathrm{P}_{N+1}=0\). Also, for any \(1\leqslant i<j\leqslant N\), \(\varepsilon_{N}\langle i\;j\rangle=0\in\mathcal{B}_{N}\). Here, to be clear \(\rho:\mathcal{B}_{N}\to\operatorname{End}((\mathbb{C}^{N})^{\otimes N})\). Proof.: It suffices to show that for any matching \(\pi:[N+1]\to[N+1]\), the corresponding invariant vector \(u_{\pi}\) is annihilated by \(I\otimes\rho(\varepsilon_{N})\), i.e. \((I\otimes\rho(\varepsilon_{N}))u_{\pi}=0\). To see this, note that for any \(\pi\), there is some pair of vertices \(\{i,j\}\) matched by \(\pi\), with both \(i,j\leqslant N\) (here we use the assumption that \(N\geqslant 3\)). Since these vertices are matched, swapping them does not change the matching, and so we have that \((I\otimes\rho((i\;j)))u_{\pi}=u_{\pi}\). On the other hand, we have that \(\varepsilon_{N}(i\;j)=-\varepsilon_{N}\), and thus \((I\otimes\rho(\varepsilon_{N}))\rho((i\;j))=I\otimes\rho(\varepsilon_{N}(i\;j ))=-I\otimes\rho(\varepsilon_{N})\). We thus have \[(I\otimes\rho(\varepsilon_{N}))u_{\pi}=(I\otimes\rho(\varepsilon_{N}))\rho((i \;j))u_{\pi}=-(I\otimes\rho(\varepsilon_{N}))u_{\pi},\] and thus \((I\otimes\rho(\varepsilon_{N}))u_{\pi}=0\). The second claim follows by the a similar argument, i.e. we start from the observation \((i\;j)\langle i\;j\rangle=\langle i\;j\rangle\). **Lemma 6.66**.: We have that \(\frac{1}{N}\rho(J_{N}+\cdots+J_{1})\in\operatorname{End}((\mathbb{C}^{N})^{ \otimes N})\) has eigenvalue \(-\frac{N}{2}+\frac{1}{2}\) with eigenspace \(\operatorname{Im}(\rho(\varepsilon_{N}))\). All other eigenvalues are at least \(-\frac{N}{2}+\frac{3}{2}\). Proof.: From the discussion in Lemma 2.30, recall that for each \(\lambda\vdash N\), \(\rho(P_{\lambda})\) projects onto an eigenspace of \(\rho(J_{N}+\cdots+J_{1})\) with eigenvalue given by the content sum \(c_{\lambda}\). The minimal content sum in this case is achieved when \(\lambda=\lambda_{\min}=(1,\ldots,1)\), i.e. the Young diagram with \(N\) parts of size \(1\), or equivalently a single column of height \(N\). The content sum in this case is \[-(N-1)-(N-2)-\cdots-1=-\frac{N(N-1)}{2}.\] Thus the minimal eigenvalue of \(\frac{1}{N}\rho(J_{N}+\cdots+J_{1})\) is \(-\frac{N-1}{2}=-\frac{N}{2}+\frac{1}{2}\). 
The associated eigenspace is \(\operatorname{Im}(\rho(P_{\lambda_{\min}}))\). The first claim now follows upon recalling that \(P_{\lambda_{\min}}=\varepsilon_{N}\). The next smallest eigenvalue is given by moving the box \((N,1)\) to \((1,2)\), i.e. by the Young diagram \(\lambda=(2,1,\ldots,1)\). The content sum in this case is \[-(N-2)-\cdots-1+1=-\frac{N(N-1)}{2}+N.\] The second claim now follows. **Definition 6.67**.: Define \[\Delta_{\varepsilon}^{n}(n-1):=-\frac{(n-1)}{2}\bigg{(}1-\frac{ \varepsilon}{N}\bigg{)}I^{\otimes n}-\frac{1}{N}I\otimes\rho_{N,n-1}(J_{n-1}+ \cdots+J_{1})\in\operatorname{End}((\mathbb{C}^{N})^{\otimes n}).\] Here, we write the subscripts \(\rho_{N,n-1}\) to be clear that \(\rho_{N,n-1}:\operatorname{S}_{n-1}\to\operatorname{End}((\mathbb{C}^{N})^{ \otimes(n-1)})\). **Lemma 6.68**.: For any \(n\geq 2\) such that \(n-1\neq N\), we have that \[\|e^{u\Delta_{\varepsilon}^{n}(n-1)}\|_{op}\leq e^{-\frac{1}{2}(1 -\frac{\varepsilon}{N})u},\ \ u\geq 0.\] If \(n-1=N\), we have that \[\|e^{u\Delta_{1}^{n}(n-1)}\mathrm{P}_{n}\|_{op}\leq e^{-u},\ \|e^{u \Delta_{1}^{n}(n-1)}\rho_{+}(\langle 2\ 1\rangle)\|_{op}\leq Ne^{-u},\ \ \|e^{u\Delta_{-1}^{n}(n-1)}\|_{op}\leq e^{-u},\ \ u\geq 0.\] Proof.: For brevity, write \(\rho\) instead of \(\rho_{N,n-1}\). For the first estimate, note that the case \(\varepsilon=-1\) readily follows from the case \(\varepsilon=1\) because \[\Delta_{-1}^{n}(n-1)=\Delta_{1}^{n}(n-1)-\frac{n-1}{N}.\] Thus, we focus on the case \(\varepsilon=1\). Define \[\Delta_{1}^{n-1}(n-1):=-\frac{(n-1)}{2}\bigg{(}1-\frac{1}{N} \bigg{)}I^{\otimes(n-1)}-\frac{1}{N}\rho(J_{n-1}+\cdots+J_{1})\in\operatorname {End}((\mathbb{C}^{N})^{\otimes(n-1)}). \tag{6.7}\] Then \(\Delta_{1}^{n}(n-1)=I\otimes\Delta_{1}^{n-1}(n-1)\). Thus, it suffices to just look at \(\Delta_{1}^{n-1}(n-1)\). By Lemma 2.30, the eigenvalues of \(\rho(J_{n-1}+\cdots+J_{1})\) are lower-bounded by \[-\frac{1}{2}(n-1)+\frac{1}{2}m^{2}+\frac{1}{2}r-\frac{1}{2}\frac {r(r-1)}{N}+\frac{mr}{N}.\] From this, it follows that all eigenvalues of \(\Delta_{1}^{n-1}(n-1)\) are at most \[-\frac{n-1}{2}\bigg{(}1-\frac{1}{N}\bigg{)}+\frac{1}{2}(n-1)-\frac{1}{2}m^{2}- \frac{1}{2}r+\frac{1}{2}\frac{r(r-1)}{N}-\frac{mr}{N}.\] Using that \(n-1=mN+r\), this may be simplified to \[\frac{1}{2}m(1-m)-\frac{1}{2}r\bigg{(}1-\frac{r}{N}\bigg{)}-\frac{mr}{N}.\] If \((m,r)\neq(1,0)\), then the above is easily seen to be at most \(-\frac{1}{2}(1-\frac{1}{N})\), which implies \[\|e^{u\Delta_{1}^{n}(n-1)}\|_{op}=\|e^{u(I\otimes\Delta_{1}^{n-1}(n-1))}\|_{op }=\|e^{u\Delta_{1}^{n-1}(n-1)}\|_{op}\leqslant e^{-\frac{1}{2}(1-\frac{1}{N})u}.\] If \((m,r)=(1,0)\), then \(n-1=N\). We may split \[e^{T\Delta_{1}^{N}(N)}=e^{T\Delta_{1}^{N}(N)}(1-\rho(\varepsilon_{N}))+\rho( \varepsilon_{N}).\] By Lemma 6.66, we have that on \(\operatorname{Im}(1-\rho(\varepsilon_{N}))\), all eigenvalues of \(\Delta_{1}^{N}(N)\) are at most \(-1\), and thus \[\big{\|}e^{T\Delta_{1}^{N}(N)}(1-\rho(\varepsilon_{N}))\big{\|}_{op}\leqslant e ^{-T}.\] By Lemma 6.65, we have that for \(M=\operatorname{P}_{N+1}\) or \(\rho_{+}(\langle i\ j\rangle)\), \((I\otimes\rho(\varepsilon_{N}))M=0\). Combining these two, it follows that \[\big{\|}e^{T\Delta_{1}^{n}(n-1)}M\big{\|}_{op}\leqslant e^{-T}\|M\|_{op}.\] We have that \(\|\operatorname{P}_{N+1}\|_{op}\leqslant 1\) since \(\operatorname{P}_{N+1}\) is an orthogonal projection. 
Since \(\langle i\ j\rangle^{2}=N\langle i\ j\rangle\), we obtain \(\rho_{+}(\langle i\ j\rangle)^{2}=N\rho_{+}(\langle i\ j\rangle)\), which implies \(\|\rho_{+}(\langle i\ j\rangle)\|_{op}=N\). Finally, when \(n-1=N\), we have by (6.7) that \(\Delta_{-1}^{n}(n-1)=\Delta_{1}^{n}(n-1)-1\). By the preceding discussion, all eigenvalues of \(\Delta_{1}^{n}(n-1)\) are at most \(0\), and thus by equation (6.7) all eigenvalues of \(\Delta_{-1}^{n}(n-1)\) are at most \(-1\), and thus the estimate \(\|e^{u\Delta_{-1}^{n}(n-1)}\|_{op}\leqslant e^{-u}\) immediately follows. Proof of Lemma 6.54.: First, consider the case \(G=\operatorname{O}(N)\). We proceed by induction. First, in the base case \(n=2\), we have that \(\mathbb{E}[B_{u}]=e^{-\frac{1}{2}(1-\frac{1}{N})u}\), and so \[I\otimes\mathbb{E}[B_{u}]=e^{-\frac{1}{2}(1-\frac{1}{N})u}I^{\otimes 2}.\] The desired estimate in this case immediately follows. Now, suppose that the result is true for some even \(n\geqslant 2\). Consider the case \(n+2\). We have that \[I\otimes\mathbb{E}[B_{u}^{\otimes(n+1)}]=e^{2^{\binom{n+2}{2}}T-\frac{n+2}{2}( 1-\frac{1}{N})T}\rho_{+}\big{(}\mathbb{E}[F(\Sigma_{\operatorname{OS}}(T)) \mathbb{1}_{E_{1}}]\big{)},\] where \(E_{1}\) is the event that there are no points touching the top strand. Let \(E_{2}\) be the event that there is some turnaround in \(\Sigma_{\operatorname{OS}}(T)\). Then on the complement of \(E_{2}\), there are only swaps, and we may compute \[e^{2^{\binom{n+2}{2}}T-\frac{n+2}{2}(1-\frac{1}{N})T}\rho_{+}\big{(}\mathbb{E }[F(\Sigma_{\operatorname{OS}}(T))\mathbb{1}_{E_{1}}\mathbb{1}_{E_{2}^{c}}] \big{)}=e^{T\Delta_{1}^{n+2}(n+1)}.\] By Lemma 6.68, we have that \[\left\|e^{T\Delta_{1}^{n+2}(n+1)}\mathrm{P}_{n+2}\right\|_{op}\leqslant e^{-\frac {1}{2}(1-\frac{1}{N})T}.\] Combining, we thus obtain \[\left\|e^{2^{\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}}\rho_{+}\big{(} \mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}_{E_{1}}\mathbb{1}_{E_{2}^{c}} \big{]}\big{)}\mathrm{P}_{n+2}\right\|_{op}\leqslant e^{-\frac{1}{2}(1-\frac{1 }{N})T}.\] To finish, it suffices to show a similar estimate with \(\mathbb{1}_{E_{2}^{c}}\) replaced by \(\mathbb{1}_{E_{2}}\). Let \(E_{2}^{0}\subseteq E_{2}\) be the event that the first turnaround in \(\Sigma_{\mathrm{OS}}(T)\) is \(\langle 2\ 1\rangle\). We will show the estimate with \(\mathbb{1}_{E_{2}}\) replaced by \(\mathbb{1}_{E_{2}^{0}}\). The general estimate will follow by the same argument, just with more notation. On the event \(E_{2}^{0}\), we may condition on the time of the first turnaround to obtain \[e^{2^{\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}}\rho_{+}\big{(} \mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}_{E_{1}}\mathbb{1}_{E_{2}^{0}} \big{]}\big{)}=\] \[\int_{0}^{T}du\ e^{u\Delta_{1}^{n+2}(n+1)}\rho_{+}(\langle 2\ 1 \rangle)\big{(}I\otimes\mathbb{E}[B_{T-u}^{\otimes(n-1)}]\otimes I^{\otimes 2 }\big{)}.\] Here, the \(e^{u\Delta_{1}^{n+2}(n+1)}\) term arises because given that the first turnaround happens at time \(u\), we average over the contribution from all swaps which happen before \(u\). The term \(I\otimes\mathbb{E}[B_{T-u}^{\otimes(n-1)}]\otimes I^{\otimes 2}\) arises because once we see the turnaround \(\langle 2\ 1\rangle\) at time \(u\), we can ignore those strands after time \(u\), and only look at the top \(n-2\) strands on the interval \([u,T]\). 
Now, observe that (by a variant of Lemma 6.56) \[\rho_{+}(\langle 2\ 1\rangle)\mathrm{P}_{n+2}=\mathrm{P}_{n}\otimes I^{\otimes 2}.\] From this, we obtain \[e^{2^{\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}}\rho_{+} \big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}_{E_{1}} \mathbb{1}_{E_{2}^{0}}\big{]}\big{)}\mathrm{P}_{n+2}\] \[=\int_{0}^{T}du\ e^{u\Delta_{1}^{n+2}(n+1)}\rho_{+}(\langle 2\ 1 \rangle)I\otimes\mathbb{E}[B_{T-u}^{\otimes(n-1)}]\otimes I^{\otimes 2} \mathrm{P}_{n+2}\] \[=\int_{0}^{T}due^{u\Delta_{1}^{n+2}(n+1)}\rho_{+}(\langle 2\ 1 \rangle)\Big{(}\big{(}(I\otimes\mathbb{E}[B_{T-u}^{\otimes(n-1)}])\mathrm{P}_{n }\big{)}\otimes I^{\otimes 2}\Big{)}.\] By our inductive assumption, we have that \[\left\|\big{(}(I\otimes\mathbb{E}[B_{T-u}^{\otimes(n-1)}])\mathrm{P}_{n}\big{|} _{op}\right\|_{op}\lesssim(T-u)^{\frac{n}{2}-1}e^{-\frac{1}{2}(1-\frac{1}{N})( T-u)}.\] By Lemma 6.68, we have that \[\left\|e^{u\Delta_{1}^{n+2}(n+1)}\rho_{+}(\langle 2\ 1\rangle)\right\|_{op} \lesssim e^{-\frac{1}{2}(1-\frac{1}{N})u}.\] Putting our two estimates, together, we obtain \[\left\|e^{2^{\binom{n+2}{2}T-\frac{n+2}{2}(1-\frac{1}{N})T}}\rho_ {+}\big{(}\mathbb{E}[F(\Sigma_{\mathrm{OS}}(T))\mathbb{1}_{E_{1}}\mathbb{1}_{ E_{2}^{0}}\big{]}\big{)}\right\|_{op} \lesssim\int_{0}^{T}du\ (T-u)^{\frac{n}{2}-1}e^{-\frac{1}{2}(1-\frac{1}{N})T}\] \[\lesssim T^{\frac{n+2}{2}-1}e^{-\frac{1}{2}(1-\frac{1}{N})T},\] which proves the inductive step. Thus the case \(G=\operatorname{O}(N)\) is proven. The case \(G=\operatorname{Sp}(N/2)\) is similar (and indeed, simpler). We sketch the changes. In the first part of the inductive step, we may compute \[e^{2\binom{n+2}{2}T-\frac{n+2}{2}(1+\frac{1}{N})T}\rho_{-}\big{(} \mathbb{E}[F_{-}(\Sigma_{\operatorname{OS}}(T))\mathbb{1}_{E_{1}}\mathbb{1}_{ E_{2}^{c}}]\big{)} =e^{-\frac{n+2}{2}(1+\frac{1}{N})T}e^{\frac{T}{N}\rho_{-}(J_{n+2} +\cdots+J_{1})}\] \[=e^{-\frac{n+2}{2}(1+\frac{1}{N})T}e^{-\frac{T}{N}\rho(J_{n+2}+ \cdots+J_{1})}=e^{T\Delta_{-1}^{n+2}(n+1)},\] where we used that (by definition) \(\rho_{-}((i\ j))=-\rho((i\ j))\) for transpositions \((i\ j)\). By Lemma 6.68, we have that \(\|e^{T\Delta_{-1}^{n+2}(n+1)}\|_{op}\leqslant e^{-\frac{1}{2}(1+\frac{1}{N})T}\). The contribution from the case \(\mathbb{1}_{E_{1}}\mathbb{1}_{E_{2}}\) may be handled similar to before. We omit the details. #### 6.1.5 Makeenko-Migdal/Master loop/Schwinger-Dyson equation We next discuss the Makeenko-Migdal/Master loop/Schwinger-Dyson equation for \(G=\operatorname{O}(N),\operatorname{Sp}(N/2)\). First, we introduce additional string operations which appear for these groups. **Definition 6.69** (Mergers, Twistings).: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Let \((i,j)\) be a location of \(\boldsymbol{\Gamma}\). Define the set of positive and negative mergers \(\mathbb{M}_{+}((i,j),\boldsymbol{\Gamma})\) and \(\mathbb{M}_{-}((i,j),\boldsymbol{\Gamma})\), as well as the set of positive and negative twistings \(\mathbb{T}_{+}((i,j),\Gamma)\) and \(\mathbb{T}_{-}((i,j),\Gamma)\), as follows. Throughout, denote the letter at location \((i,j)\) by \(\lambda\), and suppose \(\Gamma_{i}=A\lambda B\). The set of positive mergers \(\mathbb{M}_{+}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by merging \(\Gamma_{i}\) with some \(\Gamma_{\ell}\), \(\ell\neq i\), in one of two ways. 
The first way: let \((\ell,m)\) be a location which also has letter \(\lambda\). Suppose \(\Gamma_{\ell}=C\lambda D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by \(A\lambda DC\lambda B\). The second way: let \((\ell,m)\) be a location which has \(\lambda^{-1}\). Suppose \(\Gamma_{\ell}=C\lambda^{-1}D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by \(A\lambda C^{-1}D^{-1}\lambda B\). The set of negative mergers \(\mathbb{M}_{-}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by merging \(\Gamma_{i}\) with some \(\Gamma_{\ell}\), \(\ell\neq i\), in one of two ways. The first way: let \((\ell,m)\) be a location which also has letter \(\lambda\). Suppose \(\Gamma_{\ell}=C\lambda D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by \(AC^{-1}D^{-1}B\). The second way: let \((\ell,m)\) be a location which has \(\lambda^{-1}\). Suppose \(\Gamma_{\ell}=C\lambda^{-1}D\). Then \(\Gamma_{i},\Gamma_{\ell}\) are replaced by \(ADCB\). The set of positive twistings \(\mathbb{T}_{+}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by replacing \(\Gamma_{i}\) with another word as follows. If \(\lambda^{-1}\) does not appear in \(\Gamma_{i}\), the set \(\mathbb{T}_{+}((i,j),\boldsymbol{\Gamma})\) is empty. Thus, suppose \(\lambda^{-1}\) also appears in \(\Gamma_{i}\). Let \((i,k)\) be a location which has \(\lambda^{-1}\). If \(k>j\) then, recalling that \(\Gamma_{i}=A\lambda B\), we may write \(B=C\lambda^{-1}D\). We then replace \(\Gamma_{i}=A\lambda C\lambda^{-1}D\) by \(A\lambda C^{-1}\lambda^{-1}D\). If \(k<j\) then we may write \(A=E\lambda^{-1}F\). We then replace \(\Gamma_{i}=E\lambda^{-1}F\lambda B\) by \(E\lambda^{-1}F^{-1}\lambda B\). The set of negative twistings \(\mathbb{T}_{-}((i,j),\boldsymbol{\Gamma})\) is the set of collections of words \(\boldsymbol{\Gamma}^{\prime}\) obtained by replacing \(\Gamma_{i}\) with another word as follows. If \(\lambda\) appears only once in \(\Gamma_{i}\), the set \(\mathbb{T}_{-}((i,j),\boldsymbol{\Gamma})\) is empty. Thus, suppose \(\lambda\) appears at least twice in \(\Gamma_{i}\). Let \((i,k)\) be another location which has \(\lambda\). If \(k>j\) then, recalling that \(\Gamma_{i}=A\lambda B\), we may write \(B=C\lambda D\). We then replace \(\Gamma_{i}=A\lambda C\lambda D\) by \(AC^{-1}D\). If \(k<j\) then we may write \(A=E\lambda F\). We then replace \(\Gamma_{i}=E\lambda F\lambda B\) by \(EF^{-1}B\).

_Remark 6.70_.: From the perspective of our Poisson point process on strand diagrams, the reason why the \(\mathrm{O}(N)\) and \(\mathrm{Sp}(N/2)\) cases result in additional loop operations is because there may now be turnarounds between two same-direction strands, and swaps between two opposite-direction strands. (Recall that in the Unitary case, same-direction strands only had swaps and opposite-direction strands only had turnarounds.)

**Proposition 6.71** (Single-location \(\mathrm{O}(N),\mathrm{Sp}(N/2)\) word recursion).: Let \(G=\mathrm{O}(N),\mathrm{Sp}(N/2)\). Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\).
For any location \((i,j)\) of \(\Gamma\), we have that

\[\varepsilon\bigg{(}1-\frac{\varepsilon}{N}\bigg{)}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}))]=-\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))]+\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{S}_{-}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))]-\frac{1}{N^{2}}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{+}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))]+\frac{1}{N^{2}}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{M}_{-}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))]-\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{T}_{+}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))]+\frac{1}{N}\sum_{\boldsymbol{\Gamma}^{\prime}\in\mathbb{T}_{-}((i,j),\boldsymbol{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\boldsymbol{\Gamma}^{\prime}))].\]

Proof (sketch).: The proof proceeds by stopping our strand-by-strand exploration process at the time of the first point, as in the proof of the \(\mathrm{U}(N)\) word recursion (Proposition 5.2). The main ideas are very similar but the details are a bit different, so we sketch out where the differences lie. When we run the strand-by-strand exploration until the first point, we see either a turnaround or a swap, which may connect same-direction or opposite-direction strands. Moreover, the two strands may be part of the same word or different words. We present the two tables in Figure 58 which indicate which of the loop operations each of these cases contributes to.

Figure 58: Left: the various cases when the first point is a swap. Right: the various cases when the first point is a turnaround.

The word recursion then immediately implies the Makeenko-Migdal/Master loop/Schwinger-Dyson equation. The proof is omitted, as it is very similar to the proof of the Unitary Makeenko-Migdal/Master loop/Schwinger-Dyson equation using the Unitary word recursion (see Section 5).

**Theorem 6.72** (Single-location \(\mathrm{O}(N)\) and \(\mathrm{Sp}(N/2)\) Makeenko-Migdal/Master loop/Schwinger-Dyson equation).: _Let \(s=(\ell_{1},\ldots,\ell_{n})\) be a string. Let \((k,i)\) be a location in \(s\). For \(G=\mathrm{O}(N),\mathrm{Sp}(N/2)\) lattice Yang-Mills theory, we have that_

\[\varepsilon\bigg{(}1-\frac{\varepsilon}{N}\bigg{)}\phi(s)=-\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}\phi(s^{\prime})+\sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}\phi(s^{\prime})-\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{+}((k,i),s)}\phi(s^{\prime})+\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{-}((k,i),s)}\phi(s^{\prime})-\frac{1}{N}\sum_{s^{\prime}\in\mathbb{T}_{+}((k,i),s)}\phi(s^{\prime})+\frac{1}{N}\sum_{s^{\prime}\in\mathbb{T}_{-}((k,i),s)}\phi(s^{\prime})-2\beta\sum_{s^{\prime}\in\mathbb{D}_{+}((k,i),s)}\phi(s^{\prime})+2\beta\sum_{s^{\prime}\in\mathbb{D}_{-}((k,i),s)}\phi(s^{\prime}).\]

_Remark 6.73_.: In the Unitary case, we had the factor \(\beta\) in front of the deformation terms, whereas in the Orthogonal and Symplectic cases, we have the factor \(2\beta\). This difference is ultimately due to the fact that there may be swaps and turnarounds between any two strands, no matter their directions.
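The merger and twisting operations of Definition 6.69 are purely combinatorial operations on words, and it can help to see them implemented. The following minimal sketch is our own illustration (the list-of-pairs encoding of words and the function names are ours, not from the source); it implements the two twisting operations in the sub-case where the second occurrence of the letter lies to the right of the chosen location, and returns one representative element of the corresponding set.

```python
# Words are lists of (letter, sign) pairs with sign in {+1, -1}; e.g. a^{-1} is ("a", -1).

def inverse(word):
    """Inverse of a word: reverse the order and flip every sign."""
    return [(letter, -sign) for (letter, sign) in reversed(word)]

def positive_twist(word, j):
    """Gamma_i = A lam C lam^{-1} D  ->  A lam C^{-1} lam^{-1} D, where lam sits at
    position j and lam^{-1} occurs at some later position k > j (first such k is used)."""
    lam, sign = word[j]
    for k in range(j + 1, len(word)):
        if word[k] == (lam, -sign):
            A, C, D = word[:j], word[j + 1:k], word[k + 1:]
            return A + [word[j]] + inverse(C) + [word[k]] + D
    return None  # no such occurrence: this sub-case contributes nothing

def negative_twist(word, j):
    """Gamma_i = A lam C lam D  ->  A C^{-1} D, where lam sits at position j and the
    same letter with the same sign occurs at some later position k > j."""
    lam, sign = word[j]
    for k in range(j + 1, len(word)):
        if word[k] == (lam, sign):
            A, C, D = word[:j], word[j + 1:k], word[k + 1:]
            return A + inverse(C) + D
    return None

w = [("a", 1), ("b", 1), ("a", -1), ("b", 1), ("a", 1)]  # the word a b a^{-1} b a
print(positive_twist(w, 0))  # a b^{-1} a^{-1} b a
print(negative_twist(w, 0))  # b^{-1} a b^{-1}
```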
### Special Unitary and Special Orthogonal The Weingarten calculus for \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) is far less developed than for \(\mathrm{U}(N),\mathrm{O}(N),\mathrm{Sp}(N/2)\). The only formula we have seen for the \(\mathrm{SU}(N)\) Weingarten function is in the physics literature [1, Equation (20)]. We have not seen a formula for the \(\mathrm{SO}(N)\) Weingarten function. Therefore, in this paper we will not do as much for \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) as we did for the previous three groups. In particular, we will not recover the Weingarten calculus via large-time limits of Brownian motion. Instead, we will focus on giving surface-sum representations of Wilson loop expectations and proving the Makeenko-Migdal/Master loop/Schwinger-Dyson equations. We first show that although we don't have explicit formulas for the \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) Weingarten functions, we can still relate (via soft arguments) \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) Haar expectations to some elements of the Brauer algebra. These "Weingarten elements" will then provide the weights that appear in our surface-sum representations. **Definition 6.74**.: Given a matching \(\pi\in\mathcal{M}(n)\), let \(\pi^{T}\) be the reflection of \(\pi\), or i.e. the matching obtained by swapping the left and right vertices. **Proposition 6.75**.: For \(n,m\geq 0\), there exist elements \(\mathrm{Wg}_{N}^{\mathrm{SU}}\in\mathcal{B}_{n,m},\mathrm{Wg}_{N}^{\mathrm{SO} }\in\mathcal{B}_{n}\) such that \[\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]= \ \rho_{+}(W_{N}^{\mathrm{SU}}),\ \ G=\mathrm{SU}(N),\] \[\mathbb{E}[G^{\otimes n}]= \ \rho_{+}(\mathrm{Wg}_{N}^{\mathrm{SO}}),\ \ G=\mathrm{SO}(N).\] Moreover, these elements are invariant under reflection: \[\mathrm{Wg}_{N}^{\bullet}(\pi)=\mathrm{Wg}_{N}^{\bullet}(\pi^{T}),\ \ \bullet\in\{\mathrm{SU}(N),\mathrm{SO}(N)\},\] as well as invariant under conjugation: \[\sigma\mathrm{Wg}_{N}^{\mathrm{SU}}\sigma^{-1} =\mathrm{Wg}_{N}^{\mathrm{SU}}\text{ for all }\sigma\in\mathrm{S}_{n}\times\mathrm{S}_{m}\subseteq\mathcal{B}_{n,m},\] \[\sigma\mathrm{Wg}_{N}^{\mathrm{SO}}\sigma^{-1} =\mathrm{Wg}_{N}^{\mathrm{SO}}\text{ for all }\sigma\in\mathrm{S}_{n}.\] There are various ways one can prove this proposition. The representation-theoretic way would be to note that in the \(\mathrm{SU}(N)\) case, \(\mathbb{E}[S^{\otimes n}\otimes\bar{S}^{\otimes m}]\) commutes with \(U^{\otimes n}\otimes\bar{U}^{\otimes m}\) for any \(U\in\mathrm{SU}(N)\), and then to use the fact that any such operator must be of the form \(\rho(W)\) for some \(W\in\mathcal{B}_{n,m}\). The fact that \(W\) may be assumed to be invariant under conjugation can be ensured by averaging over all possible conjugations, since this does not change \(\mathbb{E}[S^{\otimes n}\otimes\bar{S}^{\otimes m}]\). The \(\mathrm{SO}(N)\) case can be handled similarly. Another way to show the proposition is via \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) Brownian motion. We have already introduced how \(\mathrm{SO}(N)\) Brownian motion is related to the Brauer algebra (Proposition 6.37), and we will need to introduce \(\mathrm{SU}(N)\) Brownian motion in order to derive the Makeenko-Migdal/Master loop/Schwinger-Dyson equations for \(\mathrm{SU}(N)\). Thus we will supply a proof of Proposition 6.75 using Brownian motion. The first step is to introduce the analog of Proposition 6.37 for \(\mathrm{SU}(N)\), i.e. 
to state how expectations of \(\mathrm{SU}(N)\) Brownian motion are related to Brauer algebra elements. We begin with the necessary setup.

**Definition 6.76** (Poisson point process for \(\mathrm{SU}(N)\)).: Let \(n,m\geq 0\). Define a Poisson point process \(\Sigma_{\mathrm{SU}}\) as follows. We imagine we have \(n\) right-directed strands and \(m\) left-directed strands. Let \(\Sigma_{\mathrm{U}}\) be the Poisson point process on this strand diagram corresponding to the Unitary case, i.e. between any two same-direction strands, there is a rate-1 Poisson process giving same-direction swaps, and between any two opposite-direction strands, there is a rate-1 Poisson process giving turnarounds. We define \(\Sigma_{\mathrm{SU}}:=\Sigma_{\mathrm{U}}\sqcup\Sigma_{\mathrm{SU}}^{\prime}\), where \(\Sigma_{\mathrm{SU}}^{\prime}\) is an independent Poisson process which is made of independent rate-1 Poisson processes between any pair of strands, not necessarily distinct9. We differentiate between the various types of points by assigning the color green to same-direction swaps, blue to turnarounds, and purple to the points of \(\Sigma_{\mathrm{SU}}^{\prime}\). Footnote 9: Whereas the processes for same-direction swaps and turnarounds are always between distinct strands.

**Definition 6.77**.: Define \(F^{\mathrm{SU}}\), which maps point process realizations \(P\) to elements of the Brauer algebra \(\mathcal{B}_{n,m}\), as follows. We may split our point process realization \(P=P_{\mathrm{U}}\sqcup P^{\prime}\), where \(P_{\mathrm{U}}\) collects all same-direction swaps and turnarounds, and \(P^{\prime}\) collects all purple points. We define \(F^{\mathrm{SU}}(P):=F(P_{\mathrm{U}})F^{\prime}(P^{\prime})\), where \(F(P_{\mathrm{U}})\in\mathcal{B}_{n,m}\) is exactly as in the Unitary case, and \(F^{\prime}(P^{\prime})\in\mathbb{R}\) is a scalar defined as follows. Let \(K\) be the number of points in \(P^{\prime}\) between same-direction strands, and \(K^{\prime}\) be the number of points in \(P^{\prime}\) between opposite-direction strands. Then \[F^{\prime}(P^{\prime}):=\bigg{(}\frac{1}{N^{2}}\bigg{)}^{K}\bigg{(}-\frac{1}{N^{2}}\bigg{)}^{K^{\prime}}.\] One should think of this as saying that each point in \(P^{\prime}\) between same-direction strands incurs a factor of \(N^{-2}\), while each point in \(P^{\prime}\) between opposite-direction strands incurs a factor of \(-N^{-2}\). See Figure 59 for an example realization of \(\Sigma_{\mathrm{SU}}\).

We can now state how \(\mathrm{SU}(N)\) Brownian motion is related to the Brauer algebra \(\mathcal{B}_{n,m}\). See [13, Appendix A] for the proof.

**Proposition 6.78**.: We have that \[\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]=e^{(n+m)^{2}T-\frac{n+m}{2}(1+\frac{1}{N^{2}})T}\rho_{+}\big{(}\mathbb{E}[F^{\mathrm{SU}}(\Sigma_{\mathrm{SU}}(T))]\big{)},\ \ T\geq 0.\]

Proposition 6.78 immediately implies Proposition 6.75, as we next show.

Proof of Proposition 6.75.: First, consider the \(\mathrm{SU}(N)\) case. Note that \(\rho_{+}:\mathcal{B}_{n,m}\to\mathrm{End}((\mathbb{C}^{N})^{\otimes(n+m)})\) is a linear map into a finite-dimensional vector space. This implies that its image \(\rho_{+}(\mathcal{B}_{n,m})\) is a closed subspace of \(\mathrm{End}((\mathbb{C}^{N})^{\otimes(n+m)})\) (as every subspace of a finite-dimensional vector space is closed). By Proposition 6.78, we have that \(\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]\in\rho_{+}(\mathcal{B}_{n,m})\) for all \(T\geq 0\).
Thus by the preceding discussion, we also have that \(\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]=\lim_{T\to\infty}\mathbb{ E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]\in\rho_{+}(\mathcal{B}_{n,m})\). Therefore there exists \(W\in\mathcal{B}_{n,m}\) such that \(\mathbb{E}[S^{\otimes n}\otimes\bar{S}^{\otimes m}]=\rho_{+}(W)\). Using \(W\), we construct \(\mathrm{Wg}_{N}^{\mathrm{SU}}\) which possesses the claimed symmetries. Let \(W^{T}\in\mathcal{B}_{n,m}\) be defined by \(W^{T}(\pi):=W(\pi^{T})\). We have that \[\rho_{+}(W^{T}) =\big{(}\rho_{+}(W)\big{)}^{T}=\big{(}\rho_{+}(W)\big{)}^{*}= \big{(}\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]\big{)}^{*}\] \[=\mathbb{E}[(G^{*})^{\otimes n}\otimes\bar{G}^{\overbrace{\pi^{ \otimes m}}}]=\mathbb{E}[(G^{-1})^{\otimes m}\otimes\bar{G}^{-1}{}^{\otimes m}]\] \[=\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]=\rho_{+}(W),\] where the first identity follows by the definition of \(\rho_{+}\), the second follows because \(\rho_{+}\) has real-valued matrix entries, the fourth follows by linearity, the fifth follows since Figure 59: Example realization of \(\Sigma_{\mathrm{SU}}(T)\). The green lines represent same-direction swaps, blue lines represent turnarounds, and purple lines/points represent the points of \(\Sigma_{\mathrm{SU}}^{\prime}\). In particular, a purple point on a given strand belongs to the Poisson process that encodes points between that strand and itself. when \(G\in\mathrm{SU}(N)\), and the sixth follows by the inversion-invariance of Haar measure on compact groups. Next, for any \(\sigma\in\mathrm{S}_{n}\times\mathrm{S}_{m}\), we have that \[\rho_{+}(\sigma W\sigma^{-1})=\rho_{+}(\sigma)\mathbb{E}[S^{\otimes n}\otimes \bar{S}^{\otimes m}]\rho_{+}(\sigma)^{-1}=\mathbb{E}[S^{\otimes n}\otimes\bar{ S}^{\otimes m}].\] We may thus define \[\mathrm{Wg}_{N}^{\mathrm{SU}}:=\frac{1}{n!m!}\sum_{\sigma\in\mathrm{S}_{n} \times\mathrm{S}_{m}}\sigma\bigg{(}\frac{W+W^{T}}{2}\bigg{)}\sigma^{-1}\in \mathcal{B}_{n,m}.\] Then \(\mathrm{Wg}_{N}^{\mathrm{SU}}\) satisfies all the required properties. This shows the \(\mathrm{SU}(N)\) case. The \(\mathrm{SO}(N)\) case follows in the exact same manner. Suppose we have a collection of words \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{n})\) on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\), along with a collection of Brauer algebra elements \(\boldsymbol{\pi}=(\pi_{\ell},\ell\in[L])\), where \(\pi_{\ell}\in\mathcal{B}_{n_{\ell}+m_{\ell}}\), where \(n_{\ell},m_{\ell}\) are the respective number of times \(\lambda_{\ell},\lambda_{\ell}^{-1}\) appears in \(\boldsymbol{\Gamma}\). In the \(\mathrm{SU}(N)\) case, we may further assume \(\pi_{\ell}\in\mathcal{B}_{n_{\ell},m_{\ell}}\subseteq\mathcal{B}_{n_{\ell}+m_ {\ell}}\). Now as noted in previous sections, the choice of \(\boldsymbol{\Gamma}\) specifies the exterior connections of the strand diagram, while the choice of \(\boldsymbol{\pi}\) specifies the interior connections. Let \(\#\mathrm{comp}(\boldsymbol{\Gamma},\boldsymbol{\pi})\) be the number of components of the graph one obtains by including both the exterior and exterior connections. This slightly generalizes our previous definition of \(\#\mathrm{comp}\) to the case where the \(\pi_{\ell}\) are not of the special form of a combined left and right matching. Proposition 6.75 implies the following proposition about \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) word expectations. The proof is essentially the same as the discussion in Section 4.2, and thus it is omitted. 
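Before stating it, here is a quick numerical illustration of Proposition 6.75 in the smallest special orthogonal case (a sketch of our own, assuming numpy and scipy are available): for \(G\) Haar-distributed on \(\mathrm{SO}(3)\) and \(n=3\), the only invariant vector in \((\mathbb{R}^{3})^{\otimes 3}\) is the Levi-Civita tensor \(\varepsilon\), so \(\mathbb{E}[G^{\otimes 3}]\) is the rank-one projection \(\varepsilon\varepsilon^{T}/6\), which coincides with the antisymmetrizer \(\rho(\varepsilon_{3})\) and in particular lies in \(\rho_{+}(\mathcal{B}_{3})\). Note also that it is nonzero even though \(n=3\) is odd, in line with Remark 6.55.

```python
import numpy as np
from itertools import permutations
from scipy.stats import special_ortho_group

np.random.seed(0)
N, n_samples = 3, 20000

# Monte Carlo estimate of E[G (x) G (x) G] (Kronecker cube) for G Haar-distributed on SO(3)
acc = np.zeros((N ** 3, N ** 3))
for _ in range(n_samples):
    g = special_ortho_group.rvs(N)
    acc += np.kron(np.kron(g, g), g)
acc /= n_samples

# Levi-Civita tensor flattened to a vector of length 27; eps eps^T / 6 equals the
# antisymmetrizer rho(eps_3) acting on (R^3)^{otimes 3}
eps = np.zeros(N ** 3)
for i, j, k in permutations(range(3)):
    eps[i * 9 + j * 3 + k] = (j - i) * (k - i) * (k - j) // 2
P = np.outer(eps, eps) / 6.0

print(np.max(np.abs(acc - P)))  # expect a small Monte Carlo error, of order 1e-2
```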
**Proposition 6.79**.: Let \(\boldsymbol{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{n})\) be a collection of words on letters \(\{\lambda_{1},\ldots,\lambda_{L}\}\). Then \[\mathbb{E}[\mathrm{Tr}(G(\boldsymbol{\Gamma}))] =\sum_{\boldsymbol{\pi}=(\pi_{\ell},\ell\in[L])}\bigg{(}\prod_{ \ell\in L}\mathrm{Wg}_{N}^{\mathrm{SU}}(\pi_{\ell})\bigg{)}N^{\#\mathrm{comp} (\boldsymbol{\Gamma},\boldsymbol{\pi})}, G=\mathrm{SU}(N),\] \[\mathbb{E}[\mathrm{Tr}(G(\boldsymbol{\Gamma}))] =\sum_{\boldsymbol{\pi}=(\pi_{\ell},\ell\in[L])}\bigg{(}\prod_{ \ell\in L}\mathrm{Wg}_{N}^{\mathrm{SO}}(\pi_{\ell})\bigg{)}N^{\#\mathrm{comp} (\boldsymbol{\Gamma},\boldsymbol{\pi})}, G=\mathrm{SO}(N).\] Here, the first sum is over \(\pi_{\ell}\in\mathcal{B}_{n_{\ell},m_{\ell}}\), \(\ell\in[L]\), where \(n_{\ell},m_{\ell}\) are the respective number of times that \(\lambda_{\ell},\lambda_{\ell}^{-1}\) appear in \(\boldsymbol{\Gamma}\), and the second sum is over \(\pi_{\ell}\in\mathcal{B}_{n_{\ell}+m_{\ell}}\), \(\ell\in[L]\). _Remark 6.80_.: Unlike in the Unitary case, the collection of words \(\boldsymbol{\Gamma}\) is not required to be balanced when \(G=\mathrm{SU}(N)\), or unoriented-balanced when \(G=\mathrm{SO}(N)\). Ultimately, this is due to the fact that \(\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]\) may be nonzero even if \(m\neq n\) when \(G=\mathrm{SU}(N)\), and \(\mathbb{E}[G^{\otimes n}]\) may be nonzero for odd \(n\) when \(G=\mathrm{SO}(N)\). Recall Remark 6.55 for an example of the latter. Ultimately, the reason for this is because the elements of \(\mathrm{SU}(N),\mathrm{SO}(N)\) must have determinant 1. Next, we apply Proposition 6.79 to give a surface-sum expression for \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) lattice gauge theories. To do this, we need to explain how an arbitrary element of \(\mathcal{B}_{n,m}\) or \(\mathcal{B}_{n+m}\) can be interpreted as giving a collection of blue faces that are glued in to the existing yellow faces. This contrasts with all the previous cases, where we could restrict to those elements of \(\mathcal{B}_{n,n}\) (Unitary) or \(\mathcal{B}_{n}\) (Orthogonal and Symplectic) which are given by a pair of left and right matchings. We cannot do the same here, because the element \(\mathrm{Wg}_{N}^{\mathrm{SU}}\) (resp. \(\mathrm{Wg}_{N}^{\mathrm{SO}}\) may in general give nonzero weight to elements of \(\mathcal{B}_{n,m}\) (resp. \(\mathcal{B}_{n+m}\)) which are not of this special form. In Figure 60, we explain how to go from the interior connections specified by \(\boldsymbol{\pi}\) to a collection of faces with specified gluings. For \(\pi\in\mathcal{B}_{n+m}\), we define the "face profile" of \(\pi\) to be the collection of faces that one obtains from \(\pi\), as described in Figure 60, additionally with the coloring of the vertices by red, green, or red and green, as specified in the figure. _Remark 6.81_.: The invariance of \(\mathrm{Wg}_{N}^{\mathrm{SU}}\) under conjugation implies that it is a function of the face profile of \(\pi\in\mathcal{B}_{n,m}\subseteq\mathcal{B}_{n+m}\), and similarly the invariance of \(\mathrm{Wg}_{N}^{\mathrm{SO}}\) under conjugation implies that it is a function of the face profile of \(\pi\in\mathcal{B}_{n+m}\). 
To see this, note that invariance under conjugation is the same as invariance under permutation of the strands (where in the \(\mathrm{SU}(N)\) case, we mean invariance under separate permutations of the top right-directed Figure 60: _A priori_ **setting:** Imagine that at first, the oranges paths at the left are not present and each black edge represents the boundary of a (not shown) yellow face. In this _a priori_ picture every purple segment on the left is connected to a purple segment on the right by a horizontal black line. **Constructing blue faces from orange matching:** Let the orange curves indicate an arbitrary matching of the 16 red and green vertices. There are certain cycles obtained by alternating between orange paths and black paths; if we shrink the orange paths to points, these cycles become the polygons shown on the right, whose interiors are shaded blue. If an orange edge on the left connects a green and red vertex, then the corresponding vertex on the right is colored both both red and green. **Gluing interpretation:** If we start with the _a priori_ set up and then _glue the blue faces into the diagram_ this has the effect of changing the purple-to-purple matching from the black one to the orange one. strands and bottom left-directed strands). If \(\pi,\pi^{\prime}\) have the same face profile, then there exists a permutation of the strands which takes one to the other. Put another way, starting only from the blue faces in Figure 60, we may reconstruct a matching \(\pi^{\prime}\) which will be related the the displayed orange matching by a reflection and permutation of the strands. The vertices of the blue face which are red and green indicate that the corresponding orange edge connects a left vertex to a right vertex, while vertices which are only red or only green indicate that the corresponding orange matching edge connects same-side vertices. We now make the following definition which captures the types of surfaces that one obtains from the gluing procedure described in Figure 60. **Definition 6.82** (Flexible semi-folded maps).: Consider a pair \((\mathcal{M},\phi)\) where \(\mathcal{M}\) is a planar (or higher genus) map and \(\phi:\mathcal{M}\to\Lambda\) is a map from the edges of \(\mathcal{M}\) to the edges of \(\Lambda\), and from the faces of \(\mathcal{M}\) to the plaquettes of \(\Lambda\). We call this pair a **flexible semi-folded map** if the following hold: 1. The dual graph of \(\mathcal{M}\) is bipartite. The faces of \(\mathcal{M}\) in one partite class are designated as "edge-faces" (shown blue in figures) and those in the other class are called "plaquette-faces" (shown yellow in figures). 2. \(\phi\) maps each plaquette-face of \(\mathcal{M}\) isometrically _onto_ a plaquette in \(\mathcal{P}\). 3. \(\phi\) maps each edge-face of \(\mathcal{M}\) onto a single edge of \(\Lambda\). _Remark 6.83_.: Comparing the definitions of flexible semi-folded map and semi-folded map, the main difference is that in the flexible case, \(\phi\) is not necessarily a graph homomorphism. See Figures 61 and 62 for examples and intuition. In anticipation of the eventual application to lattice gauge theory, now suppose that the letters \(\{\lambda_{1},\dots,\lambda_{L}\}\) are edges of the lattice \(\Lambda\). In this case, the preceding discussion shows that the pair \((\boldsymbol{\Gamma},\boldsymbol{\pi})\) is equivalent to a flexible semi-folded map \((\mathcal{M},\psi)\). **Definition 6.84**.: Let \(s=(\ell_{1},\dots,\ell_{n})\) be a string in \(\Lambda\). 
Let \(K:\mathcal{P}\to\mathbb{N}\). Define \(\operatorname{FSFM}_{\mathrm{SU}}(s,K)\) to be the set of flexible semi-folded maps \((\mathcal{M},\psi)\) that one can obtain when there are \(K(p)\) copies of the plaquette \(p\), for each \(p\in\mathcal{P}\). That is, if we let \(\boldsymbol{\Gamma}(K)\) be the collections of words consisting of \(s\) along with \(K(p)\) copies of \(p\) for each \(p\in\mathcal{P}\), then \(\operatorname{FSFM}_{\mathrm{SU}}(s,K)\) is the set of flexible semi-folded maps that one may obtain by ranging over \(\boldsymbol{\pi}=(\pi_{e},e\in E_{\Lambda}^{+})\), where \(\pi_{e}\in\mathcal{B}_{n_{e}(+),n_{e}(-)}\), using our correspondence between \((\boldsymbol{\Gamma}(K),\boldsymbol{\pi})\leftrightarrow(\mathcal{M},\psi)\). Here, \(n_{e}(+),n_{e}(-)\) are the respective number of times that \(e,e^{-1}\) appear in \(\boldsymbol{\Gamma}(K)\). Let \[\operatorname{FSFM}_{\mathrm{SU}}(s):=\bigsqcup_{K:\mathcal{P}\to\mathbb{N}} \operatorname{FSFM}_{\mathrm{SU}}(s,K).\] For \((\mathcal{M},\psi)\in\operatorname{FSFM}_{\mathrm{SU}}(s)\) and \(e\in E_{\Lambda}\), let \(\mu_{e}(\psi)\) be the profile of edge-faces of \((\mathcal{M},\psi)\) at the edge \(e\). Define \(\operatorname{FSFM}_{\mathrm{SO}}(s,K)\) and \(\operatorname{FSFM}_{\mathrm{SO}}(s)\) in the same way, except we only require \(\pi_{e}\in\mathcal{B}_{n_{e}(+)+n_{e}(-)}\) for each \(e\). **Definition 6.85**.: Let \(n,m\geq 0\). Define the normalized Weingarten function \[\overline{\mathrm{Wg}}_{N}^{\mathrm{SU}}(\pi):=N^{n+m-\#\mathrm{cycles }(\pi)}\mathrm{Wg}_{N}^{\mathrm{SU}}(\pi),\ \ \pi\ \in\mathcal{B}_{n,m},\] \[\overline{\mathrm{Wg}}_{N}^{\mathrm{SO}}(\pi):=N^{n-\#\mathrm{ cycles}(\pi)}\mathrm{Wg}_{N}^{\mathrm{SO}}(\pi),\ \ \pi\ \in\mathcal{B}_{n}\] where \(\#\mathrm{cycles}(\pi)\) is the number of faces in the face profile of \(\pi\). We can now state the following theorem which expresses Wilson loop expectations in \(\mathrm{SU}(N)\) lattice gauge theory as sums over flexible semi-folded maps. Figure 61: **Flexible semi-folded map example:** Shown left is part of an oriented planar map. In a flexible semi-folded map, the embedding function \(\psi\) maps directed edges of the map to directed edges of the lattice, but it is not required that \(\psi\) extends to a single-valued function on vertices of the map. Here the edges of the blue triangle and blue 1-gon all map to the red-green edge on the right; the vertex shared by the triangle and 1-gon is colored both red and green to illustrate that it does not map to a single vertex on the right. Recall that when \(\mathrm{U}(N)\) is replaced by \(\mathrm{O}(N)\) the corresponding surfaces become _non-orientable_. When \(\mathrm{U}(N)\) or \(\mathrm{O}(N)\) is replaced by \(\mathrm{SU}(N)\) or \(\mathrm{SO}(N)\) the corresponding surfaces become _flexible_ in the sense illustrated here. Figure 62: **Flexible semi-folded map example:** In a flexible semi-folded map, a single vertex in the map (left) can in principle correspond to several vertices in the lattice (right). But each plaquette (directed edge) on the left has a uniquely defined image plaquette (directed edge) on the right, and the boundary edges of any single blue face on the left all map to the same undirected blue edge on the right. 
In some sense, the image of a single vertex on the left is a closed cycle on the right, because as one “moves around the vertex on the left clockwise” one passes through a sequence of plaquette corners whose images on the right trace a cycle (possibly with some repeated vertices). **Theorem 6.86**.: _Let \(s=(\ell_{1},\ldots,\ell_{n})\) be a string. For \(\mathrm{SU}(N)\) lattice gauge theory, we have that_ \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{(\mathcal{M}, \psi)\in\mathrm{FSFM}_{\mathrm{SU}}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M}, \psi)}}{(\psi^{-1})!}\prod_{e\in E_{\mathrm{A}}}\overline{W}_{N}^{\mathrm{SU}} (\mu_{e}(\psi))N^{\chi(M)-2n}.\] _For \(\mathrm{SO}(N)\) lattice gauge theory, we have that_ \[\langle W_{s}\rangle_{\Lambda,\beta}=Z_{\Lambda,\beta}^{-1}\sum_{(\mathcal{M}, \psi)\in\mathrm{FSFM}_{\mathrm{SO}}(s)}\frac{\beta^{\mathrm{area}(\mathcal{M}, \psi)}}{(\psi^{-1})!}\prod_{e\in E_{\mathrm{A}}}\overline{W}_{N}^{\mathrm{SO}} (\mu_{e}(\psi))N^{\chi(M)-2n}.\] #### 6.2.1 Makeenko-Migdal/Master loop/Schwinger-Dyson equation To obtain the single-strand Makeenko-Migdal/Master loop/Schwinger-Dyson equation, we will need to modify the previous argument for \(\mathrm{U}(N),\mathrm{O}(N),\mathrm{Sp}(N/2)\), because in those cases we had the strand-by-strand exploration, while in the \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) cases we do not. Ultimately, this is due to the fact that when \(G=\mathrm{SU}(N),\mathrm{SO}(N)\), there is some nonzero contribution from the event that all exploration eras do not end by time \(T\) (even when we send \(T\to\infty\)). On the other hand, the delicate cancellation properties that we took advantage of when \(G=\mathrm{U}(N),\mathrm{O}(N),\mathrm{Sp}(N/2)\) were only on the event that all exploration eras have ended. Thus when \(G=\mathrm{SU}(N),\mathrm{SO}(N)\), we cannot expect that the same strand-by-strand exploration will suffice - in particular, the key property of the strand-by-strand exploration (Propositions 4.5 and 6.42) no longer holds for \(\mathrm{SU}(N),\mathrm{SO}(N)\). We begin our alternate approach by introducing analogs of Jucys-Murphy elements for the Brauer and walled Brauer algebras. See [14, 1] for more discussion on these elements. **Definition 6.87**.: Let \(n\geq 1\). Define the Brauer algebra elements \(x_{1},\ldots,x_{n}\in\mathcal{B}_{n}\) by \[x_{k}:=J_{k}-\sum_{i=1}^{k-1}\langle k\ i\rangle=\sum_{i=1}^{k-1}\big{(}(k\ i)- \langle k\ i\rangle\big{)},\ \ k\in[n].\] These elements are the generalizations of the Jucys-Murphy elements for the Brauer algebra. In particular, they are mutually commuting (see [14, Corollary 2.2]). Additionally, let \(m\geq 1\). Define the walled Brauer algebra elements \(z_{1},\ldots,z_{n+m}\in\mathcal{B}_{n,m}\) by \[z_{k}:=\begin{cases}\sum_{i=1}^{k-1}(k\ i)-\sum_{i=n+1}^{n+m} \langle k\ i\rangle&\ k\in[n]\\ \sum_{i=n+1}^{k-1}(k\ i)&\ k\in(n:n+m].\end{cases}\] These elements are the generalizations of the Jucys-Murphy elements for the walled Brauer algebra. In particular, they are mutually commuting (see [1, Proposition 2.6]). **Lemma 6.88**.: Let \(Z:=z_{1}+\cdots+z_{n+m}\) for brevity. For \(G=\mathrm{SU}(N)\), we have that \[\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]=e^{\frac{(n-m)^{2 }}{2N^{2}}T}e^{-\frac{n+m}{2}T}e^{-\frac{T}{N}\rho_{+}(Z)}\] \[=e^{\frac{(n-m)^{2}}{2N^{2}}T}e^{-\frac{n+m}{2}T}e^{-\frac{T}{N}\rho_{+}(z_{n})}e^{ -\frac{T}{N}\rho_{+}(Z-z_{n})}.\] Let \(X:=x_{1}+\cdots+x_{n}\) for brevity. 
For \(G=\operatorname{SO}(N)\), we have that \[\mathbb{E}[B_{T}^{\otimes n}] =e^{-\frac{n}{2}(1-\frac{1}{N})T}e^{-\frac{T}{N}X}\] \[=e^{-\frac{n}{2}(1-\frac{1}{N})T}e^{-\frac{T}{N}\rho_{+}(x_{n})}e ^{-\frac{T}{N}\rho_{+}(X-x_{n})}\] Proof.: In both cases, the second identity follows from the first by mutual commutativity of the Jucys-Murphy elements. The first identity for \(G=\operatorname{SU}(N)\) (resp. \(\operatorname{SO}(N)\)) follows by Proposition 6.78 (resp. 6.37) and an explicit Poisson calculation. _Remark 6.89_.: In terms of our Poisson point process, this lemma has the following interpretation: we may first explore all points involving the top strand, and then all points which do not involve the top strand. **Definition 6.90**.: Let \(\Sigma_{\operatorname{SU}}^{\operatorname{top}}(T)\subseteq\Sigma_{ \operatorname{SU}}(T)\) be the process defined by keeping only those points which involve the top strand. Define \(\Sigma_{\operatorname{SU}}^{\operatorname{rest}}\) to be the complement of \(\Sigma_{\operatorname{SU}}^{\operatorname{top}}\), i.e. the process defined by keeping only those points which do not involve the top strand. Define \(\Sigma_{\operatorname{SO}}^{\operatorname{top}},\Sigma_{\operatorname{SO}}^ {\operatorname{rest}}\) in the same manner. _Remark 6.91_.: By Poisson thinning, \(\Sigma_{\operatorname{SU}}^{\operatorname{top}}\) and \(\Sigma_{\operatorname{SU}}^{\operatorname{rest}}\) are independent Poisson processes. By Lemma 6.88 and explicit calculation, we have the following identity, which states Lemma 6.88 in terms of our Poisson point process. The proof is omitted. **Lemma 6.92**.: For \(G=\operatorname{SU}(N)\), we have that \[\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]=e^{-\frac{n+m}{2} (1+\frac{1}{N^{2}})T}\rho_{+}\Big{(}e^{(2(n+m)-1)T}\mathbb{E}[F_{\operatorname {SU}}(\Sigma_{\operatorname{SU}}^{\operatorname{top}}(T))]e^{(n+m-1)^{2}T} \mathbb{E}[F_{\operatorname{SU}}(\Sigma_{\operatorname{SU}}^{\operatorname{ rest}}(T))]\Big{)}.\] For \(G=\operatorname{SO}(N)\), we have that \[\mathbb{E}[B_{T}^{\otimes n}]=e^{-\frac{n}{2}(1-\frac{1}{N})T}\rho_{+}\Big{(} e^{2(n-1)T}\mathbb{E}[F(\Sigma_{\operatorname{SO}}^{\operatorname{top}}(T))]e^{2 \binom{n-1}{2}T}\mathbb{E}[F(\Sigma_{\operatorname{OS}}^{\operatorname{rest}}( T))]\Big{)}.\] _Remark 6.93_.: In the above, when \(G=\operatorname{SU}(N)\) we choose to split the exponential prefactor \((n+m)^{2}T=(2(n+m)-1)T+(n+m-1)^{2}T\), since \(2(n+m)-1\) is the number of independent rate \(1\) Poisson processes contributing to \(\Sigma_{\operatorname{SU}}^{\operatorname{top}}(T)\), and \((n+m-1)^{2}\) is the number of independent rate \(1\) Poisson processes contributing to \(\Sigma_{\operatorname{SU}}^{\operatorname{rest}}(T)\). Similar considerations hold when \(G=\operatorname{SO}(N)\). 
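As a sanity check on the normalizations used above, note that Lemma 6.88 with \(n=1\) (where \(X=x_{1}=0\)) gives \(\mathbb{E}[B_{T}]=e^{-\frac{1}{2}(1-\frac{1}{N})T}I\) for \(\mathrm{SO}(N)\) Brownian motion. The following Monte Carlo sketch is our own; it assumes that the intended normalization of the Brownian motion corresponds to the generator scaling \((E_{ij}-E_{ji})/\sqrt{N}\), so that the sum of the squared generators is \(-(1-\frac{1}{N})I\).

```python
import numpy as np
from scipy.linalg import expm

np.random.seed(1)
N, T, n_steps, n_samples = 3, 1.0, 100, 2000
dt = T / n_steps

# generators X_ij = (E_ij - E_ji)/sqrt(N); then sum_{i<j} X_ij^2 = -(1 - 1/N) I
gens = []
for i in range(N):
    for j in range(i + 1, N):
        X = np.zeros((N, N))
        X[i, j], X[j, i] = 1.0, -1.0
        gens.append(X / np.sqrt(N))

acc = np.zeros((N, N))
for _ in range(n_samples):
    B = np.eye(N)
    for _ in range(n_steps):
        xi = sum(np.random.randn() * X for X in gens)  # white-noise increment in so(N)
        B = B @ expm(np.sqrt(dt) * xi)                 # geometric Euler step, stays in SO(N)
    acc += B
acc /= n_samples

print(np.trace(acc) / N)               # Monte Carlo estimate of the scalar in E[B_T]
print(np.exp(-0.5 * (1 - 1 / N) * T))  # predicted value e^{-(1/2)(1 - 1/N) T}
```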
**Notation 6.94**.: For notational brevity in what follows, define \[X_{\operatorname{SU}}(T) :=e^{-\frac{1}{2}(1+\frac{1}{N^{2}})T}e^{(2(n+m)-1)T}\mathbb{E}[F (\Sigma_{\operatorname{SU}}^{\operatorname{top}}(T))]\in\mathcal{B}_{n,m},\] \[Y_{\operatorname{SU}}(T) :=e^{-\frac{n+m-1}{2}(1+\frac{1}{N^{2}})T}e^{(n+m-1)^{2}T}\mathbb{ E}[F(\Sigma_{\operatorname{SU}}^{\operatorname{rest}}(T))]\in\mathcal{B}_{n,m},\] \[X_{\operatorname{SO}}(T) :=e^{-\frac{1}{2}(1-\frac{1}{N})T}e^{2(n-1)T}\mathbb{E}[F(\Sigma_{ \operatorname{SO}}^{\operatorname{top}}(T))]\in\mathcal{B}_{n},\] \[Y_{\operatorname{SO}}(T) :=e^{-\frac{n-1}{2}(1-\frac{1}{N})T}e^{2^{\binom{n-1}{2}T}}\mathbb{ E}[F(\Sigma_{\operatorname{OS}}^{\operatorname{rest}}(T))]\in\mathcal{B}_{n}.\] By Lemma 6.92, we have that \[\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}] =\rho_{+}(X_{\mathrm{SU}}(T)Y_{\mathrm{SU}}(T)),\ \ G=\mathrm{SU}(N) \tag{6.8}\] \[\mathbb{E}[B_{T}^{\otimes n}] =\rho_{+}(X_{\mathrm{SO}}(T)Y_{\mathrm{SO}}(T)),\ \ G=\mathrm{SO}(N).\] The starting point to deriving an eventual recursion for \(\mathrm{SU}(N)\) or \(\mathrm{SO}(N)\) Haar measure is the following recursion for \(X\). **Lemma 6.95**.: We have that \[X_{\mathrm{SU}}(T) =e^{-\frac{1}{2}(1+\frac{1}{N^{2}})T}+\left(\frac{n-m}{N^{2}}- \frac{z_{n}}{N}\right)\int_{0}^{T}du\ e^{-\frac{1}{2}(1+\frac{1}{N^{2}})u}X_{ \mathrm{SU}}(T-u),\] \[X_{\mathrm{SO}}(T) =e^{-\frac{1}{2}(1-\frac{1}{N})T}-\frac{x_{n}}{N}\int_{0}^{T}du \ e^{-\frac{1}{2}(1-\frac{1}{N})u}X_{\mathrm{SO}}(T-u).\] Proof.: This follows by considering the time of the first point in \(\Sigma_{\mathrm{SU}}^{\mathrm{top}}(T)\) (resp. \(\Sigma_{\mathrm{SO}}^{\mathrm{top}}(T)\)). For \(\bullet\in\{\mathrm{SU},\mathrm{SO}\}\), if we substitute the identities given by Lemma 6.95 into \(X_{\bullet}(T)Y_{\bullet}(T)\), the term \(X_{\bullet}(T-u)Y_{\bullet}(T)\) for \(U\in[0,T]\) appears. The following lemma interprets this term as an appropriate Brownian motion expectation. **Lemma 6.96**.: For any \(0\leq u\leq T\), we have that \[\rho_{+}(X_{\mathrm{SU}}(T-u)Y_{\mathrm{SU}}(T)) =\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}\otimes \bar{B}_{T}^{\otimes m}],\] \[\rho_{+}(X_{\mathrm{SO}}(T-u)Y_{\mathrm{SO}}(T)) =\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}].\] Proof.: We only prove the \(\mathrm{SU}(N)\) case as the \(\mathrm{SO}(N)\) is very similar. 
We may write \[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}\otimes\bar{B}_{T}^{\otimes m}= \left((B_{T}B_{u}^{-1})^{\otimes m}\otimes\overline{B_{T}B_{u}^{-1}}^{\otimes m }\right)\ \big{(}I\otimes B_{u}^{\otimes(n-1)}\otimes\bar{B}_{u}^{\otimes m}\big{)}.\] Since \(B_{u}\) and \(B_{T}B_{u}^{-1}\) are independent, upon taking expectations we obtain \[\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}\otimes \bar{B}_{T}^{\otimes m}] =\left(\mathbb{E}[(B_{T}B_{u}^{-1})^{\otimes n}\otimes\overline{ B_{T}B_{u}^{-1}}^{\otimes m}]\right)\ \big{(}I\otimes\mathbb{E}[B_{u}^{\otimes(n-1)}\otimes\bar{B}_{u}^{ \otimes m}]\big{)}\] \[=\mathbb{E}[B_{T-u}^{\otimes n}\otimes\bar{B}_{T-u}^{\otimes m}] \ \big{(}I\otimes\mathbb{E}[B_{u}^{\otimes(n-1)}\otimes\bar{B}_{u}^{ \otimes m}]\big{)}.\] Writing \(Z=z_{1}+\cdots+z_{n+m}\) for brevity, we have by Lemma 6.88 that \[\mathbb{E}[B_{T-u}^{\otimes n}\otimes\bar{B}_{T-u}^{\otimes m}] =e^{\frac{(n-m)^{2}}{2N^{2}}(T-u)}e^{-\frac{n+m}{2}(T-u)}e^{-\frac {T-u}{N}\rho_{+}(Z)},\] \[I\otimes\mathbb{E}[B_{u}^{\otimes(n-1)}\otimes\bar{B}_{u}^{ \otimes m}] =e^{\frac{(n-m-1)^{2}}{2N^{2}}u}e^{-\frac{n+m-1}{2}u}e^{-\frac{u}{N} \rho_{+}(Z-z_{n})}.\] Combining, we obtain (using the mutual commutativity of \(z_{1},\ldots,z_{n+m}\)) \[\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}]=e^{\frac{2(n-m)-1}{2 N^{2}}(T-u)}e^{-\frac{1}{2}(T-u)}e^{-\frac{T-u}{N}\rho_{+}(z_{n})}e^{\frac{(n-m-1)^{2} }{2N^{2}}T}e^{-\frac{n+m-1}{2}T}e^{-\frac{T}{N}\rho_{+}(Z-z_{n})}.\] By an explicit calculation, we have that the right hand side above is exactly \[\rho_{+}\bigg{(}e^{-\frac{1}{2}(1+\frac{1}{N^{2}})(T-u)}e^{(2(n+m)-1)(T-u)} \mathbb{E}[F(\Sigma_{\mathrm{SU}}^{\mathrm{top}}(T-u))]e^{-\frac{n+m-1}{2}(1+ \frac{1}{N^{2}})T}e^{(n+m-1)^{2}T}\mathbb{E}[F(\Sigma_{\mathrm{SU}}^{\mathrm{ rest}}(T))]\bigg{)},\] which is exactly \(\rho_{+}(X_{\mathrm{SU}}(T-u)Y_{\mathrm{SU}}(T))\), and thus the desired result follows. Next, we show that the Brownian motion expectation which appears in Lemma 6.96 has a nice limit as \(T\to\infty\). We prove this for more general \(G\) than needed, as the argument is exactly the same. **Lemma 6.97**.: Let \(G=\mathrm{U}(N),\mathrm{SU}(N),\mathrm{SO}(N),\mathrm{Sp}(N/2)\). For any \(u\geq 0\), we have that \[\lim_{T\to\infty}\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)} \otimes\bar{B}_{T}^{\otimes m}]=e^{\frac{c_{\mathfrak{g}}}{2}u}\mathbb{E}[G^{ \otimes n}\otimes\bar{G}^{\otimes m}].\] Proof.: We may write \[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)}\otimes\bar{B}_{T}^{\otimes m}= \big{(}B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}\big{)}\ \big{(}B_{u}^{-1}\otimes I^{\otimes(n+m-1)}\big{)}.\] For fixed \(u\), the conditional distribution \(B_{T}\mid B_{u}\) converges to normalized Haar measure on \(G\) as \(T\to\infty\). We thus obtain \[\lim_{T\to\infty}\mathbb{E}[(B_{T}B_{u}^{-1})\otimes B_{T}^{\otimes(n-1)} \otimes\bar{B}_{T}^{\otimes m}]=\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{ \otimes m}]\big{(}\mathbb{E}[B_{u}^{-1}]\otimes I^{\otimes(n+m-1)}\big{)}.\] By an explicit calculation, we have that \[\mathbb{E}[B_{u}^{-1}]=\mathbb{E}[B_{u}^{*}]=\big{(}\mathbb{E}[B_{u}]\big{)}^ {*}=\big{(}e^{\frac{c_{\mathfrak{g}}}{2}u}I\big{)}^{*}=e^{\frac{c_{\mathfrak{g }}}{2}u}I.\] The desired result now follows. Now by combining the previous few preliminary results, we obtain the following recursions for expectations with respect to \(\mathrm{SU}(N)\) or \(\mathrm{SO}(N)\) Haar measure. 
**Proposition 6.98**.: For \(n\geq 1,m\geq 0\), we have that \[\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}] =\rho_{+}\bigg{(}\frac{n-m}{N^{2}}-\frac{z_{n}}{N}\bigg{)} \mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}],\ \ G=\mathrm{SU}(N),\] \[\bigg{(}1-\frac{1}{N}\bigg{)}\mathbb{E}[G^{\otimes n}] =-\frac{\rho_{+}(x_{n})}{N}\mathbb{E}[G^{\otimes n}],\ \ G=\mathrm{SO}(N).\] Proof.: We prove the \(\mathrm{SU}(N)\) case as the \(\mathrm{SO}(N)\) is very similar. For brevity, let \(E_{T}(n,m)=\mathbb{E}[B_{T}^{\otimes n}\otimes\bar{B}_{T}^{\otimes m}]\). Combining equation (6.8) with Lemma 6.95, we obtain \[E_{T}(n,m)=e^{-\frac{1}{2}(1+\frac{1}{N^{2}})T}\rho_{+}(Y_{\mathrm{SU}}(T))+ \bigg{(}\frac{n-m}{N^{2}}-\frac{\rho_{+}(z_{n})}{N}\bigg{)}\int_{0}^{T}e^{- \frac{1}{2}(1+\frac{1}{N^{2}})u}\rho_{+}(X_{\mathrm{SU}}(T-u)Y_{\mathrm{SU}}(T )).\] Note that \(\rho_{+}(Y_{\mathrm{SU}}(T))=I\otimes E_{T}(n-1,m)\), which is \(O(1)\) as \(T\to\infty\). Thus as \(T\to\infty\), the first term in the right hand side above is \(o(1)\). Combining this with Lemmas 6.96 and 6.97, we obtain upon taking \(T\to\infty\), \[\mathbb{E}[G^{\otimes n}\otimes\bar{G}^{\otimes m}]=\bigg{(}\frac{n-m}{N^{2}} -\frac{\rho_{+}(z_{n})}{N}\bigg{)}\int_{0}^{\infty}due^{-\frac{1}{2}(1+\frac {1}{N^{2}})u}e^{\frac{c_{\mathfrak{su}(N)}}{2}u}\mathbb{E}[G^{\otimes n} \otimes\bar{G}^{\otimes m}].\] To finish, we use that \(c_{\mathfrak{su}(N)}=-1+\frac{1}{N^{2}}\). **Definition 6.99**.: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). For each letter \(\lambda_{i}\), \(i\in[L]\), let \(t(\lambda_{i})\) be equal to the number of occurrences of \(\lambda_{i}\) in \(\mathbf{\Gamma}\) minus the number of occurrences of \(\lambda_{i}^{-1}\) in \(\mathbf{\Gamma}\). Let \(t(\lambda_{i}^{-1}):=-t(\lambda_{i})\). Given a location \((i,j)\) of \(\mathbf{\Gamma}\), let \(t(i,j):=t(\lambda_{k}^{s})\), where \(\lambda_{k}^{s}\), \(s\in\{\pm 1\}\), is the letter at location \((i,j)\). Proposition 6.98 leads immediately (by similar considerations as in the proof of Proposition 5.2) to the following recursions for expectations of words. The proof is omitted. In the following, recall the various string operations defined in Definitions 5.1 and 6.69. **Proposition 6.100** (Single-location \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) word recursion).: Let \(\mathbf{\Gamma}=(\Gamma_{1},\ldots,\Gamma_{k})\) be a collection of words on \(\{\lambda_{1},\ldots,\lambda_{L}\}\). 
For any location \((i,j)\) of \(\Gamma\), we have that for \(G=\mathrm{SU}(N)\), \[\left(1-\frac{t(i,j)}{N^{2}}\right)\mathbb{E}[\mathrm{tr}(G( \mathbf{\Gamma}))]= -\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j),\mathbf{ \Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]+\sum_{\mathbf{ \Gamma}^{\prime}\in\mathbb{S}_{-}((i,j),\mathbf{\Gamma})}\mathbb{E}[\mathrm{tr }(G(\mathbf{\Gamma}^{\prime}))]\] \[-\frac{1}{N^{2}}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{M}_{+}^ {U}((i,j),\mathbf{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}) )]+\frac{1}{N^{2}}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{M}_{-}^{U}((i,j), \mathbf{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))].\] For \(G=\mathrm{SO}(N)\), we have that \[\left(1-\frac{1}{N}\right)\mathbb{E}[\mathrm{tr}(G(\mathbf{ \Gamma}))]= -\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{S}_{+}((i,j),\mathbf{ \Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]+\sum_{\mathbf{ \Gamma}^{\prime}\in\mathbb{S}_{-}((i,j),\mathbf{\Gamma})}\mathbb{E}[\mathrm{ tr}(G(\mathbf{\Gamma}^{\prime}))]\] \[-\frac{1}{N^{2}}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{M}_{+}( (i,j),\mathbf{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]+ \frac{1}{N^{2}}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{M}_{-}((i,j),\mathbf{ \Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]\] \[-\frac{1}{N}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{T}_{+}((i,j ),\mathbf{\Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]+ \frac{1}{N}\sum_{\mathbf{\Gamma}^{\prime}\in\mathbb{T}_{-}((i,j),\mathbf{ \Gamma})}\mathbb{E}[\mathrm{tr}(G(\mathbf{\Gamma}^{\prime}))]\] By applying Proposition 6.100 to lattice Yang-Mills theories, we may obtain the single-location \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) Makeenko-Migdal/Master loop/Schwinger-Dyson equation. The proof is entirely analogous to the proof of the \(\mathrm{U}(N)\) Makeenko-Migdal/Master loop/Schwinger-Dyson equation (Theorem 5.6) using the \(\mathrm{U}(N)\) word recursion (Proposition 5.2), and thus it is omitted. Before we state the theorem, we first define the following new string operation which appears for \(\mathrm{SU}(N)\). **Definition 6.101** (Expansion).: Let \(s=(\ell_{1},\ldots,\ell_{n})\) be a string. Let \((k,i)\) be a location in \(s\). We define the sets of positive and negative expansions \(\mathbb{E}_{+}((k,i),s)\) and \(\mathbb{E}_{-}((k,i),s)\) as follows. Denote by \(e\) the oriented edge of the lattice at location \((k,i)\) in \(s\). The set of positive expansions \(\mathbb{E}_{+}((k,i),s)\) is the set of all possible strings \(s^{\prime}\) which can be obtained by adding an oriented plaquette \(p\in\mathcal{P}\) which contains \(e^{-1}\) to the collection of loops \(s\). The set of negative expansions \(\mathbb{E}_{+}((k,i),s)\) is the set of all possible strings \(s^{\prime}\) which can be obtained by adding an oriented plaquette \(p\in\mathcal{P}\) which contains \(e\) to the collection of loops **Theorem 6.102** (Single-location \(\mathrm{SU}(N)\) and \(\mathrm{SO}(N)\) Makeenko-Migdal/Master loop/Schwinger-Dyson equation).: _Let \(s=(\ell_{1},\ldots,\ell_{n})\) be a string. Let \((k,i)\) be a location in \(s\). 
For \(\mathrm{SU}(N)\) lattice Yang-Mills theory, we have that_ \[\bigg{(}1-\frac{t(k,i)}{N^{2}}\bigg{)}\phi(s)= -\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}\phi(s^{\prime})+ \sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}\phi(s^{\prime})\] \[-\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{+}((k,i),s)}\phi(s ^{\prime})+\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{-}((k,i),s)}\phi(s^{ \prime})\] \[-\beta\sum_{s^{\prime}\in\mathbb{D}_{+}((k,i),s)}\phi(s^{\prime} )+\beta\sum_{s^{\prime}\in\mathbb{D}_{-}((k,i),s)}\phi(s^{\prime})\] \[-\beta\sum_{s^{\prime}\in\mathbb{E}_{+}((k,i),s)}\phi(s^{\prime} )+\beta\sum_{s^{\prime}\in\mathbb{E}_{-}((k,i),s)}\phi(s^{\prime}).\] _For \(\mathrm{SO}(N)\) lattice Yang-Mills theory, we have that_ \[\bigg{(}1-\frac{1}{N}\bigg{)}\phi(s)= -\sum_{s^{\prime}\in\mathbb{S}_{+}((k,i),s)}\phi(s^{\prime})+ \sum_{s^{\prime}\in\mathbb{S}_{-}((k,i),s)}\phi(s^{\prime})\] \[-\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{+}((k,i),s)}\phi(s ^{\prime})+\frac{1}{N^{2}}\sum_{s^{\prime}\in\mathbb{M}_{-}((k,i),s)}\phi(s^{ \prime})\] \[-\frac{1}{N}\sum_{s^{\prime}\in\mathbb{T}_{+}((k,i),s)}\phi(s^{ \prime})+\frac{1}{N}\sum_{s^{\prime}\in\mathbb{T}_{-}((k,i),s)}\phi(s^{\prime})\] \[-2\beta\sum_{s^{\prime}\in\mathbb{D}_{+}((k,i),s)}\phi(s^{\prime })+2\beta\sum_{s^{\prime}\in\mathbb{D}_{-}((k,i),s)}\phi(s^{\prime}).\] _Remark 6.103_.: Observe that the \(\mathrm{SO}(N)\) Makeenko-Migdal/Master loop/Schwinger-Dyson equation is exactly the \(\mathrm{O}(N)\) one. This is natural since \(\mathrm{O}(N)\) Brownian motion is essentially \(\mathrm{SO}(N)\) Brownian motion (recall Remark 6.29). ## 7 Open problems Although lattice gauge theory has been very thoroughly studied in physics, there are many simple ideas about the relationship between random surfaces and Yang-Mills theory that have not been so thoroughly explored on the math side. There is also room for innovation: producing clever variants and toy models whose limits might be easier to describe in terms of continuum random surfaces (including those related to Liouville quantum gravity and conformal field theory). If the ultimate goal is to get a handle on a continuum theory, there is a good deal of flexibility in how one sets up the discrete models that are meant to approximate that theory. We present a series of open problems along those lines, ranging from very general and open-ended to very technical and specific. 1. For which lattice models can we establish a version of the "area law" using the surface sum point of view? Recall that the area law states that the Wilson loop expectation decays exponentially in the minimal area spanned by the loop, at least for reasonably nice loops; see the definitions and discussion in [10] about the relationship between the area law and "quark confinement." Many such results are known (from various points of view) for small \(\beta\), and these results apply in any dimension \(d\geq 2\), see e.g. [11] for the proof of the \(\mathrm{SU}(N)\) area law for small \(\beta\) and general \(N\), and the discussion in [10] which explains a string-trajectory-based derivation of such a result in the \(N\to\infty\) limit for small \(\beta\). For general \(\beta\), the known results are dimension-dependent: 1. When \(d=2\) the area law is well-known for general groups for any \(\beta\)[12, 12, 13]. 2. When \(d=3\) and \(N=1\), the area law holds for all \(\beta\), see [14]. 
Because \(\mathrm{U}(1)\) is the center of \(\mathrm{U}(N)\) for general \(N\), this appears to also imply that the \(\mathrm{U}(N)\) area law holds for all \(\beta\) when \(N>1\), see [15]. It is not known whether the \(\mathrm{SU}(N)\) area law holds for large \(\beta\) when \(N>1\). 3. When \(d=4\), interestingly enough, the \(\mathrm{U}(1)\) area law holds for small \(\beta\) but _fails_ for large \(\beta\), see [11]. It remains a major open problem to prove the area law for any non-commutative group when \(\beta\) is large, \(N\geq 2\) and \(d=4\). 2. For which lattice models can we establish exponential decay of correlations for the Wilson loop traces using the surface sum point of view? This is related to the so-called "mass gap" problem, see e.g. discussion in [10]. In the settings above, one is usually able to prove exponential decay of correlations in the same settings where one is able to prove the area law. (See [10] for an argument that certain strong forms of exponential decay imply the area law.) In particular, it remains a major open problem to prove exponential decay of correlations for any non-commutative group when \(\beta\) is large, \(N\geq 2\) and \(d=4\). 3. In the \(\mathrm{U}(N)\) setting, what can we say about the _conditional_ law of the surface _given_ the number and type of blue plaquettes at each edge? Once the blue plaquettes are fixed, we no longer need to consider the Weingarten function, and the remaining combinatorics are simpler: in fact one obtains precisely the sort of model used to study words in GUE matrices using Wick's formula [11]. In this setting all ways of hooking up yellow to blue along edges are allowed and all contribute with the same sign, but there is still a weighting according to the genus, which leads the surface to concentrate around minimal genus configurations in the large \(N\) limit. As a simplified model, we could even imagine that we fix the number of blue faces of each type to be exactly the same at each edge. Can we say anything about the scaling limits in this setting? Is the GUE correspondence at all helpful here? 4. Within a three-dimensional lattice like \(\mathbb{Z}^{3}\), one way to try to understand the scaling limit of an oriented random surface (which could become space-filling in the fine mesh limit, with genus tending to infinity) is to try to understand the limit of the "height function" on the dual lattice that changes by \(\pm 1\) (depending on orientation) each time one crosses a layer of the surface. Is there a setting in which such a limit can be obtained? The gradient of such a function is in some sense the normal vector field corresponding to the surface. (It is a flow in which one unit of current is assigned for each face of the surface, in the direction orthogonal to that face; the flow is not divergence free but it is curl-free except along the boundary loops.) Is there a qualitative difference between \(N=1\) and general \(N\) in the limit? The \(N=1\) case has been understood by Frolich and Spencer [10] and has an interesting \(\beta\)-dependent phase transition (from area law to perimeter law, as mentioned above) that we would not expect to see for larger \(N\). 5. Can we prove anything interesting about the variants in which there are many plaquettes but only three can meet along any given edge? 
For example \({\cal L}\) might be the truncated octahedron tessellation (one example of a tessellation by cells where only three cells ever meet along the same edge, see [11]) and \({\cal P}\) can be the collection of of square and hexagonal faces in the tessellation. If we require that each plaquette appears zero times or once, then the only non-zero terms in the surface expansion involve surface in which either zero or two of the three plaquettes contain each given interior edge (i.e. each edge not on the Wilson loop). In this case the surfaces we obtain are simpler: all of the blue faces are 2-gons and the surfaces are self-avoiding. There is no need to consider the Weingarten function in this simplified setting. We remark that this would be the surface analog of the loop \({\rm O}(n)\) model, studied for instance in [12]. (Requiring the number of copies of a given plaquette to be small--here either 0 or 1--is somehow related to taking a small \(\beta\) in the unrestricted-plaquette-number setting.) 6. Recall that in certain contexts it is enough to consider _connected surfaces_, such as when there is a single Wilson loop and \(N\to\infty\) (recall the discussion just after Corollary 3.11). Are there other contexts in which it is sufficient to consider connected surfaces? 7. A surface sum like the one in Corollary 1.10 includes many terms of both signs. Our intuition is that most of these surfaces somehow "cancel each other out." For example, there may be local changes one can make to a surface that change the sign of the associated Weingarten product but do not change the genus of the surface. Is there a clean way to group together the surfaces in this sum that makes this cancellation more transparent? One could begin with the case \(d=2\), and aim to show that the surfaces that are not locally flat somehow cancel each other out. 8. Is there a simpler expression (or at least asymptotic expression in the limit of a large number of plaquettes) for the Weingarten function in the case that \(N\) is a small integer? Recall that in this case, the sum over representations in (2.5) involves only those corresponding to Young tableaux with at most \(N\) rows. 9. What is the most natural way to express the finite-\(T\) (i.e. Brownian motion at time \(T\), as in Section 2.1) analog of the Weingarten function and the corresponding random planar maps? Note that adding a few single-edge loops may have a similar effect to switching to finite \(T\). This is because weighting Haar measure on \({\rm U}(N)\) by a power of the real part of the trace _biases_ the measure toward matrices that are near the identity; Brownian motion on a Lie group stopped at a finite time \(T\) is also (compared to Haar measure) biased toward matrices that are near the identity. 10. Are there any natural random surface models emerging in the lattice Yang-Mills framework that lead to planar maps similar to those whose limits (can be conjectured to) correspond to Liouville quantum gravity surfaces with \(c\in(1,25)\)? Those surfaces are multi-ended and infinite, see e.g. [1, 1, 1, 2]. 11. There have been many recent results about random planar maps of high genus and/or random hyperbolic planar maps, see, e.g. 
17. The abelian versions of "spin foam" are simpler and were used e.g.
by Frolich and Spencer [113] to understand the phase transition structure of \(\mathrm{U}(1)\) lattice gauge theory. Can an alternative proof of these results be given using the surface expansion described in this paper? 18. Adding extra single-edge faces in both directions has the effect of changing the underlying measure from Haar measure to another conjugation-invariant measure on \(\mathrm{U}(N)\) (which can be a signed measure if we add associate sign weights to different edge configurations). Can one obtain a natural connection between a signed-measure variant of Yang-Mills theory and the sort of random surfaces that arise in conformal field theory? 19. What can one say about supersymmetric variants of this question? Can a supersymmetric version of Yang-Mills theory be connected to random planar maps whose scaling limits can be understood in terms of Liouville quantum gravity or some other probabilistic continuum random surface model? What about fermionic variants or variants involving Higgs fields? On the latter point, let us remark that the introduction to [10] contains a list of references about the lattice Yang-Mills-Higgs model. A configuration in this context assigns a vector to each lattice vertex (in addition to assigning a matrix to each directed edge). In this context, one also considers open Wilson paths (whose endpoints are lattice vertices) in addition to closed Wilson loops. ## Appendix A Properties of the Orthogonal Weingarten function In this appendix, we give more detail on why Lemmas 6.9 and 6.38 are true, and in particular why it essentially follows from [10]. Fix \(n\geq 1\) even and \(\zeta\in\mathbb{C}\). To help the reader, we indicate how to translate between our notation and the notation of [10, Section 2.2.2]. Our \(\zeta\) translates to \(z\). Our \(n\) is the equivalent of \(2k\). The subgroup \(\mathcal{H}_{n}\subseteq\mathrm{S}_{n}\) we defined in Definition 6.59 is \(H_{k}\) in [10]. One can show that \(|\mathcal{H}_{n}|=2^{n/2}(n/2)!\), which translates to \(|H_{k}|=2^{k}k!\). Matsumoto defines the Orthogonal Weingarten function \(\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)\) as an element of the group algebra \(\mathbb{C}[\mathrm{S}_{n}]\). As part of its definition, this element is \(\mathcal{H}_{n}\) bi-invariant, i.e. \[\mathrm{Wg}^{\mathrm{O}}(h\sigma;\zeta)=\mathrm{Wg}^{\mathrm{O}}(\sigma;\zeta )=\mathrm{Wg}^{\mathrm{O}}(\sigma h;\zeta)\text{ for all }\sigma\in\mathbb{C}[\mathrm{S}_{n}],\,h \in\mathcal{H}_{n}.\] (A.1) The relation between Matsumoto's definition and our definition via pseudo-inverses is as follows: \[\mathrm{Wg}^{\mathrm{O}}_{\zeta}(\pi,\pi^{\prime})=\mathrm{Wg}^{\mathrm{O}}( \sigma_{\pi}^{-1}\sigma_{\pi^{\prime}};\zeta),\] (A.2) where \(\sigma_{\pi}\) is the permutation associated to \(\pi\) as in Definition 6.12. Here and in the following, we will write \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}\) for definition of the Weingarten function as a pseudo-inverse, and \(\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)\) for Matsumoto's definition of the Weingarten function as a group algebra element. Now, one can show that the face profile \(\ell(\pi,\pi^{\prime})\) is precisely the coset-type of \(\sigma_{\pi}^{-1}\sigma_{\pi^{\prime}}\) (which is defined in [10, Section 2.2.1]). As mentioned in in [10, Section 2.2.1], two permutations \(\sigma,\sigma^{\prime}\) have the coset-type if and only if they are part of the same double \(\mathcal{H}_{n}\) coset, i.e. \(\mathcal{H}_{n}\sigma\mathcal{H}_{n}=\mathcal{H}_{n}\sigma^{\prime}\mathcal{H} _{n}\). 
By the \(\mathcal{H}_{n}\) bi-invariance (A.1), it follows that \(\mathrm{Wg}^{\mathrm{O}}(\sigma;\zeta)\) is a function of the coset-type of \(\sigma\), and then by (A.2), it follows that \(\mathrm{Wg}^{\mathrm{O}}_{\zeta}(\pi,\pi^{\prime})\) is a function of the face profile \(\ell(\pi,\pi^{\prime})\) of \(\pi,\pi^{\prime}\). This shows Lemma 6.9. Next, we discuss Lemma 6.38. Recall we defined (Definition 6.59) \(P_{\mathcal{H}_{n}}=\frac{1}{|\mathcal{H}_{n}|}\sum_{h\in\mathcal{H}_{n}}h\). This translates to \((2^{k}k!)^{-1}\mathbf{1}_{k}\). The "zonal spherical function" \(\omega^{\lambda}\) from the paper is for us \(\chi^{2\lambda}P_{\mathcal{H}_{n}}\in\mathbb{C}[\mathrm{S}_{n}]\) (where here \(\lambda\vdash\frac{n}{2}\)). We have that (by Lemma 6.61, as argued in the proof of Lemma 6.62) \[P_{\mathcal{H}_{n}}=P_{\mathcal{H}_{n}}\sum_{\lambda\vdash\frac{n}{2}}P_{2 \lambda},\] (A.3) where recall that (equation (6.6)) \(P_{2\lambda}=\frac{\chi_{2\lambda}(\mathrm{id})}{n!}\sum_{\sigma\in\mathrm{S }_{n}}\chi_{2\lambda}(\sigma)\sigma\in\mathbb{C}[\mathrm{S}_{n}]\). Next, as in [10], we define for \(\lambda\vdash\frac{n}{2}\) the quantity \(D_{\lambda}(\zeta)\) as \[D_{\lambda}(\zeta):=\prod_{(i,j)\in\lambda}(\zeta+2j-i-1).\] This quantity relates to Jucys-Murphy elements as follows. Define \(X_{\varepsilon}:=(\varepsilon N+J_{n-1})(\varepsilon N+J_{n-3})\cdots( \varepsilon N+J_{1})\). **Lemma A.1**.: For any \(\lambda\vdash\frac{n}{2}\), we have that \[P_{\mathcal{H}_{n}}P_{2\lambda}X_{\varepsilon}=D_{\lambda}(\varepsilon N)P_{ \mathcal{H}_{n}}P_{2\lambda}.\] (A.4) Proof.: This is proven towards the end of [11, Section 3]. For the reader's convenience, we reproduce the argument here. Recalling the discussion of Young's orthogonal idempotents from Section 6.1.4, we may expand \[P_{2\lambda}=\sum_{\lambda\in\mathrm{SYT}(2\lambda)}e_{T}.\] By [11, Proposition 4], \(P_{\mathcal{H}_{n}}e_{T}\neq 0\) implies that \(T\) is obtained by the "doubling" procedure described on [11, Page 7]. As noted in the paper, by direct calculation, for any such \(T\), we have that \(e_{T}X_{\varepsilon}=D_{\lambda}(\varepsilon N)e_{T}\). The desired result now follows by combining these observations. In our notation, Matsumoto defines the Orthogonal Weingarten function as an element \(\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)\in\mathbb{C}[\mathrm{S}_{n}]\) given by the formula \[\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta):=|\mathcal{H}_{n}|\sum_{ \begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\zeta)\neq 0\end{subarray}}D_{\lambda}(\zeta)^{-1}P_{2\lambda}P_{ \mathcal{H}_{n}}.\] (A.5) This element is \(\mathcal{H}_{n}\) bi-invariant, that is \(h\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)=\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)= \mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)h\) for all \(h\in\mathcal{H}_{n}\) (these identities are equivalent to (A.1)). The second identity follows since \(P_{\mathcal{H}_{n}}h=P_{\mathcal{H}_{n}}\) for any \(h\in\mathcal{H}_{n}\). The first identity follows since \(P_{2\lambda}\) is central, so that \(P_{2\lambda}P_{\mathcal{H}_{n}}=P_{\mathcal{H}_{n}}P_{2\lambda}\), combined with \(hP_{\mathcal{H}_{n}}=P_{\mathcal{H}_{n}}\) for all \(h\in\mathcal{H}_{n}\). 
Combining the \(\mathcal{H}_{n}\) bi-invariance with the fact that the collection \((\sigma_{\pi},\pi:[n]\to[n])\) forms a complete set of coset representatives of \(\mathcal{H}_{n}\) as a subgroup of \(\mathrm{S}_{n}\) (as mentioned in the beginning of [13, Section 2.2.1]), we may express \[\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)=\sum_{\pi:[n]\to[n]}\mathrm{Wg}^{\mathrm{O }}(\sigma_{\pi};\zeta)\mathcal{H}_{n}\sigma_{\pi}.\] From this, we obtain (using that by definition, \(\mathcal{H}_{n}\) stabilizes \(\pi_{0}\) for the first identity, and equation (A.2) for the second) \[[\pi\ \pi_{0}]\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)=|\mathcal{H}_{n}|\sum_{\pi^ {\prime}:[n]\to[n]}[\pi\ \pi_{0}]\sigma_{\pi^{\prime}}\mathrm{Wg}^{\mathrm{O}}(\sigma_{\pi^{\prime}}; \zeta)=|\mathcal{H}_{n}|\sum_{\pi^{\prime}:[n]\to[n]}[\pi\ \pi^{\prime}]\mathrm{Wg}^{ \mathrm{O}}_{\zeta}(\pi_{0},\pi^{\prime}).\] On the other hand, inserting equation (A.5), we have the formula \[[\pi\ \pi_{0}]\mathrm{Wg}^{\mathrm{O}}(\cdot;\zeta)=|\mathcal{H}_{n}|[\pi\ \pi_{0}]\sum_{ \begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(z)\neq 0\end{subarray}}D_{\lambda}(\zeta)^{-1}P_{2\lambda}P_{ \mathcal{H}_{n}}.\] Upon equating the previous two identities (and using that \(P_{2\lambda}\) is central), we obtain \[\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\zeta}(\pi_{0},\pi^{ \prime})[\pi\ \pi^{\prime}]=[\pi\ \pi_{0}]\sum_{\begin{subarray}{c}\lambda \vdash\frac{n}{2}\\ D_{\lambda}(\zeta)\neq 0\end{subarray}}D_{\lambda}(\zeta)^{-1}P_{\mathcal{H}_{n}}P_{2\lambda}.\] Setting \(\zeta=\varepsilon N\) and applying the representation \(\rho_{\varepsilon}\) to both sides of the identity, we obtain \[\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\varepsilon N}(\pi_{0}, \pi^{\prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}])=\rho_{\varepsilon}([\pi\ \pi_{0}])\sum_{ \begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}D_{\lambda}(\varepsilon N)^{-1} \rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{\varepsilon}(P_{2\lambda}).\] (A.6) We now claim that \[\sum_{\begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}D_{\lambda}(\varepsilon N)^{-1} \rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{\varepsilon}(P_{2\lambda})=\sum _{\lambda\vdash\frac{n}{2}}\rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{ \varepsilon}(P_{2\lambda})\rho_{\varepsilon}(X_{\varepsilon})^{-1}.\] (A.7) Given this claim, we obtain that (A.6) is further equal to \[\rho_{\varepsilon}([\pi\ \pi_{0}])\sum_{\lambda\vdash\frac{n}{2}} \rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{\varepsilon}(P_{2\lambda})\rho_{ \varepsilon}(X_{\varepsilon})^{-1} =\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{ \varepsilon}(X_{\varepsilon})^{-1}\] \[=\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{\varepsilon}(X_{ \varepsilon})^{-1},\] where we used (A.3) in the second equality and the fact that \(\mathcal{H}_{n}\) by definition stabilizes \(\pi_{0}\) in the second. Combining the previous few identities, we see that \[\sum_{\pi^{\prime}:[n]\to[n]}\mathrm{Wg}^{\mathrm{O}}_{\zeta}(\pi_{0},\pi^{ \prime})\rho_{\varepsilon}([\pi\ \pi^{\prime}])=\rho_{\varepsilon}([\pi\ \pi_{0}])\rho_{ \varepsilon}(X_{\varepsilon})^{-1},\] which is precisely Lemma 6.38. 
To see the claim (A.7), first note that by (A.4), we have that \[\sum_{\begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}D_{\lambda}(\varepsilon N)^{-1} \rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{\varepsilon}(P_{2\lambda})\rho_{ \varepsilon}(X_{\varepsilon})=\sum_{\begin{subarray}{c}\lambda\vdash\frac{n}{2 }\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}\rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{ \varepsilon}(P_{2\lambda}).\] As mentioned in the proof of Lemma 6.47, \(\rho_{\varepsilon}(X_{\varepsilon})\) is always invertible, and thus the above implies \[\sum_{\begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}D_{\lambda}(\varepsilon N)^{-1} \rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{\varepsilon}(P_{2\lambda})=\sum_ {\begin{subarray}{c}\lambda\vdash\frac{n}{2}\\ D_{\lambda}(\varepsilon N)\neq 0\end{subarray}}\rho_{\varepsilon}(P_{\mathcal{H}_{n}})\rho_{ \varepsilon}(P_{2\lambda})\rho_{\varepsilon}(X_{\varepsilon})^{-1}.\] To finish, it suffices to show that for any \(\lambda\vdash\frac{n}{2}\) such that \(D_{\lambda}(\varepsilon N)=0\), we have that \(\rho_{\varepsilon}(P_{2\lambda})=0\). In the case \(\varepsilon=1\), this follows because (as observed in the proof of Lemma 2.30) \(\rho(P_{2\lambda})=0\) unless \(\ell(2\lambda)\leqslant N\), and one may directly check that \(\ell(2\lambda)\leqslant N\) implies \(D_{\lambda}(N)\neq 0\) (the worst case is the box at location \((i,j)=(\ell(2\lambda),1)\)). Next, suppose \(\varepsilon=-1\). We claim that \(\rho_{-}(P_{2\lambda})=\rho(P_{(2\lambda)^{\prime}})\), where \((2\lambda)^{\prime}\) is the conjugate partition to \(2\lambda\). Given this claim, we obtain that \(\rho_{-}(P_{2\lambda})=0\) unless \(\ell((2\lambda)^{\prime})\leqslant N\). Note that \(\ell((2\lambda)^{\prime})=w(2\lambda)=2w(\lambda)\), where \(w(\lambda)\) is the number of columns of \(\lambda\). By direct calculation, \(2w(\lambda)\leqslant N\) implies that that \(D_{\lambda}(-N)\neq 0\) (the worst case is the box at location \((i,j)=(1,w(\lambda))\)). To see why \(\rho_{-}(P_{2\lambda})=\rho(P_{(2\lambda)^{\prime}})\), note that \(\rho_{-}(\sigma)=\operatorname{sgn}(\sigma)\rho(\sigma)\), and so \[\rho_{-}(P_{2\lambda})=\rho\bigg{(}\frac{\chi_{2\lambda}(\operatorname{id})}{ n!}\sum_{\sigma\in\operatorname{S}_{n}}\operatorname{sgn}(\sigma)\chi_{2 \lambda}(\sigma)\sigma\bigg{)}.\] Using the classical fact that \(\chi_{(2\lambda)^{\prime}}(\sigma)=\operatorname{sgn}(\sigma)\chi_{2\lambda}(\sigma)\), the above is seen to be equal to \(\rho(P_{(2\lambda)^{\prime}})\), as desired.
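As a quick sanity check on the two non-vanishing criteria used above, one can evaluate \(D_{\lambda}(\pm N)\) directly for small partitions. The sketch below is not part of the original argument; the helper functions `partitions` and `D` and the ranges scanned are purely illustrative.

```python
def partitions(n, max_part=None):
    """Yield all integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def D(lam, zeta):
    """D_lambda(zeta) = product over boxes (i, j) of lambda of (zeta + 2j - i - 1),
    with rows i and columns j counted from 1."""
    val = 1
    for i, row in enumerate(lam, start=1):
        for j in range(1, row + 1):
            val *= zeta + 2 * j - i - 1
    return val

# ell(2*lambda) equals ell(lambda) (number of rows), and w(lambda) = lambda_1 (number of columns).
for half_n in range(1, 6):
    for lam in partitions(half_n):
        for N in range(1, 9):
            if len(lam) <= N:          # ell(2*lambda) <= N  =>  D_lambda(N) != 0
                assert D(lam, N) != 0
            if 2 * lam[0] <= N:        # 2*w(lambda) <= N    =>  D_lambda(-N) != 0
                assert D(lam, -N) != 0
print("non-vanishing criteria hold on all tested partitions")
```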
2303.00616
Prediction of SLAM ATE Using an Ensemble Learning Regression Model and 1-D Global Pooling of Data Characterization
Robustness and resilience of simultaneous localization and mapping (SLAM) are critical requirements for modern autonomous robotic systems. One of the essential steps to achieve robustness and resilience is the ability of SLAM to have an integrity measure for its localization estimates, and thus, have internal fault tolerance mechanisms to deal with performance degradation. In this work, we introduce a novel method for predicting SLAM localization error based on the characterization of raw sensor inputs. The proposed method relies on using a random forest regression model trained on 1-D global pooled features that are generated from characterized raw sensor data. The model is validated by using it to predict the performance of ORB-SLAM3 on three different datasets running on four different operating modes, resulting in an average prediction accuracy of up to 94.7\%. The paper also studies the impact of 12 different 1-D global pooling functions on regression quality, and the superiority of 1-D global averaging is quantitatively proven. Finally, the paper studies the quality of prediction with limited training data, and proves that we are able to maintain proper prediction quality when only 20 \% of the training examples are used for training, which highlights how the proposed model can optimize the evaluation footprint of SLAM systems.
Islam Ali, Bingqing Wan, Hong Zhang
2023-03-01T16:12:47Z
http://arxiv.org/abs/2303.00616v1
Prediction of SLAM ATE Using an Ensemble Learning Regression Model and 1-D Global Pooling of Data Characterization ###### Abstract Robustness and resilience of simultaneous localization and mapping (SLAM) are critical requirements for modern autonomous robotic systems. One of the essential steps to achieve robustness and resilience is the ability of SLAM to have an integrity measure for its localization estimates, and thus, have internal fault tolerance mechanisms to deal with performance degradation. In this work, we introduce a novel method for predicting SLAM localization error based on the characterization of raw sensor inputs. The proposed method relies on using a random forest regression model trained on 1-D global pooled features that are generated from characterized raw sensor data. The model is validated by using it to predict the performance of ORB-SLAM3 on three different datasets running on four different operating modes, resulting in an average prediction accuracy of up to 94.7%. The paper also studies the impact of 12 different 1-D global pooling functions on regression quality, and the superiority of 1-D global averaging is quantitatively proven. Finally, the paper studies the quality of prediction with limited training data, and proves that we are able to maintain proper prediction quality when only 20 % of the training examples are used for training, which highlights how the proposed model can optimize the evaluation footprint of SLAM systems. ## I Introduction Simultaneous localization and mapping (SLAM) is a fundamental building block that gives modern robotic systems the ability to estimate its location while building a map of the navigated environment [1]. Over the last few decades, SLAM research has evolved significantly in terms of architecture, accuracy, requirements, and challenges [2]. One of the major challenges faced by SLAM is robustness and resilience of the system when deployed in the real world [3]. Robustness of SLAM is the ability of the system to provide acceptable performance when operating under pre-defined conditions. While resilience is the ability of a system to converge to an acceptable performance when operating outside of the pre-defined conditions, which implicitly highlights the importance of having internal error prediction and tolerance mechanisms in SLAM to allow for this convergence to happen effectively [4]. For that reason, researchers have directed their attention towards the introduction of integrity indicators of either some blocks in the SLAM pipeline [5], or the final SLAM outcome [6]. _Absolute Trajectory Error (ATE)_[7] is considered the defacto metric for measuring the accuracy of localization in SLAM and is used by most of state-of-the-art solutions such as ORB-SLAM3 [8], VINS-Mono [9], among many others. Therefore, on-line prediction of SLAM ATE is an integral part of the quest to reach robust and resilient SLAM as it provides SLAM systems with internal indicators of the integrity of their estimates, which can be used to correct estimation errors, govern switching between localization alternatives, and improve robotics safety when deployed. In this paper, we propose a novel methodology for predicting the absolute trajectory error (ATE) of a SLAM algorithm using 1-D global pooling of input data characteristics and an ensemble learning-based regression model. 
This methodology is motivated by the high correlation observed and reported in our previous work [4] between the SLAM performance of multiple algorithms on one side, and the characterization metrics measured on different SLAM datasets on the other side. Throughout this work, several design decisions are made such as the selection of the 1-D global pooling function and the generation of sufficient examples for regression training. To ensure proper selection of these choices, a quantitative analysis of the impact of different options on the final accuracy is conducted and presented in this work. The rest of the paper is organized as follows. Section II presents a brief review of related work. Then, Section III provides a background overview. Next, Section IV describes our proposed method in detail. After that, results are presented and discussed in Section V. Finally, Section VI presents our conclusions from this study. ## II Related Work Predicting and estimating system performance is crucial for the safe use of robots and autonomous systems and has been extensively studied in closely related disciplines. For example, navigation systems like INS/GPS use statistical models to estimate errors in sensor measurements and improve localization through Kalman filters [10][11]. Additionally, integrity measures for robot localization have been proposed to minimize deployment risks in real-world scenarios [12]-[13]. Despite being essential for robust and resilient perception and navigation, there is limited research on integrity of localization outcomes of SLAM due to unclear design requirements, as reported in [14]. The existing literature on integrity measures for SLAM has primarily focused on two areas. The first area is predicting the overall performance of the entire SLAM pipeline. This approach aims to evaluate the integrity of the entire SLAM pipeline as a whole and provide a single measure for SLAM output integrity. For instance, the travelled path of a robot is modeled using Voronoi Graphs to train a model for predicting SLAM performance [6]. Training data are acquired using the method outlined in [15] which generates training examples by simulating a SLAM algorithm several times on selected environments. The same training examples generation methodology is used in [16] where a number of univariate and multivariate linear regression models are trained to predict the normalized relative translational error and the absolute trajectory errors. The two approaches proposed are useful in determining the overall performance of SLAM but would not allow for on-line prediction of localization integrity. That is due to their reliance on an overall descriptor of the whole path the robot will traverse rather than incremental raw sensor data (e.g. images). On the other hand, other methods are proposed to identify an upper bound of the localization uncertainty in SLAM providing a guarantee of the system performance when the spatial distribution of features is known [17]. The second area is investigating the integrity of specific components within the SLAM pipeline. This approach aims to evaluate the integrity of individual SLAM components and how the quantification of this integrity measure can be utilized to properly correct potential anomalies in localization estimates. 
For examples, in [5] a learning-based integrity measure for visual loop closure is proposed to decrease false positives and ultimately improve the overall performance of SLAM localization accuracy through the reduction of loop closure false positives. Our approach is unique in both its design and goal. We use a sequence of key-frame measurements such as images and/or inertial measurements as inputs, which are characterized to generate a corresponding characterization matrix of the sequence traversed. Then, we apply a 1-D global pooling function on the rows of the characterization matrix, which results in a 1-D vector descriptor of the sequence. After that, the descriptor is sent to a prediction model to predict the expected ATE at the end of the input sequence. Consequently, this approach allows for on-line monitoring of the system performance, and provides an integrity measure of localization estimates at any time. This is essential for ensuring the robustness and resilience of the SLAM system, particularly in challenging environments or under uncertain conditions. ## III Background To provide a foundation for this study, this section explores two concepts which are: 1-D global pooling and random forests regression. We discuss how these techniques are used and examine their applicability to the proposed work. ### _1-D Global Pooling_ This technique was introduced in [18] as a solution to the problem of overfitting in neural networks. The technique does this by reducing the spatial dimension of a feature map to a single value using a global pooling function (e.g. average, min, max...etc.) across all features. Essentially, this reduction replaces a detailed feature map with an abstract, descriptive characteristic of it. Learning those characteristics instead of the examples themselves was proven to enhance the generalization of the learnt model [19]. In this work, we utilize this technique and examine the impact of 12 different global pooling functions on prediction quality. Moreover, we examine the impact of concatenating the outcomes of all global pooling functions into a single extended feature vector which provides the learning algorithm with more abstract information about input examples. ### _Random Forest Regression_ Random forest is an ensemble learning technique that relies on the concept of bagging [20] where several decision tree prediction models are trained on independent random sub-samples of the input features in the bootstrapping phases. Bootstrapping is a statistical technique which involves random sub-sampling of the training data pool while allowing replacement [21] to generate bootstraps. For each decision tree in the random forest, a bootstrap is selected for training on a random sub-set of available descriptor features. The outcomes of all decision trees are then combined either by averaging (regression) or by majority voting (classification) [22]. Due to the independence and low correlation among decision trees, the prediction error is not accumulated or propagated, thus resulting in a lower prediction error. Random forests provide a way to balance accuracy and generalization, and was proven to be superior to competing methods. For instance, they were proven to outperform neural networks [23] on tabular structured data, and handle overfitting well when compared to boosting algorithms [24]. Fig. 
1: A block diagram of the proposed ATE prediction methodology ## IV Methodology This section presents the proposed methodology to predict ATE in SLAM, along with an overview of the design choices made in this work. Figure 1 illustrates the proposed pipeline and shows how the different system components interact with each other. The figure provides a visual representation of the flow of data and processes in the proposed approach. Given a dataset \(\mathcal{D}\) of \(N\) sequences, defined as: \[\mathcal{D}=\{(\mathbf{x_{i}},y_{i}),i=1,...,N\} \tag{1}\] where \(\mathbf{x_{i}}\) is a characterization matrix of size \((m\times n)\) corresponds to \(m\) characterization metrics applied on an input sequence of \(n\) measurements (e.g. images/inertial measurements) and \(y_{i}\) is a scalar corresponding to the ATE of the trajectory. Each characterization matrix \(\mathbf{x_{i}}\) is transformed to a 1-D vector \(\mathbf{z_{i}}\) of size \((m\times 1)\) by applying a 1-D global pooling function \(f(.)\) as such: \[\mathbf{z_{i}}=f(\mathbf{x_{i}}) \tag{2}\] Consequently, the transformed dataset \(\mathcal{D^{\prime}}\) is defined as: \[\mathcal{D^{\prime}}=\{(\mathbf{z_{i}},y_{i}),i=1,...,N\} \tag{3}\] We seek to learn a model to predict SLAM ATE \(\hat{y_{i}}\) given an unseen 1-D vector \(\mathbf{z_{i}}\) from the transformed dataset \(\mathcal{D^{\prime}}\). ### _Data example generation_ To generate examples for training the ATE prediction model, we run a SLAM algorithm on all sequences available in several datasets. For each of the run sequences, a number of sub-sequences are calculated that corresponds to the keyframes selected by the SLAM algorithm. Thus, we utilize the concept of sub-trajectories [7] in order to expand the number of training examples. Given an input sequence of size \(\mathcal{K}\) keyframes, we can extract \(\mathcal{K}\) examples, where each example is a sub sequence of keyframes (\(kf\)) in the inclusive range of \([(kf)_{1},(kf)_{k}]\) where \(k=\{1,2...K\}\). The corresponding ATE of each sub trajectory is calculated and is associated with each trajectory to construct a training example. Figure 2 illustrate the process in detail and shows how the training examples extraction and ATE association take place. Sub-trajectories used for model training and testing are generated sequentially from available data sequences in a dataset running in a specific operation mode. The split of the available data for training and testing is done without randomization, meaning that training and testing are conducted on sub-sequences generated from the same dataset but from different sequences. ### _Sequence characterization and 1-D global pooling_ Each generated sub-sequence is considered to be an independent sequence of images/sensor readings. We apply the characterization framework introduced in [4] which contains an array of characterization metrics (e.g. measuring brightness, contrast... etc.) that generate a characterization vector for each image/sensor reading in the sequence. As seen in Figure 3, characterization generates a 2D matrix of size \((m\times n)\), where each row represents a characterization metric outcome, and each column represents an input sub-sequence. Due to the variability in sequence sizes, the generated 2D matrices are not of the same dimension. Thus, to reduce the dimensionality and provide unified feature vectors for training, we apply a 1-D global pooling function on 2-D matrices to generate 1-D vectors of unified size of \((m\times 1)\). 
This is achieved by reducing each row in the characterization matrix into a single scalar value using the pooling function. In this work, we utilize one of 12 different pooling functions that include statistical pooling functions (e.g. mean, min, max... etc.) and diversity pooling functions (e.g. entropy, Simpson diversity index and its variants). In order to provide the prediction model with more descriptive features, we also concatenate all 1-D global pooled features into a single feature vector, study its impact on the prediction quality, and compare its performance to using a single 1-D global pooling function. ### _Removal of highly correlated features_ The existence of collinearity between independent input features is a potential problem in regression and can lead to numerically unstable results [25]. To detect highly correlated features, we calculate the Pearson correlation coefficient (PMCC) [26] between each feature and all other available features. After that, highly correlated features are grouped where the PMCC between any two features in a group is greater than a threshold of \(95\%\). Then, only one feature is selected from each group which is then used for the training of the regression model in order to ensure prediction stability of the trained regression model. Fig. 3: Generation of feature vectors using input sub-sequence characterization and 1-D global pooling Fig. 2: Extraction of training examples based on utilizing sub-trajectory data and corresponding ATE ### _Random forest regression model_ A random forest regression model is trained and tuned on 70 % of the data examples available for each test case. After that, the model is tested on the remaining unseen 30 % in order to determine its performance. We utilize the random forest regression implementation provided in the scikit-learn library [27] due to its efficiency and ease-of-use. It also exposes a number of hyperparameters that we can tune for optimal performance of the model. Tuning the random forest hyperparameters is essential to achieve the best prediction performance. For that, we perform a randomized grid search with cross validation on the multi-dimensional space of hyperparameters provided in Table I. This method is proven efficient in selecting the best hyperparameters while maintaining reasonable complexity and execution time [28]. ### _Performance Evaluation_ To quantitatively evaluate the regression quality of our method, four different metrics are utilized, which are defined as follows. 1. Coefficient of determination (\(R^{2}\)) \[R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}}\] (4) 2. Mean absolute percentage error (\(MAPE\)) \[MAPE=\frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_{i}-\hat{y_{i}}}{y_{i}}\right|\] (5) 3. Mean absolute error (\(MAE\)) \[MAE=\frac{1}{n}\sum_{i=1}^{n}\left|y_{i}-\hat{y_{i}}\right|\] (6) 4. Root mean squared error (\(RMSE\)) \[RMSE=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}{n}}\] (7) Where \(y\) is the ATE ground truth, \(\hat{y}\) is the predicted ATE, \(\bar{y}\) is the mean of the ground-truth ATE over the test samples, and \(n\) is the number of testing samples. Those metrics differ in terms of their allowable range, and their indication of the quality of performance. Together, they give a clear indication of the performance and suppress any corner case or anomalies any metric can suffer from. ## V Experimental Results and Discussion In this section, we describe and discuss our experimental setup and associated experimental results.
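Before turning to the experiments, a minimal sketch of the feature-construction steps of Section IV (1-D global pooling of an \((m\times n)\) characterization matrix followed by correlated-feature removal) may be helpful. This is an illustration rather than the authors' code: the pooling functions shown are only a representative subset of the 12 used in the paper, and all names and data are placeholders.

```python
import numpy as np

def global_pool(char_matrix, mode="mean"):
    """Reduce an (m x n) characterization matrix to an (m,) feature vector
    by pooling each row (one characterization metric) across the n frames."""
    pools = {
        "mean": np.mean,
        "min": np.min,
        "max": np.max,
        "median": np.median,
        "std": np.std,
    }
    if mode == "entropy":
        # Shannon entropy per row; assumes non-negative metric values.
        def entropy(row):
            p = row / (row.sum() + 1e-12)
            return -(p * np.log(p + 1e-12)).sum()
        return np.apply_along_axis(entropy, 1, char_matrix)
    return pools[mode](char_matrix, axis=1)

def drop_correlated(features, threshold=0.95):
    """Greedily keep one representative from each group of features whose
    pairwise Pearson correlation magnitude exceeds the threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(features.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return features[:, keep], keep

# Example: 40 sub-sequences, each characterized by 15 metrics over a variable
# number of frames, pooled with the 1-D global average.
rng = np.random.default_rng(0)
pooled = np.stack([
    global_pool(rng.random((15, rng.integers(50, 200))), mode="mean")
    for _ in range(40)
])                                   # shape (40, 15): one feature vector per sub-sequence
filtered, kept_idx = drop_correlated(pooled, threshold=0.95)
print(pooled.shape, filtered.shape)
```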
As mentioned in Section IV, we run ORB-SLAM3 [8] on three different datasets and in four different modes of operation, resulting in 10 test cases as illustrated in Table II. For each test case, we examine each of the 12 different 1-D pooling functions, train and tune the random forest, and evaluate the model performance. Additionally, we study the impact of reducing the amount of training data on the ATE prediction quality to show how our proposed prediction model can still perform relatively well when limited data is available for training. The experimental results show that the proposed method is able to predict SLAM ATE with an accuracy of up to 94.7% which is a direct indication of the efficacy of our method, the validity of using the characterization metrics as data descriptors, and the proper choice of the 1-D global pooling function for the SLAM ATE prediction task. ### _Training data generation_ To generate examples for training the ATE prediction model, we ran ORB-SLAM3 [8] on all sequences available in three different datasets, which are: KITTI [29], EuroC-MAV [30], and TUM-VI [31]. We apply our proposed data example generation process, which resulted in a great increase in the number of available examples for training, testing, and validation. The method is applied on four different modes of ORB-SLAM3 [8] which are monocular, monocular-inertial, stereo, and stereo-inertial, resulting in 10 different test cases. Table II shows the number of training examples generated for each of the test cases. We also compare the random forest model against several other regression algorithms, each run with their default hyperparameter values provided in [27]. These algorithms are: dummy regression that takes the average of input features, linear regression, decision tree, random forest, Ada boosting, and gradient boosting. The evaluation is done using \(R^{2}\) and \(MAPE\) metrics to allow comparison of different test cases as they provide an absolute measure of performance regardless of the value and range of the predicted variable. As shown in Figure 4, random forests outperform other regression algorithms resulting in the highest \(R^{2}\) value and the lowest \(MAPE\) value as well. Additionally, we can clearly observe the overfitting problem of boosting algorithms [24] when we examine Ada boosting and gradient boosting performance compared to random forests. Another experiment included identifying if the algorithm is able to perform regression on a given test case. For that, we utilize the out-of-range concept for both \(R^{2}\) and \(MAPE\) as an indication for failed regression as such: \(R^{2}\in[0,1]\) and \(MAPE\in[0,1]\). In Figure 5, we can observe that random forests had the least amount of failed regressions when considering both metrics compared to other algorithms. In fact, we observed that the failure case for random forests happens when testing ORB-SLAM3 in stereo mode on TUM-VI due to the lack of enough samples for training. However, with further tuning, we are able to perform regression on this test case as well. ### _Performance comparison to baseline_ A standard practice in machine learning is to compare regression results to a baseline model to prove the integrity of the learning process [32]. For that reason, we compare the outcomes of the random forest model to two different baselines. The first is a simple dummy regression model that uses the mean of the input features as the regression output, while the second is a linear regression model with default hyperparameter settings.
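A rough sketch of this training, tuning, and comparison loop using scikit-learn is shown below. The hyperparameter ranges are illustrative placeholders rather than the ones in Table I, the random arrays stand in for the pooled characterization features, and `DummyRegressor` (which predicts the mean training target) is used as a stand-in for the paper's dummy baseline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import (r2_score, mean_absolute_percentage_error,
                             mean_absolute_error, mean_squared_error)

def evaluate(model, X_tr, y_tr, X_te, y_te):
    """Fit a model and report the four metrics of Section IV-E."""
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return {
        "R2": r2_score(y_te, pred),
        "MAPE": mean_absolute_percentage_error(y_te, pred),
        "MAE": mean_absolute_error(y_te, pred),
        "RMSE": np.sqrt(mean_squared_error(y_te, pred)),
    }

# Sequential 70/30 split (no shuffling), as in the paper.
rng = np.random.default_rng(1)
X = rng.random((500, 15))   # placeholder pooled feature vectors
y = rng.random(500)         # placeholder sub-trajectory ATE values
split = int(0.7 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Randomized search with cross-validation over an illustrative hyperparameter space.
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "max_depth": [None, 10, 20, 40],
        "min_samples_leaf": [1, 2, 4],
        "max_features": ["sqrt", 1.0],
    },
    n_iter=20, cv=5, random_state=0,
)
for name, model in [("dummy", DummyRegressor(strategy="mean")),
                    ("linear", LinearRegression()),
                    ("random forest", search)]:
    print(name, evaluate(model, X_tr, y_tr, X_te, y_te))
```

With the actual pooled features in place of the random placeholders, this is the kind of baseline comparison summarized in Figures 4 and 6.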
As shown in Figure 6, we can observe the superiority of the random forest model compared to the two baseline models in terms of all considered performance metrics, which is a direct indication of the value of using our model for the task in hand. ### _Impact of limited training data on ATE prediction_ Not surprisingly, reducing the data available for training will reduce the quality of ATE prediction. However, the question is how much reduction should one expect in case of having limited amount of data available for training. In addition to that, we seek to prove that the proposed method can be utilized to reduce evaluation efforts of SLAM by training on a small portion of the dataset and the prediction of the rest of the dataset. In this experiment, we fixed the pooling function to be 1-D global average pooling due to its superior performance compared to other pooling functions. We vary the ratio of training data relative to a testcase from \(10\%\) to \(90\%\) and measure the four regression quality metrics. As shown in Figure 8, we can observe the normal behaviour of increased prediction quality when more training data is utilized. When we look at the \(R^{2}\) and \(MAPE\) metrics, we can observe that we are able to properly predict ATE while training on only \(20\%\) of each test case. In that case, the reduction in \(R^{2}\) is limited to only \(6.51\%\) on average. On the other hand, \(MAPE\) also dropped by only \(4.7\%\). Moreover, we can also observe that utilizing less than \(20\%\) of the data for training will produce unreliable results which is indicated by out-of-range values of \(R^{2}\) metric. This experiments directly implies that we are able to reduce the Fig. 5: Failure rate statistics of different regression models Fig. 6: Comparison of random forest regression model performance vs. selected baseline models \begin{table} \begin{tabular}{c c c|c c} \cline{2-5} & \multicolumn{2}{c|}{**Baseline**} & \multicolumn{2}{c}{**Random forest**} \\ \hline **Mode - Dataset** & \(R^{2}\) & \(MAPE\) & \(R^{2}\) & \(MAPE\) \\ \hline M-EuroC & 0.1557 & 0.3683 & 0.9973 & 0.0106 \\ M-KITTI & 0.7621 & 0.3710 & 0.9992 & 0.0079 \\ M-TUMVI & 0.9036 & 0.2503 & 0.9663 & 0.1246 \\ \hline MI-EuroC & -0.0637 & 0.2909 & 0.9888 & 0.0199 \\ MI-TUMVI & 0.8571 & 0.2539 & 0.9835 & 0.1221 \\ \hline S-EuroC & 0.9899 & 0.2699 & 0.9993 & 0.0125 \\ S-KITTI & 0.9559 & 0.1627 & 0.9999 & 0.0013 \\ S-TUMVI & 0.6202 & 0.1904 & 0.8458 & 0.0359 \\ \hline SI-EuroC & 0.6864 & 0.1954 & 0.9900 & 0.0189 \\ SI-TUMVI & 0.9381 & 0.1681 & 0.9723 & 0.1979 \\ \hline **Mean** & **0.6805** & **0.2521** & **0.9743** & **0.0552** \\ \hline \end{tabular} \end{table} TABLE III: Comparison between using the ATE at 20 % of the trajectory as a predictor for the whole trajectory ATE vs. our proposed random forest method when trained on 20 % of the available data for all testcases evaluation efforts of SLAM by \(80\%\) while maintaining a decent level of confidence in ATE prediction quality. Due to the nature of the ATE error and how it evolves over time, we aspire to compare our model when trained on only 20 % of the data to the ATE observed after traversing 20 % of the a given data example. This experiments highlights the need for training a prediction model to predict the ATE as the observed ATE during the course of the trajectory is not correlated to that at the end of the trajectory. For that, we compute \(R^{2}\) and \(MAPE\) between the ATE at 20 % of the trajectory and the ATE at the end of the trajectory. 
Then, we compare the outcomes with that of our prediction model. As shown in Table III, we can observe that the prediction model outperforms the baseline (ATE at 20 %) and provides a more accurate outcomes with higher confidence levels. ### _Evaluation of 1-D global pooling functions_ Another major component of our proposed solution is the 1-D global pooling of training sequences. As described in Section IV, we examine 12 different pooling functions where 11 of them are either statistical or diversity indicators, while the last one is the concatenation of the outcomes of all of those functions. In order to compare the impact of those functions on the regression quality, all test cases are repeated using each pooling function, and performance is evaluated and compared. Figure 7 shows the comparison results for each metric and highlights the mean and median performance across all test cases for each pooling function. In this figure, each box in the boxplot is constructed from 10 data points where each point represents one of the testcases outlined in Table II. For each testcase, training is done on 70 % of the data and testing is done on the remaining 30 %. It can be observed that the concatenated version of the pooling functions scored a close performance to both the minimum and mean 1-D global pooling functions. However the mean/average 1-D global function provided a balanced and consistent performance when all evaluation metrics are considered. Thus, the comparison between the actual ATE and the predicted ATE using 1-D global average pooling and random forests is presented in Figure 9 for the 10 testcases we examined. Moreover, we present the kernel distribution estimate (KDE) of the absolute error percentage of all testing example in each of the 10 testcases in Figure 10. One can observe that we are able to predict the ATE value within an average error of \(6\%\) to \(7\%\) of the actual ATE with peak performance of an average error that is less than \(5\%\) of the actual ATE value in 7 out of 10 testcases examined. Fig. 8: Effect of reducing training data size on ATE prediction quality using 1-D global average pooling Fig. 7: Comparison of regression quality for different 1-D global pooling functions after training on 70 % of the data ## VI Conclusions In this paper, the problem of performance prediction in SLAM is addressed as a fundamental requirement for robustness and resilience in SLAM. The study starts by giving a brief review of the literature related to this topic, and provides a basis for the proposed algorithm. After that, we introduce our methodology for predicting SLAM ATE using an ensemble learning technique and 1-D global pooling of input data characterization results. Our methodology is first compared to a multitude of regression models to validate our selection of random forests as our regression model. Then, the methodology is tested on 10 different test cases and using 12 different 1-D global pooling functions, resulting in 120 different experiments. The experimental results showed a superiority in using random forest compared to our selected baseline and provided an evidence for the ability to predict ORB-SLAM3 ATE using characterized and pooled features with accuracy that can reach 94.7% on average for test cases examined. The paper also evaluated the impact of the selection of a certain pooling function on the regression quality of ATE. It is shown that very limited gain is observed by concatenating multiple pooling functions. 
It also provided evidence on the suitability of using 1-D global averaging for achieving a good balance on all performance metrics measured. Additionally, the paper studied the impact of reducing the amount of training data on ATE prediction quality, and it is shown that we are able to use only \(20\%\) of the data for training and maintain an average prediction accuracy of \(90.4\%\), with a degradation of only \(4.7\%\) compared to training on \(90\%\) of available data. Finally, the study highlights the suitability of the used characterization results to be treated as a data sequence descriptor. Fig. 10: Absolute error percentage of all examined testcases using 1-D global average pooling and random forests after training on 70 % of the data Fig. 9: Actual vs. predicted ATE for all evaluated testcases using the 1-D global average pooling and random forests after training on 70 % of the data
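As a complement to the description of Section IV-A, the sub-trajectory example generation can be sketched as follows. The ATE here is a plain root-mean-square translational error (the rigid alignment step of the standard ATE computation is omitted for brevity), and all names and the toy data are illustrative.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square translational error between estimated and ground-truth
    keyframe positions (the usual rigid-body alignment step is omitted here)."""
    err = est_xyz - gt_xyz
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

def make_examples(char_matrix, est_xyz, gt_xyz):
    """For an input sequence with K keyframes, build K training examples:
    the characterization of the prefix kf_1..kf_k paired with the ATE of that prefix."""
    K = est_xyz.shape[0]
    examples = []
    for k in range(1, K + 1):
        x_k = char_matrix[:, :k]                  # (m x k) characterization of the prefix
        y_k = ate_rmse(est_xyz[:k], gt_xyz[:k])   # ATE of the sub-trajectory
        examples.append((x_k, y_k))
    return examples

# Toy sequence: 12 characterization metrics over 30 keyframes.
rng = np.random.default_rng(2)
char_matrix = rng.random((12, 30))
gt_xyz = np.cumsum(rng.normal(size=(30, 3)), axis=0)
est_xyz = gt_xyz + rng.normal(scale=0.05, size=(30, 3))
examples = make_examples(char_matrix, est_xyz, gt_xyz)
print(len(examples), examples[-1][0].shape, round(examples[-1][1], 4))
```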
2307.03830
3d Quantum Gravity Partition Function at 3 Loops: a brute force computation
The partition function of 3-dimensional quantum gravity has been argued to be 1-loop exact. Here, we verify the vanishing of higher-orders in perturbation theory by explicit computation in the second-order, metric formulation at 3-loops. The number of 1-particle irreducible Feynman diagrams involving both gravitons and ghosts turns out to be 17. Using dimensional regularization, we solve all the diagrams. At 2-loops, we find that all such diagrams vanish separately after regularization. At 3-loops, in contrast, a series of remarkable cancellations between different diagrams takes place, with 9 diagrams beautifully conspiring to yield a vanishing result. Our techniques are suitable to be applied to higher loops as well as to similar computations in higher dimensions.
Mauricio Leston, Andrés Goya, Guillem Pérez-Nadal, Mario Passaglia, Gaston Giribet
2023-07-07T20:57:22Z
http://arxiv.org/abs/2307.03830v3
# \(3d\) Quantum Gravity Partition Function at 3 Loops : a brute force computation ###### Abstract The partition function of 3-dimensional quantum gravity has been argued to be 1-loop exact. Here, we verify the vanishing of higher-orders in perturbation theory by explicit computation in the second-order, metric formulation at 3-loops. The number of 1-particle irreducible Feynman diagrams involving both gravitons and ghosts turns out to be 17. Using dimensional regularization, we solve all the diagrams. At 2-loops, we find that all such diagrams vanish separately after regularization. At 3-loops, in contrast, a series of remarkable cancellations between different diagrams takes place, with 9 diagrams beautifully conspiring to yield a vanishing result. Our techniques are suitable to be applied to higher loops as well as to similar computations in higher dimensions. ## I Introduction The partition function of 3-dimensional quantum gravity has been shown to be 1-loop exact. In the case of the theory around Anti-de Sitter (AdS) space, this was proven by Maloney and Witten by the explicit computation of the sum over configurations [1], and this turns out to be consistent with the following argument : In 3-dimensional gravity around AdS, in addition to the classical action, the partition function receives contributions of states that are Virasoro descendants of the background geometry. This follows from the analysis of the asymptotic symmetries in AdS performed by Brown and Henneaux [2]. Such contributions, often referred to as boundary gravitons, organize themselves as Virasoro descendants, and their logarithm, being independent of the Planck length, is identified as the 1-loop contribution to the effective action. This insight led Maloney and Witten to argue that, without further contributions, the full gravity partition function around AdS turns out to be 1-loop exact, with the only non-vanishing contributions being the classical action and the Virasoro character, cf. [1; 3; 4]. In the case of the theory with zero cosmological constant the 1-loop exactness of the \(3d\) gravity partition function was discussed by Witten in an earlier paper [6], where a computation in the first-order formulation was given, cf. [7]. Witten argued that the perturbative expansion in the theory must terminate at 1-loop. More recently, the authors of [5] found that the 1-loop determinant computation of the gravity partition function reproduces the character of the Bondi-Metzner-Sachs (BMS) group, namely the group of asymptotic diffeomorphisms preserving the boundary conditions of Minkowski space at null infinity [8; 9; 10]. This led to the conclusion that, as it happens in AdS, the partition function of \(3d\) Einstein gravity in flat space is also 1-loop exact, with the full effective action being given by the classical contribution plus a group character, cf. [11; 12; 13; 14]. However, it still remains to be seen how the 1-loop exactness of the partition function manifests itself in the second-order, metric formulation. As the authors of [5] stated it, it would be interesting to verify the 1-loop exacteness of the \(3d\) gravity from a direct gravitational computation. This is exactly the computation we will address in this paper : we will compute the partition function of \(3d\) gravity partition function around flat space in the metric formalism at third order in perturbation theory. That is to say, we will perform an explicit field theory computation of the gravitational effective action at 2- and 3-loops. 
The paper is organized as follows : In section II, we present the tools that will equip us for the perturbative computation. We write down the gravity action in a convenient form, discuss the Faddeev-Popov gauge fixing terms and the action for the ghost fields ; then, we write the vertices and the propagators for the ghost and the graviton ; all these ingredients suffice to derive the Feynman rules. In section III, we compute all the Feynman diagrams. At 2-loops, we find that all connected diagrams vanish separately after dimensional regularization, in agreement with previous computations in the literature. At 3-loops, in contrast, a series of remarkable cancellations between different diagrams takes place, with 9 1-particle irreducible (1PI) diagrams beautifully conspiring to yield a vanishing result. As our techniques are suitable to be applicable to higher loops as well as to higher dimensions, we briefly comment on that at the end of section III. Section IV contains our conclusions. ## II Perturbation Theory The Einstein-Hilbert gravitational action is \[S_{\rm EH}=-\frac{2}{\kappa^{2}}\int_{M_{d}}d^{d}x\sqrt{|g|}\,R\,+\,B_{\partial M _{d}}\,, \tag{1}\] where \(|g|\) is the determinant of the metric and \(R\) is the scalar curvature on the \(d\)-dimensional manifold \(M_{d}\). \(B_{\partial M_{d}}\) stands for boundary terms that render the variational principle well-posed. The coupling constant \(\kappa^{2}=32\pi\ell_{P}^{d-2}\) gives the Planck length, \(\ell_{P}\), in \(d\) spacetime di mensions. The quantity \(\kappa^{2}\hbar\) organizes the loop expansion. Hereafter, we set \(\hbar=1\) and keep track of powers of \(\kappa\). A convenient way to rewrite the Einstein-Hilbert action is the following \[S_{\rm EH}=-\frac{1}{2\kappa^{2}}\int_{M_{d}}d^{d}x\sqrt{|g|}\,g^{ mn}g^{ab}g^{rs}\Big{(}\partial_{m}g_{ab}\partial_{n}g_{rs}-\] \[\partial_{m}g_{ar}\partial_{n}g_{bs}+2\partial_{m}g_{br}\partial_ {a}g_{ns}-2\partial_{m}g_{na}\partial_{b}g_{rs}\Big{)}\] where we are excluding boundary terms [15]. We will perform an expansion of the gravitational field around a background metric \(\bar{g}_{ab}\), namely \[g_{ab}=\bar{g}_{ab}+\kappa\,h_{ab}\,. \tag{2}\] In our case, the background metric will be that of flat space, _i.e._\(\bar{g}_{ab}\equiv\eta_{ab}\) ; nevertheless, for completeness, let us write some formulae in full generality ; this will allow to show that the techniques we employ are applicable to computations around other maximally symmetric solutions. In the action above one replaces the perturbation around \(\bar{g}_{ab}\) and obtains an infinite series in \(h_{ab}\) ; this obviously follows from expanding \(\sqrt{|g|}\), \(g_{ab}\) and \(g^{ab}\) in terms of \(h_{mn}\). The first terms of the expansion come from \(|g|^{\frac{1}{2}}=|\bar{g}|^{\frac{1}{2}}(1+\frac{1}{2}\kappa\bar{g}^{mn}h_{mn }+\ldots)\) and \(g^{ab}=\bar{g}^{ab}-\kappa\bar{g}^{am}\bar{g}^{bn}h_{mn}+\ldots\), where the ellipsis stand for subleading (higher order) contributions. At order \(\mathcal{O}(\kappa^{0}h^{2})\) we have the canonically normalized quadratic kinematic operator ; at order \(\mathcal{O}(\kappa h^{3})\), the 3-graviton vertex ; at order \(\mathcal{O}(\kappa^{2}h^{4})\), the 4-graviton vertex, and so on and so forth. At 3-loops, there are be contributions up to order \(\mathcal{O}(\kappa^{4}h^{6})\) ; see diagram \(D_{4}^{(3)}\) below. Einstein-Hilbert action has to be supplemented with gauge-fixing terms. 
The piece of the full action that implements the gauge fixing reads \[S_{\rm gf}=\int_{M_{d}}d^{d}x\sqrt{|g|}\,f^{m}\bar{g}_{mn}f^{n}\,, \tag{3}\] where the gauge-fixing function \(f^{m}\), written in terms of the background metric \(\bar{g}_{mn}\) and the perturbation \(h_{mn}\), is given by \[f^{m}=\left(\bar{g}^{lm}\bar{\nabla}^{n}-\frac{1}{2}\bar{g}^{ln}\bar{\nabla}^{m}\right)h_{nl}\,; \tag{4}\] \(\bar{\nabla}\) is the covariant derivative compatible with the background metric \(\bar{g}\) ; namely \(\bar{\nabla}_{a}\bar{g}_{cb}=0\). The action for the ghost field \(c^{s}\) and the anti-ghost field \(\bar{c}^{l}\) is \[S_{\rm gh}=\int_{M_{d}}d^{d}x\sqrt{|g|}\,\bar{c}^{m}\,\frac{\delta f_{m}}{\delta h_{rs}}\,\mathcal{L}_{c}g_{rs}\,, \tag{5}\] where \(\mathcal{L}_{c}g_{rs}\) is the Lie derivative of the full metric \(g_{ab}\) with respect to the ghost field \(c^{l}\) \[\mathcal{L}_{c}g_{rs}=2\bar{g}_{l(s}\bar{\nabla}_{r)}c^{l}+c^{l}\bar{\nabla}_{l}h_{rs}+2h_{l(s}\bar{\nabla}_{r)}c^{l}\,. \tag{6}\] Then, up to a total derivative, the ghost action takes the form \[S_{\rm gh} =\int_{M_{d}}d^{d}x\sqrt{|g|}\,\Big{[}\bar{g}_{ls}\bar{c}^{s}\bar{\nabla}^{2}c^{l}+\bar{c}^{r}\bar{\nabla}_{l}\bar{\nabla}_{r}c^{l}-\bar{c}^{m}\bar{\nabla}_{m}\bar{\nabla}_{l}c^{l} \tag{7}\] \[-\kappa\Big{(}\bar{\nabla}^{r}c^{r}\bar{\nabla}_{l}h_{sr}c^{l}+\bar{\nabla}^{r}\bar{c}^{s}h_{ls}\bar{\nabla}_{r}c^{l}+\bar{\nabla}^{s}\bar{c}^{r}h_{ls}\bar{\nabla}_{r}c^{l}\] \[\qquad\qquad-\tfrac{1}{2}\bar{\nabla}_{m}\bar{c}^{m}\bar{\nabla}_{l}h_{r}^{r}\,c^{l}-\bar{\nabla}_{m}\bar{c}^{m}h_{ls}\bar{\nabla}^{s}c^{l}\Big{)}\Big{]}\] These formulae are consistent with the ones in the literature [16; 17; 18]. Hereafter, we restrict the discussion to flat space. This amounts to replacing \(\bar{g}_{ab}\to\eta_{ab}\) and \(\bar{\nabla}_{m}\to\partial_{m}\) in the expressions above. In flat space, the ghost propagator is \[\Delta^{ab}_{\rm(gh)}[k]=-i\frac{\eta^{ab}}{k^{2}}\,, \tag{8}\] with \(k\) being the momentum. The Einstein-Hilbert action takes the form \[S_{\rm EH}= -\frac{1}{2}\int_{M_{d}}d^{d}x\sqrt{|g|}\,g^{mn}g^{ab}g^{rs}\Big{(}\partial_{m}h_{ab}\partial_{n}h_{rs}-\] \[\partial_{m}h_{ar}\partial_{n}h_{bs}+2\partial_{m}h_{br}\partial_{a}h_{ns}-2\partial_{m}h_{na}\partial_{b}h_{rs}\Big{)}\,,\] which can further be expanded in powers of \(\kappa h^{ab}\). This yields the graviton vertices and the graviton propagator \[\Delta^{mnab}_{\rm(gr)}[k]=\frac{i}{2k^{2}}\left(\eta^{ma}\eta^{nb}+\eta^{mb}\eta^{na}-\frac{2}{d-2}\eta^{mn}\eta^{ab}\right) \tag{9}\] Propagators (8) and (9) are written in Minkowski signature. Here, however, we will work in the Euclidean formalism. This amounts to carefully collecting the relative signs of vertices and propagators in the diagrams. In momentum space \(k^{\mu}=(k^{0},k^{1},k^{2})\), we perform the Wick rotation \(k^{0}\to ik^{3}\), with \(M_{3}\) being now locally equivalent to \(\mathbb{R}^{3}\) with Euclidean signature. Since we are interested in computing the partition function at finite temperature, we consider the periodic Euclidean time direction, \(M_{3}=\mathbb{R}^{2}\times S^{1}_{\beta}\), which demands the momentum \(k^{3}\) along the thermal cycle to be quantized ; namely \(k^{3}=2\pi n/\beta\), with \(n\in\mathbb{Z}\) and \(\beta\in\mathbb{R}_{>0}\) being the inverse of the temperature. 
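Concretely, this means that every loop integration is understood as the standard finite-temperature sum-integral,
\[\int\frac{d^{3}k}{(2\pi)^{3}}\,F(k)\ \longrightarrow\ \frac{1}{\beta}\sum_{n\in\mathbb{Z}}\int\frac{d^{2}\vec{k}}{(2\pi)^{2}}\,F\Big(\vec{k},\frac{2\pi n}{\beta}\Big)\,,\]
up to overall normalization factors, which we absorb into the measure.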
Nevertheless, we will abuse notation and write the formal integral \(\int d^{d}k_{l}\,(.)\) to refer to the integration measure on the \(l^{\rm th}\) \(d\)-momentum, while keeping in mind the sum over discrete values of the component \(k^{3}_{l}\). In the next section, we will apply the ingredients presented above to the computation of the gravitational effective action at 2-loops and 3-loops. ## III 3-loop partition function ### Partition function The statement that the \(3d\) gravitational partition function is 1-loop exact can easily be translated into a statement about the effective action. Consider the expansion of the partition function \[\log Z\,=\,\sum_{n=0}^{\infty}\hbar^{n-1}S_{\rm eff}^{(n)} \tag{10}\] with \(S_{\rm eff}^{(n)}\) being the \(n^{th}\) order contribution to the effective action, with \(S_{\rm eff}^{(n)}\sim\mathcal{O}(\kappa^{2(n-1)})\,;\,S_{\rm eff}^{(0)}\) and \(S_{\rm eff}^{(1)}\) being given by the classical action and the 1-loop determinant, respectively. Then, 1-loop exactness is equivalent to asserting that \(S_{\rm eff}^{(n)}=0\) at every order \(n>1\) in perturbation theory, or, more precisely, that higher orders in perturbation theory only contribute to the renormalization of the parameters appearing in the semiclassical theory [4]. Here, we will prove that \(S_{\rm eff}^{(2)}\) and \(S_{\rm eff}^{(3)}\) are actually zero. ### 2-loops At 2-loops, there are only three 1PI diagrams, which are depicted in Figure 1. While we have to solve all connected diagrams, all those that are reducible (Figure 2) vanish by dimensional regularization. This means that we have to focus on the 1PI diagrams. We have to solve the Feynman-type integrals corresponding to each of these diagrams and perform dimensional regularization. We introduce the notation \(d_{i}^{(\ell)}=[D_{i}^{(\ell)}]\) to denote the result of the calculation of the \(i^{\rm th}\), \(\ell\)-loop diagram \(D_{i}^{(\ell)}\). The symbol \([X]\) refers to the value of the quantity \(X\) after dimensional regularization. It turns out that the three diagrams in Figure 1 also vanish after dimensional regularization. That is to say, \(d_{1}^{(2)}=[D_{1}^{(2)}]=0\), \(d_{2}^{(2)}=[D_{2}^{(2)}]=0\), and \(d_{2,1}^{(2)}=[D_{2,1}^{(2)}]=0\). Therefore, we find \[S_{\rm eff}^{(2)}\,=\,0\,, \tag{11}\] in full agreement with the argument in [5] and the calculations in [19; 20]. In the next subsection we will prove that this result also holds at 3-loops. ### 3-loops At 3-loops the story is much more involved. The difficulty resides not only in the fact that the number of 1PI diagrams is notably larger, but also in the fact that many diagrams do not vanish after dimensional regularization ; therefore, nontrivial cancellations have to occur for the effective action to be zero. Some of the diagrams have terms proportional to the integral \[I=\kappa^{4}\int\prod_{i=1}^{3}d^{3}k_{i}\ \frac{\left(k_{1}\cdot k_{2}\right)\left(k_{1}\cdot k_{3}\right)}{k_{1}^{2}k_{2}^{2}k_{3}^{2}(k_{1}+k_{2}+k_{3})^{2}}\,. \tag{12}\] This integral is relatively simple to treat at zero temperature, but it becomes more subtle at finite temperature : one must keep in mind that the formal integral over momenta in (12) actually comprises the sum over the discrete components \(k_{i}^{3}=2\pi n_{i}/\beta\), with \(n_{i}\in\mathbb{Z}\), \(i=1,2,3\), and \(\beta\in\mathbb{R}_{>0}\). 
That is to say, \(I\) is defined as an infinite sum over integers \(n_{1}\), \(n_{2}\), \(n_{3}\) ; more precisely, \[I = 32\pi^{5}\kappa^{4}\beta^{-5}\sum_{n_{1},n_{2},n_{3}}\int\prod_{i=1}^{3}d^{2}\vec{k}_{i}\,\prod_{j\neq 1}^{3}(\vec{k}_{j}\cdot\vec{k}_{1}+n_{j}\,n_{1}) \tag{13}\] \[\left[\big{|}\sum_{t=1}^{3}\vec{k}_{t}\big{|}^{2}+\big{(}\sum_{t=1}^{3}n_{t}\big{)}^{2}\,\right]^{-1}\,\prod_{l=1}^{3}\big{(}|\vec{k}_{l}|^{2}+n_{l}^{2}\,\big{)}^{-1}\] where \(\vec{k}_{i}=\frac{\beta}{2\pi}(k_{i}^{1},k_{i}^{2})\), so that \(k_{i}=\frac{2\pi}{\beta}(\vec{k}_{i},n_{i})\). This integral is similar to those appearing in other 3-loop computations at finite temperature [24; 25; 26]. As these integrals appear in several diagrams at 3-loops, cancellations among different diagrams are possible and, as we will see, they do take place. Other divergent integrals that may potentially appear in the 3-loop 1PI diagrams are of the form \[J_{i}=\kappa^{4}\int\prod_{l=1}^{3}d^{3}k_{l}\ j_{i}(k_{1},k_{2},k_{3})\,, \tag{14}\] with integrands \[j_{1}=\frac{\mathcal{O}(k^{2})}{k_{1}^{2}k_{2}^{2}k_{3}^{2}}\,,\ \ j_{2}=\frac{\mathcal{O}(k^{4})}{k_{1}^{2}k_{2}^{2}k_{3}^{3}(k_{2}+k_{3})^{2}}\,,\ \ j_{3}=\frac{\mathcal{O}(k^{4})}{k_{1}^{4}k_{2}^{2}k_{3}^{2}}\,;\] however, the latter cancel in \(d=3\), either by dimensional regularization or due to the presence of factors of \((d-3)\). This is one of the reasons why the diagrams appearing in Figure 3 do not effectively contribute ; _i.e._ \(d_{1}^{(3)}=d_{2}^{(3)}=d_{3}^{(3)}=d_{4}^{(3)}=0\).
Figure 1: 2-loop 1PI diagrams. Figure 2: Two examples of the 6 reducible 3-loop diagrams.
Next, we focus on the diagrams shown in Figure 4, whose propagator contributions make their dimensional regularization analysis more involved. In order to solve the diagrams \(D_{5}^{(3)},D_{6}^{(3)},D_{7}^{(3)},\,D_{8}^{(3)}\), one first needs to compute the multiplicity factor associated with each of them. If \(g_{i}^{(3)}\) denotes the multiplicity factor corresponding to the diagram \(D_{i}^{(3)}\), then combinatorics yields \[g_{5}^{(3)}=\frac{1}{24}\,,\quad g_{6}^{(3)}=\frac{1}{16}\,,\quad g_{7}^{(3)}=\frac{1}{8}\,,\quad g_{8}^{(3)}=\frac{1}{24}\,.\] After multiplying these factors by the result of each diagram obtained after dimensional regularization, we find \[d_{5}^{(3)}=\frac{45}{16}I\,,\;\;d_{6}^{(3)}=I\,,\;\;d_{7}^{(3)}=-\frac{61}{16}I\,,\;\;d_{8}^{(3)}=\frac{3}{2}I\] where \(I\) is the integral given in (13). The evaluation of these diagrams, especially the Melon \(D_{5}^{(3)}\) and the Benz \(D_{8}^{(3)}\), is lengthy and requires precision. Now, let us consider the diagrams with ghost field contributions. We begin with the ghostly Benz diagrams shown in Figure 5. It turns out that, after dimensional regularization, these diagrams are also proportional to (13). Concretely, they yield \[d_{8,1}^{(3)}=\frac{11}{4}I\,,\quad\;d_{8,2}^{(3)}=-\frac{13}{8}I\,;\] having multiplicity factors \(g_{8,1}^{(3)}=1/3\) and \(g_{8,2}^{(3)}=1/4\), respectively. The relative sign between \(d_{8,1}^{(3)}\) and \(d_{8,2}^{(3)}\) follows from the number of ghost propagators in each diagram. There are also diagrams with ghost contributions that vanish directly by dimensional regularization ; the diagram \(D_{1,1}^{(3)}\) shown in Figure 6 is of that sort, _i.e._ \(d_{1,1}^{(3)}=0\). The non-vanishing 3-loop diagrams with ghost contributions are depicted in Figure 7. Their evaluation is lengthy but can be done systematically. 
It yields \[d_{6,1}^{(3)}=\,-\frac{1}{2}\,I\,,\quad d_{6,2}^{(3)}=\,-\frac{3}{8}\,I\,,\quad d_{6,3}^{(3)}=\,-\frac{7}{4}\,I\,.\] The multiplicity factors of these diagrams are \(g_{6,1}^{(3)}=1/2\), \(g_{6,2}^{(3)}=1/4\), and \(g_{6,3}^{(3)}=1/4\), respectively. Finally, putting all this together, we find that the 3-loop contribution to the gravitational effective action reduces to an expression proportional to \(I\), with a coefficient proportional to \[d_{5}^{(3)}+d_{6}^{(3)}+d_{6,1}^{(3)}+d_{6,2}^{(3)}+d_{6,3}^{(3)}+d_{7}^{(3)}+d_{8}^{(3)}+d_{8,1}^{(3)}+d_{8,2}^{(3)}=0\] Therefore, we finally find \[S_{\rm eff}^{(3)}\,=\,0\,, \tag{15}\] in full agreement with [5]. This result follows from a notable cancellation among different diagrams ; a cancellation that decomposes as follows \[\Big{(}\frac{45}{16}+1-\frac{61}{16}+\frac{3}{2}+\frac{11}{4}-\frac{13}{8}-\frac{1}{2}-\frac{3}{8}-\frac{7}{4}\Big{)}\,I\,=\,0\] with each term in the sum coming from a different diagram.
Figure 4: 3-loop graviton diagrams. Figure 5: 3-loop diagrams with ghost fields ; cf. [18]. Figure 3: 3-loop diagrams that vanish after regularization. Figure 6: 3-loop diagrams with ghost fields.
Notice that in this cancellation there are also partial cancellations, _e.g._ \(d_{5}^{(3)}+d_{6}^{(3)}=-d_{7}^{(3)}\). It would be desirable to understand if there is a precise reason for the cancellation of different subsets of diagrams, and thus gain intuition that could serve us for calculations at higher loops. The fact that all diagrams cancel is remarkable, as it might have happened that the graviton and ghost contributions did not completely cancel at higher loops ; see the discussion in [5] about this point ; see also [27] and references therein. ### Higher dimensions In order to further study the origin of the cancellation expressed by equation (15), we find it illustrative to discuss the computation in \(d\) dimensions. Besides, this allows us to emphasize that our techniques are well suited to be extended to arbitrary dimension \(d\geq 3\), something that to some extent is obvious as we have been working with dimensional regularization. It can be shown that the \(d\)-dimensional analog of (15) turns out to be proportional to a sum of terms of the form \[\kappa^{4}\,\frac{(d-3)}{(d-2)^{2}}\,P_{i}(d)\,I_{i} \tag{16}\] where \(I_{i}\) stand for the \(d\)-dimensional extension of the integral (13) and for other integrals that appear in \(d>3\), and \(P_{i}(d)\) are polynomials. This manifestly shows that the cancellation in (15) only happens for \(d=3\). It is worth mentioning that, in addition to the eight diagrams depicted in Figures 4, 5 and 7, other diagrams also contribute to (16) when \(d>3\). In \(d\) dimensions there may be additional integrals to be solved, as we have checked that several contributions throughout the computation vanish due to factors of \((d-3)\). In spite of all that, our techniques can well be adapted to \(d\geq 3\). ### Higher loops Before concluding, a few words about higher loops : While our techniques can in principle be applied to higher loops, the calculation becomes rapidly unmanageable due to the increasing number of 1PI diagrams. At 4-loops the number of diagrams happens to be, roughly, one order of magnitude larger than at 3-loops. Graviton vertices of order \(\mathcal{O}(\kappa^{6}h^{8})\) start to contribute and the plethora of graphs becomes unwieldy. Still, one can say a few things about the 4-loop 1PI contributions ; for example, that there are many diagrams that vanish after dimensional regularization, while other diagrams, _e.g._ the one depicted in Figure 8, happen to be more involved. Therefore, one also expects cancellations similar to the one we obtained in (15) to take place at 4-loops.
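As a simple consistency check of the 3-loop cancellation displayed above, the rational coefficients of \(I\) quoted in the text can be re-summed exactly. The following is a minimal sketch : it only adds up the nine quoted coefficients and makes no claim about how each individual diagram was evaluated.

```python
from fractions import Fraction as F

# Coefficients of the integral I from the nine non-vanishing
# 3-loop 1PI diagrams quoted in the text (gravitons and ghosts).
coefficients = [
    F(45, 16),   # d_5  (Melon)
    F(1, 1),     # d_6
    F(-61, 16),  # d_7
    F(3, 2),     # d_8  (Benz)
    F(11, 4),    # d_{8,1} (ghostly Benz)
    F(-13, 8),   # d_{8,2} (ghostly Benz)
    F(-1, 2),    # d_{6,1}
    F(-3, 8),    # d_{6,2}
    F(-7, 4),    # d_{6,3}
]

total = sum(coefficients)
print(total)          # prints 0
assert total == 0     # the contributions proportional to I cancel exactly
```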
## IV Conclusions In this paper, we computed the 3-dimensional gravitational effective action in flat spacetime at 2- and 3-loops in the second-order, metric formalism. We showed that the result vanishes, in agreement with [1; 4; 5; 6]. The computation amounted to handling the ghost and graviton contributions, and solving the integrals associated with all connected Feynman diagrams. The calculation, being lengthy and requiring precision, demanded the implementation of a systematic procedure. Using dimensional regularization, we solved all the connected 2- and 3-loop diagrams. At 2-loops, we found that all the diagrams vanish in any number of dimensions. This led us to explore the next order in the loop expansion. At 3 loops, there are 14 1PI diagrams, 9 of which survive after carefully performing dimensional regularization. Crucial to the final result was a remarkable cancellation among the latter, with the 9 diagrams conspiring to yield a vanishing result in \(d=3\). Consequently, our computation turns out to be a non-trivial check of the 1-loop exactness of the 3\(d\) partition function [6]. In other words, we have provided a consistency check of the result presented in [5], where the authors argued that the quantum corrections to the 3\(d\) gravity partition function around flat space are fully determined by a 1-loop determinant that reproduces the character of the BMS group.
Figure 8: Example of a 4-loop graviton diagram. Figure 7: 3-loop diagrams with ghost fields.
###### Acknowledgements. The authors thank Glenn Barnich, David Blanco, Stephen Carlip, and Alan Garbarz for discussions. M.L. thanks the CCPP at NYU for the hospitality during his stay, where part of this work was done. The computation presented in this paper made partial use of FeynCalc [21; 22; 23]. The computational resources used in this work were provided in part by the HPC center DIRAC, funded by IFIBA (UBA-CONICET) and part of the SNCAD-MinCyT initiative, Argentina. This work has been partially supported by grants PIP-(2022)-11220210100685CO, PIP-(2022)-11220210100225CO, PICT-(2021)-GRFTI-00644, PICT-2020-SERIEA-00164.
2308.02373
Well-posedness for the extended Schrödinger-Benjamin-Ono system
In this work we prove that the initial value problem associated to the Schr\"odinger-Benjamin-Ono type system \begin{equation*} \left\{ \begin{array}{ll} \mathrm{i}\partial_{t}u+ \partial_{x}^{2} u= uv+ \beta u|u|^{2}, \\ \partial_{t}v-\mathcal{H}_{x}\partial_{x}^{2}v+ \rho v\partial_{x}v=\partial_{x}\left(|u|^{2}\right), \\ u(x,0)=u_{0}(x), \quad v(x,0)=v_{0}(x), \end{array} \right. \end{equation*} with $\beta,\rho \in \mathbb{R}$ is locally well-posed for initial data $(u_{0},v_{0})\in H^{s+\frac12}(\mathbb{R})\times H^{s}(\mathbb{R})$ for $s>\frac54$. Our method of proof relies on energy methods and compactness arguments. However, due to the lack of symmetry of the nonlinearity, the usual energy has to be modified to cancel out some bad terms appearing in the estimates. Finally, in order to lower the regularity below the Sobolev threshold $s=\frac32$, we employ a refined Strichartz estimate introduced in the Benjamin-Ono setting by Koch and Tzvetkov, and further developed by Kenig and Koenig.
Felipe Linares, Argenis Mendez, Didier Pilod
2023-08-04T15:13:21Z
http://arxiv.org/abs/2308.02373v1
# Well-posedness for the extended Schrodinger-Benjamin-Ono system ###### Abstract. In this work we prove that the initial value problem associated to the Schrodinger-Benjamin-Ono type system \[\left\{\begin{array}{l}\mathrm{i}\partial_{t}u+\partial_{x}^{2}u=uv+\beta u|u|^{2},\\ \partial_{t}v-\mathcal{H}_{x}\partial_{x}^{2}v+\rho v\partial_{x}v=\partial_{x}\left(|u|^{2}\right)\\ u(x,0)=u_{0}(x),\quad v(x,0)=v_{0}(x),\end{array}\right.\] with \(\beta,\rho\in\mathbb{R}\) is locally well-posed for initial data \((u_{0},v_{0})\in H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\) for \(s>\frac{5}{4}\). Our method of proof relies on energy methods and compactness arguments. However, due to the lack of symmetry of the nonlinearity, the usual energy has to be modified to cancel out some bad terms appearing in the estimates. Finally, in order to lower the regularity below the Sobolev threshold \(s=\frac{3}{2}\), we employ a refined Strichartz estimate introduced in the Benjamin-Ono setting by Koch and Tzvetkov, and further developed by Kenig and Koenig. Key words and phrases: Schrodinger Equation. Benjamin-Ono Equation. Smoothing effects. 2020 Mathematics Subject Classification: Primary: 35Q53. Secondary: 35Q05. ## 1. Introduction ### The Schrodinger-Benjamin-Ono equation We are interested in the study of the initial value problem (IVP) associated to the following system of nonlinear dispersive equations, namely \[\begin{cases}\mathrm{i}\partial_{t}u+\partial_{x}^{2}u=uv+\beta|u|^{2}u,&x\in\mathbb{R},\ t>0,\\ \partial_{t}v-\mathcal{H}\partial_{x}^{2}v+\rho v\partial_{x}v=\partial_{x}(|u|^{2}),\\ u(x,0)=u_{0}(x),&v(x,0)=v_{0}(x),\end{cases} \tag{1.1}\] where \(u=u(x,t)\) is a complex valued function, \(v=v(x,t)\) is a real valued function, the parameters \(\beta,\rho\in\mathbb{R}\), and \(\mathcal{H}\) denotes the Hilbert transform, defined on the line as \[\mathcal{H}f(x)=\text{p.v.}\ \frac{1}{\pi}\int\frac{f(y)}{x-y}\ dy. \tag{1.2}\] When \(\rho\neq 0\), we will refer to the system in (1.1) as the _extended Schrodinger-Benjamin-Ono system_. The system (1.1) appears as a particular case (under appropriate transformations) of the more general system describing the interaction phenomenon between long waves and short waves under a weakly coupled nonlinearity, \[\begin{cases}i\partial_{t}S+ic_{S}\partial_{x}S+\partial_{x}^{2}S=\alpha SL+\gamma|S|^{2}S,&c_{s},\alpha,\gamma\in\mathbb{R},\\ \partial_{t}L+c_{L}\partial_{x}L+\nu P(D_{x})L+\lambda\partial_{x}L^{2}=\beta\partial_{x}|S|^{2},&c_{L},\nu,\lambda,\beta\in\mathbb{R},\end{cases} \tag{1.3}\] where \(S=S(x,t)\) is the complex-valued short-wave amplitude, \(L=L(x,t)\) is the real-valued long-wave profile, and \(P(D_{x})\) is an operator modelling the dispersion of the long wave; the choice of \(P(D_{x})\) corresponding to Benjamin-Ono dispersion leads, after suitable transformations, to (1.1). We also observe that, at least formally, solutions of (1.1) preserve the quantities \[\mathcal{E}_{1}(t):=\int_{\mathbb{R}}v(x,t)\,dx=\mathcal{E}_{1}(0)\quad\text{and}\quad\mathcal{E}_{2}(t):=\int_{\mathbb{R}}|u(x,t)|^{2}\,dx=\mathcal{E}_{2}(0).\] When \(\rho=\beta=0\), the system (1.1) reduces to the classical Schrodinger-Benjamin-Ono system \[\begin{cases}\mathrm{i}\partial_{t}u+\partial_{x}^{2}u=uv,\\ \partial_{t}v-\mathcal{H}\partial_{x}^{2}v=\partial_{x}(|u|^{2}),\end{cases} \tag{1.4}\] whose well-posedness theory on the line has been studied by several authors. System (1.4) was also considered in the periodic setting (see [1] for the well-posedness theory and [24] for the construction of invariant measures). Notice that the weak nonlinear interaction \(L\partial_{x}L\) is missing in (1.4).
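For later use, let us record that, with the convention (1.2), the Hilbert transform acts as a Fourier multiplier; this is a standard fact, stated here because it is used repeatedly in the energy estimates below:
\[\widehat{\mathcal{H}f}(\xi)=-i\,\mathrm{sgn}(\xi)\,\widehat{f}(\xi),\qquad\text{so that}\qquad\mathcal{H}\partial_{x}=D_{x}\quad\text{and}\quad\partial_{x}=-\mathcal{H}D_{x},\]
where \(D_{x}=D_{x}^{1}\) is the Fourier multiplier with symbol \(|\xi|\) (see the Notation paragraph below).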
Nowadays it is known that the Benjamin-Ono (BO) equation \[\partial_{t}v-\mathcal{H}\partial_{x}^{2}v+v\partial_{x}v=0 \tag{1.9}\] is quasilinear in the sense that no well-posedness result for the IVP associated to the BO equation (1.9) in \(H^{s}(\mathbb{R})\), for any \(s\in\mathbb{R}\), can be established by an argument based solely on the contraction principle. This important result was proved in [22] and the argument could be used with minor modifications to show that the same is true for the system in (1.1). Thus we shall employ compactness methods in order to establish local well-posedness for the IVP (1.1). Our main result here is as follows: **Theorem 1.1**.: _Let \(s>\frac{5}{4}\). For any \((u_{0},v_{0})\in H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\), there exist a positive time \(T=T(\|(u_{0},v_{0})\|_{H^{s+\frac12}_{x}\times H^{s}_{x}})\), which can be chosen as a non-increasing function of its argument, and a unique solution \((u,v)\) of the IVP (1.1) satisfying_ \[(u,v)\in C\big{(}[0,T]:H^{s+\frac{1}{2}}_{x}(\mathbb{R})\times H^{s}_{x}(\mathbb{R})\big{)} \tag{1.10}\] _and_ \[\partial_{x}v\in L^{1}\big{(}(0,T):L^{\infty}_{x}(\mathbb{R})\big{)}. \tag{1.11}\] _Moreover, for any \(0<T^{\prime}<T\), there exists a neighborhood \(\mathcal{U}\) of \((u_{0},v_{0})\) in \(H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\) such that the flow map data-to-solution_ \[S:\mathcal{U}\to C\big{(}[0,T^{\prime}]:H^{s+\frac{1}{2}}_{x}(\mathbb{R})\times H^{s}_{x}(\mathbb{R})\big{)}\,,(\bar{u}_{0},\bar{v}_{0})\mapsto(\bar{u},\bar{v})\] _is continuous._ Our strategy is to use a refined Strichartz estimate, as was done by Koch and Tzvetkov in [16], and by Kenig and Koenig in [14], for the Benjamin-Ono equation. There are some difficulties to overcome in order to implement this method. The first one is related to the loss of derivatives present in the deduction of the energy estimates in \(H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\) for solutions of the IVP (1.1). To get around this obstruction, we modify the energy following the idea of Kwon in [17] for the fifth-order KdV equation. Adding an extra lower-order term to the energy allows us to close the energy estimates (see also [12]). Note that in this case, the modified term added to the energy allows us to cancel out two different bad nonlinear interactions in the energy estimates. Since we need the coercivity of the modified energy, this method only works in the case of small initial data. Well-posedness for arbitrarily large data is generally obtained by a scaling argument, but the system (1.1) does not enjoy this special property. Nevertheless, we still can perform the following change of variables \[u_{\lambda}(x,t) =\lambda u(\lambda x,\lambda^{2}t)\] \[v_{\lambda}(x,t) =\lambda v(\lambda x,\lambda^{2}t) \tag{1.12}\] for \(0<\lambda\leq 1\). 
Then \((u_{\lambda},v_{\lambda})\) is a solution of \[\begin{cases}\mathrm{i}\partial_{t}u_{\lambda}+\partial_{x}^{2}u_{\lambda}=\lambda u_{\lambda}v_{\lambda}+\beta|u_{\lambda}|^{2}u_{\lambda},\\ \partial_{t}v_{\lambda}-\mathcal{H}\partial_{x}^{2}v_{\lambda}+\rho v_{\lambda}\partial_{x}v_{\lambda}=\partial_{x}(|u_{\lambda}|^{2}),\\ u_{\lambda}(x,0)=\lambda u_{0}(\lambda x),\quad v_{\lambda}(x,0)=\lambda v_{0}(\lambda x).\end{cases} \tag{1.13}\] Since \[\|u_{\lambda}(\cdot,0)\|_{H^{\mathrm{s}+\frac{1}{2}}} \lesssim\lambda^{\frac{1}{2}}(1+\lambda^{s+\frac{1}{2}})\|u_{0}\|_{H^{\mathrm{s}+\frac{1}{2}}}\] \[\|v_{\lambda}(\cdot,0)\|_{H^{\mathrm{s}}} \lesssim\lambda^{\frac{1}{2}}(1+\lambda^{s})\|v_{0}\|_{H^{\mathrm{s}}}, \tag{1.14}\] (these bounds follow from the identities \(\|\lambda f(\lambda\cdot)\|_{L^{2}}=\lambda^{\frac{1}{2}}\|f\|_{L^{2}}\) and \(\|D^{\sigma}_{x}\big(\lambda f(\lambda\cdot)\big)\|_{L^{2}}=\lambda^{\sigma+\frac{1}{2}}\|D^{\sigma}_{x}f\|_{L^{2}}\)), given \(\delta>0\), we can always choose \(\lambda\) small enough such that \[\|\left(u_{\lambda}(\cdot,0),v_{\lambda}(\cdot,0)\right)\|_{H^{\mathrm{s}+\frac{1}{2}}\times H^{\mathrm{s}}}\leq\delta. \tag{1.15}\] Moreover, suppose that there exists \(\delta>0\) such that for all \(0<\lambda\leq 1\), there exists a solution of the IVP (1.13) \((u_{\lambda},v_{\lambda})\in C([0,T_{\lambda}]:H^{s+\frac{1}{2}}(\mathbb{R})\times H^{\mathrm{s}}(\mathbb{R}))\) whenever \(\|(u_{\lambda}(\cdot,0),v_{\lambda}(\cdot,0))\|_{H^{\mathrm{s}+\frac{1}{2}}\times H^{\mathrm{s}}}\leq\delta\). Then we obtain, letting \((u(x,t),v(x,t))=(\lambda^{-1}u_{\lambda}(\lambda^{-1}x,\lambda^{-2}t),\lambda^{-1}v_{\lambda}(\lambda^{-1}x,\lambda^{-2}t))\), a solution of (1.1) in the function space \(C([0,T]:H^{s+\frac{1}{2}}(\mathbb{R})\times H^{\mathrm{s}}(\mathbb{R}))\) with a time of existence satisfying \(T\gtrsim\lambda^{2}T_{\lambda}\). Note that the requirement on the smallness of the solutions appearing in the energy estimates must be independent of \(\lambda\in(0,1]\) (see Remarks 3.1 and 3.2 below). This idea was already used by Zaiter for the Ostrovsky equation in [28]. The plan of this paper is as follows: in Section 2, we recall some commutator estimates, which will be used in Section 3 to derive the energy estimates. Finally, Section 4 is devoted to the proof of the main theorem. **Notation.** * For two quantities \(A\) and \(B\), we denote \(A\lesssim B\) if \(A\leq cB\) for some constant \(c>0.\) Similarly, \(A\gtrsim B\) if \(A\geq cB\) for some \(c>0.\) Also, for two positive quantities \(A\) and \(B\), we say that they are _comparable_ if \(A\lesssim B\) and \(B\lesssim A\); when both conditions are satisfied we indicate it by writing \(A\sim B.\) The dependence of the constant \(c\) on other parameters or constants is usually clear from the context and we will often suppress this dependence whenever possible. * For any pair of quantities \(X\) and \(Y\), we denote \(X\ll Y\) if \(X\leq cY\) for some sufficiently small positive constant \(c.\) The smallness of such a constant is usually clear from the context. The notation \(X\gg Y\) is similarly defined. * For \(s\in\mathbb{R}\), \(D^{\mathrm{s}}_{x}\) and \(J^{\mathrm{s}}_{x}\) denote the Riesz and Bessel potentials of order \(-s\), respectively, defined as Fourier multipliers by \[\mathcal{F}(D^{\mathrm{s}}_{x}f)(\xi)=|\xi|^{s}\,\mathcal{F}(f)(\xi)\quad\text{and}\quad\mathcal{F}(J^{\mathrm{s}}_{x}f)(\xi)=(1+\xi^{2})^{\frac{s}{2}}\mathcal{F}(f)(\xi).\] * For \(1\leq p\leq\infty\), \(L^{p}(\mathbb{R})\) denotes the classical Lebesgue space. 
* For \(s\in\mathbb{R}\), \(H^{\mathrm{s}}(\mathbb{R})\) denotes the \(L^{2}\)-based Sobolev space of order \(s\), which consists of all distributions \(f\in\mathcal{S}^{\prime}(\mathbb{R})\) such that \(\|f\|_{H^{\mathrm{s}}(\mathbb{R})}=\|J^{\mathrm{s}}_{x}f\|_{L^{2}}<\infty\). ## 2. Commutator estimates The following fractional Leibniz rule was proved by Kenig, Ponce and Vega in the appendix of [15]. **Lemma 2.1**.: _Let \(\alpha=\alpha_{1}+\alpha_{2}\in(0,1)\) with \(\alpha_{1},\alpha_{2}\in(0,\alpha)\), \(p\in[1,\infty)\), and \(p_{1},p_{2}\in(1,\infty)\) such that \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}.\) Then_ \[\|D^{\alpha}_{x}(fg)-fD^{\alpha}_{x}g-gD^{\alpha}_{x}f\|_{L^{p}}\lesssim\|D^{\alpha_{1}}_{x}f\|_{L^{p_{1}}}\|D^{\alpha_{2}}_{x}g\|_{L^{p_{2}}}.\] _Moreover, if \(p>1\), then in the case \(\alpha_{2}=0\) the exponent \(p_{2}\) with \(1<p_{2}\leq\infty\) is also allowed._ These results as well as the Kato-Ponce commutator estimates (see [13]) were recently extended by Li in [18]. In particular, we will use the following estimates (see Theorem 5.1 and Corollary 5.3 in [18]). **Lemma 2.2**.: _Let \(1<p<\infty\). Let \(1<p_{1},p_{2},p_{3},p_{4}\leq\infty\), satisfy_ \[\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p_{3}}+\frac{1}{p_{4}}=\frac{1}{p}.\] _Then, for all \(f,g\in\mathcal{S}(\mathbb{R})\), the following estimates are true._ 1. _If_ \(0<s\leq 1\)_, then_ \[\|[D_{x}^{s},f]\,g\|_{L^{p}}\lesssim\|D_{x}^{s-1}\partial_{x}f\|_{L^{p_{1}}}\|g\|_{L^{p_{2}}}.\] 2. _If_ \(s>1\)_, then_ (2.1) \[\|[D_{x}^{s},f]\,g\|_{L^{p}}\lesssim\|D_{x}^{s}f\|_{L^{p_{1}}}\|g\|_{L^{p_{2}}_{x}}+\|\partial_{x}f\|_{L^{p_{3}}}\|D_{x}^{s-1}g\|_{L^{p_{4}}}.\] _Remark 2.1_.: The results in Lemmas 2.1 and 2.2 are still valid for \(\mathcal{H}D_{x}^{s}\) instead of \(D_{x}^{s}\). In the appendix of [6] Dawson, McGahagan and Ponce proved the following estimates. **Lemma 2.3**.: _Let \(1<p,q<\infty\). Then for all \(f,g\in\mathcal{S}(\mathbb{R})\), the following estimates are true._ 1. _if_ \(l,m\) _are integers such that_ \(l,m\geq 0\)_, then_ (2.2) \[\left\|\partial_{x}^{l}\left[\mathcal{H};f\right]\partial_{x}^{m}g\right\|_{L^{p}}\lesssim_{l,m,p}\|\partial_{x}^{l+m}f\|_{L^{\infty}}\|g\|_{L^{p}}.\] 2. _if_ \(0\leq\alpha<1,0<\beta\leq 1,0<\alpha+\beta\leq 1\) _and_ \(\delta>1/q\)_, then_ (2.3) \[\|D_{x}^{\alpha+\beta}(gD_{x}^{1-(\alpha+\beta)}f)-D_{x}^{\alpha}(gD_{x}^{1-\alpha}f)\|_{L^{p}}\lesssim_{\alpha,\beta,p,\delta}\|J_{x}^{\delta}\partial_{x}g\|_{L^{q}}\|f\|_{L^{p}}.\] _Remark 2.2_.: Recently, in Proposition 3.10 of [18], D. Li improved estimate (2.3) by giving the following sharp version. For any \(0\leq\alpha<1,0<\beta\leq 1-\alpha\) and \(1<p<\infty\), we have \[\|D_{x}^{\alpha+\beta}(gD_{x}^{1-(\alpha+\beta)}f)-D_{x}^{\alpha}(gD_{x}^{1-\alpha}f)\|_{L^{p}}\lesssim_{\alpha,\beta,p}\|D_{x}g\|_{\mathrm{BMO}}\|f\|_{L^{p}}.\] Consequently, \[\|D_{x}^{\alpha+\beta}(gD_{x}^{1-(\alpha+\beta)}f)-D_{x}^{\alpha}(gD_{x}^{1-\alpha}f)\|_{L^{p}}\lesssim_{\alpha,\beta,p}\|\partial_{x}g\|_{L^{\infty}}\|f\|_{L^{p}}. \tag{2.4}\] Finally, we list some estimates involving the Bessel and Riesz potentials. **Lemma 2.4**.: _The linear operators \(T_{1}(D):=J_{x}^{-1}\mathcal{H}\partial_{x}^{2}-\partial_{x}\) and \(T_{2}(D)=D_{x}^{\frac{3}{2}}J_{x}^{-1}-D_{x}^{\frac{1}{2}}\) are bounded in \(L^{2}(\mathbb{R})\). 
More precisely, there exist \(c_{1}>0\), \(c_{2}>0\) such that_ \[\|T_{1}(D)f\|_{L^{2}}\leq c_{1}\|f\|_{L^{2}},\quad\forall\,f\in L^{2}(\mathbb{R}) \tag{2.5}\] _and_ \[\|T_{2}(D)f\|_{L^{2}}\leq c_{2}\|f\|_{L^{2}},\quad\forall\,f\in L^{2}(\mathbb{R}). \tag{2.6}\] Proof.: Observe that \(T_{1}(D)\) and \(T_{2}(D)\) are Fourier multipliers whose symbols, given by \[T_{1}(\xi) = \frac{i\xi}{(1+|\xi|^{2})^{\frac{1}{2}}}\left(|\xi|-(1+|\xi|^{2})^{\frac{1}{2}}\right)\] \[T_{2}(\xi) = \frac{|\xi|^{\frac{1}{2}}}{(1+|\xi|^{2})^{\frac{1}{2}}}\left(|\xi|-(1+|\xi|^{2})^{\frac{1}{2}}\right)\] are bounded over \(\mathbb{R}\), since \(\big||\xi|-(1+|\xi|^{2})^{\frac{1}{2}}\big|=\big(|\xi|+(1+|\xi|^{2})^{\frac{1}{2}}\big)^{-1}\leq 1\). Estimates (2.5) and (2.6) then follow from Plancherel's identity by choosing \(c_{1}:=\sup_{\mathbb{R}}|T_{1}(\xi)|\) and \(c_{2}:=\sup_{\mathbb{R}}|T_{2}(\xi)|\). ## 3. Energy estimates In order to simplify the notation we will take \(\rho=\beta=1\) in (1.1). We notice that, if \(\rho\) were negative, we would have to consider the operator \(\mathcal{H}\partial_{x}^{2}\) in the second equation in (1.1) with a positive sign in front, due to the trick introduced to handle the loss of derivatives in the modified energy estimate. ### Energy estimates for the solutions of (1.1) As we commented in the introduction, we cannot obtain an a priori estimate directly for solutions of the system (1.1): there are extra terms that we cannot handle, more precisely, \[2\operatorname{Im}\int_{\mathbb{R}}u\;D_{x}^{s+1}\overline{u}D_{x}^{s}v\,dx\quad\text{and}\quad\int_{\mathbb{R}}\;D_{x}^{s}v\partial_{x}D_{x}^{s}(|u|^{2})\,dx.\] To get a priori estimates we need to modify the usual energy. We define this new functional as follows: **Definition 3.1**.: For \(t\geq 0\) and \(s\geq\frac{1}{2}\), we define the _modified energy_ as \[\begin{split} E_{m}^{s}(t):=&\|u(t)\|_{L_{x}^{2}}^{2}+\|D_{x}^{s+\frac{1}{2}}u(t)\|_{L_{x}^{2}}^{2}+\|v(t)\|_{L_{x}^{2}}^{2}+\frac{1}{2}\|D^{s}v(t)\|_{L_{x}^{2}}^{2}\\ &+\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}v(t)D_{x}^{s-\frac{1}{2}}\left(|u(t)|^{2}\right)\,dx.\end{split} \tag{3.1}\] With this definition at hand, we can establish the a priori estimates we need in this work. **Proposition 3.1**.: _Let \(s>\frac{1}{2}\) and \(T>0\). There exist positive constants \(c_{s}\) and \(\kappa_{1,s}\) such that for any \((u,v)\in C\big([0,T]:H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\big)\) solution of (1.1) satisfying_ \[\|\left(u(t),v(t)\right)\|_{H_{x}^{s+\frac{1}{2}}\times H_{x}^{s}}\leq\frac{1}{c_{s}},\] _the following estimates hold true._ 1. _Coercivity:_ (3.2) \[\frac{1}{2}\Big{(}\|u(t)\|_{H_{x}^{s+\frac{1}{2}}}^{2}+\|v(t)\|_{H_{x}^{s}}^{2}\Big{)}\leq E_{m}^{s}(t)\leq\frac{3}{2}\Big{(}\|u(t)\|_{H_{x}^{s+\frac{1}{2}}}^{2}+\|v(t)\|_{H_{x}^{s}}^{2}\Big{)}\] 2. _Energy estimate:_ (3.3) \[\frac{d}{dt}E_{m}^{s}(t)\lesssim\Big{(}1+\|\partial_{x}v(t)\|_{L_{x}^{\infty}}\Big{)}E_{m}^{s}(t),\] _and as a consequence,_ (3.4) \[\sup_{t\in[0,T]}\|(u(t),v(t))\|_{H_{x}^{s+\frac{1}{2}}\times H_{x}^{s}}\leq 2e^{\kappa_{1,s}(T+\|\partial_{x}v\|_{L_{x}^{1,\infty}})}\|(u_{0},v_{0})\|_{H_{x}^{s+\frac{1}{2}}\times H_{x}^{s}}.\] _Remark 3.1_.: Proposition 3.1 also holds for solutions of the system (1.13) with implicit constant independent of \(0<\lambda\leq 1\). Proof.: The proof of (3.2) follows from the Cauchy-Schwarz inequality, the Leibniz rule and the Sobolev embedding. Let us consider \(E_{m}^{s}(t)\) in (3.1). 
Differentiating with respect to \(t\) it follows that \[\begin{split}\frac{d}{dt}E_{m}^{s}&=\frac{1}{2}\frac{ d}{dt}\left(\int_{\mathbb{R}}(D_{x}^{s}v)^{2}\,dx\right)+\frac{d}{dt}\left(\int_{ \mathbb{R}}|D_{x}^{s+\frac{1}{2}}u|^{2}\,dx\right)\\ &\quad+\frac{d}{dt}\left(\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}vD_ {x}^{s-\frac{1}{2}}(|u|^{2})\,dx\right)+\frac{d}{dt}\left(\int_{\mathbb{R}}v^{ 2}\,dx\right)\\ &=\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}.\end{split} \tag{3.5}\] On one hand, using the second equation in (1.1) we have, \[\begin{split}\mathrm{I}&=-\int_{\mathbb{R}}D_{x}^{s} vD_{x}^{s}(v\partial_{x}v)\,dx+\int_{\mathbb{R}}D_{x}^{s}v\,\partial_{x}D_{x}^{s} (|u|^{2})\,dx\\ &=I_{1}+I_{2}.\end{split} \tag{3.6}\] We obtain after integrating by parts \[\begin{split}\mathrm{I}_{1}&=-\int_{\mathbb{R}}D_{x} ^{s}v\,[D_{x}^{s};v]\partial_{x}v\,dx+\frac{1}{2}\int_{\mathbb{R}}\partial_{x} v\,(D_{x}^{s}v)^{2}\,dx\\ &=\mathrm{I}_{1,1}+\mathrm{I}_{1,2}.\end{split}\] Thus, in virtue of Lemma 2.2 we get \[|\mathrm{I}_{1,1}|\lesssim\|\partial_{x}v\|_{L^{\infty}}\|D_{x}^{s}v\|_{L^{2}} ^{2}.\] For \(\mathrm{I}_{1,2}\), it readily follows that \[|\mathrm{I}_{1,2}|\lesssim\|\partial_{x}v\|_{L^{\infty}}\|D_{x}^{s}v\|_{L^{2}} ^{2}.\] Thus \[|\mathrm{I}_{1}|\lesssim\|\partial_{x}v\|_{L^{\infty}}\|D_{x}^{s}v\|_{L^{2}} ^{2}. \tag{3.7}\] Later on we will come back on \(\mathrm{I}_{2}\) since it requires to obtain the full description of \(\mathrm{III}\). After using integration by parts and system (1.1) we have that \[\begin{split}\mathrm{II}&=-2\,\mathrm{Re}\left( \mathrm{i}\int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{u}D_{x}^{s+\frac{1}{ 2}}(uv)\,dx\right)\\ &\quad-2\,\mathrm{Re}\left(\mathrm{i}\int_{\mathbb{R}}D_{x}^{s+ \frac{1}{2}}\overline{u}D_{x}^{s+\frac{1}{2}}(u|u|^{2})\,dx\right)\\ &=\mathrm{II}_{1}+\mathrm{II}_{2}.\end{split}\] In the case of \(\mathrm{II}_{1}\) we rewrite it as \[\begin{split}\mathrm{II}_{1}&=2\,\mathrm{Im}\int_{ \mathbb{R}}D_{x}^{s+1}\overline{u}[D_{x}^{s};u]v\,dx+2\,\mathrm{Im}\int_{ \mathbb{R}}u\,D_{x}^{s+1}\overline{u}D_{x}^{s}v\,dx\\ &=\mathrm{II}_{1,1}+\mathrm{II}_{1,2}.\end{split}\] Notice that for \(\mathrm{II}_{1,1}\) we have \[\begin{split}\mathrm{II}_{1,1}&=2\,\mathrm{Im}\int_{ \mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{u}\,D_{x}^{\frac{1}{2}}\,[D_{x}^{s};u ]\,v\,dx\\ &=2\,\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{ u}\,[D_{x}^{s+\frac{1}{2}};u]v\,dx-2\,\mathrm{Im}\int_{\mathbb{R}}\,D_{x}^{s+ \frac{1}{2}}\overline{u}\,[D_{x}^{\frac{1}{2}};u]D_{x}^{s}v\,dx\\ &=\Pi_{1,1,1}+\Pi_{1,1,2}.\end{split}\] The terms \(\Pi_{1,1,1}\) and \(\Pi_{1,1,2}\) can be controlled after applying Lemma 2.2 and the Sobolev embedding (recalling \(s>\frac{1}{2}\)). In the first place, we have \[\begin{split}|\Pi_{1,1,1}|&\lesssim\|D_{x}^{s+\frac{1 }{2}}u\|_{L^{2}}\Big{(}\|D_{x}^{s+\frac{1}{2}}u\|_{L^{2}}\|v\|_{L^{\infty}}+\| \partial_{x}u\|_{L^{2+}}\|D^{s-\frac{1}{2}}v\|_{L^{\infty_{-}}}\Big{)}\\ &\lesssim\|u\|_{H^{s+\frac{1}{2}}}^{2}\|v\|_{H^{s}},\end{split} \tag{3.8}\] and in the second place, we get \[|\Pi_{1,1,2}|\lesssim\|D_{x}^{s+\frac{1}{2}}u\|_{L^{2}}\|D_{x}^{s}v\|_{L^{2}} \|D_{x}^{-\frac{1}{2}}\partial_{x}u\|_{L^{\infty}}\lesssim\|u\|_{H^{s+\frac{1 }{2}}}^{2}\|v\|_{H^{s}}. \tag{3.9}\] To handle \(\Pi_{1,2}\) we require to deploy the terms that compounds III, so that for the moment we will continue estimating the remainder terms. Concerning \(\Pi_{2}\) we have since \(H^{s+\frac{1}{2}}\) is a Banach algebra that \[|\Pi_{2}|\lesssim\|u\|_{H^{s+\frac{1}{2}}}^{4}. 
\tag{3.10}\] Next we focus our attention on III that is by itself the most complicated term, since it contains the interaction between \(u\) and \(v\). In the first place, \[\begin{split}\text{III}&=\int_{\mathbb{R}}D_{x}^{s- \frac{1}{2}}v_{t}D_{x}^{s-\frac{1}{2}}(|u|^{2})\,dx+2\,\text{Re}\left(\int_{ \mathbb{R}}D_{x}^{s-\frac{1}{2}}vD_{x}^{s-\frac{1}{2}}(\overline{u}u_{t})\,dx \right)\\ &=\text{III}_{1}+\text{III}_{2}.\end{split}\] Since \(v\) satisfies the BO equation in system (1.1) is quite clear that \[\begin{split}\text{III}_{1}&=\int_{\mathbb{R}}D_{x}^ {s-1}\mathcal{H}\partial_{x}^{2}v\,D_{x}^{s}(|u|^{2})\,dx-\frac{1}{2}\int_{ \mathbb{R}}D_{x}^{s-1}\partial_{x}(v^{2})D_{x}^{s}(|u|^{2})\,dx\\ &=\text{III}_{1,1}+\text{III}_{1,2}.\end{split}\] After observing that \(D_{x}^{1}=\mathcal{H}\partial_{x}\), we obtain that \[\text{III}_{1,1}+\text{I}_{2}=0.\] In the case of \(\text{III}_{1,2}\), we deduce from the Cauchy-Schwarz inequality and the fact that \(H^{s}\) is a Banach algebra for \(s>\frac{1}{2}\), that \[|\text{III}_{1,2}|\lesssim\|D_{x}^{s}(v^{2})\|_{L^{2}}\|D_{x}^{s}(|u|^{2})\|_{ L^{2}}\lesssim\|u\|_{H^{s}}^{2}\|v\|_{H^{s}}^{2}. \tag{3.11}\] Now we turn our attention to \(\text{III}_{2}\). We obtain after using the Schrodinger equation in (1.1) and integrating by parts that \[\text{III}_{2}=2\,\text{Re}\left(\text{i}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2 }}vD_{x}^{s-\frac{1}{2}}(\overline{u}\partial_{x}^{2}u)\,dx\right)=2\,\text{ Im}\int_{\mathbb{R}}D_{x}^{s-1}\partial_{x}vD_{x}^{s}(\overline{u}\partial_{x}u)\,dx.\] Since \(\partial_{x}=-\mathcal{H}D_{x}\) it follows that \[\begin{split}\text{III}_{2}&=2\,\text{Im}\int_{ \mathbb{R}}D_{x}^{s}v\,[\mathcal{H}D_{x}^{s};\overline{u}]\partial_{x}u\,dx+2 \,\text{Im}\int_{\mathbb{R}}\overline{u}\,D_{x}^{s}vD_{x}^{s+1}u\,dx\\ &=\text{III}_{2,1}+\text{III}_{2,2}.\end{split}\] After gathering together \(\text{III}_{2,2}\) and \(\text{II}_{1,2}\) we get \[\text{II}_{1,2}+\text{III}_{2,2}=0.\] For \(\text{III}_{2,1}\) there is no cancellation. Instead we estimate directly by using Lemma 2.2 \[|\text{III}_{2,1}|\lesssim\|D_{x}^{s}v(t)\|_{L^{2}}\big{|}[\mathcal{H}D_{x}^{s},\overline{u}]\partial_{x}u\big{\|}_{L^{2}}\lesssim\|D_{x}^{s}v(t)\|_{L^{2}}\|D ^{s-1}\partial_{x}u\|_{L^{\infty_{-}}}\|\partial_{x}u\|_{L^{2+}}\] so that \[\left|\Pi_{2,1}\right|\lesssim\left\|D_{x}^{s}v\right\|_{L^{2}}\|u\|_{H^{+}\frac{ 1}{2}}^{2} \tag{3.12}\] since \(s>\frac{1}{2}\). Finally, we estimate IV. Using the second equation in (1.1), integrating by parts and using properties of the Hilbert transform we have \[\frac{d}{dt}\int_{\mathbb{R}}v^{2}\,dx=2\int_{\mathbb{R}}vv_{t}\,dx=2\int v \partial_{x}(|u|^{2})\,dx. \tag{3.13}\] Thus it follows since \(H^{s+\frac{1}{2}}\) is a Banach algebra for \(s>\frac{1}{2}\) that \[|\text{IV}|\lesssim\|v\|_{L^{2}}\|u^{2}\|_{H^{1}}\lesssim\|v\|_{L^{2}}\|u\|_{H^ {s+\frac{1}{2}}}^{2}. \tag{3.14}\] Gathering the estimates from (3.5) to (3.14), using the definition of \(E_{m}^{s}(t)\) and requiring \(\|\left(u(t),v(t)\right)\|_{H^{s+\frac{1}{2}}\times H^{s}}\leq\frac{1}{c_{s}}\) we obtain \[\frac{d}{dt}E_{m}^{s}(t)\lesssim\left(1+\|\partial_{x}v(t)\|_{L^{\infty}} \right)E_{m}^{s}(t), \tag{3.15}\] The proof of estimate (3.3) follows by by combining (3.2) and (3.15), while estimate (3.4) is deduced by applying Gronwall's inequality. ### Energy estimates for the differences of two solutions of (1.1) Let \((u_{1},v_{1})\) and \((u_{2},v_{2})\) be two solutions of (1.1). 
We define \[\begin{cases}w=u_{1}-u_{2},&u=u_{1}+u_{2},\\ z=v_{1}-v_{2},&v=v_{1}+v_{2}.\end{cases} \tag{3.16}\] Then \((w,z)\) is solution of the system \[\begin{cases}\text{i}\partial_{t}w+\partial_{x}^{2}w=\frac{1}{2}(vw+uz)+\frac {1}{4}|w|^{2}w+\frac{1}{2}|u|^{2}w+\frac{1}{4}u^{2}\overline{w},\\ \partial_{t}z-\mathcal{H}\partial_{x}^{2}z+\frac{1}{2}\partial_{x}(vz)= \partial_{x}\text{Re}(\overline{u}w).\end{cases} \tag{3.17}\] The aim of this subsection is to derive energy estimates on the difference of two solutions \((w,z)\). As in subsection 3.1, we define a modified energy for \((w,z)\). **Definition 3.2**.: Let \(t\geq 0\). (i) Case \(\sigma=0\). \[\widetilde{E}_{m}^{0}(t):=\|w(t)\|_{L^{2}_{x}}^{2}+\|D_{x}^{\frac{1}{2}}w(t)\| _{L^{2}_{x}}^{2}+\frac{1}{2}\|z(t)\|_{L^{2}_{x}}^{2}+\text{Re}\int_{\mathbb{R} }J_{x}^{-1}z\left(\overline{u}w\right)\left(t\right)dx. \tag{3.18}\] where \(J_{x}^{-1}\) is the Bessel potential of order \(1\), defined in the notation. (ii) Case \(\sigma=s\geq\frac{1}{2}\). \[\begin{split}\widetilde{E}_{m}^{s}(t):=&\|w(t)\|_{L^ {2}_{x}}^{2}+\|D_{x}^{s+\frac{1}{2}}w(t)\|_{L^{2}_{x}}^{2}+\|z(t)\|_{L^{2}_{x }}^{2}+\frac{1}{2}\|D^{s}z(t)\|_{L^{2}_{x}}^{2}\\ &+\text{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}z(t)D_{x}^{s- \frac{1}{2}}\left(\overline{u}w\right)\,dx.\end{split} \tag{3.19}\] With this definition at hand, we can establish the a priori estimates we need in this work. **Proposition 3.2**.: _Let \(s>\frac{1}{2}\) and \(T>0\). There exists a positive constant \(\widetilde{c}_{s}\) such that for any \((u_{1},v_{1})\), \((u_{2},v_{2})\in C([0,T]:H^{s+\frac{1}{2}}(\mathbb{R}))\times H^{s}(\mathbb{R})\)) solutions of (1.1) satisfying_ \[\|\left(u_{i}(t),v_{i}(t)\right)\|_{H^{s+\frac{1}{2}}_{x}\times H^{s}_{x}}\leq \frac{1}{\widetilde{c}_{s}},\quad i=1,2, \tag{3.20}\] _the following estimates hold true._ 1. _Coercivity: for_ \(\sigma=0\) _or_ \(\sigma=s>\frac{1}{2}\)__ (3.21) \[\frac{1}{2}\Big{(}\|w(t)\|^{2}_{H^{\sigma+\frac{1}{2}}_{x}}+\|z(t)\|^{2}_{H^{ \sigma}_{x}}\Big{)}\leq\widetilde{E}^{\sigma}_{m}(t)\leq\frac{3}{2}\Big{(}\| w(t)\|^{2}_{H^{\sigma+\frac{1}{2}}_{x}}+\|z(t)\|^{2}_{H^{\sigma}_{x}}\Big{)}.\] 2. \(H^{\frac{1}{2}}\times L^{2}\)_-energy estimate:_ (3.22) \[\frac{d}{dt}\widetilde{E}^{0}_{m}(t)\lesssim\Big{(}1+\|\partial_{x}v_{1}(t)\| _{L^{\infty}_{x}}+\|\partial_{x}v_{2}(t)\|_{L^{\infty}_{x}}\Big{)}\widetilde{E }^{0}_{m}(t).\] 3. \(H^{s+\frac{1}{2}}\times H^{s}\)_-energy estimate:_ (3.23) \[\frac{d}{dt}\widetilde{E}^{s}_{m}(t)\lesssim\Big{(}1+\|\partial_{x}v_{1}(t)\| _{L^{\infty}_{x}}+\|\partial_{x}v_{2}(t)\|_{L^{\infty}_{x}}\Big{)}\widetilde{ E}^{s}_{m}(t)+f_{s}(t)\] _where_ \(f_{s}=f_{s}(t)\) _is defined by_ (3.24) \[f_{s}(t) =\|D^{s}_{x}\partial_{x}v\|_{L^{\infty}}\|D^{s}_{x}z\|_{L^{2}}\|z \|_{L^{2}}+\|D^{s}_{x}v\|_{L^{2}}\|D^{s}_{x}z\|_{L^{2}}\|\partial_{x}z\|_{L^{ \infty}}\] \[\quad+\|D^{s+\frac{1}{2}}_{x}v\|_{L^{\infty}}\|w\|_{L^{2}}\|D^{s+ \frac{1}{2}}_{x}w\|_{L^{2}}+\|D^{s+1}u\|_{L^{\infty}}\|w\|_{L^{2}}\|D^{s}_{x}z\| _{L^{2}}.\] _Remark 3.2_.: Proposition 3.2 also holds for solutions of the system (1.12) with implicit constant independent of \(0<\lambda\leq 1\). _Remark 3.3_.: The terms gathered in \(f_{s}(t)\) cannot be estimated directly, but they have always more derivatives on the functions \(u_{i}\), \(v_{i}\) than on the terms for the differences \(w\) and \(z\). These terms will be handled with the Bona-Smith argument as in Proposition 2.18 in [20]. 
Proof.: The proof of (3.21) follows from the Cauchy-Schwarz inequality, the Leibniz rule and the Sobolev embedding. Next, we prove (3.22). Differentiating with respect to \(t\) it follows that (3.25) \[\begin{split}\frac{d}{dt}\widetilde{E}^{0}_{m}&=\frac{1}{2}\frac{d}{dt}\left(\int_{\mathbb{R}}z^{2}\,dx\right)+\frac{d}{dt}\left(\int_{\mathbb{R}}|w|^{2}\,dx\right)\\ &+\frac{d}{dt}\left(\int_{\mathbb{R}}|D^{\frac{1}{2}}_{x}w|^{2}\,dx\right)+\frac{d}{dt}\left(\operatorname{Re}\int_{\mathbb{R}}J^{-1}_{x}z\left(\overline{u}w\right)\,dx\right)\\ &=\widetilde{\mathrm{I}}+\widetilde{\mathrm{II}}+\widetilde{\mathrm{III}}+\widetilde{\mathrm{IV}}.\end{split}\] We begin with \(\widetilde{\mathrm{I}}\). Using the second equation in (3.17) and integrating by parts, we get \[\widetilde{\mathrm{I}}=-\frac{1}{4}\int_{\mathbb{R}}(\partial_{x}v)z^{2}\,dx-\operatorname{Re}\int_{\mathbb{R}}(\partial_{x}z)\,\overline{u}w\,dx=\widetilde{\mathrm{I}}_{1}+\widetilde{\mathrm{I}}_{2},\] so that \[|\widetilde{\mathrm{I}}_{1}|\lesssim\|\partial_{x}v\|_{L^{\infty}}\|z\|_{L^{2}}^{2}. \tag{3.26}\] The term \(\widetilde{\mathrm{I}}_{2}\) cannot be estimated directly; it will be cancelled out by a contribution coming from \(\widetilde{\mathrm{IV}}\). 
To deal with \(\widetilde{\Pi}\), we use the first equation in (3.17), Holder's inequality and the Sobolev embedding to deduce that \[\begin{split}|\widetilde{\Pi}|&\leq\left|\text{Im}\int_{\mathbb{R}}\overline{w}uzdx\right|+\frac{1}{2}\left|\text{Im}\int_{\mathbb{R}}u^{2}\overline{w}^{2}dx\right|\\ &\lesssim\|u\|_{H^{s}}\|w\|_{L^{2}}\|z\|_{L^{2}}+\|u\|_{H^{s}}^{2}\|w\|_{L^{2}}^{2}.\end{split} \tag{3.27}\] Next, we decompose \(\widetilde{\Pi}\) as \[\begin{split}\widetilde{\Pi}&=\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}D_{x}^{\frac{1}{2}}(vw)dx+\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}D_{x}^{\frac{1}{2}}(uz)dx+\frac{1}{2}\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}D_{x}^{\frac{1}{2}}(|w|^{2}w)dx\\ &\quad+\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}D_{x}^{\frac{1}{2}}(|u|^{2}w)dx+\frac{1}{2}\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}D_{x}^{\frac{1}{2}}(u^{2}\overline{w})dx\\ &=\widetilde{\Pi}_{1}+\widetilde{\Pi}_{2}+\widetilde{\Pi}_{3}+\widetilde{\Pi}_{4}+\widetilde{\Pi}_{5}.\end{split}\] First it follows from the Cauchy-Schwarz inequality, the Kato-Ponce estimate in Lemma 2.2, and the Sobolev embedding (recalling \(s>\frac{1}{2}\)) that \[\begin{split}|\widetilde{\Pi}_{1}|&=\left|\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}[D_{x}^{\frac{1}{2}},v]wdx\right|\\ &\leq\|D_{x}^{\frac{1}{2}}w\|_{L^{2}}\|D_{x}^{\frac{1}{2}}v\|_{L^{2_{+}}}\|w\|_{L^{\infty_{-}}}\lesssim\|v\|_{H^{s}}\|w\|_{H^{\frac{1}{2}}}^{2}.\end{split} \tag{3.28}\] Secondly, we write \(\widetilde{\Pi}_{2}\) as \[\widetilde{\Pi}_{2}=\text{Im}\int_{\mathbb{R}}(D_{x}^{\frac{1}{2}}\overline{w})uD_{x}^{\frac{1}{2}}zdx+\text{Im}\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}[D_{x}^{\frac{1}{2}},u]zdx=\widetilde{\Pi}_{2,1}+\widetilde{\Pi}_{2,2}.\] While \(\widetilde{\Pi}_{2,1}\) cannot be handled directly and will be cancelled out by a term coming from \(\widetilde{\text{IV}}\), we use the Cauchy-Schwarz inequality, the Kato-Ponce estimate in Lemma 2.2, and the Sobolev embedding
(recalling \(s>\frac{1}{2}\)) to get that \[|\widetilde{\Pi}_{2,2}|\lesssim\|D_{x}^{\frac{1}{2}}w\|_{L^{2}}\|D^{\frac{1}{2 }}u\|_{L^{\infty}}\|z\|_{L^{2}}\lesssim\|u\|_{H^{s+\frac{1}{2}}}\|D_{x}^{\frac {1}{2}}w\|_{L^{2}}\|z\|_{L^{2}}. \tag{3.29}\] Finally, the fractional Leibniz rule (see Lemma 2.1) and the Sobolev embedding yield \[|\widetilde{\Pi}_{3}|\lesssim\|w\|_{L^{\infty}}^{2}\|D_{x}^{\frac{1}{2}}w\|_ {L^{2}}^{2}\lesssim\left(\|u_{1}\|_{H^{s+\frac{1}{2}}}+\|u_{2}\|_{H^{s+\frac{ 1}{2}}}\right)^{2}\|w\|_{H^{\frac{1}{2}}}^{2}, \tag{3.30}\] while the Kato-Ponce inequality (see Lemma 2.2) and the Sobolev embedding yield \[|\widetilde{\Pi}_{4}|=\left|\int_{\mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w} [D_{x}^{\frac{1}{2}},|u|^{2}]wdx\right|\lesssim\|D^{\frac{1}{2}}(|u|^{2})\|_{ L^{\infty}}\|w\|_{H^{\frac{1}{2}}}^{2}\lesssim\|u\|_{H^{s+\frac{1}{2}}}^{2}\|w\|_{H^{ \frac{1}{2}}}^{2}. \tag{3.31}\] and \[\begin{split}|\widetilde{\Pi}_{5}|&=\frac{1}{2} \left|\int_{\mathbb{R}}u^{2}(D_{x}^{\frac{1}{2}}\overline{w})^{2}dx+\int_{ \mathbb{R}}D_{x}^{\frac{1}{2}}\overline{w}[D_{x}^{\frac{1}{2}},u^{2}]\overline{ w}dx\right|\\ &\lesssim\left(\|u\|_{L^{\infty}}^{2}+\|D^{\frac{1}{2}}(u^{2})\|_ {L^{\infty}}\right)\|w\|_{H^{\frac{1}{2}}}^{2}\\ &\lesssim\|u\|_{H^{s+\frac{1}{2}}}^{2}\|w\|_{H^{\frac{1}{2}}}^{2}. \end{split} \tag{3.32}\] Finally, we deal with \(\widetilde{\mathrm{IV}}\). We get after differentiating in time \[\widetilde{\mathrm{IV}} =\mathrm{Re}\int_{\mathbb{R}}(J_{x}^{-1}\partial_{t}z)\overline{u}w \,dx+\mathrm{Re}\int_{\mathbb{R}}(J_{x}^{-1}z)(\overline{\partial_{t}u})w\,dx+ \mathrm{Re}\int_{\mathbb{R}}(J_{x}^{-1}z)\overline{u}(\partial_{t}w)\,dx\] \[=\widetilde{\mathrm{IV}}_{1}+\widetilde{\mathrm{IV}}_{2}+ \widetilde{\mathrm{IV}}_{3}.\] By using the second equation in (3.17), we observe that \[\widetilde{\mathrm{IV}}_{1}=\mathrm{Re}\int_{\mathbb{R}}(J_{x}^{-1}\mathcal{H }\partial_{x}^{2}z)\overline{u}w\,dx-\frac{1}{2}\mathrm{Re}\int_{\mathbb{R}}J_ {x}^{-1}\partial_{x}(vz)\overline{u}w\,dx=\widetilde{\mathrm{IV}}_{1,1}+ \widetilde{\mathrm{IV}}_{1,2},\] where we used that \(\int_{\mathbb{R}}J_{x}^{-1}\partial_{x}\mathrm{Re}(\overline{u}w)\mathrm{Re} (\overline{u}w)\,dx=0\). On the one hand, we rewrite \(\widetilde{\mathrm{IV}}_{1,1}\) as \[\widetilde{\mathrm{IV}}_{1,1}=\mathrm{Re}\int_{\mathbb{R}}\left((J_{x}^{-1} \mathcal{H}\partial_{x}^{2}-\partial_{x})z\right)\overline{u}w\,dx+\mathrm{Re }\int_{\mathbb{R}}(\partial_{x}z)\overline{u}w\,dx=\widetilde{\mathrm{IV}}_{1, 1,1}+\widetilde{\mathrm{IV}}_{1,1,2}.\] It follows from Lemma 2.4 that \[|\widetilde{\mathrm{IV}}_{1,1,1}|\lesssim\|u\|_{H^{s}}\|z\|_{L^{2}}\|w\|_{L^{ 2}}, \tag{3.33}\] and we use the identity \[\widetilde{\mathrm{I}}_{2}+\widetilde{\mathrm{IV}}_{1,1,2}=0 \tag{3.34}\] to handle \(\widetilde{\mathrm{IV}}_{1,1,2}\). On the other hand, we deduce from Holder's inequality and the Sobolev embedding that \[|\widetilde{\mathrm{IV}}_{1,2}|\lesssim\|v\|_{L^{\infty}}\|z\|_{L^{2}}\|u\|_{ L^{\infty}}\|w\|_{L^{2}}\lesssim\|v\|_{H^{s}}\|u\|_{H^{s}}\|z\|_{L^{2}}\|w\|_{L^{2}}. 
\tag{3.35}\] By using the first equation in (1.1), we decompose \(\widetilde{\mathrm{IV}}_{2}\) as \[\widetilde{\mathrm{IV}}_{2} =\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)(\partial_{x}^{2} \overline{u})w\,dx-\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)\left(\overline{u _{1}}v_{1}+\overline{u_{2}}v_{2}\right)w\,dx\] \[\quad-\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)\left(|u_{1}|^{2} \overline{u_{1}}+|u_{2}|^{2}\overline{u_{2}}\right)w\,dx\] \[=\widetilde{\mathrm{IV}}_{2,1}+\widetilde{\mathrm{IV}}_{2,2}+ \widetilde{\mathrm{IV}}_{2,3}.\] Observe after integrating by parts that \[\widetilde{\mathrm{IV}}_{2,1} =-\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}\partial_{x}z)(\partial _{x}\overline{u})w\,dx-\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)(\partial_{ x}\overline{u})(\partial_{x}w)\,dx\] \[=\widetilde{\mathrm{IV}}_{2,1,1}+\widetilde{\mathrm{IV}}_{2,1,2}.\] We deduce from Holder's inequality and the Sobolev embedding (under the restriction \(s>\frac{1}{2}\)) that \[|\widetilde{\mathrm{IV}}_{2,1,1}|\lesssim\|J_{x}^{-1}\partial_{x}z\|_{L^{2}}\| \partial_{x}u\|_{L^{2+}}\|w\|_{L^{\infty-}}\lesssim\|u\|_{H^{s+\frac{1}{2}}}\| z\|_{L^{2}}\|w\|_{H^{\frac{1}{2}}}. \tag{3.36}\] The contribution \(\widetilde{\mathrm{IV}}_{2,1,2}\) will be compensated by a term coming form \(\widetilde{\mathrm{IV}}_{3}\). Moreover, Holder's inequality and the Sobolev embedding imply \[|\widetilde{\mathrm{IV}}_{2,2}| \lesssim (\|u_{1}\|_{H^{s}}\|v_{1}\|_{L^{2}}+\|u_{2}\|_{H^{s}}\|v_{2}\|_{L^{ 2}})\,\|z\|_{L^{2}}\|w\|_{L^{2}}; \tag{3.38}\] \[|\widetilde{\mathrm{IV}}_{2,3}| \lesssim (\|u_{1}\|_{H^{s}}+\|u_{2}\|_{H^{s}})^{3}\,\|z\|_{L^{2}}\|w\|_{L^{ 2}}. \tag{3.37}\] Now, we use the first equation in (3.17) to decompose \(\widetilde{\mathrm{IV}}_{3}\) as \[\widetilde{\mathrm{IV}}_{3} =-\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)\overline{u}(\partial_{ x}^{2}w)\,dx+\frac{1}{2}\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)\overline{u}(vw+uz)\,dx\] \[\quad+\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)\overline{u}( \frac{1}{4}|w|^{2}w+\frac{1}{2}|u|^{2}w+\frac{1}{4}u^{2}\overline{w})\,dx\] \[=\widetilde{\mathrm{IV}}_{3,1}+\widetilde{\mathrm{IV}}_{3,2}+ \widetilde{\mathrm{IV}}_{3,3}.\] By integration by parts, we have \[\widetilde{\mathrm{IV}}_{3,1} =\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}\partial_{x}z)\overline{ u}(\partial_{x}w)\,dx+\mathrm{Im}\int_{\mathbb{R}}(J_{x}^{-1}z)(\partial_{x} \overline{u})(\partial_{x}w)\,dx\] \[=\widetilde{\mathrm{IV}}_{3,1,1}+\widetilde{\mathrm{IV}}_{3,1,2}.\] On the one hand, observe that \[\widetilde{\mathrm{IV}}_{2,1,2}+\widetilde{\mathrm{IV}}_{3,1,2}=0. \tag{3.39}\] On the other hand by using \(D_{x}^{1}=\mathcal{H}\partial_{x}\), we decompose \(\widetilde{\mathrm{IV}}_{3,1,1}\) as \[\widetilde{\mathrm{IV}}_{3,1,1} =\mathrm{Im}\int_{\mathbb{R}}(D_{x}^{\frac{1}{2}}z)\overline{u}( D_{x}^{\frac{1}{2}}w)+\mathrm{Im}\int_{\mathbb{R}}(D_{x}^{\frac{3}{2}}J_{x}^{-1}-D_{x }^{\frac{1}{2}})z\overline{u}(D_{x}^{\frac{1}{2}}w)\] \[\quad+\mathrm{Im}\int_{\mathbb{R}}[\mathcal{H}D_{x}^{\frac{1}{2} },\overline{u}]J_{x}^{-1}\partial_{x}z(D_{x}^{\frac{1}{2}}w)\,dx\] \[=\widetilde{\mathrm{IV}}_{3,1,1,1}+\widetilde{\mathrm{IV}}_{3,1,1,2}+\widetilde{\mathrm{IV}}_{3,1,1,3}.\] Then, we use the cancellation \[\widetilde{\mathrm{II}}_{2,1}+\widetilde{\mathrm{IV}}_{3,1,1,1}=0. \tag{3.40}\] Moreover, estimate (2.6) implies \[|\widetilde{\mathrm{IV}}_{3,1,1,2}|\lesssim\|u\|_{L^{\infty}}\|z\|_{L^{2}}\|D _{x}^{\frac{1}{2}}w\|_{L^{2}}. 
\tag{3.41}\] The Kato-Ponce commutator estimate in Lemma 2.2 and the Sobolev embedding yield \[|\widetilde{\mathrm{IV}}_{3,1,1,3}|\lesssim\|D_{x}^{\frac{1}{2}}u\|_{L^{ \infty}}\|z\|_{L^{2}}\|D_{x}^{\frac{1}{2}}w\|_{L^{2}}\lesssim\|u\|_{H^{s+ \frac{1}{2}}}\|z\|_{L^{2}}\|D_{x}^{\frac{1}{2}}w\|_{L^{2}}. \tag{3.42}\] Finally, Holder's inequality and the Sobolev embedding imply \[|\widetilde{\mathrm{IV}}_{3,2}| \lesssim \|u\|_{H^{s}}\|z\|_{L^{2}}\left(\|v\|_{H^{s}}\|w\|_{L^{2}}+\|u\|_ {H^{s}}\|z\|_{L^{2}}\right); \tag{3.44}\] \[|\widetilde{\mathrm{IV}}_{3,3}| \lesssim \left(\|u_{1}\|_{H^{s}}+\|u_{2}\|_{H^{s}}\right)^{3}\|z\|_{L^{2} }\|w\|_{L^{2}}. \tag{3.43}\] Therefore, we conclude the proof of (3.22) gathering (3.25)-(3.44). Now, we turn to the proof of (3.23). Differentiating with respect to \(t\) it follows that \[\begin{split}\frac{d}{dt}\widetilde{E}_{m}^{s}&= \frac{d}{dt}\left(\int_{\mathbb{R}}z^{2}\,dx\right)+\frac{d}{dt}\left(\int_{ \mathbb{R}}|w|^{2}\,dx\right)+\frac{1}{2}\frac{d}{dt}\left(\int_{\mathbb{R}}(D _{x}^{s}z)^{2}\,dx\right)\\ &\quad+\frac{d}{dt}\left(\int_{\mathbb{R}}|D_{x}^{s+\frac{1}{2}}w| ^{2}\,dx\right)+\frac{d}{dt}\left(\mathrm{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{ 1}{2}}zD_{x}^{s-\frac{1}{2}}\left(\overline{u}w\right)\,dx\right)\\ &=\widetilde{\mathcal{I}}+\widetilde{\mathcal{I}}\mathcal{I}+ \widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}+\widetilde{\mathcal{I}}\mathcal{ V}+\widetilde{\mathcal{V}}.\end{split} \tag{3.45}\] We argue as in the proof of (3.25) and compute each term on the right-hand side of (3.45). After integrating by parts, we get that \[\widetilde{\mathcal{I}}=-\frac{1}{2}\int_{\mathbb{R}}(\partial_{x}v)z^{2}dx+2 \mathrm{Re}\int_{\mathbb{R}}z\partial_{x}(\overline{u}w)dx=\widetilde{ \mathcal{I}}_{1}+\widetilde{\mathcal{I}}_{2}.\] Then, we deduce from Holder's inequality that \[|\widetilde{\mathcal{I}}_{1}|\lesssim\|\partial_{x}v\|_{L^{\infty}}\|z\|_{L^{2 }}^{2} \tag{3.46}\] and \[\begin{split}|\widetilde{\mathcal{I}}_{2}|& \lesssim\|z\|_{L^{2}}\|\partial_{x}u\|_{L^{2}}\|w\|_{L^{\infty}}+\|z\|_{L^{2} }\|u\|_{L^{\infty}}\|\partial_{x}w\|_{L^{2}}\\ &\lesssim\|z\|_{L^{2}}\|u\|_{H^{s+\frac{1}{2}}}\|w\|_{H^{s+\frac {1}{2}}},\end{split} \tag{3.47}\] where we used also the Sobolev embedding in the last estimate and recall that \(s>\frac{1}{2}\). Moreover, we find arguing as in (3.27) that \[|\widetilde{\mathcal{I}}\mathcal{I}|\lesssim\|u\|_{H^{s}}\|w\|_{L^{2}}\|z\|_ {L^{2}}+\|u\|_{H^{s}}^{2}\|w\|_{L^{2}}^{2}. 
\tag{3.48}\] To deal with \(\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}\), we use the second equation in (3.17) and integrate by parts to obtain \[\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}=-\frac{1}{2}\int_{\mathbb{R}}D _{x}^{s}zD^{s}\partial_{x}(vz)\,dx+\mathrm{Re}\int_{\mathbb{R}}D_{x}^{s}zD_{ x}^{s}\partial_{x}(\bar{u}w)\,dx=\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1 }+\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{2}.\] On the one hand, we decompose \(\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1}\) as \[\begin{split}\widetilde{\mathcal{I}}\mathcal{I}_{1}& =-\frac{1}{2}\int_{\mathbb{R}}D_{x}^{s}z[D_{x}^{s},z]\partial_{ x}v\,dx-\frac{1}{2}\int_{\mathbb{R}}(D_{x}^{s}z)zD^{s}\partial_{x}v\,dx\\ &\quad-\frac{1}{2}\int_{\mathbb{R}}D_{x}^{s}z[D_{x}^{s},v] \partial_{x}z\,dx-\frac{1}{2}\int_{\mathbb{R}}(D_{x}^{s}z)vD^{s}\partial_{x}z \,dx\\ &=\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1,1}+\widetilde {\mathcal{I}}\mathcal{I}\mathcal{I}_{1,2}+\widetilde{\mathcal{I}}\mathcal{I} \mathcal{I}_{1,3}+\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1,4}.\end{split}\] The Holder inequality and Lemma 2.2 yield (3.49) \[\begin{split}\left|\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_ {1,1}\right|&\lesssim\|D_{x}^{s}z\|_{L^{2}}\left(\|\partial_{x}v\| _{L^{\infty}}\|D^{s}z\|_{L^{2}}+\|\partial_{x}z\|_{L^{\infty}}\|D_{x}^{s}v\|_{L ^{2}}\right);\\ \left|\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1,2}\right|& \lesssim\|D_{x}^{s}z\|_{L^{2}}\|z\|_{L^{2}}\|D_{x}^{s}\partial_{x}v\|_{L^{ \infty}};\\ \left|\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1,3}\right|& \lesssim\|D_{x}^{s}z\|_{L^{2}}\left(\|\partial_{x}v\|_{L^{\infty}}\|D^{s}z\|_{ L^{2}}+\|\partial_{x}z\|_{L^{\infty}}\|D_{x}^{s}v\|_{L^{2}}\right);\\ \left|\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{1,A}\right|& \lesssim\|\partial_{x}v\|_{L^{\infty}}\|D_{x}^{s}z\|_{L^{2}}^{2}.\end{split}\] (3.50) On the other hand, \(\widetilde{\mathcal{I}}\mathcal{I}\mathcal{I}_{2}\) cannot be estimated directly and will instead be cancelled out by a term coming from \(\widetilde{\mathcal{V}}\). 
Next, we decompose \(\widetilde{IV}\) as \[\begin{split}\widetilde{\mathcal{IV}}&=\mathrm{Im} \int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{w}D_{x}^{s+\frac{1}{2}}(vw)\,dx +\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{w}D_{x}^{s+\frac{1 }{2}}(uz)\,dx\\ &\quad+\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline {w}D_{x}^{s+\frac{1}{2}}\left(\left(\frac{1}{2}|w|^{2}+|u|^{2}\right)w+\frac{1 }{2}u^{2}\overline{w}\right)\,dx\\ &=\widetilde{\mathcal{IV}}_{1}+\widetilde{\mathcal{IV}}_{2}+ \widetilde{\mathcal{IV}}_{3}.\end{split}\] Firstly, we observe from Lemma 2.2 (ii) that \[\begin{split}\left|\widehat{\mathcal{IV}}_{1}\right|&= \left|\operatorname{Im}\int_{\mathbb{R}}D_{x}^{s+\frac{1}{2}}\overline{w}[D_{ x}^{s+\frac{1}{2}},v]w\,dx\right|\\ &\lesssim\|D_{x}^{s+\frac{1}{2}}w\|_{L^{2}}\left(\|D_{x}^{s+ \frac{1}{2}}v\|_{L^{\infty}}\|w\|_{L^{2}}+\|\partial_{x}v\|_{L^{\infty}}\|D^{ s-\frac{1}{2}}w\|_{L^{2}}\right).\end{split} \tag{3.53}\] Secondly, we decompose \(\widehat{\mathcal{IV}}_{2}\) as \[\widehat{\mathcal{IV}}_{2}=\operatorname{Im}\int_{\mathbb{R}}(D_{x}^{s+\frac{ 1}{2}}\overline{w})uD_{x}^{s+\frac{1}{2}}z\,dx+\operatorname{Im}\int_{\mathbb{ R}}D_{x}^{s+\frac{1}{2}}\overline{w}[D_{x}^{s+\frac{1}{2}},u]z\,dx=\widehat{ \mathcal{IV}}_{2,1}+\widehat{\mathcal{IV}}_{2,2}.\] While \(\widehat{\mathcal{IV}}_{2,1}\) will be canceled out by a term coming from \(\widetilde{\mathcal{V}}\), we use Lemma 2.2 (ii) and the Sobolev embedding (with \(s>\frac{1}{2}\)) to get \[\begin{split}\left|\widehat{\mathcal{IV}}_{2,2}\right|& \lesssim\|D_{x}^{s+\frac{1}{2}}w\|_{L^{2}}\left(\|D_{x}^{s+\frac{1}{2}}u \|_{L^{2}}\|z\|_{L^{\infty}}+\|\partial_{x}u\|_{L^{2}+}\|D^{s-\frac{1}{2}}z\|_ {L^{\infty_{-}}}\right)\\ &\lesssim\|D_{x}^{s+\frac{1}{2}}u\|_{L^{2}}\|z\|_{H^{s}}\|D_{x}^ {s+\frac{1}{2}}w\|_{L^{2}}.\end{split} \tag{3.54}\] Thirdly, using the fact that \(H^{s+\frac{1}{2}}\) is a Banach algebra, we deduce that \[\left|\widehat{\mathcal{IV}}_{3}\right|\lesssim\left(\|D_{x}^{s+\frac{1}{2}} u_{1}\|_{L^{2}}+\|D_{x}^{s+\frac{1}{2}}u_{2}\|_{L^{2}}\right)^{2}\|D_{x}^{s+ \frac{1}{2}}w\|_{L^{2}}^{2}. \tag{3.55}\] Now, we decompose \(\widetilde{\mathcal{V}}\) as \[\begin{split}\widetilde{\mathcal{V}}&= \operatorname{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}\partial_{t}zD_{x}^{s- \frac{1}{2}}\left(\overline{u}w\right)\,dx+\operatorname{Re}\int_{\mathbb{R} }D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{1}{2}}\left(\left(\partial_{t}\overline {u}\right)w\right)\,dx\\ &\quad+\operatorname{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_ {x}^{s-\frac{1}{2}}\left(\overline{u}\partial_{t}w\right)\,dx\\ &=\widetilde{\mathcal{V}}_{1}+\widetilde{\mathcal{V}}_{2}+ \widetilde{\mathcal{V}}_{3}.\end{split}\] Firstly, by using the second equation in (3.17), we write \[\begin{split}\widetilde{\mathcal{V}}_{1}&= \operatorname{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}\mathcal{H}\partial_{x}^ {2}zD_{x}^{s-\frac{1}{2}}\left(\overline{u}w\right)\,dx-\frac{1}{2} \operatorname{Re}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}\partial_{x}(vz)D_{x}^ {s-\frac{1}{2}}\left(\overline{u}w\right)\,dx\\ &=\widetilde{\mathcal{V}}_{1,1}+\widetilde{\mathcal{V}}_{1,2}. \end{split}\] On the one hand, by using \(\mathcal{H}\partial_{x}=D_{x}^{1}\), we see that \[\widehat{\mathcal{III}}_{2}+\widetilde{\mathcal{V}}_{1,1}=0. 
\tag{3.56}\] On the other, by using that \(H^{s}\) is a Banach algebra for \(s>\frac{1}{2}\), we have that \[\left|\widetilde{\mathcal{V}}_{1,1}\right|\lesssim\|vz\|_{H^{s}}\|\bar{u}w\|_ {H^{s}}\lesssim\|v\|_{H^{s}}\|u\|_{H^{s}}\|w\|_{H^{s}}\|z\|_{H^{s}}. \tag{3.57}\] Secondly, by using the first equation in (1.1), we decompose \(\widetilde{\mathcal{V}}_{2}\) as \[\begin{split}\widetilde{\mathcal{V}}_{2}&= \operatorname{Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{1}{2}} \left(\left(\partial_{x}^{2}\overline{u}\right)w\right)\,dx-\operatorname{Im} \int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{1}{2}}\left(\left( \overline{u_{1}}v_{1}+\overline{u_{2}}v_{2}\right)w\right)\,dx\\ &\quad-\operatorname{Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD _{x}^{s-\frac{1}{2}}\left(\left(|u_{1}|^{2}\overline{u_{1}}+|u_{2}|^{2} \overline{u_{2}}\right)w\right)\,dx\\ &=\widetilde{\mathcal{V}}_{2,1}+\widetilde{\mathcal{V}}_{2,2}+ \widetilde{\mathcal{V}}_{2,3}.\end{split}\] By integrating by parts and using the identity \(\partial_{x}=-\mathcal{H}D_{x}^{1}\), we rewrite \(\widetilde{\mathcal{V}}_{2,1}\) as \[\widetilde{\mathcal{V}}_{2,1} =-\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s}z\mathcal{H}D_{x}^{s}\left( \left(\partial_{x}\overline{u}\right)w\right)\,dx-\mathrm{Im}\int_{\mathbb{R}} D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{1}{2}}\left(\left(\partial_{x}\overline{u} \right)\left(\partial_{x}w\right)\right)\,dx\] \[=\widetilde{\mathcal{V}}_{2,1,1}+\widetilde{\mathcal{V}}_{2,1,2}.\] The contribution \(\widetilde{\mathcal{V}}_{2,1,2}\) will be canceled out by a term coming from \(\widetilde{\mathcal{V}}_{3}\). To handle the contribution \(\widetilde{\mathcal{V}}_{2,1,1}\), we use \(\partial_{x}=-\mathcal{H}D_{x}^{\frac{1}{2}}D_{x}^{\frac{1}{2}}\) and we decompose it further as \[\widetilde{\mathcal{V}}_{2,1,1} =-\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s}zD_{x}^{s+\frac{1}{2}} \left(\left(D_{x}^{\frac{1}{2}}\overline{u}\right)w\right)\,dx-\mathrm{Im} \int_{\mathbb{R}}D_{x}^{s}z\mathcal{H}D_{x}^{s}[\mathcal{H}D_{x}^{\frac{1}{2}},w]D_{x}^{\frac{1}{2}}\overline{u}\,dx\] \[=-\mathrm{Im}\int_{\mathbb{R}}(D_{x}^{s}z)wD_{x}^{s+1}\overline{ u}\,dx-\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s}z[D_{x}^{s+\frac{1}{2}},w]D_{x}^{\frac{1}{ 2}}\overline{u}\,dx\] \[\quad-\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s}z\mathcal{H}[ \mathcal{H}D_{x}^{s+\frac{1}{2}},w]D_{x}^{\frac{1}{2}}\overline{u}\,dx- \mathrm{Im}\int_{\mathbb{R}}D_{x}^{s}z\mathcal{H}[D_{x}^{s},w]\partial_{x} \overline{u}\,dx\] \[=\widetilde{\mathcal{V}}_{2,1,1,1}+\widetilde{\mathcal{V}}_{2,1,1,2}+\widetilde{\mathcal{V}}_{2,1,1,3}+\widetilde{\mathcal{V}}_{2,1,1,4}.\] On the one hand, it follows from Holder's inequality that \[\left|\widetilde{\mathcal{V}}_{2,1,1,1}\right|\lesssim\|D_{x}^{s}z\|_{L^{2}} \|w\|_{L^{2}}\|D^{s+1}u\|_{L^{\infty}}. \tag{3.58}\] On the other hand, Lemma 2.2 that \[\left|\widetilde{\mathcal{V}}_{2,1,1,2}\right|+\left|\widetilde{ \mathcal{V}}_{2,1,1,3}\right| \lesssim\|D_{x}^{s}z\|_{L^{2}}\big{(}\|D_{x}^{s+\frac{1}{2}}w\|_{ L^{2}}\|D_{x}^{\frac{1}{2}}u\|_{L^{\infty}}+\|\partial_{x}w\|_{L^{2+}}\|D_{x}^{s}u\| _{L^{\infty_{-}}}\big{)}\] \[\lesssim\|D_{x}^{s}z\|_{L^{2}}\|u\|_{H^{s+\frac{1}{2}}}\|w\|_{H^ {s+\frac{1}{2}}} \tag{3.59}\] and \[\left|\widetilde{\mathcal{V}}_{2,1,1,4}\right| \lesssim\|D_{x}^{s}z\|_{L^{2}}\big{(}\|D_{x}^{s}w\|_{L^{\infty_{ -}}}\|\partial_{x}u\|_{L^{2+}}+\|\partial_{x}w\|_{L^{2+}}\|D_{x}^{s}u\|_{L^{ \infty_{-}}}\big{)}\] \[\lesssim\|D_{x}^{s}z\|_{L^{2}}\|u\|_{H^{s+\frac{1}{2}}}\|w\|_{H^ {s+\frac{1}{2}}}. 
\tag{3.60}\] Moreover, by using that \(H^{s}\) is a Banach algebra for \(s>\frac{1}{2}\), we deduce that \[\left|\widetilde{\mathcal{V}}_{2,2}\right| \lesssim\|D_{x}^{s-\frac{1}{2}}z\|_{L^{2}}\left\|\left(\overline{u _{1}}v_{1}+\overline{u_{2}}v_{2}\right)w\right\|_{H^{s}}\] \[\lesssim\left(\|u_{1}\|_{H^{s}}\|v_{1}\|_{H^{s}}+\|u_{2}\|_{H^{s} }\|v_{2}\|_{H^{s}}\right)\|z\|_{H^{s}}\|w\|_{H^{s}} \tag{3.61}\] and \[\left|\widetilde{\mathcal{V}}_{2,3}\right| \lesssim\|D_{x}^{s-\frac{1}{2}}z\|_{L^{2}}\|\left(\|u_{1}|^{2} \overline{u_{1}}+|u_{2}|^{2}\overline{u_{2}}\right)w\|_{H^{s}}\] \[\lesssim\left(\|u_{1}\|_{H^{p}}^{3}+\|u_{2}\|_{H^{s}}^{3}\right) \|z\|_{H^{p}}\|w\|_{H^{s}}. \tag{3.62}\] Finally, by using the first equation in (3.17), we decompose \(\widetilde{\mathcal{V}}_{3}\) as \[\widetilde{\mathcal{V}}_{3} =-\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{ 1}{2}}\left(\overline{u}\partial_{x}^{2}w\right)\,dx+\frac{1}{2}\mathrm{Im} \int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s-\frac{1}{2}}\left(\overline{u} \left(vw+uz\right)\right)\,dx\] \[\quad+\mathrm{Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s- \frac{1}{2}}\left(\overline{u}\big{(}\frac{1}{4}|w|^{2}w+\frac{1}{2}|u|^{2}w+ \frac{1}{4}u^{2}\overline{w}\big{)}\right)\,dx\] \[=\widetilde{\mathcal{V}}_{3,1}+\widetilde{\mathcal{V}}_{3,2}+ \widetilde{\mathcal{V}}_{3,3}.\] By integrating by parts, we further decompose \(\widetilde{\mathcal{V}}_{3,1}\) as \[\widetilde{\mathcal{V}}_{3,1} =\operatorname{Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}zD_{x}^{s- \frac{1}{2}}\left(\partial_{x}\overline{u}\partial_{x}w\right)\,dx+\operatorname {Im}\int_{\mathbb{R}}D_{x}^{s-\frac{1}{2}}\partial_{x}zD_{x}^{s-\frac{1}{2}} \left(\overline{u}\partial_{x}w\right)\,dx\] \[=\widetilde{\mathcal{V}}_{3,1,1}+\widetilde{\mathcal{V}}_{3,1,2}.\] On the one hand, we have the cancellation \[\widetilde{\mathcal{V}}_{2,1,2}+\widetilde{\mathcal{V}}_{3,1,1}=0. \tag{3.63}\] On the other hand, by using the identity \(\partial_{x}=-\mathcal{H}D_{x}^{1}\), we rewrite \(\widetilde{\mathcal{V}}_{3,1,2}\) as \[\widetilde{\mathcal{V}}_{3,1,2} =-\operatorname{Im}\int_{\mathbb{R}}\mathcal{H}D_{x}^{s+\frac{1 }{2}}zD_{x}^{s-\frac{1}{2}}\left(\overline{u}\partial_{x}w\right)\,dx\] \[=\operatorname{Im}\int_{\mathbb{R}}(D_{x}^{s+\frac{1}{2}}z) \overline{u}D_{x}^{s+\frac{1}{2}}w\,dx+\operatorname{Im}\int_{\mathbb{R}}(D_ {x}^{s+\frac{1}{2}}z)[\mathcal{H}D_{x}^{s-\frac{1}{2}},\overline{u}]\partial_{ x}w\,dx\] \[=\widetilde{\mathcal{V}}_{3,1,2,1}+\widetilde{\mathcal{V}}_{3,1, 2,2}\] and we use the cancellation \[\widetilde{\mathcal{IV}}_{2,1}+\widetilde{\mathcal{V}}_{3,1,2,1}=0 \tag{3.64}\] to deal with the first term. 
To handle the second term, we rewrite it as \[\widetilde{\mathcal{V}}_{3,1,2,2} =\operatorname{Im}\int_{\mathbb{R}}D_{x}^{s}z[\mathcal{H}D_{x}^{ s},\overline{u}]\partial_{x}w\,dx-\operatorname{Im}\int_{\mathbb{R}}D_{x}^{s}z[D_ {x}^{\frac{1}{2}},\overline{u}]D_{x}^{s+\frac{1}{2}}w\,dx\] \[=\widetilde{\mathcal{V}}_{3,1,2,2,1}+\widetilde{\mathcal{V}}_{3, 1,2,2,2},\] and we deduce from Lemma 2.2 that \[\left|\widetilde{\mathcal{V}}_{3,1,2,2,1}\right| \lesssim\|D_{x}^{s}z\|_{L^{2}}\big{(}\|D_{x}^{s}u\|_{L^{\infty_{ -}}}\|\partial_{x}w\|_{L^{2_{+}}}+\|\partial_{x}u\|_{L^{2_{+}}}\|D_{x}^{s}w\|_ {L^{\infty_{-}}}\big{)}\] \[\lesssim\|D_{x}^{s}z\|_{L^{2}}\|u\|_{H^{s+\frac{1}{2}}}\|w\|_{H^{ s+\frac{1}{2}}} \tag{3.65}\] and \[\left|\widetilde{\mathcal{V}}_{3,1,2,2,2}\right| \lesssim\|D_{x}^{s}z\|_{L^{2}}\|D_{x}^{\frac{1}{2}}u\|_{L^{ \infty}}\|D^{s+\frac{1}{2}}w\|_{L^{2}}\lesssim\|D_{x}^{s}z\|_{L^{2}}\|u\|_{H^{ s+\frac{1}{2}}}\|w\|_{H^{s+\frac{1}{2}}}. \tag{3.66}\] Moreover, by using that \(H^{s}\) is a Banach algebra for \(s>\frac{1}{2}\), we deduce that \[\left|\widetilde{\mathcal{V}}_{3,2}\right| \lesssim\|D_{x}^{s-\frac{1}{2}}z\|_{L^{2}}\|\left(\overline{u}( \overline{w}w+uz)\,\|_{H^{s}}\right.\] \[\lesssim\|u\|_{H^{s}}\|\overline{v}\|_{H^{s}}\|z\|_{H^{s}}\|w\|_{H ^{s}}+\|u\|_{H^{s}}^{2}\|z\|_{H^{s}}^{2} \tag{3.67}\] and \[\left|\widetilde{\mathcal{V}}_{3,3}\right| \lesssim\|D_{x}^{s-\frac{1}{2}}z\|_{L^{2}}\|\left(\overline{u}( \frac{1}{4}|w|^{2}w+\frac{1}{2}|u|^{2}w+\frac{1}{4}u^{2}\overline{w})\right)\, \|_{H^{s}}\] \[\lesssim\left(\|u_{1}\|_{H^{s}}+\|u_{2}\|_{H^{s}}\right)^{3}\|z\|_ {H^{s}}\|w\|_{H^{s}}. \tag{3.68}\] Therefore, we conclude the proof of estimate (3.23) by gathering (3.45)-(3.68). This finishes the proof of Proposition 3.2. ## 4. Proof of Theorem 1.1 ### Refined Strichartz estimates One of the main ingredients in our analysis is a refined Strichartz estimates for solutions of the linear non-homogeneous Benjamin-Ono equation. This estimate is proved by Kenig and Koenig in Proposition 2.8 in [14] and is based on previous ideas by Koch and Tzvetkov in [16]. **Proposition 4.1**.: _Let \(s>\frac{1}{2}\), \(\delta\in[0,1]\) and \(0<T\leq 1\). Assume that \(v\in C([0,T]:H^{s}(\mathbb{R}))\) is a solution to the equation_ \[\partial_{t}v-\mathcal{H}\partial_{x}^{2}v=F. \tag{4.1}\] _Then, for any \(\epsilon>0\),_ \[\|\partial_{x}v\|_{L^{2}_{T}L^{\infty}_{x}}\lesssim T^{\frac{1}{2}}\|I^{1+ \frac{\delta}{4}+\epsilon}_{x}v\|_{L^{\infty}_{T}L^{2}_{x}}+\|I^{1-\frac{3 \delta}{4}+\epsilon}_{x}F\|_{L^{2}_{T}L^{2}_{x}}. \tag{4.2}\] By relying on this estimate, we can control the term \(\|\partial_{x}v\|_{L^{1}_{T}L^{\infty}_{x}}\) appearing in the energy estimates. **Lemma 4.1**.: _Let \(s>\frac{5}{4}\) and \(0<T\leq 1\). There exists \(\kappa_{2,s}>0\) such that for any solution \((u,v)\in C([0,T]:H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R}))\) of (1.1), we have_ \[\|\partial_{x}v\|_{L^{1}_{T}L^{\infty}_{x}}\leq\kappa_{2,s}T\left(\|v\|_{L^{ \infty}_{T}H^{s}_{x}}+\|v\|_{L^{\infty}_{T}H^{s}_{x}}^{2}+\|u\|_{L^{\infty}_{ T}H^{s}_{x}}^{2}\right). \tag{4.3}\] Proof.: The proof of estimate (4.3) follows directly by applying the Holder inequality in time, estimate (4.2) with \(\delta=1\) and \(F=-\frac{1}{2}\partial_{x}(v^{2})+\partial_{x}(|u|^{2})\), and the fact that \(H^{s}(\mathbb{R})\) is a Banach algebra for \(s>\frac{1}{2}\). ### Well-posedness for smooth initial data As far as we know, there does not exist a well-posedness theory for the system (1.1) when \(\rho\neq 0\). 
The next result is based on the energy estimates derived in Section 3. **Theorem 4.4**.: _Let \(s>\frac{3}{2}\). Then, for any \((u_{0},v_{0})\in H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\), there exist a positive time \(T=T(\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}})\) and a unique maximal solution \((u,v)\) of the IVP (1.1) in \(C\big{(}[0,T^{*}):H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\big{)}\) with \(T^{*}>T(\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}})\). If the maximal time of existence \(T^{*}\) is finite, then_ \[\lim_{t\nearrow T^{*}}\|(u(t),v(t))\|_{H^{s+\frac{1}{2}}\times H^{s}}=+\infty.\] _Moreover, for any \(0<T^{\prime}<T\), there exists a neighborhood \(\mathcal{U}\) of \((u_{0},v_{0})\) in \(H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\) such that the flow map data-to-solution_ \[S:\mathcal{U}\to C\big{(}[0,T]:H^{s+\frac{1}{2}}_{x}(\mathbb{R})\times H^{s}_{ x}(\mathbb{R})\big{)}\,,(\tilde{u}_{0},\tilde{v}_{0})\mapsto(\tilde{u},\tilde{v})\] _is continous._ Proof.: First observe that by using the rescaled version (1.12) of the system, we can assume by choosing \(\lambda\) small enough that the norm initial datum \(\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\) is small. Then, the proof of the existence and the uniqueness is a combination of the parabolic regularization method with the energy estimates obtained in Propositions 3.1 and 3.2 and taking into account Remarks 3.1 and 3.2. We refer to the proof of Theorem 1.6 in [25] for the details in a similar setting. The continuous dependence and persistence is obtained by applying the Bona-Smith approximation method. We refer to [4], [11], [19] and [25] for the details. ### A priori estimates Let \((u_{0},v_{0})\in H^{\infty}(\mathbb{R})\times H^{\infty}(\mathbb{R})\). From the above result there exists a solution \((u,v)\in C\big{(}[0,T^{*}):H^{\infty}(\mathbb{R})\times H^{\infty}(\mathbb{R}) \big{)}\), where \(T^{*}\) is the maximal time of existence satisfying \(T^{*}\geq T(\|(u_{0},v_{0})\|_{H^{\frac{5}{2}}\times H^{2}})\). Moreover, we have the blow-up alternative \[\lim_{t\nearrow T^{*}}\|(u(t),v(t))\|_{H^{\frac{5}{2}}\times H^{2}}=+\infty \quad\text{if}\quad T^{*}<+\infty. \tag{4.5}\] Let \(\frac{5}{4}<s\leq\frac{3}{2}\). By using a bootstrap argument, we prove that the solution \((u,v)\) satisfies a suitable _a priori_ estimate on a positive time interval depending only on the \(H^{s+\frac{1}{2}}\times H^{s}\) norm of the initial datum. **Lemma 4.2**.: _Let \(\frac{5}{4}<s\leq\frac{3}{2}\). There exist positive constant \(\delta_{s}\), \(K_{s}\) and \(A_{s}\) such that if_ \[\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\leq\delta_{s},\] _then \(T^{*}>T_{s}:=(A_{s}(1+\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}))^{-2}\),_ \[\|\left(u,v\right)\|_{L^{\infty}_{T_{s}}(H^{s+\frac{1}{2}}\times H^{s})_{x}} \leq 8\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\quad\text{and}\quad\| \partial_{x}v\|_{L^{1}_{T_{s}}L^{\infty}_{x}}\leq K_{s}. \tag{4.6}\] Proof.: Let \(\frac{5}{4}<s\leq\frac{3}{2}\). We set \(\delta_{s}:=2^{-6}\min\{c_{s}^{-1},\widetilde{c}_{s}^{-1}\}\), where \(c_{s}\) and \(\widetilde{c}_{s}\) are respectively defined in Propositions 3.1 and 3.2. Assume that \(\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\leq\delta_{s}\). 
Then, we define \[\widetilde{T}_{s}:=\sup\left\{T\in(0,T^{*}):\|\left(u,v\right)\|_{L^{\infty}_{ T}(H^{s+\frac{1}{2}}\times H^{s})_{x}}\leq 8\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}} \times H^{s}}\right\}.\] Note that the above set is nonempty since \((u,v)\in C\big{(}[0,T^{*}):H^{\infty}(\mathbb{R})\times H^{\infty}(\mathbb{R} )\big{)}\), so that \(\widetilde{T}_{s}\) is well-defined. We argue by contradiction and assume that \[0<\tilde{T}_{s}<(A_{s}(1+\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}))^{ -2}\leq 1\] for \(A_{s}=2^{6}(\log 2)^{-1}(1+\kappa_{1,s}+\kappa_{1,2})(1+\kappa_{2,s})\), where \(\kappa_{1,s}\) and \(\kappa_{2,s}\) are respectively defined in Proposition 3.1 and Lemma 4.1. Let \(0<T<\tilde{T}_{s}\). We have from the definition of \(\tilde{T}_{s}\) that \[\|\left(u,v\right)\|_{L^{\infty}_{T}(H^{s+\frac{1}{2}}\times H^{s})_{x}}\leq 8 \|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\leq\frac{1}{8}c_{s}^{-1}.\] Then, estimate (4.3) yields \[\|\partial_{x}v\|_{L^{1}_{T}L^{\infty}_{x}}\leq 8\kappa_{2,s}T\left(1+16\|(u_{0},v _{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\right)\|(u_{0},v_{0})\|_{H^{s+\frac{1} {2}}\times H^{s}}\leq\frac{\log 2}{8(1+\kappa_{1,s}+\kappa_{1,2})}.\] Hence, we deduce by using the energy estimate (3.4) at the level \(s=2\) that \[\|\left(u,v\right)\|_{L^{\infty}_{T}(H^{\frac{5}{2}}\times H^{2})_{x}}\leq 4\|(u_{ 0},v_{0})\|_{H^{\frac{5}{2}}\times H^{2}},\quad\forall 0<T<\tilde{T}_{s}.\] This implies in view of the blow-up alternative (4.5) that \(\tilde{T}_{s}<T^{*}\). Now, applying again the energy estimate (3.4) at the level \(s\) implies that \[\|\left(u,v\right)\|_{L^{\infty}_{T_{s}}(H^{s+\frac{1}{2}}\times H^{s})_{x}} \leq 4\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\] so that by continuity, there exists some \(T_{s}^{\sharp}\) satisfying \(\tilde{T}_{s}<T_{s}^{\sharp}<T^{*}\) such that \[\|\left(u,v\right)\|_{L^{\infty}_{T_{s}^{\sharp}}(H^{s+\frac{1}{2}}\times H^{s })_{x}}\leq 6\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}.\] This contradicts the definition of \(\tilde{T}_{s}\). Therefore, \(\tilde{T}_{s}\geq T_{s}:=(A_{s}(1+\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}) )^{-2}\) and we argue as above to get the bound for \(\|\partial_{x}v\|_{L^{1}_{T_{s}}L^{\infty}_{x}}\). This concludes the proof of the lemma. ### Uniqueness and \(H^{\frac{1}{2}}\times L^{2}\)-Lipschitz bounds for the flow Let \(s>\frac{5}{4}\) and let \((u_{1},v_{1})\) and \((u_{2},v_{2})\) be two solutions of (1.1) in the class (1.10)-(1.11) corresponding to initial data \((u_{1}^{0},v_{1}^{0})\) and \((u_{2}^{0},v_{2}^{0})\). By using the change of variables (1.12), we can always assume that \(\|\left(u_{i}^{0},v_{i}^{0}\right)\|_{H^{s+\frac{1}{2}}_{x}\times H^{s}_{x}} \leq\frac{1}{2\widetilde{c}_{s}}\), for \(i=1,2\), where \(\widetilde{c}_{s}\) is the positive constant given in Proposition 3.2. Then, by possibly restricting the time interval, we can assume that (3.20) holds on \([0,T]\). We define the positive number \[K:=\max\left\{\|\partial_{x}v_{1}\|_{L^{1}_{T}L^{\infty}_{x}},\|\partial_{x}v_ {2}\|_{L^{1}_{T}L^{\infty}_{x}}\right\}.\] Therefore, we deduce from (3.21), (3.22) and the Gronwall inequality that \[\left\|(u_{1}-u_{2},v_{1}-v_{2})\right\|_{L^{\infty}_{T}(H^{\frac{1}{2}}_{x} \times L^{2}_{x})}\leq 2\,e^{\varepsilon(K+1)T}\|(u_{1}^{0}-u_{2}^{0},v_{1}^{0}-v_ {2}^{0})\|_{H^{\frac{1}{2}}\times L^{2}}. \tag{4.7}\] Estimate (4.7) provides the uniqueness result in Theorem 1.1. by choosing \((u_{1}^{0},v_{1}^{0})=(u_{2}^{0},v_{2}^{0})=(u_{0},v_{0})\). 
### Existence, persistence and continuous dependence Let \(\frac{5}{4}<s\leq\frac{3}{2}\) and let \((u_{0},v_{0})\in H^{s+\frac{1}{2}}(\mathbb{R})\times H^{s}(\mathbb{R})\). By using the change of variables (1.12), we can always assume that \(\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}\leq\delta_{s}\), where \(\delta_{s}\) is the positive constant given by Lemma 4.2. We regularize the initial datum as follows. Let \(\chi\) be a smooth cutoff function satisfying \[\chi\in C_{0}^{\infty}(\mathbb{R}),\quad 0\leq\chi\leq 1,\quad\chi_{|_{[-1,1]}} =1\quad\text{and}\quad\operatorname{supp}(\chi)\subset[-2,2]. \tag{4.8}\] Then we define \[(u_{0,n},v_{0,n})=(P_{\leq n}u_{0},P_{\leq n}v_{0})=\left(\left(\chi(|\xi|/n) \widehat{u}_{0}(\xi)\right)^{\vee},\left(\chi(|\xi|/n)\widehat{v}_{0}(\xi) \right)^{\vee}\right)\,\] for any \(n\in\mathbb{N}\), \(n\geq 1\). Then, the following estimates are well-known (see for example Lemma 5.4 in [20]). **Lemma 4.3**.: 1. _Let_ \(\sigma\geq 0\) _and_ \(n\geq 1\)_. Then,_ (4.9) \[\|u_{0,n}\|_{H^{s+\frac{1}{2}+\sigma}}\lesssim n^{\sigma}\|u_{0}\|_{H^{s+\frac{ 1}{2}}}\quad\text{and}\quad\|v_{0,n}\|_{H^{s+\sigma}}\lesssim n^{\sigma}\|v_{ 0}\|_{H^{s}}.\] 2. _Let_ \(0\leq\sigma\leq s\) _and_ \(m\geq n\geq 1\)_. Then,_ (4.10) \[\|u_{0,n}-u_{0,m}\|_{H^{s+\frac{1}{2}-\sigma}}\underset{n\to+\infty}{=}o(n^{- \sigma})\quad\text{and}\quad\|v_{0,n}-v_{0,m}\|_{H^{s-\sigma}}\underset{n\to+ \infty}{=}o(n^{-\sigma}).\] Now, for each \(n\in\mathbb{N}\), \(n\geq 1\), we consider the solution \((u_{n},v_{n})\) to (1.1) emanating from \((u_{0,n},v_{0,n})\) defined on their maximal time interval \([0,T^{*}_{n})\). From Lemmas 4.2 and 4.3 (i) with \(\sigma=0\), there exists a positive time \[T:=(A_{s}(1+\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}}))^{-2}\,, \tag{4.11}\] (where \(A_{s}\) is a positive constant), independent of \(n\), such that \((u_{n},v_{n})\in C([0,T]:H^{\infty}(\mathbb{R})\times H^{\infty}(\mathbb{R}))\) is defined on the time interval \([0,T]\) and satisfies \[\|\left(u_{n},v_{n}\right)\|_{L^{\infty}_{x}(H^{s+\frac{1}{2}}\times H^{s})_{x} }\leq 8\|(u_{0},v_{0})\|_{H^{s+\frac{1}{2}}\times H^{s}} \tag{4.12}\] and \[K:=\sup_{n\geq 1}\left\{\|\partial_{x}v_{n}\|_{L^{1}_{1}L^{\infty}_{x}}\right\}<+ \infty\,. \tag{4.13}\] Let \(m\geq n\geq 1\). We set \(w_{n,m}:=u_{n}-u_{m}\) and \(z_{n,m}:=v_{n}-v_{m}\). Then, \((w_{n,m},z_{n,m})\) satisfies (3.17) with initial datum \((w_{n,m}(\cdot,0),z_{n,m}(\cdot,0))=(u_{0,n}-u_{0,m},v_{0,n}-v_{0,m})\). Then, by using (4.7) and (4.10) with \(\sigma=s\), we deduce that \[\|(w_{n,m},z_{n,m})\|_{L^{\infty}_{T}(H^{\frac{1}{2}}_{x}\times L^{2}_{x})} \leq 2\,e^{c(K+1)T}\|(u_{0,n}-u_{0,m},v_{0,n}-v_{0,m})\|_{H^{\frac{1}{2}} \times L^{2}}\underset{n\to+\infty}{=}o(n^{-s}), \tag{4.14}\] which implies interpolating with (4.12) that \[\|(w_{n,m},z_{n,m})\|_{L^{\infty}_{T}(H^{\sigma+\frac{1}{2}}_{x} \times H^{\sigma}_{x})} \leq\|(w_{n,m},z_{n,m})\|^{\frac{\sigma}{s}}_{L^{\infty}_{T}(H^{ \frac{s+1}{2}}_{x}\times H^{\varepsilon}_{x})}\|(w_{n,m},z_{n,m})\|^{1-\frac{ \sigma}{s}}_{L^{\infty}_{T}(H^{\frac{1}{2}}_{x}\times L^{2}_{x})}\] \[\underset{n\to+\infty}{=}o(n^{-(s-\sigma)})\,,\] for all \(0\leq\sigma<s\). Therefore, we deduce that \(\{(u_{n},v_{n})\}\) is a Cauchy sequence in \(L^{\infty}([0,T]:H^{\sigma+\frac{1}{2}}(\mathbb{R})\times H^{\sigma}(\mathbb{ R}))\), for any \(0\leq\sigma<s\). 
Hence, it is not difficult to verify, passing to the limit as \(n\to+\infty\), that \((u,v)=\lim_{n\to+\infty}(u_{n},v_{n})\) is a weak solution to (1.1) in the class \(C([0,T]:H^{\sigma+\frac{1}{2}}(\mathbb{R})\times H^{\sigma}(\mathbb{R}))\) which satisfies \(\|\partial_{x}v\|_{L^{1}_{T}L^{\infty}_{x}}\leq K\), for any \(0\leq\sigma<s\). Finally, the proof that \(u\) belongs to the class (1.10), as well as the continuous dependence of the flow, follows from the classical Bona-Smith argument [4]. The proof relies on the energy estimate (3.23) and we refer the reader to [20] for more details in this setting. ### Acknowledgements The authors are grateful to the Mathematics Department of Bergen University and to the Instituto de Matemática Pura e Aplicada, where part of this work was done. F.L. was partially supported by CNPq grant 305791/2018-4, FAPERJ grant E-26/202.638/2019 and Math-AmSud EEQUADD II. D.P. was supported by a Trond Mohn Foundation grant. The authors would like to thank the anonymous referees for their careful proofreading of the manuscript.
2310.18161
Lens Absorber Coupled MKIDs for Far Infrared Imaging Spectroscopy
Future generations of astronomical imaging spectrometers are targeting the far infrared wavelengths to close the THz astronomy gap. Similar to lens antenna coupled Microwave Kinetic Inductance Detectors (MKIDs), lens absorber coupled MKIDs are a candidate for highly sensitive large format detector arrays. However, the latter scheme is more robust to misalignment and assembly issues at THz frequencies due to its incoherent detection mechanism, while requiring a less complex fabrication process. In this work, the performance of such detectors is investigated. The fabrication and sensitivity measurements of several lens absorber coupled MKID array prototypes operating at central frequencies of 6.98 and 12 THz are ongoing.
Shahab O. Dabironezare, Sven van Berkel, Pierre M. Echternach, Peter K. Day, Charles M. Bradford, Jochem J. A. Baselmans
2023-10-27T14:10:55Z
http://arxiv.org/abs/2310.18161v1
# Lens Absorber Coupled MKIDs for Far Infrared Imaging Spectroscopy ###### Abstract Future generation of astronomical imaging spectrometers are targeting the far infrared wavelengths to close the THz astronomy gap. Similar to lens antenna coupled Microwave Kinetic Inductance Detectors (MKIDs), lens absorber coupled MKIDs are a candidate for highly sensitive large format detector arrays. However, the latter is more robust to misalignment and assembly issues at THz frequencies due to its incoherent detection mechanism while requiring a less complex fabrication process. In this work, the performance of such detectors is investigated. The fabrication and sensitivity measurement of several lens absorber coupled MKID array prototypes operating at 6.98 and 12 THz central frequencies is ongoing. ## I Introduction Future generation of cooled space-based observatories are aiming towards hosting Far Infrared (FIR) imaging spectrometers operating between 1 to 12 THz. These instruments will fill the science gap between ground-based astronomy (up to 1 THz) and mid infrared observations by James Webb Space Telescope [1]. To realize imaging spectrometers at FIR band at reasonable observation times, highly sensitive detectors with noise equivalent powers (NEP) in the order of \(10^{-20}\) W/\(\sqrt{\text{Hz}}\) are required. Microwave Kinetic Inductance Detectors (MKIDs) are a promising detector technology which satisfy this requirement [2] while providing multiplexing capabilities to realize large imaging arrays [3]. The state-of-the-art NEP performance of lens antenna coupled MKIDs are demonstrated previously [2]. However, classic antenna concepts borrowed from millimetre wave applications are faced with major challenges in terms of fabrication, assembly, and alignment at frequency bands above 5 THz. As a result, the community is in need for reliable and scalable technologies to bypass these challenges. Absorber based systems, although more limited in sensitivity with respect to antenna systems and unable to distinguish phase information, have been used classically at infrared and optical wavelengths as incoherent detectors. In recent years, major efforts are immerging toward designing such absorber-based systems for various applications including security and astronomical observations [4] - [6]. Currently, lens absorber based detectors are also attracting the attention of the community as a detector scheme at THz frequencies for astronomical observations [7]. The focus of this work is on the development of lens absorber coupled MKIDs, see Fig. 1, as a complementary detector scheme to the lens antenna one for FIR astronomical observations above 5 THz. Several lens absorbers coupled MKID prototypes are under fabrication and their sensitivity measurements in the dark and optical loadings are scheduled for the upcoming months. ## II Design Methodology The considered lens absorber geometry is based on aluminum (Al) strips placed below high resistivity silicon elliptical lens elements. A standard quarter wavelength matching layer of Parylene C covers the lens elements. Two sets of detectors operating at central frequencies of 7.8 and 12 THz are considered in this work. An efficient absorption mechanism can be realized by matching the impedance of a periodic Al strip to the impedance of the incident THz radiation inside a lens element. However, strip absorber design guidelines such as [8] require narrow Al lines to reduce their inductance and provide the required impedance match at THz frequencies. 
On the other hand, wider lines are desired to provide better power handling for the readout of the MKIDs to improve their sensitivity. As a compromise, meander strip geometries similar to ones discussed in [9] are designed to compensate the high inductance of wide Al lines by adding capacitance effect in the unit cells. Here, two types of stratifications are considered: I) absorber over solid substrate; and II) absorber over a thin SiN based membrane to investigate the expected enhancement of their NEP [10]. The latter stratification requires a thin airgap between the two wafer layers for stability of the membrane. The thickness of this airgap affects the absorption performance of the detectors by modifying their impedance. The performance of the lens absorber focal plane arrays (FPAs) below the reflector system are evaluated using the computationally efficient analysis method described in [11]. In particular, the plane wave response of the considered tightly periodic absorbing meanders are modeled using the Fig. 1: (a) Schematic representation of a lens absorber based FPA where (b) absorber layer is fabricated over the backside of the lens wafer, and (c) absorber layer is fabricated over a SiN membrane on the detector wafer which is glued to the lens wafer. fundamental Floquet wave modes as an admittance matrix similar to [12]; and the response of the quasi-optical chain (reflector system and elliptical lens elements) is represented as a summation of plane waves using the Coherent Fourier Optics approach [13]. ## III Preliminary Results The performance of several lens absorber geometries coupled to an equivalent parabolic reflector was evaluated in terms of aperture efficiency. This study was performed for the two considered stratifications, and a range of absorber side lengths, \(w\), and lens focal to diameter ratios, \(f_{\#}^{1}\). The diameter of the considered lens elements is \(D_{l}=\lambda_{0}f_{\#}^{T}\) where \(\lambda_{0}\) is the wavelength in free space at the central frequency and \(f_{\#}^{T}\) is the reflector's focal length to diameter ratio. This FPA sampling leads to a maximum theoretical aperture efficiency of 50%. On the other hand, the absence of a quarter wavelength backing reflector reduces the aperture efficiency by an extra factor of 77% leading to a maximum theoretical aperture efficiency of \(\sim\)38%. The aperture efficiency for several stratifications is shown in Fig. 2 at the central wavelength of \(\lambda_{0}\). As it can be seen, in the case of the membrane stratification, an airgap thickness in the order of \(\lambda_{0}/50\) or smaller is required to reach a performance similar to the one of the solid substrate case. The results obtained from the above parametric study is used to design four lens absorber coupled MKID FPAs with two types of stratifications operating at central frequencies of 7.8 and 12 THz. ## IV Conclusion Due to the relaxed requirements on the fabrication complexity, assembly and alignment, lens absorber coupled MKIDs are a promising candidate for highly sensitive detectors operating at frequencies above 5 THz. In this work, several focal plane arrays of such detectors are being developed using meandering Al strips coupled to silicon elliptical lens arrays to assess their sensitivity. The design of these geometries is achieved by resorting to a Floquet wave representation coupled to Coherent Fourier Optics. The fabrication of four FPAs is ongoing and the measurement of their sensitivity is scheduled for the upcoming months.
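As a quick numerical illustration of two of the design quantities quoted above (the quarter-wavelength Parylene C matching layer and the lens sampling rule \(D_{l}=\lambda_{0}f_{\#}\), with \(f_{\#}\) the reflector's focal-length-to-diameter ratio), the following minimal sketch evaluates both at the two central frequencies considered; the refractive index assumed for Parylene C and the reflector f-number are illustrative values and are not taken from this work.

```python
import scipy.constants as const

# Illustration of two design quantities quoted above: the quarter-wave Parylene C
# matching layer on the silicon lens and the lens diameter D_l = lambda_0 * f_number.
# The Parylene C index and the reflector f-number below are assumed values.

n_parylene_c = 1.62        # assumed THz refractive index of Parylene C
f_number_reflector = 2.0   # assumed reflector focal-length-to-diameter ratio

for f0 in (7.8e12, 12e12):                      # central frequencies in Hz
    lam0 = const.c / f0                         # free-space wavelength
    t_ar = lam0 / (4.0 * n_parylene_c)          # quarter-wave matching layer thickness
    d_lens = lam0 * f_number_reflector          # FPA sampling: D_l = lambda_0 * f_number
    print(f"f0 = {f0 / 1e12:.2f} THz: lambda0 = {lam0 * 1e6:.1f} um, "
          f"AR layer = {t_ar * 1e6:.2f} um, lens diameter = {d_lens * 1e6:.1f} um")
```

With these assumed values the matching layers come out at a few microns and the lens diameters at some tens of microns, which illustrates the fabrication and alignment tolerances that, as noted in the introduction, make coherent antenna-coupled designs challenging at these frequencies.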
2303.18059
Inferring networks from time series: a neural approach
Network structures underlie the dynamics of many complex phenomena, from gene regulation and foodwebs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of the dynamics to which they give rise. In this work we present a powerful computational method to infer large network adjacency matrices from time series data using a neural network, in order to provide uncertainty quantification on the prediction in a manner that reflects both the degree to which the inference problem is underdetermined and the noise on the data. This is a feature that other approaches have hitherto lacked. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from its response to a power cut, providing probability densities on each edge and allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the cut. Our method is significantly more accurate than both Markov-chain Monte Carlo sampling and least squares regression on noisy data and when the problem is underdetermined, while naturally extending to the case of non-linear dynamics, which we demonstrate by learning an entire cost matrix for a non-linear model of economic activity in Greater London. Not having been specifically engineered for network inference, this method in fact represents a general parameter estimation scheme that is applicable to any high-dimensional parameter space.
Thomas Gaskin, Grigorios A. Pavliotis, Mark Girolami
2023-03-30T15:51:01Z
http://arxiv.org/abs/2303.18059v3
# Inferring networks from time series: ###### Abstract Network structures underlie the dynamics of many complex phenomena, from gene regulation and foodwebs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of their emergent dynamics. In this work we present a powerful computational method to infer large network adjacency matrices from time series data using a neural network, in order to provide uncertainty quantification on the prediction in a manner that reflects both the non-convexity of the inference problem as well as the noise on the data. This is useful since network inference problems are typically underdetermined, and a feature that has hitherto been lacking from such methods. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from its response to a power cut. Since the problem is underdetermined, many classical statistical tools (e.g. regression) will not be straightforwardly applicable. Our method, in contrast, provides probability densities on each edge, allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the power cut. We also demonstrate our method's ability to learn an entire cost matrix for a non-linear model of economic activity in Greater London. Our method outperforms OLS regression on noisy data in terms of both speed and prediction accuracy, and scales as \(N^{2}\) where OLS is cubic. Not having been specifically engineered for network inference, our method represents a general parameter estimation scheme that is applicable to any parameter dimension. **Keywords**: Network inference, Neural differential equations, Model calibration, Power grids. ###### Contents * I Introduction * II Inferring line failures in the British power grid * III Inferring economic cost networks from noisy data * IV Performance analysis and comparison with OLS * V Quantifying uncertainty * VI Discussion * VII Supporting Information * V Supporting Information * VII Supporting Information Introduction Networks are important objects of study across the scientific disciplines. They materialise as physical connections in the natural world, for instance as the mycorrhizal connections between fungi and root networks that transport nutrients and warning signals between plants [1, 2], human traffic networks [3, 4], or electricity grids [5, 6]. However, they also appear as abstract, non-physical entities, such as when describing biological interaction networks and food webs [7, 8, 9], gene or protein networks [10, 11], economic cost relations [12, 13], or social links between people along which information (and misinformation) can flow [14, 15, 16]. In all examples, though the links constituting the network may not be tangible, the mathematical description is the same. In this work, we are concerned with inferring the structure of a network from observations of its dynamics. The problem is of great scientific bearing: for instance, one may wish to understand the topology of an online social network from observing how information is passed through it, and some work has been done on this question [17, 18, 19]. Another important application is inferring the connectivity of neurons in the brain by observing their responses to external stimuli [20, 21]. 
In an entirely different setting, networks crop up in statistics in the form of conditional independence graphs, describing dependencies between different variables, which again are to be inferred from data [22, 23]. Our approach allows inferring network connectivities from time series data with uncertainty quantification. Uncertainty quantification for network inference is important for two reasons: first, the observations will often be noisy, and one would like the uncertainty on the data to translate to an uncertainty on the predicted network. Secondly however, completely inferring large networks requires equally large amounts of data - typically at least \(N-1\) equations per node, \(N\) being the number of nodes - and these observations must furthermore be linearly independent. Data of such quality and quantity will often not be available, leading to an underdetermined inference problem. The uncertainty on the predicted network should thus also reflect (at least to a certain degree) the non-convexity of the problem under consideration: how many networks are compatible with the observed data? To the best of our knowledge, no current network inference method is able to provide this information. Network inference can be performed using ordinary least squares (OLS) regression [6, 24], but this is confined to the case where the dynamics are linear in the adjacency matrix. In addition, without additional constraints OLS usually only works when the network is uniquely identifiable, and its computational cost typically scales as \(N^{3}\), making it infeasible for larger networks. Our method avoids these limitations. Computationally fast network inference methods have been developed [17, 18, 19], but these tend to be highly specialised to a particular type of observation data and give no uncertainty quantification on the network prediction. Our method, by contrast, is versatile, not having been specifically engineered to fit the network case, and having previously been applied to learn low-dimensional vectors of parameters for stochastic differential equations [25]. The presented approach is thus not limited to networks, but constitutes a general parameter estimation method. Method descriptionWe apply the method proposed in [25] to the network case. The approach consists of training a neural network to find a graph adjacency matrix \(\hat{\mathbf{A}}\in\mathbb{R}^{N\times N}\) that, when inserted into the model equations, reproduces a given time series \(\mathbf{T}=(\mathbf{x}_{1},...,\mathbf{x}_{L})\). A neural network is a function \(u_{\theta}:\mathbb{R}^{N\times q}\rightarrow\mathbb{R}^{p}\), where \(q\geq 1\) represents the number of time series steps that are passed as input. Its output is the (vectorised) estimated adjacency matrix \(\hat{\mathbf{A}}\), which is used to run a numerical solver for \(B\) iterations (\(B\) is the batch size) to produce an estimated time series \(\hat{\mathbf{T}}(\hat{\mathbf{A}})=(\hat{\mathbf{x}}_{i},...,\hat{\mathbf{x}} _{i+B})\). This in turn is used to train the internal parameters \(\boldsymbol{\theta}\) of the neural net (the weights and biases) via a loss function \(J(\hat{\mathbf{T}},\mathbf{T})\). As \(\hat{\mathbf{A}}=\hat{\mathbf{A}}(\boldsymbol{\theta})\), we may calculate the gradient \(\nabla_{\boldsymbol{\theta}}J\) and use it to optimise the internal parameters of the neural net using a backpropagation method of choice; popular choices include stochastic gradient descent, Nesterov schemes, or the Adam optimizer [26]. 
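For concreteness, the following is a minimal PyTorch sketch of the training cycle just described, with all class and function names being illustrative: the neural net maps a window of \(q\) observed states to a vectorised adjacency matrix, a differentiable solver (here a placeholder for the concrete models used later) rolls the dynamics forward over a batch of \(B\) steps, and the misfit to the data is backpropagated with the Adam optimizer. The layer width, activations and learning rate follow the hyperparameters quoted below for the power-grid example; everything else is an assumption made for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of the training cycle described above; names are illustrative.
# The "solver" is a differentiable placeholder, not one of the models used below.

class AdjacencyNet(nn.Module):
    """Maps a window of q observed states to a (vectorised) adjacency matrix."""
    def __init__(self, N, q, width=20, hidden_layers=4):
        super().__init__()
        dims = [N * q] + [width] * hidden_layers
        blocks = []
        for a, b in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(a, b, bias=False), nn.Tanh()]
        blocks += [nn.Linear(dims[-1], N * N, bias=False), nn.Hardsigmoid()]
        self.net, self.N = nn.Sequential(*blocks), N

    def forward(self, window):                       # window: (q, N) slice of the data
        return self.net(window.reshape(-1)).reshape(self.N, self.N)

def run_solver(A_hat, x0, n_steps, dt=0.01):
    """Differentiable stand-in for the numerical solver of the model equations."""
    xs, x = [], x0
    for _ in range(n_steps):
        x = x + dt * (A_hat @ x - x)                 # placeholder linear dynamics
        xs.append(x)
    return torch.stack(xs)                           # predicted time series, shape (B, N)

N, q, B = 10, 1, 2                                   # nodes, input window, batch size
T_true = torch.randn(100, N)                         # observed time series (toy data)
model = AdjacencyNet(N, q)
opt = torch.optim.Adam(model.parameters(), lr=2e-3)

for epoch in range(5):
    for i in range(T_true.shape[0] - q - B):
        A_hat = model(T_true[i:i + q])                       # estimated adjacency matrix
        T_hat = run_solver(A_hat, T_true[i + q - 1], B)      # roll model forward B steps
        loss = torch.norm(T_hat - T_true[i + q:i + q + B])   # data misfit ||T_hat - T||_2
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because the solver is written inside the automatic-differentiation framework, the gradient of the loss with respect to \(\boldsymbol{\theta}\) flows through the predicted time series and hence through the system equations; in the power-grid application below the loss additionally carries penalty terms, for instance on the asymmetry and the trace of \(\mathbf{\hat{A}}\).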
Calculating \(\nabla_{\boldsymbol{\theta}}J\) thus requires differentiating the predicted time series \(\hat{\mathbf{T}}\), and thereby the system equations, with respect to \(\hat{\mathbf{A}}\). In other words: the loss function contains knowledge of the dynamics of the model. Finally, the true data is once again input to the neural net to produce a new parameter estimate \(\mathbf{\hat{A}}\), and the cycle starts afresh. A single pass over the entire dataset is called an epoch; see fig. 1. Using a neural net allows us to exploit the fact that, as the net trains, it traverses the space of all graphs, calculating a loss at each point. By tracking its path and gathering the loss values we are able to construct a probability density on the adjacency matrix, allowing for uncertainty quantification. The method is in effect a Gibbs sampling scheme [27], where we wish to obtain the joint distribution \[p\left(\mathbf{\hat{A}},\mathbf{\hat{T}}\;\middle|\;\mathbf{T}\right)=\int p \left(\mathbf{\hat{A}},\mathbf{\hat{T}},\boldsymbol{\theta}\;\middle|\; \mathbf{T}\right)\mathrm{d}\boldsymbol{\theta}\] by marginalising over the training steps. We sample from this joint density by cyclically sampling from the estimated network \(\mathbf{\hat{A}}\), the estimated data \(\mathbf{\hat{T}}\), and the neural network weights and biases \(\boldsymbol{\theta}\) from conditional densities and using each sample for the next conditional: \[\mathbf{\hat{A}}^{i+1} \sim p\left(\mathbf{\hat{A}}\;\middle|\;\mathbf{\hat{T}}^{i}, \boldsymbol{\theta}^{i},\mathbf{T}\right) \tag{1}\] \[\mathbf{\hat{T}}^{i+1} \sim p\left(\mathbf{\hat{T}}\;\middle|\;\mathbf{\hat{A}}^{i+1}, \boldsymbol{\theta}^{i},\mathbf{T}\right)\] (2) \[\boldsymbol{\theta}^{i+1} \sim p\left(\boldsymbol{\theta}\;\middle|\;\mathbf{\hat{A}}^{i+1}, \mathbf{\hat{T}}^{i+1},\mathbf{T}\right). \tag{3}\] Here, \(i\) is the training iteration index. Each of the three densities represents different components in the cycle: eq. [1] is the neural net's output, which is deterministic: \[\mathbf{\hat{A}}^{i+1}=u_{\theta^{i}}(\mathbf{\hat{T}}^{i}).\] Eq. [2] is the output of the numerical solver, which may be stochastic, depending on whether the numerical solver is run with noise: \[p\left(\mathbf{\hat{T}}\;\middle|\;\mathbf{\hat{A}}^{i+1},\boldsymbol{\theta}^ {i},\mathbf{T}\right)\sim\rho_{NS}(\mathbf{\hat{A}}^{i+1}).\] Finally, the updated neural network parameters eq. [3] are drawn using the gradient descent scheme. By Bayes' rule, \[p\left(\mathbf{\hat{A}},\mathbf{\hat{T}},\boldsymbol{\theta}\;\middle|\; \mathbf{T}\right)\sim\exp(-\|\mathbf{\hat{T}}-\mathbf{T}\|_{2})\pi^{0}( \mathbf{\hat{A}})\pi^{0}(\boldsymbol{\theta})\pi^{0}(\mathbf{\hat{T}}),\] with \(\pi^{0}\) the priors, and \(\exp(-\|\mathbf{\hat{T}}-\mathbf{T}\|_{2})\) the likelihood function [28]. We begin this article with two application studies: first, we infer locations of a power line cut in the British power grid from observations of the network response to the cut; and secondly, we infer economic cost relations between retail centres in Greater London. Thereafter we conduct a comparative analysis of our method's performance, before finally analysing the relationship between uncertainty in the data, the convexity of the problem, and the prediction uncertainty. ## 2 Inferring line failures in the British power grid Power grids can be modelled as networks of coupled oscillators using the Kuramoto model [33, 34, 35, 36, 37]. 
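The first application uses the second-order Kuramoto dynamics written out in eq. [4] of the next paragraph. As a concrete reference, here is a minimal sketch of a semi-implicit Euler integration of those dynamics; the network, the power injections \(P_{i}\) and the time step are random, illustrative stand-ins, while \(\alpha=1\), \(\beta=0.2\) and \(\kappa=30\) follow the values quoted later in this section.

```python
import numpy as np

# Minimal sketch of the second-order Kuramoto dynamics of eq. [4] (next paragraph),
# integrated with a semi-implicit Euler scheme. The network, power injections and
# time step are illustrative stand-ins; alpha=1, beta=0.2, kappa=30 follow the
# values quoted later in this section.

rng = np.random.default_rng(0)
N = 20
alpha, beta, kappa, dt = 1.0, 0.2, 30.0, 1e-3

A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric adjacency, zero diagonal
P = rng.uniform(-200.0, 200.0, size=N)
P -= P.mean()                                  # balanced grid: sum_i P_i = 0

phi = np.zeros(N)                              # phases phi_i
dphi = np.zeros(N)                             # phase velocities d(phi_i)/dt

def accel(phi, dphi):
    coupling = (A * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    return (P + kappa * coupling - beta * dphi) / alpha

for _ in range(5000):                          # 5 seconds of grid dynamics
    dphi = dphi + dt * accel(phi, dphi)
    phi = phi + dt * dphi
```

In the experiment below the same dynamics are run on the 630-node grid of fig. 2, and the line failure is implemented by deleting the corresponding entries of \(\mathbf{A}\) once the grid has reached its synchronised steady state.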
Each node \(i\) in the network either produces or consumes electrical power \(P_{i}\) while oscillating at the grid reference frequency \(\Omega\). The nodes are connected through a weighted Figure 1: The methodological pipeline from [25] used in this work. The neural net \(u_{\theta}\) takes \(q\) time series elements as input and outputs a predicted network adjacency matrix. These predictions are fed into a numerical solver, which produces a predicted time series. The true and predicted time series are used to generate a loss functional, which in turn can be used to train the neural net’s internal parameters \(\boldsymbol{\theta}\). The goal is to retrieve the true network \(\mathbf{A}\) from the data. A single pass over the entire dataset is called an _epoch_. The dataset is processed in _batches_, meaning the loss is calculated over \(B\) steps of the time series before the weights are updated. If \(B=L\), training is equivalent to batch gradient descent; if \(B=1\), training is equivalent to stochastic gradient descent. undirected network \(\mathbf{A}=(a_{ij})\), where the link weights \(a_{ij}\sim Y_{ij}U_{ij}^{2}\) are obtained from the electrical admittances \(Y_{ij}\) and the voltages \(U_{ij}\) of the lines. The network coupling allows the phases \(\varphi_{i}(t)\) of the nodes to synchronise according to the differential equation [36] \[\alpha\frac{\mathrm{d}^{2}\varphi_{i}}{\mathrm{d}t^{2}}+\beta\frac{\mathrm{d} \varphi_{i}}{\mathrm{d}t}=P_{i}+\kappa\sum_{j}a_{ij}\sin(\varphi_{j}-\varphi_ {i}), \tag{4}\] where \(\alpha\), \(\beta\), and \(\kappa\) are the inertia, friction, and coupling coefficients respectively. A requirement for dynamical stability of the grid is that \(\sum_{i}P_{i}=0\), i.e. that as much power is put into the grid as is taken out through consumption and energy dissipation [35]. A power line failure causes the network to redistribute the power loads, causing an adjustment cascade to ripple through the network until equilibrium is restored [5]. In this work we recover the location of a line failure in the British power grid from observing these response dynamics. Figure 2a shows the high-voltage transmission grid of Great Britain as of January 2023, totalling 630 nodes (representing power stations, substations, and line intersections) and 763 edges with their operating voltages. Of the roughly 1300 power stations dotted around the island, we include those 38 with installed capacities of at least 400 MW that are directly connected to the national grid [32]; following [5, 35] we give all other nodes a random value \(P_{i}\sim\mathcal{U}[-200,+200]\) such that \(\sum_{i}P_{i}=0\). Figure 2: **(a)** Approximate high-voltage electricity transmission grid of Great Britain. Shown are 630 accurately placed nodes, representing power stations, substations, and transmission line intersections, and their connectivity as of January 2023 [29, 30, 31]. Colours indicate the operating voltage of the lines. The size of the nodes indicate their power generation or consumption capacity (absolute values shown). White ringed nodes indicate the 38 nodes that are real power stations with capacities over 400 MW [32], with all other nodes assigned a random capacity in \([-200,+200]\). The two dotted edges in the northeast of England are the edges affected by a simulated power cut, labelled by the indices of their start and end vertices. **(b)** The network response to the simulated power line failure, measured at four different nodes in the network (marked A–D). 
The equation parameters were tuned to ensure phase-locking of the oscillators (\(\alpha=1\), \(\beta=0.2\), \(\kappa=30\)). Nodes closer to the location of the line cut (A and B) show a stronger and more immediate response than nodes further away (C and D). The shaded area indicates the 4-second window we use to infer the line location. We simulate a power cut in the northeast of England by iterating the Kuramoto dynamics until the system reaches a steady state of equilibrium (defined as \(|\dot{\varphi}_{i}|/\varphi_{i}\leq 0.01\ \forall i\)) and then removing two links and recording the network response (fig. 2b). From the response we can infer the adjacency matrix of the perturbed network \(\mathbf{\tilde{A}}\) (with missing links) and, by comparing with \(\mathbf{A}^{0}\), the line failure locations. We let a neural network output a (vectorised) adjacency matrix \(\mathbf{\hat{A}}\) and use this estimated adjacency matrix to run the differential equation eq. [4], which will produce an estimate of the phase vector \(\boldsymbol{\hat{\varphi}}\). A hyperparameter sweep on synthetic data showed that using a deep neural network with 5 layers, 20 nodes per layer, and no bias provides optimal results. We use the hyperbolic tangent as an activation function on each layer except Figure 3: Estimating the line failure location. **(a)** The densities on four edges with the highest relative prediction error \(|\hat{a}_{ij}-a_{ij}^{0}|/a_{ij}^{0}\) and their respective \(p\)-values for measuring the unperturbed value \(a_{ij}^{0}\). Red dotted lines indicate the values of the unperturbed network, green lines the expectation values of the distributions. The marginals are smoothed using a Gaussian kernel. We use a training set of length \(L=400\) steps, and the batch size is \(B=2\). CPU runtime: 24 minutes. **(b)** True (black) and predicted network responses at three different locations in the network. The responses are each normalised to the value at \(t=0\). The shaded area represents the 400 time steps used to train the model. While the model is able to perfectly fit the response within the training range, it is not able to learn the full network from insufficient data, causing the time series to diverge for larger \(t\). the last, where we use the 'hard sigmoid' \[\sigma(x)=\begin{cases}0,\ x\leq-3,\\ 1,\ x\geq+3,\\ x/6+1/2,\ \text{else},\end{cases}\] which allows neural net output components to actually become zero, and not just asymptotically close, thereby ensuring sparsity of the adjacency matrix. We use the Adam optimizer [26] with a learning rate of \(0.002\) for the gradient descent step, and initialise the neural network's weights with a prior \(\pi^{0}(\boldsymbol{\theta})\) in such a way that the prior \(\pi^{0}(\mathbf{\hat{A}})\) is a delta distribution on the complete graph, \(\pi(\hat{a}_{ij})\sim\delta(1)\ \forall i,j\). This maximises the sampling domain on each edge. Since the neural network outputs are in \([0,1]\), we scale the network weights \(a_{ij}\to\lambda a_{ij}\) such that \(a_{ij}\in[0,1]\), and absorb \(\lambda\) into the coupling constant \(\kappa\); see the Supplementary Information for details on the calculations. 
We use the following loss function \(J(\mathbf{\hat{A}})\) to train the internal weights \(\boldsymbol{\theta}\) of the neural network such that it will output an adjacency matrix that reproduces the observed data: \[J(\mathbf{\hat{A}})= \|\boldsymbol{\varphi}(\mathbf{\hat{A}})-\boldsymbol{\varphi}\|_ {2}+\|\mathbf{\hat{A}}-\mathbf{\hat{A}}^{T}\|_{2}+\text{tr}(\mathbf{\hat{A}})\] \[+\nu\|\mathbf{\hat{A}}-\mathbf{A}^{0}\|_{2}.\] The first term is the error on the data, the second penalises asymmetry to enforce undirectedness of the network, and the third sets the diagonal to zero (which cannot be inferred from the data). \(\nu\) is a function designed to let the neural network search for \(\mathbf{\tilde{A}}\) in the vicinity of \(\mathbf{A}^{0}\), since we can assume a prior that the two will be similar in most entries. To this end we set \(\nu=10\) while \(|\langle\partial_{s}J\rangle|>10^{-10}\) and \(|\langle\partial_{ss}J\rangle|>10^{-10}\), and \(\nu=0\) thereafter, where \(s\) is the iteration count (number of training steps), and \(\langle J\rangle\) the total loss averaged over a window of \(20\) iterations, see fig. 4. In other words, we push the neural network towards a stable minimum in the neighbourhood of \(\mathbf{A}^{0}\) and, once the loss stabilises, permanently set \(\nu=0\). In theory \(L=N-1\) observations are needed to completely infer the network, though symmetries in the data usually means \(L>N\) is required in practice [38]. Traditional regression methods are not applicable in the underdetermined case \(L<N-1\), since they involve inversion of the Gram matrix \(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\), \(\mathbf{G}_{i}\in\mathbb{R}^{N\times L}\) containing all \(L\) interactions of the \(i\)-th node with all other nodes, and additional constraints (e.g. sparsity assumptions) must thus be placed on the network to infer its topology [39]. We purposefully underdetermine the problem by only using \(L<N-1\) steps, leading to an average Gram matrix rank of \(6\) (about \(100\) times less than required for complete inference). Additionally, we train the network on data recorded \(1\) second after the power cut, where many nodes will still be close to equilibrium. Our method does not involve matrix inversion, and can thus deal naturally with non-convex problems. Though the neural network may be unable to completely infer the network, it can nevertheless produce a density \(\rho(\mathbf{\hat{A}})\), recorded during the training, that allows us to perform hypothesis testing on the line failure location. We show the results in fig. 3, where we plot the densities on the four network edges with the highest relative prediction error \(|\hat{a}_{ij}-a_{ij}^{0}|/a_{ij}^{0}\), \(\hat{a}_{ij}\) being the most likely value. The advantage of obtaining uncertainty quantification on the network is now immediately clear: even in the underdetermined case we are able to make meaningful statistical statements about the line failure location. We see that the missing edges consistently have the highest relative prediction errors, and that the \(p\)-values for measuring the unperturbed value \(a_{ij}^{0}\) are around \(5\%\), while being statistically insignificant for all other edges. It is interesting to note that the other candidate locations are also within the vicinity of the line failure, though their predicted values are much closer to the unperturbed value. In fig. 
Though the neural network may be unable to completely infer the network, it can nevertheless produce a density \(\rho(\mathbf{\hat{A}})\), recorded during the training, that allows us to perform hypothesis testing on the line failure location. We show the results in fig. 3, where we plot the densities on the four network edges with the highest relative prediction error \(|\hat{a}_{ij}-a_{ij}^{0}|/a_{ij}^{0}\), \(\hat{a}_{ij}\) being the most likely value. The advantage of obtaining uncertainty quantification on the network is now immediately clear: even in the underdetermined case we are able to make meaningful statistical statements about the line failure location. We see that the missing edges consistently have the highest relative prediction errors, and that the \(p\)-values for measuring the unperturbed value \(a_{ij}^{0}\) are around \(5\%\), while the deviation is statistically insignificant for all other edges. It is interesting to note that the other candidate locations are also within the vicinity of the line failure, though their predicted values are much closer to the unperturbed value. In fig. 3b, we see that the predicted network reproduces the response dynamics for the range covered by the training data when inserted into eq. [4], but, since the problem was purposefully underdetermined, the errors in the prediction \(\mathbf{\hat{A}}\) cause the predicted and true time series to diverge for larger \(t\). Densities were obtained in about twenty minutes on a regular laptop CPU.

Figure 4: The total loss \(J\) and its derivatives \(\partial_{s}J\) and \(\partial_{ss}J\), averaged over a window of \(20\) iterations. The red dotted line indicates the value at which \(\nu\) is set to \(0\).

## III Inferring economic cost networks from noisy data

In the previous example the underlying network was a physical entity, but in many cases networks model abstract connections. We therefore now consider a commonly used economic model of the coupling of supply and demand [12, 13, 43] and a dataset of economic activity across Greater London. The goal is to learn the entire coupling network, not just to infer the (non-)existence of individual edges. In the model, \(N\) origin zones, representing economic demand, are coupled to \(M\) destination zones, modelling the supply side, through a network whose weights quantify the convenience with which demand from zone \(i\) can be supplied from zone \(j\): the higher the weight, the more demand flows through that edge (see fig. 5a). Such a model is applicable e.g. to an urban setting [12], with the origin zones representing residential areas, the destination zones commercial centres, and the weights quantifying the connectivity between the two (transport times, distances, etc.). The resulting cumulative demand at destination zone \(j\) depends both on the current size \(W_{j}(t)\) of the destination zone and the network weights \(c_{ij}\): \[D_{j}=\sum_{i=1}^{N}\frac{W_{j}(t)^{\alpha}c_{ij}^{\beta}}{\sum_{k=1}^{M}W_{k}(t)^{\alpha}c_{ik}^{\beta}}O_{i}(t).\] The sizes \(W_{j}\) are governed by a system of \(M\) coupled logistic Stratonovich stochastic differential equations \[\mathrm{d}W_{j}=\epsilon W_{j}(D_{j}-\kappa W_{j})\mathrm{d}t+\sigma W_{j}\circ\mathrm{d}B_{j},\] with given initial conditions \(W_{j}(0)\), see fig. 5a. \(\alpha\), \(\beta\), \(\kappa\), and \(\epsilon\) are scalar parameters. Our goal is to infer the connectivities \(c_{ij}\) from observations of the time series \(\mathbf{O}(t)\) and \(\mathbf{W}(t)\). The model includes multiplicative noise \(B_{j}\) with variance \(\sigma\geq 0\), with \(\circ\) signifying Stratonovich integration. Crucially, the model depends non-linearly on the network \(\mathbf{C}\): classical regression methods are thus not applicable here.
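To illustrate the forward model, a sketch of the demand computation and one explicit update of the destination-zone sizes is given below. It uses a plain Euler–Maruyama step rather than the Stratonovich integration specified in the text, and all names are ours.

```python
import numpy as np

def demand(W, O, C, alpha, beta):
    """Cumulative demand D_j at each destination zone (equation above).
    W: (M,) destination sizes, O: (N,) origin sizes, C: (N, M) weights c_ij."""
    U = (W[None, :] ** alpha) * (C ** beta)   # attractiveness of zone j as seen from origin i
    P = U / U.sum(axis=1, keepdims=True)      # share of origin i's demand sent to zone j
    return P.T @ O                            # D_j = sum_i P_ij O_i

def step_sizes(W, O, C, alpha, beta, kappa, eps, sigma, dt, rng):
    """One explicit (Euler-Maruyama) update of the destination-zone sizes;
    only an illustrative discretisation of the logistic SDE above."""
    D = demand(W, O, C, alpha, beta)
    drift = eps * W * (D - kappa * W)
    noise = sigma * W * np.sqrt(dt) * rng.standard_normal(W.shape)
    return W + dt * drift + noise
```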
We apply this model to a previously studied dataset of economic activity in Greater London [13, 25]. We use the ward-level household income from \(N=625\) wards for 2015 [41] and the retail floor space of the \(M=49\) largest commercial centres in London [40] as the initial origin zone and destination zone sizes respectively, i.e. \(\mathbf{O}(0)\) and \(\mathbf{W}(0)\), and from this generate a synthetic time series using the parameters estimated in [25] for a high noise level of \(\sigma=0.14\). For the network \(\mathbf{C}\) we use the Google Distance Matrix API1 to extract the shortest travel time \(d_{ij}\) between nodes, using either public transport or driving. The network weights are derived in [44] as \[c_{ij}=e^{-d_{ij}/\tau}\] where the scale factor \(\tau=\sup_{i,j}d_{ij}\) ensures a unitless exponent.

Footnote 1: developers.google.com/maps/documentation/distance-matrix

We generate a synthetic time series of length \(L=10000\) from which we subsample 2500 2-step windows, giving a total training set size of 5000. This is to ensure we sample a sufficiently broad spectrum of the system's dynamics, thereby fully determining the inference problem and isolating the effect of the training noise. A hyperparameter sweep on synthetic data showed that using a neural network with 2 layers, 20 nodes per layer, and no bias yields optimal results. We use the hyperbolic tangent as the activation function on all layers except the last, where we use the standard sigmoid function (since the network is complete, all edges are nonzero and there is no need for the hard sigmoid). To train the neural network we use the simple loss function \[J_{\theta}(\mathbf{\hat{A}})=\|\boldsymbol{\hat{\varphi}}(\mathbf{\hat{A}})-\boldsymbol{\varphi}\|_{2}.\] Since the dynamics are invariant under the scaling \(\mathbf{C}\rightarrow\lambda\mathbf{C}\), we normalise the row sums of the predicted and true networks, \(\sum_{j}c_{ij}=1\). Figure 5c shows the inferred distribution \(P(k)\) of the (weighted) origin zone node degrees \(k_{i}=\sum_{j}c_{ij}\). The solid line is the maximum likelihood prediction, and the dotted red line the true distribution. Even with a high level of noise, the model manages to accurately predict the underlying connectivity matrix, comprising over 30,000 weights, in under 5 minutes on a regular laptop CPU. We quantify the uncertainty on the distribution using the Hellinger metric, \[\bar{d}_{\mathrm{H}}(k)\sim\int\left(\sqrt{\hat{\rho}(k)}-\sqrt{\hat{\rho}\left(k\;\middle|\;\hat{\mathbf{T}}\right)}\right)^{2}\exp(-\|\hat{\mathbf{T}}-\mathbf{T}\|_{2})\mathrm{d}\hat{\mathbf{T}}. \tag{5}\] Here, \(\hat{\rho}(k)\) is the maximum likelihood estimate, that is, the predicted density for which the uncertainty is to be calculated. As we will discuss in the last section, this method meaningfully captures the uncertainty due to the noise in the data.

Figure 5: Inferring economic cost networks. **(a)** In the model, \(N\) origin zones (red) are connected to \(M\) destination zones (blue) through a weighted directed network. Economic demand flows from the origin zones to the destination zones, which supply the demand. We model the origin zones \(O_{i}(t)\) as a Wiener process with variance \(\sigma_{O}=0.1\). The resulting cumulative demand at destination zone \(j\) is given by \(W_{j}\). Note that the origin zone sizes fluctuate more rapidly than the destination zones, since there is a delay in the destination zones' response to changing consumer patterns, controlled by the parameter \(\epsilon\). We use the parameters as estimated in [25], \(\alpha=0.92\), \(\beta=0.54\), \(\kappa=8.3\), and set \(\epsilon=2\). **(b)** The initial origin and destination zone sizes, given by the total household income of the \(N=629\) wards in London (blue nodes) and the retail floor space of \(M=49\) major centres (red nodes) [40; 41]. The network is given by travel times as detailed in the text. Background map: [42]. **(c)** Predicted degree distribution (solid line) of the inferred network, for a high noise level of \(\sigma=0.14\), with Hellinger uncertainty eq. [5] (shaded area), and the true distribution (red dotted line). CPU runtime: 3 min 41 s.
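A possible discretisation of the Hellinger uncertainty in eq. [5], computed from the densities recorded during training, is sketched below; the weighting and normalisation choices are our own reading of the formula.

```python
import numpy as np

def hellinger_uncertainty(rho_ml, rho_samples, data_errors):
    """Discretised version of eq. [5]: squared-Hellinger integrand between the
    maximum-likelihood density rho_ml and each density recorded during training,
    weighted by exp(-||T_hat - T||_2).
    rho_ml: (K,) density on a common grid of degrees k
    rho_samples: (S, K) densities recorded during training
    data_errors: (S,) corresponding data errors ||T_hat - T||_2"""
    w = np.exp(-np.asarray(data_errors))
    sq = (np.sqrt(rho_samples) - np.sqrt(rho_ml)[None, :]) ** 2
    return (w[:, None] * sq).sum(axis=0) / w.sum()  # normalisation is our choice
```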
## IV Performance analysis and comparison with OLS

We now analyse our method's performance, both in terms of prediction quality and computational speed, by comparing it to that of a classical regression method, presented e.g. in [6, 39]. As mentioned in the introduction, more efficient network learning methodologies have been developed for specific problems; however, we compare our method with OLS since both methods are general and do not rely on a specific data structure. Consider noisy Kuramoto dynamics, \[\alpha\frac{\mathrm{d}^{2}\varphi_{i}}{\mathrm{d}t^{2}}+\frac{\mathrm{d}\varphi_{i}}{\mathrm{d}t}=\omega_{i}+\sum_{j}a_{ij}\sin(\varphi_{j}-\varphi_{i})+\xi_{i}, \tag{6}\] with \(\xi_{i}\stackrel{iid}{\sim}\mathcal{N}(0,\sigma)\), and \(\omega_{i}\) the eigenfrequencies of the nodes.

Figure 6: Computational performance analysis. **(a)** \(L^{1}\) prediction error of the neural scheme (black) and OLS as a function of the noise variance \(\sigma\) on the training data. For very high noise levels, the training data is essentially pure noise, and the prediction errors begin to plateau. First-order Kuramoto dynamics are used (\(\alpha=0\)). **(b)** The same plot as in (a), but using second-order dynamics (with \(\alpha=1\)). **(c)** Compute times for a single epoch of the neural method as a function of the network size \(N\). Shown are the compute times on a standard CPU (Apple M1, green line) and GPU (NVidia GeForce RTX 3090, pink line), averaged over 100 runs, with the shaded areas showing one standard deviation. Also shown is the compute time for complete inference using ordinary least squares (black dotted line), which scales as \(N^{3}\) (note the logarithmic axis). On the right axis, the average \(L^{1}\) prediction error \(\frac{1}{N}\|\hat{\mathbf{A}}-\mathbf{A}\|_{1}\) of the neural scheme after 10 epochs is shown, which remains fairly constant as a function of \(N\), showing that the number of gradient descent steps required to achieve a given average prediction error does not depend on \(N\). **(d)** Predicted degree distribution and **(e)** triangle distribution of an inferred network, trained on first-order noisy Kuramoto data (\(\sigma=0.001\)). The blue shaded areas indicate the Hellinger uncertainty eq. [5], and the red dotted lines are the true distributions. CPU runtime: 1 hour 3 minutes.

Gathering the \(L\) observations of the left-hand side into a single vector \(\mathbf{X}_{i}\) for each node, we obtain \(N\) equations \[\mathbf{X}_{i}=\mathbf{A}_{i}\cdot\mathbf{G}_{i}+\epsilon_{i}, \tag{7}\] with \(\mathbf{X}_{i}\in\mathbb{R}^{1\times L}\), \(\mathbf{A}_{i}\in\mathbb{R}^{1\times N}\) the \(i\)-th row of \(\mathbf{A}\), and \(\mathbf{G}_{i}\in\mathbb{R}^{N\times L}\) the \(L\) observations of the \(i\)-th column of \(\mathbf{\Gamma}\). From this we can then naturally estimate the \(i\)-th row of \(\mathbf{A}\) using ordinary least squares: \[\mathbf{\hat{A}}_{i}=\operatorname*{argmin}_{\boldsymbol{\gamma}\in\mathbb{R}^{1\times N}}\|\mathbf{X}_{i}-\boldsymbol{\gamma}\cdot\mathbf{G}_{i}\|_{2}^{2}=\mathbf{X}_{i}\mathbf{G}_{i}^{T}\left(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\right)^{-1}. \tag{8}\] Given sufficiently many linearly independent observations, the Gram matrix \(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\) will be invertible and the network can be inferred (with the diagonal manually set to \(0\)). We use synthetic Kuramoto data to compare our method's performance to that of OLS.
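The OLS baseline of eq. [8] can be written down directly; the sketch below solves the normal equations row by row and assumes the observation matrices \(\mathbf{G}_{i}\) have already been assembled.

```python
import numpy as np

def ols_adjacency(X, G):
    """Row-wise ordinary least squares estimate of A, mirroring eq. [8].
    X: (N, L) observations of the left-hand side, one row per node
    G: (N, N, L) with G[i] the L observations of the interaction terms of node i"""
    N = X.shape[0]
    A_hat = np.zeros((N, N))
    for i in range(N):
        Gi = G[i]                                    # (N, L)
        gram = Gi @ Gi.T                             # Gram matrix; must be full rank
        A_hat[i] = np.linalg.solve(gram, Gi @ X[i])  # equivalent to X_i G_i^T (G_i G_i^T)^-1
    np.fill_diagonal(A_hat, 0.0)                     # the diagonal cannot be inferred
    return A_hat
```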
In order to ensure invertibility of \(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\), we generate \(L\gtrsim N/2\) datasets of \(2\) time steps each. Figures 6(a)-(b) show our method's prediction accuracy alongside that of OLS regression as a function of the noise \(\sigma\) on the training data; the accuracy here is defined as the \(L^{1}\) error \[\|\mathbf{\hat{A}}-\mathbf{A}\|_{1}=\sum_{i,j}|\hat{a}_{ij}-a_{ij}|.\] For the practically noiseless case of \(\sigma<10^{-5}\), the regression scheme on average outperforms the neural approach; however, for noise levels \(\sigma\geq 10^{-5}\), the neural approach proves far more robust, outperforming OLS by up to one order of magnitude and maintaining its prediction performance up to noise levels of \(\sigma\approx 10^{-3}\). These results hold both for first-order (\(\alpha=0\)) and second-order Kuramoto equations [4] (see fig. 6(b)); in the second-order case, the neural method begins outperforming OLS at even lower levels of \(\sigma\) than in the first-order case, though the improvement is not as significant. In figure 6(c) we show the average training time per epoch as a function of \(N\), with training conducted both on a standard laptop CPU and a standard GPU. Each epoch of the model equation requires \(\mathcal{O}(LN^{2})\) operations for the vector-matrix multiplication in eq. [8], and \(\mathcal{O}(LN^{2}/B)\) for the stochastic gradient descent update, where we are holding \(L/B\) constant to ensure comparability. We thus obtain an algorithmic complexity of \(\mathcal{O}(LN^{2})\). As is visible, the average \(L^{1}\) error per edge weight remains constant over \(N\), indicating that the number of gradient descent steps required to achieve a given node-averaged prediction accuracy is independent of \(N\). The total complexity of our method is thus \(\mathcal{O}(n_{E}\times LN^{2})\), with \(n_{E}\) the number of training epochs. By comparison, OLS in general has a cubic dependency on \(N\), due to the required matrix inversion. Being a neural approach, our method can make use of GPU accelerated training, leading to an order of magnitude faster performance. Lastly, figures 6(d)-(e) show the estimated weighted degree and triangle distributions of a graph with \(1000\) nodes, or \(1\) million edge weights to be estimated, for noisy training data. The number of weighted, undirected triangles on each node \(i\) is given by \(\frac{1}{2}\sum_{jk}a_{ij}a_{jk}a_{ki}\). The model robustly finds the true adjacency matrix, and we again quantify uncertainty on the prediction using the Hellinger distance (eq. [5]) between each distribution estimated during training and the maximum likelihood estimate. Estimating a network with \(1000\) nodes on a standard laptop CPU took about \(1\) hour, which reduces to \(6\) minutes when using a GPU. Most high-performance network inference techniques demonstrate their viability on graphs with at most this number of nodes, e.g. Con-Nle [17] and NetINF [19]. In [17], the authors state that graphs with \(1000\) nodes can typically be inferred from cascade data in under \(10\) minutes on a standard laptop. Similarly, the authors of NetINF [19] state that it can infer a network with \(1000\) nodes in a matter of minutes, though this algorithm does not infer edge weights, only the existence of edges, and neither technique provides uncertainty quantification.
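The degree and triangle statistics reported in figs. 6(d)-(e) follow directly from an inferred adjacency matrix, e.g.:

```python
import numpy as np

def weighted_degrees(A):
    """Weighted node degrees k_i = sum_j a_ij."""
    return A.sum(axis=1)

def weighted_triangles(A):
    """Weighted, undirected triangles per node, 0.5 * sum_{j,k} a_ij a_jk a_ki,
    i.e. half the diagonal of A^3 for a symmetric A."""
    return 0.5 * np.diag(A @ A @ A)
```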
## V Quantifying uncertainty

There are two sources of uncertainty when inferring adjacency matrices: the non-convexity of the loss function \(J\), and the noise \(\sigma\) on the data. In general, it is hard to quantify the convexity of \(J\), since we do not know how many networks fit the equation at hand. However, when the dynamics are linear in the adjacency matrix \(\mathbf{A}\), we can do so using the Gram matrix of the observations of each node \(i\), \(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\). For regression methods to be applicable, \(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\) must be invertible for each \(i\). The (non-)convexity of the problem can thus be quantified for example by the minimum rank of all the Gram matrices, \[\mathfrak{c}:=\min_{i}\mathrm{rk}\left(\mathbf{G}_{i}\mathbf{G}_{i}^{T}\right). \tag{9}\] The problem is fully determined if \(\mathfrak{c}=N-1\). In figure 7a we show the Hellinger uncertainty eq. [5] on the predicted degree distribution as a function of \(\mathfrak{c}\). As is visible, the error on the distribution decreases almost linearly as \(\mathfrak{c}\) tends to its maximum value \(N-1\). For \(\mathfrak{c}=N-1\), some residual uncertainty remains due to the uncertainty on the neural network parameters \(\boldsymbol{\theta}\). In figures 7b-c we again show the Hellinger error on the predicted degree and triangle distributions, this time as a function of the noise \(\sigma\). To demonstrate that the uncertainty is not an artefact of the choice of metric, we also show the behaviour of the relative entropy, \[\bar{d}_{\mathrm{KL}}(k)\sim\int\hat{\rho}\left(k\;\middle|\;\hat{\mathbf{T}}\right)\log\left(\frac{\hat{\rho}\left(k\;\middle|\;\hat{\mathbf{T}}\right)}{\hat{\rho}(k)}\right)\exp(-\|\hat{\mathbf{T}}-\mathbf{T}\|_{2})\mathrm{d}\hat{\mathbf{T}}. \tag{10}\] Both metrics reflect the noise on the training data, providing similarly behaved, meaningful uncertainty quantification. As the noise tends to \(0\), some residual uncertainty again remains. Our method thus manages to capture the uncertainty arising from both sources: the non-convexity of \(J\) and the noise \(\sigma\) on the data.
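The convexity measure \(\mathfrak{c}\) of eq. [9] is straightforward to evaluate from the same observation matrices used in the OLS setup, for example:

```python
import numpy as np

def convexity_measure(G):
    """Minimum Gram-matrix rank over all nodes (eq. [9]); the inference problem
    is fully determined when this equals N - 1.
    G: (N, N, L) array with G[i] the observation matrix of node i."""
    return min(int(np.linalg.matrix_rank(Gi @ Gi.T)) for Gi in G)
```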
## VI Discussion

In this work we have demonstrated a performant method to estimate network adjacency matrices from time series data. We showed its effectiveness at correctly and reliably inferring networks in a variety of scenarios: convex and non-convex cases, low to high noise regimes, and equations that are both linear and non-linear in \(\mathbf{A}\). We were able to reliably infer power line failures in the national power grid of Great Britain, and the connectivity matrix of an economic system covering all of Greater London. We showed that our method is well able to handle inference of hundreds of thousands to a million edge weights, while simultaneously giving uncertainty quantification that meaningfully reflects both the non-convexity of the loss function as well as the noise on the training data. Our method outperforms classical ordinary least squares regression, both in terms of prediction accuracy and computational speed, while extending to the case of non-linear dynamics. In conjunction with our previous work [25], we have now also demonstrated the viability of using neural networks for parameter calibration in both the low- and high-dimensional case.

Figure 7: Quantifying the two types of uncertainty: **(a)** Total Hellinger error on the degree distribution \(P(k)\) as a function of \(\mathfrak{c}\) (eq. [9]) in the noiseless case. The error is normalised to the value at \(\mathfrak{c}=0.21(N-1)\). As \(\mathfrak{c}\) increases, the error on the prediction decreases almost linearly. We run the model from \(10\) different initialisations and average over each (shaded area: standard deviation). **(b)** and **(c)**: Prediction uncertainty due to noise in the data. Shown are the average Hellinger error (eq. [5]) and average relative entropy (eq. [10]) to the maximum likelihood estimate for the degree distribution \(P(k)\) and triangle distribution \(P(t)\) as a function of the noise \(\sigma\) in the data. Each line is an average over \(10\) different initialisations. In all cases, training was conducted on synthetic, first-order Kuramoto data (eq. [6] with \(\alpha=0\)).

Our method is simple to implement as well as highly versatile, giving excellent results across a variety of problems. All experiments in this work were purposefully conducted on a standard laptop CPU, typically taking on the order of minutes to run; however, being a neural scheme, our method can make use of GPU acceleration, further reducing the compute times. Many lines for future research open up from this work. Firstly, a thorough theoretical investigation of the method and a more quantitative comparison to other, similar methods is warranted, e.g. approximate Bayesian computing [45, 46], physics-informed neural networks [47], and using dropout for uncertainty quantification in neural networks [48]. Another direction is further reducing the amount of data required to learn parameters, which in many applications may not be abundantly available, and in future research the authors aim to address the question of learning system properties from observations of a single particle trajectory at the mean-field limit [49, 50].

Data, Materials, and Software Availability: Code and synthetic data can be found under [https://github.com/ThGaskin/NeuralABM](https://github.com/ThGaskin/NeuralABM). It is easily adaptable to new models and ideas. The code uses the utopya package2 [51, 52] to handle simulation configuration and efficiently read, write, analyse, and evaluate data. This means that the model can be run by modifying simple and intuitive configuration files, without touching code. Multiple training runs and parameter sweeps are automatically parallelised. The neural core is implemented using pytorch3. All synthetic datasets as well as the London dataset have been made available, together with the configuration files needed to reproduce the plots. Detailed instructions are provided in the supplementary material and the repository. The British power grid data [29, 30, 31] is property of the respective organisations and cannot be made available without permission; however, as of early 2023 it is freely available from those organisations upon request. The code used to run the experiments is available in the repository.

Footnote 2: utopia-project.org, utopya.readthedocs.io/en/latest

Footnote 3: pytorch.org

###### Acknowledgements.

The authors are grateful to Dr Andrew Duncan for the fruitful discussions on power grid dynamics. TG was funded by the University of Cambridge School of Physical Sciences VC Award via DAMTP and the Department of Engineering, and supported by EPSRC grants EP/P020720/2 and EP/R018413/2. The work of GP was partially funded by EPSRC grant EP/P031587/1, and by J.P. Morgan Chase & Co through a Faculty Research Award 2019 and 2021.
MG was supported by EPSRC grants EP/T000414/1, EP/R018413/2, EP/P020720/2, EP/R034710/1, EP/R004889/1, and a Royal Academy of Engineering Research Chair.
2307.05299
Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks
The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of HGNN on n-springs, n-pendulums, gravitational systems, and binary Lennard Jones systems; HGNN learns the dynamics in excellent agreement with the ground truth from small amounts of data. We also evaluate the ability of HGNN to generalize to larger system sizes, and to hybrid spring-pendulum system that is a combination of two original systems (spring and pendulum) on which the models are trained independently. Finally, employing symbolic regression on the learned HGNN, we infer the underlying equations relating the energy functionals, even for complex systems such as the binary Lennard-Jones liquid. Our framework facilitates the interpretable discovery of interaction laws directly from physical system trajectories. Furthermore, this approach can be extended to other systems with topology-dependent dynamics, such as cells, polydisperse gels, or deformable bodies.
Suresh Bishnoi, Ravinder Bhattoo, Jayadeva, Sayan Ranu, N M Anoop Krishnan
2023-07-11T14:43:25Z
http://arxiv.org/abs/2307.05299v1
# Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks ###### Abstract The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (Hgnn), a physics-enforced Gnn that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of Hgnn on \(n-\)springs, \(n-\)pendulums, gravitational systems, and binary Lennard Jones systems; Hgnn learns the dynamics in excellent agreement with the ground truth from small amounts of data. We also evaluate the ability of Hgnn to generalize to larger system sizes, and to hybrid spring-pendulum system that is a combination of two original systems (spring and pendulum) on which the models are trained independently. Finally, employing symbolic regression on the learned Hgnn, we infer the underlying equations relating the energy functionals, even for complex systems such as the binary Lennard-Jones liquid. Our framework facilitates the interpretable discovery of interaction laws directly from physical system trajectories. Furthermore, this approach can be extended to other systems with topology-dependent dynamics, such as cells, polydisperse gels, or deformable bodies. Any system in the universe is always in a continuous state of motion. This motion, also known as the dynamics, is observed and noted in terms of the trajectory, which comprises the system's configuration (that is, positions and velocities) as a function of time. Any understanding humans have developed about the universe is through analyzing the dynamics of different systems. Traditionally, the dynamics governing a physical system are expressed as governing differential equations derived from fundamental laws such as energy or momentum conservation, which, when integrated, provide the system's time evolution. However, these equations require the knowledge of functionals that relate abstract quantities such as energy, force, or stress with the configuration [1]. Thus, discovering these governing equations directly from the trajectory remains the key to understanding and comprehending the phenomena occurring in nature. Alternatively, several symbolic regression (SR) approaches have been used to discover free-form laws directly from observations [2, 3, 4]. However, the function space to explore in such cases is prohibitively large, and appropriate assumptions and constraints regarding the equations need to be provided to obtain a meaningful and straightforward equation [5, 6, 7]. Learning the dynamics of physical systems directly from their trajectory is a problem of interest in wide areas such as robotics, mechanics, biological systems such as proteins, and atomistic dynamics [8, 9, 10, 11, 12]. Recently, machine learning (ML) tools have been widely used to learn the dynamics of systems directly from the trajectory of systems [13, 14, 15, 16, 17, 18, 19]. Specifically, there have been three broad approaches to this extent, namely, data-driven, physics-informed, and physics-enforced approaches. Data-driven approaches try to develop models that learn the dynamics directly from ground-truth trajectories [13, 10, 12]. 
Physics-informed approaches rely on an additional term in the loss function based on the governing differential equation, so that the total loss combines a data loss and a physics loss [9]. In contrast, physics-enforced approaches directly infuse the inductive biases, in terms of the ordinary differential equations, into the formulation as a hard constraint. These approaches include Hamiltonian neural networks (Hnn) [20, 21, 22, 14], Lagrangian neural networks (Lnn) [15, 16, 17], and Graph Neural ODEs [23, 18, 24]. Adding the inductive bias in a physics-enforced fashion instead of a soft constraint in the loss function can significantly enhance the learning efficiency while also leading to realistic trajectories in terms of conservation laws [14, 22, 25]. Additionally, combining these formulations with graph neural networks (Gnns) [26, 27, 28, 25] can lead to superior properties such as zero-shot generalizability to unseen system sizes and to hybrid systems unseen during training, as well as more efficient learning and inference. However, although efficient in learning the dynamics, these approaches remain black-box in nature with poor interpretability of the learned function, which questions the robustness and correctness of the learned models [29]. Here, we present a framework combining Hamiltonian graph neural networks (Hgnn) and symbolic regression (SR), which enables the discovery of symbolic laws governing the energy functionals directly from the trajectory of systems. Specifically, we propose a Hgnn architecture that decouples kinetic and potential energies and, thereby, efficiently learns the Hamiltonian of a system directly from the trajectory. We evaluate our architecture on several complex systems such as \(n\)-pendulum, \(n\)-spring, \(n\)-particle gravitational, and binary LJ systems. Further, the modular nature of Hgnn enables the interpretability of the learned functions, which, when combined with SR, enables the discovery of the governing laws in a symbolic form, even for complex interactions such as binary LJ systems.

## Hamiltonian mechanics

Here, we briefly introduce the mathematical formulation of Hamiltonian mechanics that governs the dynamics of physical systems. Consider a system of \(n\) interacting particles whose positions at time \(t\) are represented in Cartesian coordinates as \(\mathbf{x}(t)=(\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),...\mathbf{x}_{n}(t))\). The Hamiltonian \(H\) of the system is defined as \(H(\mathbf{p}_{\mathbf{x}},\mathbf{x})=T(\dot{\mathbf{x}})+V(\mathbf{x})\), where \(T(\dot{\mathbf{x}})\) represents the total kinetic energy and \(V(\mathbf{x})\) represents the potential energy of the system. The Hamiltonian equations of motion for this system in Cartesian coordinates are given by [30, 31, 32] \[\dot{\mathbf{x}}=\nabla_{\mathbf{p}_{\mathbf{x}}}H,\qquad\dot{\mathbf{p}}_{\mathbf{x}}=-\nabla_{\mathbf{x}}H \tag{1}\] where \(\mathbf{p}_{\mathbf{x}}=\nabla_{\dot{x}}H=\mathbf{M}\dot{\mathbf{x}}\) represents the momentum of the system in Cartesian coordinates and \(\mathbf{M}\) represents the mass matrix. Assuming \(Z=[\mathbf{x};\mathbf{p}_{\mathbf{x}}]\) and \(J=[0,I;-I,0]\), the acceleration of a particle can be obtained from the Hamiltonian equations as \[\dot{Z}=J(\nabla_{Z}H) \tag{2}\] since \(\nabla_{Z}H+J\dot{Z}=0\) and \(J^{-1}=-J\). Sometimes systems may be subjected to constraints that depend on positions (holonomic) or velocities (Pfaffian).
For example, in the case of a pendulum, the length between the bobs remains constant, or in multi-fingered grasping, the velocity of two fingers should be such that the combined geometry is able to hold the object. In such cases, the constraint equation is represented as \(\Phi(\mathbf{x})\dot{\mathbf{x}}=0\), where \(\Phi(\mathbf{x})\in\mathbb{R}^{k\times D}\) corresponds to the \(k\) velocity constraints in a \(D\)-dimensional system. For instance, in the case of a pendulum, the constraint equation for two bobs located at \((0,0)\) and \((x_{1},x_{2})\) may be written as \(x_{1}\dot{x}_{1}+x_{2}\dot{x}_{2}=0\), which is the time derivative of the length constraint \(x_{1}^{2}+x_{2}^{2}=\text{const}\). Following this, the Hamiltonian equations of motion can be modified to feature the constraints explicitly as [16, 32] \[\nabla_{Z}H+J\dot{Z}+(D_{Z}\Psi)^{T}\lambda=0 \tag{3}\] where \(\Psi(Z)=(\Phi;\dot{\Phi})\), \(D_{Z}\Psi\) is the Jacobian of \(\Psi\) with respect to \(Z\), and \((D_{Z}\Psi)^{T}\lambda\) represents the effect of constraints on \(\dot{\mathbf{x}}\) and \(\dot{\mathbf{p}}_{\mathbf{x}}\) [16, 32]. Thus, \((D_{Z}\Psi)\dot{Z}=0\). Substituting for \(\dot{Z}\) from Eq. 3 and solving for \(\lambda\) yields [17, 25, 18, 30] \[\lambda=-[(D_{Z}\Psi)J(D_{Z}\Psi)^{T}]^{-1}[(D_{Z}\Psi)J(\nabla_{Z}H)] \tag{4}\] Substituting \(\lambda\) in Eq. 3 and solving for \(\dot{Z}\) yields \[\dot{Z}=J[\nabla_{Z}H-(D_{Z}\Psi)^{T}[(D_{Z}\Psi)J(D_{Z}\Psi)^{T}]^{-1}(D_{Z}\Psi)J\nabla_{Z}H] \tag{5}\] Note that in the absence of constraints, Eq. 5 reduces to Eq. 2. In Hamiltonian mechanics, Eq. 5 is used to obtain the acceleration of the particles, which, when integrated, provides the updated configuration of the system. Thus, the only unknown in the previous equation is \(H\), which is represented as a function of \(\mathbf{p_{x}}\) and \(\mathbf{x}\).
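For reference, Eq. (5) translates almost line by line into code; the sketch below assumes the gradient of \(H\) and the constraint Jacobian have already been evaluated at the current state, and the function names are ours.

```python
import numpy as np

def z_dot(grad_H, D_psi=None):
    """Time derivative of Z = [x; p_x] from the constrained Hamiltonian
    dynamics of Eq. (5). grad_H: (2nD,) gradient of H with respect to Z;
    D_psi: (k, 2nD) Jacobian of the constraints Psi, or None if unconstrained."""
    half = grad_H.size // 2
    J = np.block([[np.zeros((half, half)), np.eye(half)],
                  [-np.eye(half), np.zeros((half, half))]])
    if D_psi is None:
        return J @ grad_H                   # unconstrained case, Eq. (2)
    JG = J @ grad_H
    JDt = J @ D_psi.T
    # [(D_Z Psi) J (D_Z Psi)^T]^{-1} (D_Z Psi) J grad_H, cf. Eqs. (4)-(5)
    mult = np.linalg.solve(D_psi @ JDt, D_psi @ JG)
    return J @ (grad_H - D_psi.T @ mult)
```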
### Hamiltonian graph neural network

Now, we introduce our ML framework proposed to learn the Hamiltonian of a system directly from the trajectory, that is, only using the time evolution of the observable quantities \((\mathbf{x},\mathbf{p_{x}})\). To this end, we develop the Hamiltonian graph neural network (Hgnn) that parametrizes the actual \(H\) as a Gnn to obtain the learned \(\hat{H}\). Henceforth, all the terms with a hat, for example, \(\hat{\mathbf{x}}\), represent the approximate function obtained from Hgnn. Further, the \(\hat{H}\) obtained from Hgnn is substituted into Eq. (5) to obtain the acceleration and velocity of the particles. These values are integrated using a symplectic integrator to compute the updated position. First, we describe the architecture of Hgnn (see Fig. 1(a)). The physical system is modeled as an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes as particles and edges as connections between them. For instance, in an \(n\)-ball-spring system, the balls are represented as nodes and springs as edges. The raw node features are \(t\) (type of particle) as a one-hot encoding, \(\mathbf{x}\), and \(\dot{\mathbf{x}}\), and the raw edge feature is the distance, \(d=||\mathbf{x}_{j}-\mathbf{x}_{i}||\), between two particles \(i\) and \(j\). A notable difference in the Hgnn architecture from previous graph architectures is the presence of global and local features--local features participate in message passing and contribute to quantities that depend on topology. In contrast, global features do not take part in message passing.

Figure 1: **Hamiltonian graph architecture and systems studied.** (a) Hamiltonian graph neural network (Hgnn) architecture, (b) Visualization of the systems studied, namely, 3-pendulum, 5-spring, \(75\)-particles binary Lennard Jones system, \(4\)-particle gravitational system, and a hybrid spring-pendulum system. Note that the hybrid spring-pendulum system is used only to evaluate the generalizability of Hgnn.

Here, we employ the position \(\mathbf{x}\) and velocity \(\dot{\mathbf{x}}\) as global features for a node, while \(d\) and \(t\) are used as local features. For the Gnn, we employ an \(L\)-layer message passing Gnn, which takes an embedding of the node and edge features created by multi-layer perceptrons (MLPs) as input. Detailed hyper-parameters are provided in the Supplementary Material. The local features participate in message passing to create updated node and edge embeddings. The final representations of the nodes and edges, \(\mathbf{z}_{i}\) and \(\mathbf{z}_{ij}\), respectively, are passed through MLPs to obtain the Hamiltonian of the system. The Hamiltonian of the system is predicted as the sum of kinetic energy \(T\) and potential energy \(V\) in the Hgnn. Specifically, the potential energy is predicted as \(V=\sum_{i}\texttt{MLP}_{v}(\mathbf{z}_{i})+\sum_{ij}\texttt{MLP}_{e}(\mathbf{z}_{ij})\), where \(\texttt{MLP}_{v}\) and \(\texttt{MLP}_{e}\) represent the contribution from the nodes (particles themselves) and edges (interactions) toward the potential energy of the system, respectively. Kinetic energy is predicted as \(T=\sum_{i}\texttt{MLP}_{T}\left(\mathbf{h}_{i}^{0}\right)\), where \(\mathbf{h}_{i}^{0}\) is the embedding of particle \(i\). To train the Hgnn, we use only the time evolution of positions and momenta. This approach does not assume any knowledge of the functional form or knowledge of the Hamiltonian. The training approach, purely based on direct observables, can be used for any system (for example, trajectories from experiments) where the true Hamiltonian is unavailable. Thus, the loss function of Hgnn is computed by using the predicted and actual positions at timestep \(t+1\) in a trajectory based on positions and velocities at \(t\), which is then back-propagated to train the MLPs. Specifically, we use the _mean squared error (MSE)_ on the true and predicted \(Z\), which is the concatenation of positions and velocities. \[\mathcal{L}=\frac{1}{n}\left(\sum_{i=1}^{n}\left(Z_{i}^{t+1}-\hat{Z}_{i}^{t+1}\right)^{2}\right) \tag{6}\]
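A schematic of how the predicted Hamiltonian and the loss of Eq. (6) are assembled is sketched below; the callables stand in for the trained MLPs, and the concatenation of the velocity into \(\texttt{MLP}_{T}\) described later in the Methods is omitted here for brevity.

```python
import numpy as np

def predicted_hamiltonian(h0_nodes, z_nodes, z_edges, mlp_T, mlp_v, mlp_e):
    """Assemble H_hat = T + V from the per-node and per-edge terms described above.
    h0_nodes: initial node embeddings; z_nodes, z_edges: final embeddings after
    message passing; mlp_T, mlp_v, mlp_e stand in for the trained MLPs."""
    T = sum(float(mlp_T(h)) for h in h0_nodes)        # kinetic energy
    V = sum(float(mlp_v(z)) for z in z_nodes) \
        + sum(float(mlp_e(z)) for z in z_edges)       # potential energy
    return T + V

def mse_loss(Z_pred, Z_true):
    """Eq. (6): mean squared error on the concatenated positions and velocities."""
    Z_pred, Z_true = np.asarray(Z_pred), np.asarray(Z_true)
    return float(np.mean((Z_true - Z_pred) ** 2))
```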
### Case studies

**Systems studied.** Now, we evaluate the ability of Hgnn to learn the dynamics directly from the trajectory. To evaluate Hgnn, we selected four different types of systems, _viz_, \(5\)-pendulums with explicit internal constraints and subjected to an external gravitational field, \(5\)-springs with harmonic inter-particle interactions, a \(75\)-particle binary LJ system with two types of particles interacting based on the Kob-Andersen LJ potential [33], and a \(4\)-particle gravitational system with purely repulsive gravitational potential. Finally, in order to test the generalizability of Hgnn to a completely unseen system that is a combination of two systems on which it is trained, a hybrid system containing spring and pendulum elements is also considered. In this system, while the dynamics of the pendulum is governed by the external gravitational field, the dynamics of the spring system depends on the internal forces generated in the system due to the expansion and compression of the springs. Thus, the systems selected here cover a broad range of cases, that is, dynamics (i) with internal constraints (pendulum), (ii) under the influence of an external field (gravitational), (iii) with harmonic interactions (springs), (iv) with complex breakable interactions (LJ potential), and (v) in a hybrid system with and without internal constraints. The training of Hgnn is carried out for each system separately. A training dataset of \(100\) trajectories, each having \(100\) steps, was used for each system. For the spring and pendulum cases, a 5-particle system is considered with random initial conditions. In the pendulum system, the initial conditions are chosen in such a fashion that the constraints are respected. In the spring system, each ball is connected only to two other balls, forming a loop structure. For the gravitational system, a 4-particle system is considered where two particles are rotating in the clockwise direction, and the two remaining particles are rotating in the anti-clockwise direction about their center of mass. For the LJ system, a binary Kob-Andersen system with 75 particles is considered. The initial structure is generated by randomly placing the particles in a box with periodic boundary conditions. Further, the systems are simulated in a microcanonical ensemble (NVE) with temperatures corresponding to the liquid state to obtain equilibrium structures. Only once the system is equilibrated is the training data collected for this system. Hgnn models were trained on this dataset with a \(75:25\) split for training and validation. Further, to test the long-term stability and energy and momentum conservation error, the trained model was evaluated on a forward simulation for \(10^{5}\) timesteps on 100 random initial configurations. See Methods for detailed equations for the interactions, datasets, and training parameters.

**Learning the dynamics.** Now, we evaluate the performance of the trained Hgnn models. To evaluate the long-term stability of the dynamics learned by Hgnn, we analyze the trajectory predicted by Hgnn for 100 random initial configurations. Specifically, we compare the predicted and actual phase space, trajectory, kinetic energy, potential energy, and forces on all the particles of the system during the trajectory. Note that the systems studied in this case are chaotic; hence, the exact trajectory followed by Hgnn will diverge with time. However, the phase space and the errors in energy and forces can be effective metrics to analyze whether the trajectory generated by Hgnn is statistically equivalent to that of the original system, that is, sampling the same regions of the energy landscape. Further, in contrast to purely data-driven [8] or physics-informed methods, the physics-enforced architecture of Hgnn strictly follows all the characteristics of the Hamiltonian equations of motion, such as the conservation laws of energy and momentum (see Supplementary Materials). This is due to the fact that the graph architecture only predicts the Hamiltonian of the system, which is then substituted into the Hamiltonian equations of motion to obtain the updated configuration.

Figure 2: **Evaluation of Hgnn on the pendulum, spring, binary LJ, and gravitational systems.** (a) Predicted and (b) actual phase space (that is, \(x_{1}\)-position vs.
\(x_{2}\)-velocity), predicted with respect to actual (c) kinetic energy, (d) potential energy, and (e) forces in 1 (blue square), and 2 (red triangle) directions of the 5-pendulum system. (f) Predicted and (g) actual phase space (that is, 1-position, \(x_{1}\) vs \(2\)-velocity, \(\dot{x}_{2}\)), predicted with respect to actual (h) kinetic energy, (i) potential energy, and (j) forces in 1 (blue square) and 2 (red triangle) directions of the \(5\)-spring system. (k) Predicted and (l) actual positions (that is, \(x_{1}\) and \(x_{2}\) positions), predicted with respect to actual (m) kinetic energy, (n) pair-wise potential energy, \(V_{\rm ij}\) for the (0-0), (0-1), and (1-1) interactions, and (o) forces in 1 (blue square), 2 (red triangle), and 3 (green circle) directions of the \(75\)-particle LJ system. (p) Predicted and (q) actual positions (that is, \(x_{1}-\) and \(x_{2}-\)positions), predicted with respect to actual (r) kinetic energy, (s) potential energy, and (t) forces in 1 (blue square), and 2 (red triangle) directions of the gravitational system.

Due to this feature, the trajectory predicted by the Hgnn is more realistic and meaningful in terms of the system's underlying physics. Fig. 2 shows the performance of Hgnn for the pendulum (Figs. 2(a)-(e), first row), spring (Figs. 2(f)-(j), second row), binary LJ (Figs. 2(k)-(o), third row), and gravitational systems (Figs. 2(p)-(t), fourth row). For pendulum and spring systems, we observe that the phase space represented by the positions in the 1-direction (\(x_{1}\)) and velocities in the orthogonal direction (\(\dot{x}_{2}\)) predicted by Hgnn (Figs. 2(a) and (f)) exhibits an excellent match with the ground truth trajectory. It is interesting to note that Hgnn trained only on a trajectory of a single step (\(t\) to \(t+1\)) is able to learn the dynamics accurately and simulate a long-term stable trajectory of \(10^{5}\) timesteps that exactly matches the simulated trajectory. Similarly, for the binary LJ and gravitational systems, we observe that the predicted (Figs. 2(k) and (p)) and actual (Figs. 2(l) and (q)) positions in the trajectory of random unseen initial configurations explored by the systems exhibit an excellent match. Further, we observe that the predicted kinetic (Figs. 2(c), (h), (m), (r)) and potential (Figs. 2(d), (i), (n), and (s)) energies and forces (Figs. 2(e), (j), (o), and (t)) exhibit an excellent match with the ground truth values, with a mean squared error close to zero. Additional evaluation of the Hgnn architecture is performed by comparing it with two baselines, namely, Hnn (which is a physics-enforced MLP) and Hgn, which does not decouple potential and kinetic energies (see Supplementary Materials), and on additional metrics such as energy and momentum error. We observe that Hgnn significantly outperforms Hgn and Hnn in terms of rollout and energy error (see Supplementary Materials). These results confirm that the Hgnn architecture can learn the systems' dynamics directly from the trajectory and hence can be used for systems where the Hamiltonian is unknown or inaccessible (such as experimental or coarse-grained systems).

**Zero-shot generalizability.** Now, we evaluate the generalizability of the Hgnn to unseen systems, for instance, systems larger than those on which Hgnn is trained or a completely new system that is a combination of two systems on which it is independently trained.
While traditional neural-network-based approaches are restricted to the system sizes on which they are trained, Hgnn is inductive to larger (and smaller) systems than those on which it is trained. This is due to the modular nature of the Hgnn, thanks to the graph-based approach, where the learning occurs at the node and edge level. Fig. 3 shows the generalizability of Hgnn to larger system sizes than those on which it is trained. Specifically, we evaluate Hgnn on \(10\)-pendulum (Fig. 3(a)-(e)), \(50\)-spring (Fig. 3(f)-(j)), and \(600\)-particle binary LJ systems (Fig. 3(k)-(o)). We observe that Hgnn is able to generalize to larger system sizes accurately without any additional training or fine-tuning, exhibiting an excellent match with the ground truth trajectory in terms of positions, energies, and forces. Additional results on \(50\)-pendulum systems and \(500\)-spring systems are included in the Supplementary Material. We also evaluate the ability of Hgnn to simulate a hybrid spring-pendulum system (see Fig. 1(b) Hybrid system). To this end, we model the Hamiltonian of the hybrid system as the superposition of the Hamiltonians of the spring and pendulum systems. Further, we model two graphs based on the spring and pendulum elements and use the Hgnn trained on the spring and pendulum systems to obtain the Hamiltonian of the system. Fig. 3(p)-(t) shows the performance of Hgnn on the hybrid system. Hgnn provides the dynamics in excellent agreement with the ground truth for the unseen hybrid system as well, in terms of positions, energies, and forces. Additional results on the force predicted on each particle by Hgnn in comparison to the ground truth for a trajectory of 100 steps are shown in the Supplementary Material. These results confirm that Hgnn is able to learn the dynamics of systems directly from their trajectory and simulate the long-term dynamics for new initial conditions and system sizes. This is a highly desirable feature as Hgnn can be used to learn the Hamiltonian from sparse experimental data of physical systems or _ab-initio_ simulations of atomic systems. This learned model can then be used to simulate larger system sizes to investigate phenomena with higher length scales.

## Interpretability and discovering symbolic laws

Neural networks, while exhibiting an excellent capability to learn functions, are notorious for their black-box nature, offering poor or no interpretability of the learned function. In contrast, we demonstrate the interpretability of the learned Hgnn. Thanks to the modular nature of Hgnn, we analyze the functions learned by the individual MLPs that represent the node and edge level potential energies (\(\texttt{MLP}_{v}\) and \(\texttt{MLP}_{e}\), respectively) and kinetic energy (\(\texttt{MLP}_{T}\)) of the particles as a function of the learned embeddings. Fig. 4(a)-(f) show the learned functions with respect to the input features such as positions, velocities, or inter-particle distances. We observe that the functions learned by Hgnn for the potential energies of (i) the pendulum bob (\(mgx_{2}\); Fig. 4(a)), (ii) the spring (\(0.5k(r_{ij}-1)^{2}\); Fig. 4(c)), and (iii) the binary LJ systems (0-0, 0-1, 1-1; Figs. 4(d)-(f), respectively), as well as the kinetic energy of particles (\(0.5m|\dot{\textbf{x}}_{i}|^{2}\); Fig. 4(b)), exhibit a close match with the known governing equations. This shows the interpretability of the Hgnn and the additional ability to provide insights into the nature of interactions between the particles directly from their trajectory.
Thus, Hgnn can be used to discover interaction laws directly from the trajectory, even when they are not accessible or available.

Figure 3: **Generalizability to unseen systems.** (a) Predicted and (b) actual phase space (that is, \(1-\)position vs. \(2-\)velocity) and predicted with respect to actual (c) kinetic energy, (d) potential energy, and (e) forces of the \(10-\)pendulum system, using Hgnn trained on the \(5-\)pendulum system. (f) Predicted and (g) actual phase space (that is, \(1-\)position, \(x_{1}\) vs. \(2-\)velocity, \(\dot{x}_{2}\)) and predicted with respect to actual (h) kinetic energy, (i) potential energy, and (j) forces of the \(50-\)spring system using Hgnn trained on the \(5-\)spring system. (k) Predicted and (l) actual positions (that is, \(1-\) and \(2-\)positions; blue and red represent type 0 and type 1 particles), and predicted with respect to actual (m) kinetic energy, (n) pair-wise potential energy, \(V_{ij}\), and (o) forces, of the \(600-\)particle binary LJ system, using Hgnn trained on the \(75\) particle binary LJ system. (p) Predicted and (q) actual positions (that is, \(1-\) and \(2-\)positions), and predicted with respect to actual (r) kinetic energy, (s) potential energy, and (t) forces of the 10-particle hybrid system, using Hgnn trained on the \(5\)-spring and \(5\)-pendulum systems.

While the interpretability of Hgnn can provide insights into the nature of energy functionals, abstracting it further as a symbolic expression can enable discovering the underlying interaction laws and energy functions. Such functionals can then be used for simulating the system or understanding the dynamics independent of the Hgnn. Thus, beyond learning the dynamics of systems, Hgnn can be used to discover underlying energy functionals and interaction laws. To this end, we apply SR [2, 3, 4] on the learned functions of Hgnn. Specifically, we focus on the kinetic energy function, the harmonic function of the spring, the gravitational potential, and the binary LJ systems. We employ simple operations such as addition, multiplication, and polynomials to identify the governing equations that minimize the error between the values predicted by the discovered equation and those predicted by the Hgnn. The optimal equation is identified based on a score that balances complexity and loss of the equation (see Methods for details). Table 1 shows the original equation and the equation discovered based on SR of the learned Hgnn functionals. Note that for each system, the equation that exhibits the maximum score is chosen as the final equation (see Methods for details). All the equations discovered by SR with their loss, complexity, polynomials used, and other hyper-parameters are included in the Supplementary material. We observe that the recovered equations exhibit a close match for kinetic energy, harmonic spring, gravitational potential, and binary LJ. In the case of the binary LJ system, we observe that the equations reproduced for (0-0) and (1-1) interactions are very close to the original equation, while for the (0-1) interaction, the equation is slightly different, although it exhibits low loss. Interestingly, we observe that for the LJ (0-1) interaction, one of the equations provided by SR, given by \(V_{ij}=\left(\frac{0.203}{r_{ij}^{12}}-\frac{0.773}{r_{ij}^{6}}\right)\), is closer to the original equation in its functional form. However, this predicted equation has a score of \(2.22\) with a loss of \(0.000109\).
Thus, this equation has both a higher loss and a lower score than the best equation obtained in Table 1. This also suggests that for more complex interactions, an increased number of data points, especially along the inflection points, might be required to improve the probability of discovering the original equation.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Functions & Original Eq. & Discovered Eq. & Loss & Score \\ \hline Kinetic energy & \(T_{i}=0.5m|\dot{\mathbf{x}}_{i}|^{2}\) & \(T_{i}=0.500m|\dot{\mathbf{x}}_{i}|^{2}\) & \(7.96\times 10^{-10}\) & \(22.7\) \\ Harmonic spring & \(V_{ij}=0.5(r_{ij}-1)^{2}\) & \(V_{ij}=0.499\left(r_{ij}-1.00\right)^{2}\) & \(1.13\times 10^{-9}\) & \(3.15\) \\ Binary LJ (0-0) & \(V_{ij}=\left(\frac{2.0}{r_{ij}^{12}}-\frac{2.0}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{1.90}{r_{ij}^{12}}-\frac{1.95}{r_{ij}^{6}}\right)\) & \(0.00159\) & \(2.62\) \\ Binary LJ (0-1) & \(V_{ij}=\left(\frac{0.275}{r_{ij}^{12}}-\frac{0.786}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{2.33}{r_{ij}^{2}}-\frac{2.91}{r_{ij}^{2}}\right)\) & \(3.47\times 10^{-5}\) & \(5.98\) \\ Binary LJ (1-1) & \(V_{ij}=\left(\frac{0.216}{r_{ij}^{12}}-\frac{0.464}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{0.215}{r_{ij}^{12}}-\frac{0.464}{r_{ij}^{6}}\right)\) & \(1.16\times 10^{-5}\) & \(5.41\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Discovering governing laws with symbolic regression.** Original equation and the best equation discovered by symbolic regression based on the score for different functions. The loss represents the mean squared error between the data points from Hgnn and the predicted equations.

Figure 4: **Interpreting the learned functions in Hgnn.** (a) Potential energy of the pendulum system with the \(2\)-position of the bobs. (b) Kinetic energy of the particles with respect to the velocity for the pendulum bobs. (c) Potential energy with respect to the pair-wise particle distance for the spring system. (d) The pair-wise potential energy of the binary LJ system for 0-0, 0-1, and 1-1 types of particles. The results from Hgnn are shown with the markers, while the original function is shown as dotted lines.

## Outlook

Altogether, in this work, we present a framework, Hgnn, that allows the discovery of energy functionals directly from the trajectory of physical systems. The Hgnn could be extended to address several challenging problems where the dynamics depends on the topology, such as the dynamics of polydisperse gels [34], granular materials [35], biological systems such as cells [36], or even rigid body dynamics. A topology-to-graph mapping can be developed in such cases, which can then be used to learn the dynamics and further abstract them in terms of the governing interaction laws. At this juncture, it is worth mentioning some outstanding questions the present work raises. Although Hgnn presents a promising approach, it is applied only to particle-based systems with at most two-body interactions. Extending Hgnn to more complex systems, such as complex atomic structures with multi-body interactions or deformable bodies in continuum mechanics, could be addressed as future challenges. Further, the graph architecture presented in Hgnn could be enhanced by adding additional inductive biases such as equivariance [37].
Finally, extending the framework to non-Hamiltonian systems such as colloidal systems [38] exhibiting Brownian or Langevin dynamics could be pursued to widen the scope of the Hgnn framework to capture realistic systems.

## Methods

### Experimental systems

To simulate the ground truth, physics-based equations derived using Hamiltonian mechanics are employed. The equations for the \(n\)-pendulum and spring systems are given in detail below.

#### \(n\)-Pendulum

For an \(n\)-pendulum system, \(n\) point masses, representing the bobs, are connected by rigid (non-deformable) bars. These bars, thus, impose a distance constraint between two point masses as \[||\mathbf{x}_{i}-\mathbf{x}_{i-1}||^{2}=l_{i}^{2} \tag{7}\] where \(l_{i}\) represents the length of the bar connecting the \((i-1)^{th}\) and \(i^{th}\) mass. This constraint can be differentiated to obtain a _Pfaffian_ constraint as \[(\mathbf{x}_{i}-\mathbf{x}_{i-1})\cdot(\dot{\mathbf{x}}_{i}-\dot{\mathbf{x}}_{i-1})=0 \tag{8}\] Note that such a constraint can be obtained for each of the \(n\) masses considered to obtain the constraint matrix. The Hamiltonian of this system can be written as \[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}-m_{i}gx_{i,2}\right) \tag{9}\] where \(j=1,2\) represents the dimensions of the system, \(m_{i}\) represents the mass of the \(i^{th}\) particle, \(g\) represents the acceleration due to gravity in the \(2-\)direction and \(x_{i,2}\) represents the position of the \(i^{th}\) particle in the \(2-\)direction. Here, we use \(l_{i}=1.0\) m, \(m_{i}=1.0\) kg, and \(g=10.0\,m/s^{2}\).

#### \(n\)-spring system

Here, \(n\) point masses are connected by elastic springs that deform linearly (elastically) with extension or compression. Note that similar to the pendulum setup, each mass \(m_{i}\) is connected to two masses \(m_{i-1}\) and \(m_{i+1}\) through springs so that all the masses form a closed connection. The Hamiltonian of this system is given by \[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}\right)-\sum_{i=1}^{n}1/2k(||\mathbf{x}_{i-1}-\mathbf{x}_{i}||-r_{0})^{2} \tag{10}\] where \(r_{0}\) and \(k\) represent the undeformed length and the stiffness, respectively, of the spring, and \(j=1,2\) represents the dimensions of the system. Here, we use \(r_{0}=1.0\) m, \(m_{i}=1.0\) kg and \(k=1.0\) N/m.

#### \(n\)-body gravitational system

Here, \(n\) point masses are in a gravitational field generated by the point masses themselves. The Hamiltonian of this system is given by \[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}\right)+\sum_{i=1}^{n}\sum_{j=1,j\neq i}^{n}Gm_{i}m_{j}/(||\mathbf{x}_{i}-\mathbf{x}_{j}||) \tag{11}\] where \(G\) represents the gravitational constant, and \(j=1,2\) represents the dimension of the system. Here, we use \(G=1.0\) N m\({}^{2}\) kg\({}^{-2}\), \(m_{i}=1.0\) kg and \(m_{j}=1.0\) kg \(\forall\ i,j\).
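As an illustration of these ground-truth energies, the gravitational Hamiltonian of Eq. (11) can be evaluated as follows; the sketch mirrors the double sum as written (each ordered pair counted once), and the function name is ours.

```python
import numpy as np

def hamiltonian_gravitational(x, v, m, G=1.0):
    """Ground-truth Hamiltonian of the n-body gravitational system (Eq. 11):
    kinetic energy plus the pairwise G*m_i*m_j/|x_i - x_j| terms.
    x, v: (n, 2) positions and velocities; m: (n,) masses."""
    T = 0.5 * np.sum(m * np.sum(v ** 2, axis=1))
    V = 0.0
    n = len(m)
    for i in range(n):
        for j in range(n):
            if i != j:
                V += G * m[i] * m[j] / np.linalg.norm(x[i] - x[j])
    return T + V
```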
#### Binary Lennard Jones system

Here, we consider a binary LJ system known as the Kob-Andersen mixture [33] composed of 80% particles of type 0 and 20% particles of type 1. The particles in this system interact based on a 12-6 LJ potential with the pair-wise potential energy \(V_{ij}\) given by \[V_{ij}=\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{\sigma}{r_{ij}}\right)^{6}\right] \tag{12}\] where \(r_{ij}=||\mathbf{x}_{i}-\mathbf{x}_{j}||\) and \(\sigma\) and \(\epsilon\) are the LJ parameters, which take the values \(\epsilon_{0-0}=1.0\), \(\epsilon_{0-1}=1.5\), \(\epsilon_{1-1}=0.5\) and \(\sigma_{0-0}=1.00\), \(\sigma_{0-1}=0.80\), \(\sigma_{1-1}=0.88\), and \(r_{ij}\) represents the distance between particles \(i\) and \(j\). The pair-wise interaction energy between all the particles is summed to obtain the total energy of the system. For the LJ system, all the simulations are conducted at a temperature of 1.2 in the microcanonical (NVE) ensemble, ensuring the system is in a liquid state. The system is initialized with atoms placed in random positions avoiding overlap in a cubic box with periodic boundary conditions with box size \(3.968\) and cutoff for atom type \(0-0=2.5\), \(0-1=2.0\) and \(1-1=2.2\). Further, the system is equilibrated in the NVE ensemble until the memory of the initial configuration is lost. The equations of motion are integrated with the velocity Verlet algorithm.

### Gnn architecture

**Pre-Processing:** In the pre-processing layer, we generate a compact vector representation for particles and their interactions \(e_{ij}\) by employing Multi-Layer Perceptrons. \[\mathbf{h}_{i}^{0}=\texttt{squareplus}(\texttt{MLP}_{em}(\texttt{one-hot}(t_{i}))) \tag{13}\] \[\mathbf{h}_{ij}^{0}=\texttt{squareplus}(\texttt{MLP}_{em}(e_{ij})) \tag{14}\] Here, \(\texttt{squareplus}\) is an activation function. In our implementation, we use different \(\texttt{MLP}_{em}\)s for node representations corresponding to kinetic energy, potential energy, and drag. For brevity, we do not separately write the \(\texttt{MLP}_{em}\)s in Eq. 13.

**Kinetic energy and drag prediction.** Given that the graph employs Cartesian coordinates, the mass matrix can be represented as a diagonal matrix. Consequently, the kinetic energy (\(\tau_{i}\)) of a particle relies exclusively on the velocity (\(\dot{\mathbf{x}}_{i}\)) and mass (\(m_{i}\)) of said particle. In this context, the parameterized masses for each particle type are acquired through the utilization of the embedding (\(\mathbf{h}_{i}^{0}\)). As such, the predicted value of \(\tau_{i}\) for a given particle is determined by \(\tau_{i}=\texttt{squareplus}(\texttt{MLP}_{T}(\mathbf{h}_{i}^{0}\ ||\ \dot{\mathbf{x}}_{i}))\), where the symbol \(\|\) denotes the concatenation operator. In this equation, \(\texttt{MLP}_{T}\) denotes a multilayer perceptron responsible for learning the kinetic energy function, while \(\texttt{squareplus}\) represents the activation function employed. The overall kinetic energy of the system, denoted by \(T\), is calculated as the sum of individual kinetic energies: \(T=\sum_{i=1}^{n}\tau_{i}\).

**Potential energy prediction.** Typically, the potential energy of a system exhibits significant dependence on the topology of its underlying structure. In order to effectively capture this information, we utilize multiple layers of message passing among interacting particles (nodes).
During the \(l^{th}\) layer of message passing, the node embeddings are iteratively updated according to the following expression: \[\mathbf{h}_{i}^{l+1}=\texttt{squareplus}\left(\texttt{MLP}\left(\mathbf{h}_{i }^{l}+\sum_{j\in\mathcal{N}_{i}}\mathbf{W}_{\mathcal{V}}^{l}\cdot\left(\mathbf{ h}_{j}^{l}||\mathbf{h}_{ij}^{l}\right)\right)\right) \tag{15}\] where, \(\mathcal{N}_{i}=\{u_{j}\in\mathcal{V}\ |\ (u_{i},u_{j})\in\mathcal{E}\}\) is the set of neighbors of particle \(u_{i}\). \(\mathbf{W}_{\mathcal{V}}^{l}\) is a layer-specific learnable weight matrix. \(\mathbf{h}_{ij}^{l}\) represents the embedding of incoming edge \(e_{ij}\) on \(u_{i}\) in the \(l^{th}\) layer, which is computed as follows. \[\mathbf{h}_{ij}^{l+1}=\texttt{squareplus}\left(\texttt{MLP}\left(\mathbf{h}_{ ij}^{l}+\mathbf{W}_{\mathcal{E}}^{l}\cdot\left(\mathbf{h}_{i}^{l}||\mathbf{h}_{j}^{l} \right)\right)\right) \tag{16}\] Similar to \(\mathbf{W}_{\mathcal{V}}^{l}\), \(\mathbf{W}_{\mathcal{E}}^{l}\) is a layer-specific learnable weight matrix specific to the edge set. The message passing is performed over \(L\) layers, where \(L\) is a hyper-parameter. The final node and edge representations in the \(L^{th}\) layer are denoted as \(\mathbf{z}_{i}=\mathbf{h}_{i}^{L}\) and \(\mathbf{z}_{ij}=\mathbf{h}_{ij}^{L}\) respectively. The total potential energy of an \(n\)-body system is represented as \(V=\sum_{i}v_{i}+\sum_{ij}v_{ij}\). Here, \(v_{i}\) denotes the energy associated with the position of particle \(i\), while \(v_{ij}\) represents the energy arising from the interaction between particles \(i\) and \(j\). For instance, \(v_{i}\) corresponds to the potential energy of a bob in a double pendulum, considering its position within a gravitational field. On the other hand, \(v_{ij}\) signifies the energy associated with the expansion and contraction of a spring connecting two particles. In the proposed framework, the prediction for \(v_{i}\) is given by \(v_{i}=\texttt{squareplus}(\texttt{MLP}_{v_{i}}(\mathbf{h}_{i}^{0}\parallel \mathbf{x}_{i}))\). Similarly, the prediction for the pair-wise interaction energy \(v_{ij}\) is determined by \(v_{ij}=\texttt{squareplus}(\texttt{MLP}_{v_{ij}}(\mathbf{z}_{ij}))\). The parameters of the model are trained end-to-end using the MSE loss discussed in Eq. 6. #### Model architecture and training setup For Hgnn, all the MLPs are two layers deep. A square plus activation function is used for all the MLPs. We used 10000 data points from 100 trajectories divided into 75:25 (train: validation) to train all the models. The timestep used for the forward simulation of the pendulum system is \(10^{-5}s\), for the spring and gravitational system is \(10^{-3}s\), and for the LJ system is 0.0001 LJ units. All the equations of motion are integrated with the velocity-Verlet integrator. Detailed training procedures and hyper-parameters are provided in the Supplementary material. All models were trained until the decrease in loss saturates to less than 0.001 over 100 epochs. The model performance is evaluated on a forward trajectory, a task it was not explicitly trained for, of \(10s\) in the case of the pendulum and \(20s\) in the case of spring. Note that this trajectory is 2-3 orders of magnitude larger than the training trajectories from which the data has been sampled. The dynamics of \(n\)-body system are known to be chaotic for \(n\geq 2\). Hence, all the results are averaged over trajectories generated from 100 different initial conditions. 
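To make the node and edge updates in Eqs. 15 and 16 concrete, a minimal, self-contained sketch is shown below. The actual architecture is implemented with jraph (see the Simulation environment below); this sketch uses plain JAX, and the parameter shapes, weight handling and function names are illustrative assumptions rather than the exact Hgnn implementation.

```python
import jax
import jax.numpy as jnp

def squareplus(x):
    # Smooth positive activation: (x + sqrt(x^2 + 4)) / 2.
    return 0.5 * (x + jnp.sqrt(x ** 2 + 4.0))

def mlp(params, x):
    # Two-layer perceptron, matching "all the MLPs are two layers deep".
    (w1, b1), (w2, b2) = params
    return w2 @ squareplus(w1 @ x + b1) + b2

def message_passing_layer(h, h_e, senders, receivers, W_v, W_e, mlp_v, mlp_e):
    """One layer of Eqs. 15-16 for a directed edge list (senders -> receivers).

    h   : (num_nodes, d) node embeddings h_i^l
    h_e : (num_edges, d) edge embeddings h_ij^l
    W_v, W_e : (d, 2d) layer-specific weight matrices
    """
    # Edge update (Eq. 16): h_ij^{l+1} = squareplus(MLP(h_ij^l + W_E (h_i || h_j))).
    endpoint = jnp.concatenate([h[receivers], h[senders]], axis=1)            # (E, 2d)
    h_e_new = squareplus(jax.vmap(lambda e, m: mlp(mlp_e, e + W_e @ m))(h_e, endpoint))
    # Node update (Eq. 15): h_i^{l+1} = squareplus(MLP(h_i^l + sum_j W_V (h_j || h_ij))).
    msg = jax.vmap(lambda hj, e: W_v @ jnp.concatenate([hj, e]))(h[senders], h_e)  # (E, d)
    agg = jnp.zeros_like(h).at[receivers].add(msg)                            # (N, d)
    h_new = squareplus(jax.vmap(lambda hi, a: mlp(mlp_v, hi + a))(h, agg))
    return h_new, h_e_new
```

Stacking \(L\) such layers yields the final representations \(\mathbf{z}_{i}\) and \(\mathbf{z}_{ij}\) that feed the potential-energy heads described above.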
#### Symbolic regression SR refers to an approach that searches for the equations that best fit the data, rather than a parametric approach in which the functional form of the equation is chosen a priori and only its coefficients are fitted to the data. Here, we employ the PySR package to perform the SR [7]. PySR employs a tree-based approach for fitting the governing equation based on the operations and variables provided. Since the search space available to SR grows rapidly with every additional operation, it is important to carefully provide the minimum required input features and operations while placing meaningful constraints on the search space. In the present work, we choose the addition and multiplication operations. Further, we allow polynomial fits based on a set containing the (square, cube, pow(n)) operations, where pow(n) refers to powers from four to ten. The loss function used to fit the SR is the mean squared error between the predictions of the candidate equation and the data points obtained from Hgnn. Further, the equations are selected based on a score \(S\) that balances complexity \(C\) and loss \(L\). Specifically, the score is defined as \(S=\frac{dL}{dC}\), that is, the gradient of the loss with respect to complexity. For each set of hyperparameters, we select the top 10 equations based on the scores. Further, the equation having the best score among these equations is chosen as the optimal equation. All the hyperparameters associated with the SR and the corresponding equations obtained are included in the Supplementary material. #### Simulation environment All the simulations and training were carried out in the JAX environment [39, 40]. The graph architecture was developed using the jraph package [41]. The experiments were conducted on a machine with an Apple M1 chip and 8 GB RAM, running macOS Monterey. **Software packages:** numpy-1.22.1, jax-0.3.0, jax-md-0.1.20, jaxlib-0.3.0, jraph-0.0.2.dev **Hardware:** Chip: Apple M1, Total Number of Cores: 8 (4 performance and 4 efficiency), Memory: 8 GB, System Firmware Version: 7459.101.3, OS Loader Version: 7459.101.3
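As an illustration of how the SR stage described above can be configured, a minimal PySR sketch is given below. The data here are a synthetic stand-in for quantities exported from a trained Hgnn, and the hyper-parameter values are placeholders, not those reported in the Supplementary material.

```python
import numpy as np
from pysr import PySRRegressor

# Toy stand-in for Hgnn output: pair-wise distances r_ij and predicted
# interaction energies v_ij (synthetic, LJ-like, with a little noise).
rng = np.random.default_rng(0)
r = rng.uniform(0.8, 2.5, size=(1000, 1))
v_pred = 4.0 * ((1.0 / r[:, 0]) ** 12 - (1.0 / r[:, 0]) ** 6) + 0.01 * rng.normal(size=1000)

model = PySRRegressor(
    binary_operators=["+", "*"],         # addition and multiplication, as in the text
    unary_operators=["square", "cube"],  # low-order powers; higher powers can be added
    niterations=100,                     # illustrative search budget
    model_selection="score",             # select equations by the complexity/loss score
)
model.fit(r, v_pred)     # the default objective is the mean squared error
print(model.equations_)  # candidate equations with their loss, complexity and score
```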
2305.13282
Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Out-of-distribution (OOD) detection is a critical task for reliable predictions over text. Fine-tuning with pre-trained language models has been a de facto procedure to derive OOD detectors with respect to in-distribution (ID) data. Despite its common use, the understanding of the role of fine-tuning and its necessity for OOD detection is largely unexplored. In this paper, we raise the question: is fine-tuning necessary for OOD detection? We present a study investigating the efficacy of directly leveraging pre-trained language models for OOD detection, without any model fine-tuning on the ID data. We compare the approach with several competitive fine-tuning objectives, and offer new insights under various types of distributional shifts. Extensive evaluations on 8 diverse ID-OOD dataset pairs demonstrate near-perfect OOD detection performance (with 0% FPR95 in many cases), strongly outperforming its fine-tuned counterparts. We show that using distance-based detection methods, pre-trained language models are near-perfect OOD detectors when the distribution shift involves a domain change. Furthermore, we study the effect of fine-tuning on OOD detection and identify how to balance ID accuracy with OOD detection performance. Our code is publically available at https://github.com/Uppaal/lm-ood.
Rheeya Uppaal, Junjie Hu, Yixuan Li
2023-05-22T17:42:44Z
http://arxiv.org/abs/2305.13282v1
# Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection ###### Abstract Out-of-distribution (OOD) detection is a critical task for reliable predictions over text. Fine-tuning with pre-trained language models has been a _de facto_ procedure to derive OOD detectors with respect to in-distribution (ID) data. Despite its common use, the understanding of the role of fine-tuning and its necessity for OOD detection is largely unexplored. In this paper, we raise the question: _is fine-tuning necessary for OOD detection_? We present a study investigating the efficacy of directly leveraging pre-trained language models for OOD detection, without any model fine-tuning on the ID data. We compare the approach with several competitive fine-tuning objectives, and offer new insights under various types of distributional shifts. Extensive evaluations on 8 diverse ID-OOD dataset pairs demonstrate near-perfect OOD detection performance (with 0% FPR95 in many cases), strongly outperforming its fine-tuned counterparts. We show that using distance-based detection methods, pre-trained language models are near-perfect OOD detectors when the distribution shift involves a domain change. Furthermore, we study the effect of fine-tuning on OOD detection and identify how to balance ID accuracy with OOD detection performance. Our code is publically available1. Footnote 1: [https://github.com/Uppaal/lm-ood](https://github.com/Uppaal/lm-ood) ## 1 Introduction Despite recent successes, high-performing pre-trained language models are still fragile under distribution shifts, making their applications to the real world challenging Ribeiro et al. (2020). In most real-world settings, the train and test distributions are often not independent and identically distributed. Furthermore, test distributions are often non-stationary and can change over time. The problem of _out-of-distribution_ (OOD) detection addresses the identification of anomalous data, enabling the model to abstain from prediction when it is not supposed to. This is especially important for high-risk settings like financial and medical applications, where unreliable predictions could incur great costs Ulmer et al. (2020); Zhang et al. (2021). In literature, a _de facto_ procedure is to fine-tune a pre-trained language model on the in-distribution (ID) data2, and then derive the OOD detector based on the adapted model Zhou et al. (2021); Hendrycks et al. (2020); Xu et al. (2021). The fine-tuned model is hypothesized to produce embeddings that are customized to the ID data. Thus, prior work focuses on the design of fine-tuning and expects the adapted representations to be more useful for OOD detection. Despite its common use, the understanding of the role of fine-tuning and its necessity for OOD detection is largely lacking in the field. Footnote 2: Note that the ID data is defined _w.r.t._ the downstream dataset of interest, not the pre-training data. Motivated by this, we revisit the common procedure and raise the unexplored question: _is fine-tuning necessary at all, for OOD detection_? To answer this question, we introduce a simple and effective procedure for OOD detection, which does not require any model fine-tuning on the ID data. Specifically, we explore distance-based metrics for detection, which measure the relative distances of samples in the representation space of a pre-trained language model. The operating hypothesis is that embeddings of ID samples are closer to each other than the OOD sample embeddings. 
To the best of our knowledge, we are the first to explore distance-based OOD detection methods _directly on a pre-trained language model_, rather than the fine-tuned models adopted in previous works. We show that our method based on a pre-trained language model achieves near-perfect performance in detecting out-of-domain shifts, favorably outperforming its fine-tuned counterparts. For example, for 20NewsGroups (ID) vs. RTE (OOD), OOD detection with the best fine-tuning loss Khosla et al. (2020) yields an FPR95 of 24.8%, while a pre trained language model can perfectly detect RTE as OOD with 0% FPR95. For comprehensive evaluations, we experiment on 8 diverse ID-OOD dataset pairs spanning semantic and background shifts, and show that the strong performance of using the pre-trained model holds consistently. To better understand the strong performance, we further show that pre-trained models display strongly separated domain clusters, both qualitatively and quantitatively. The strong separation of domain clusters leads to the efficacy of distance-based OOD detection. Even further, we systematically compare different fine-tuning objectives, and interestingly observe that the performance of distance-based OOD detection declines over the course of fine-tuning across all objectives, despite the increase in ID classification accuracy. To this end, we provide new insights that early stopping (Yao et al., 2007) can be a promising solution, if one desires a good trade-off between OOD detection and ID classification performance. Our contributions can be summarized as follows: 1. We propose a simple and effective method for zero-shot3 OOD detection, leveraging pre-trained language models without fine-tuning on the ID data. Extensive experiments demonstrate its near-perfect performance (with 0% FPR95 in most cases), favorably outperforming its fine-tuned counterparts. Footnote 3: We use the term “zero-shot” to refer to a setting where no (ID or OOD) data is used to update the model parameters. 2. We conduct a comprehensive study to understand fine-tuning objectives and their impact on OOD detection. We offer new insights on their efficacy under various types of distribution shifts. 3. We perform qualitative and quantitative analysis on the embedding characteristics, explaining the strong performance of using a pre-trained language model for OOD detection. ## 2 Preliminaries OOD DetectionFor a supervised multi-class classification task, the labeled training dataset \(\mathcal{D}_{\text{in}}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) consists of samples from the joint distribution \(P_{\mathcal{X}\mathcal{Y}}\), where \(\mathcal{X}\) is the input space and \(\mathcal{Y}=\{1,\cdots,C\}\) is the label space. Given a test-time sample \(\mathbf{x}^{\prime}\), OOD detection aims to identify whether \(\mathbf{x}^{\prime}\) is in-distribution (ID) \(P_{\text{in}}\) or not, where \(P_{\text{in}}\) is the marginal of \(P_{\mathcal{X}\mathcal{Y}}\) on \(\mathcal{X}\). Formally, we denote the OOD detector as a binary function mapping \(G(\mathbf{x}^{\prime}):\mathcal{X}\rightarrow\{\text{in},\text{out}\}\). Types of Distribution ShiftsArora et al. (2021) categorize OOD samples by the type of distribution shift they exhibit in NLP problems. According to Ren et al. (2019), the representations \(h(\mathbf{x})\) can be decomposed into two independent and disjoint components--_semantic features_ and _background features_. 
Semantic features are discriminative and strongly correlated with labels for prediction, while background features contain population-level statistics and are invariant across labels. Based on the type of features in OOD samples, the distribution shift is categorized as _semantic shift_ or _background shift_. An example of the semantic shift is the open-set classification problem that encounters novel classes at test time (Scheirer et al., 2012), where the semantics of \(\mathbf{x}^{\prime}\) lie outside the support of \(\mathcal{Y}\). Background shift is often seen when the domain or style of texts changes in the input space \(\mathcal{X}\) while \(\mathcal{Y}\) remains the same (Pavlick and Tetreault, 2016). We comprehensively consider both types of shifts later in our experiments in Section 4. ## 3 Methodology In Section 3.1, we start by introducing OOD detection with pre-trained language models, which does not require any model fine-tuning on the ID dataset. We further consider OOD detection with model fine-tuning in Section 3.2. ### OOD Detection with Pre-trained Models We consider a pre-trained language model backbone \(h\colon\mathcal{X}\rightarrow\mathbb{R}^{d}\), which encodes an input \(\mathbf{x}\) to a \(d\)-dimensional text embedding \(h(\mathbf{x})\). The goal of OOD detection is to identify samples that do not belong to \(P_{\text{in}}\). Note that the ID data is defined _w.r.t._ the downstream dataset \(\mathcal{D}_{\text{in}}\) of interest, instead of the pre-training data. Different from prior works, _there is no fine-tuning/training on the ID samples_, and the setup is thus labelled as zero-shot OOD detection. We formulate the zero-shot OOD detector as a binary function mapping: \[G_{\lambda}(\mathbf{x};h)=\begin{cases}\text{in}&\text{if }S(\mathbf{x};h)\geq\lambda\\ \text{out}&\text{if }S(\mathbf{x};h)<\lambda\end{cases}, \tag{1}\] where \(S(\mathbf{x};h)\) is the OOD scoring function, and \(\lambda\) is the threshold. By convention, \(\lambda\) is chosen so that a high fraction of ID data (_e.g.,_ 95%) is above the threshold. We describe \(S(\mathbf{x};h)\) in detail next. We employ distance-based methods for zero-shot OOD detection, which measure the relative distances of samples in representation space. To the best of our knowledge, we are the first to use distance-based OOD detection _directly with a pre-trained language model_, while previous works use models adapted to the ID data. The operating hypothesis is that the embeddings of ID samples are closer to each other than the OOD sample embeddings. Modeling the learned representation space as a mixture of multivariate Gaussians, Lee et al. (2018) used the Mahalanobis distance Mahalanobis (2018) to the closest class centroid as the score for OOD detection: \[S_{\text{Maha}}(\mathbf{x};h)=-\min_{c\in\mathcal{Y}}\left(h(\mathbf{x})-\boldsymbol{\mu}_{c}\right)^{\top}\Sigma^{-1}\left(h(\mathbf{x})-\boldsymbol{\mu}_{c}\right),\] where \(\Sigma\) is the covariance matrix and \(\boldsymbol{\mu}_{c}\) is the mean embedding of class \(c\), so that a larger (less negative) score indicates a more ID-like sample. Both \(\Sigma\) and \(\boldsymbol{\mu}_{c}\) are estimated on the ID embeddings extracted from the pre-trained language model \(h(\cdot)\). Using Mahalanobis distance for OOD detection requires some distributional assumptions on the representation space. This is circumvented through _non-parametric_ density estimation using nearest neighbors Sun et al. (2022).
The distance between a query point and its \(k\)-th nearest neighbor in the ID data is used for OOD detection: \[S_{\text{kNN}}(\mathbf{x},h)=-\|\mathbf{z}-\mathbf{z}_{k}\|_{2},\] where \(\mathbf{z}\) and \(\mathbf{z}_{k}\) are the \(L_{2}\) normalized embeddings, for the query point \(\mathbf{x}\) and its \(k\)-th nearest neighbor. In Section 5, we evaluate zero-shot OOD detection performance using both parametric (Maha) and non-parametric (KNN) distance functions. ### OOD Detection with Fine-tuning In contrast to the zero-shot OOD detection setup, an alternative strategy is to fine-tune the model on the ID dataset \(\mathcal{D}_{\text{in}}\) and then perform OOD detection _w.r.t._ the fine-tuned model. In what follows, we comprehensively consider three different fine-tuning objectives: (1) cross-entropy loss, (2) task-adaptive pretraining loss, and (3) supervised contrastive loss. #### 3.2.1 Cross-Entropy (CE) The cross-entropy loss is widely used for training neural networks, making it an ideal baseline for our study. Given a pre-trained model, we fine-tune with the CE loss: \[\mathcal{L}_{\text{CE}}=\frac{1}{N}\sum_{i=1}^{N}-\log\frac{e^{f_{y}(\mathbf{x }_{i};\theta)}}{\sum_{j=1}^{C}e^{f_{j}(\mathbf{x}_{i};\theta)}}\] where \(f_{y}\) is the logit output corresponding to the ground truth label \(y\), and \(\theta\) is the parameterization of the neural network. #### 3.2.2 Task-adaptive Pretraining (TAPT) Gururangan et al. (2020) show that multi-phase adaptive pre-training boosts downstream task performance of pre-trained language models. They introduce Task Adaptive Pre-Training (TAPT), which involves extending the unsupervised pre-training process (using the masked language modeling objective Kenton and Toutanova (2019)) with data for the downstream task, before fine-tuning to the same task using cross-entropy. TAPT improves generalization capabilities by providing a strong initialization for fine-tuning, and to the best of our knowledge, TAPT has _not_ been used in the setting of OOD detection prior to our work. #### 3.2.3 Supervised Contrastive Learning (SupCon) By leveraging information on labels and increasing the number of positive pairs during contrastive training, SupCon Khosla et al. (2020) has been shown to consistently outperform cross-entropy on large-scale classification tasks Gunel et al. (2020). The objective encourages embeddings of a class to be highly separated from other classes, boosting the performance of OOD detection on text classification tasks Zhou et al. (2021). Formally, \[\mathcal{L}_{\text{SupCon}}=-\sum_{i=1}^{N}\frac{1}{N|P(i)|}\] \[\sum_{p\in P(i)}\log\frac{\exp(\mathbf{z}_{i}^{\top}\mathbf{z}_{p }/\tau)}{\sum_{a\in A(i)}\exp{(\mathbf{z}_{i}^{\top}\mathbf{z}_{a}/\tau)}},\] where \(P(i)\) is the set of anchor instances from the same class as \(\mathbf{x}_{i}\), \(A(i)\) is the set of all anchor instances, \(\mathbf{z}_{i}\) is the \(L_{2}\) normalized sentence embedding for \(\mathbf{x}_{i}\), and \(\tau\) is the temperature. After fine-tuning, OOD detection is performed using a similar procedure as Equation 1, except that the scoring function \(S(\mathbf{x};h)\) is calculated using the fine-tuned model. While our primary focus is distance-based detection, we additionally consider two common output-based methods--maximum softmax probability (MSP) Hendrycks and Gimpel (2017) and energy score Liu et al. (2020). They derive OOD scores from the confidence or logits from the classification head of the model. 
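To make the two distance-based scores concrete, the following is a minimal sketch (our own illustration, not the released code) of how \(S_{\text{Maha}}\) and \(S_{\text{kNN}}\) can be computed from a matrix of ID embeddings and a batch of test embeddings; the simple pooled covariance estimate and brute-force neighbour search are illustrative choices.

```python
import numpy as np

def fit_mahalanobis(id_embs, id_labels):
    """Estimate class means and a shared (pooled) covariance from ID embeddings."""
    classes = np.unique(id_labels)
    mus = np.stack([id_embs[id_labels == c].mean(axis=0) for c in classes])
    centered = id_embs - mus[np.searchsorted(classes, id_labels)]
    sigma = centered.T @ centered / len(id_embs)
    return mus, np.linalg.pinv(sigma)

def score_maha(test_embs, mus, sigma_inv):
    """S_Maha = -min_c (h - mu_c)^T Sigma^{-1} (h - mu_c); higher means more ID."""
    diffs = test_embs[:, None, :] - mus[None, :, :]             # (N, C, d)
    d2 = np.einsum("ncd,de,nce->nc", diffs, sigma_inv, diffs)   # squared distances
    return -d2.min(axis=1)

def score_knn(test_embs, id_embs, k=1):
    """S_kNN = -||z - z_k||_2 with L2-normalised embeddings; higher means more ID."""
    z_test = test_embs / np.linalg.norm(test_embs, axis=1, keepdims=True)
    z_id = id_embs / np.linalg.norm(id_embs, axis=1, keepdims=True)
    # Brute-force pairwise distances; approximate nearest-neighbour libraries
    # can replace this at scale.
    dists = np.linalg.norm(z_test[:, None, :] - z_id[None, :, :], axis=-1)
    return -np.sort(dists, axis=1)[:, k - 1]
```

Thresholding either score at the value \(\lambda\) that accepts 95% of the ID data then yields the detector of Equation 1.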
it by mapping it to one of the existing ID class clusters. However, due to the distributional difference of the datapoint, the model is unable to perfectly map such a point and OOD points end up in the space between the ID class clusters most similar to it. Fine-tuned representations of the data thus make distance-based OOD detection more challenging. ### What's the best way of fine-tuning for OOD detection? While pre-trained models show strong out-of-domain detection performance, they lack the classification ability on the ID dataset. This is expected since the models are not optimized for the downstream classification task. Thus, we raise the next question: _How can we fine-tune the model to accurately classify ID data while having reasonable OOD detection performance?_ To answer this question, we comprehensively compare three fine-tuning objectives (_c.f._ Section 3.2), coupled with different OOD detection methods. Figure 2 depicts the effect of fine-tuning for OOD detection, for both semantic shift (top: 20NewsGroups vs. RTE) and background shift (middle: IMDB vs. SST-2). We highlight three key observations: **(1)** For distance-based methods, \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & \multicolumn{4}{c}{**KNN** (non-parametric)} & \multicolumn{4}{c}{**Mahalanobis** (parametric)} \\ **ID\(\rightarrow\)OOD Pair** & **Training** & **AUROC \(\uparrow\)** & **AUPR (In) \(\uparrow\)** & **AUPR (Out) \(\uparrow\)** & **FPR5\(\downarrow\)** & **AUROC \(\uparrow\)** & **AUPR (In) \(\uparrow\)** & **AUPR (Out) \(\uparrow\)** & **FPR5\(\downarrow\)** \\ \hline \multicolumn{7}{l}{_Out-of-Domain: Semantic Shift_} \\ \hline & Zhou et al. & 0.935 & 0.982 & 0.664 & 0.713 & 0.978 & 0.994 & 0.865 & 0.015 \\ 20NG\(\rightarrow\)SST-2 & CE & 0.973 & 0.991 & 0.923 & 0.155 & 0.981 & 0.994 & 0.942 & 0.087 \\ & TAPT & 0.969 & 0.990 & 0.903 & 0.169 & 0.981 & 0.994 & 0.939 & 0.088 \\ & SupCon & 0.969 & 0.990 & 0.909 & 0.180 & 0.980 & 0.994 & 0.943 & 0.094 \\ & Pre-trained & 1.000 & 1.000 & 1.000 & 0.000 & 1.000 & 1.000 & 1.000 & 0.000 \\ \hline & Zhou et al. & 0.935 & 0.929 & 0.950 & 0.718 & 0.964 & 0.955 & 0.978 & 0.224 \\ & CE & 0.954 & 0.898 & 0.984 & 0.263 & 0.968 & 0.925 & 0.989 & 0.166 \\ 20NG\(\rightarrow\)MNLI & TAPT & 0.950 & 0.887 & 0.982 & 0.263 & 0.964 & 0.910 & 0.988 & 0.175 \\ & SupCon & 0.954 & 0.899 & 0.984 & 0.265 & 0.970 & 0.932 & 0.990 & 0.156 \\ & Pre-trained & 1.000 & 0.999 & 1.000 & 0.000 & 1.000 & 0.999 & 1.000 & 0.000 \\ \hline & Zhou et al. & 0.934 & 0.972 & 0.780 & 0.594 & 0.956 & 0.981 & 0.860 & 0.312 \\ & CE & 0.922 & 0.958 & 0.858 & 0.410 & 0.945 & 0.970 & 0.902 & 0.285 \\ 20NG\(\rightarrow\)RTE & TAPT & 0.898 & 0.942 & 0.822 & 0.455 & 0.919 & 0.952 & 0.869 & 0.352 \\ & SupCon & 0.923 & 0.959 & 0.858 & 0.393 & 0.952 & 0.975 & 0.914 & 0.248 \\ & Pre-trained & 1.000 & 1.000 & 0.999 & 0.000 & 1.000 & 1.000 & 0.999 & 0.000 \\ \hline & Zhou et al. & 0.954 & 0.823 & 0.993 & 0.261 & 0.969 & 0.867 & 0.996 & 0.144 \\ & CE & 0.951 & 0.804 & 0.993 & 0.292 & 0.961 & 0.817 & 0.995 & 0.206 \\ 20NG\(\rightarrow\)MDB & TAPT & 0.955 & 0.797 & 0.994 & 0.227 & 0.965 & 0.804 & 0.995 & 0.159 \\ & SupCon & 0.958 & 0.826 & 0.994 & 0.234 & 0.970 & 0.852 & 0.996 & 0.150 \\ & Pre-trained & 0.988 & 0.970 & 0.998 & 0.019 & 0.990 & 0.975 & 0.998 & 0.012 \\ \hline & Zhou et al. 
& 0.932 & 0.977 & 0.708 & 0.851 & 0.980 & 0.993 & 0.888 & 0.005 \\ & CE & 0.949 & 0.976 & 0.898 & 0.264 & 0.962 & 0.982 & 0.920 & 0.175 \\ 20NG\(\rightarrow\)Multi30K & TAPT & 0.940 & 0.970 & 0.886 & 0.258 & 0.956 & 0.978 & 0.922 & 0.167 \\ & SupCon & 0.937 & 0.969 & 0.887 & 0.294 & 0.955 & 0.977 & 0.918 & 0.201 \\ & Pre-trained & 1.000 & 1.000 & 1.000 & 0.000 & 1.000 & 1.000 & 1.000 & 0.000 \\ \hline & Zhou et al. & 0.928 & 0.921 & 0.937 & 0.765 & 0.955 & 0.948 & 0.969 & 0.383 \\ & CE & 0.939 & 0.877 & 0.977 & 0.339 & 0.957 & 0.905 & 0.984 & 0.234 \\ 20NG\(\rightarrow\)NewsCategory & TAPT & 0.931 & 0.853 & 0.973 & 0.343 & 0.947 & 0.874 & 0.981 & 0.243 \\ & SupCon & 0.938 & 0.877 & 0.976 & 0.354 & 0.962 & 0.919 & 0.986 & 0.219 \\ & Pre-trained & 1.000 & 0.999 & 1.000 & 0.000 & 1.000 & 0.999 & 1.000 & 0.000 \\ \hline & Zhou et al. & 0.952 & 0.992 & 0.601 & 0.388 & 0.988 & 0.998 & 0.870 & 0.005 \\ & CE & 0.953 & 0.991 & 0.816 & 0.247 & 0.964 & 0.993 & 0.844 & 0.189 \\ 20NG\(\rightarrow\)CLINC150 & TAPT & 0.944 & 0.989 & 0.769 & 0.296 & 0.959 & 0.992 & 0.830 & 0.213 \\ & SupCon & 0.940 & 0.988 & 0.761 & 0.343 & 0.957 & 0.992 & 0.821 & 0.230 \\ & Pre-trained & 1.000 & 1.000 & 1.000 & 0.000 & 1.000 & 1.000 & 1.000 & 0.000 \\ \hline \multicolumn{7}{l}{_Out-of-Domain: Background Shift_} \\ \hline & CE & 0.865 & 0.994 & 0.147 & 0.741 & 0.893 & 0.996 & 0.231 & 0.618 \\ IMDB\(\rightarrow\) SST-2 & TAPT & 0.857 & 0.994 & 0.137 & 0.746 & 0.877 & 0.995 & 0.172 & 0.683 \\ & SupCon & 0.838 & 0.993 & 0.119 & 0.824 & 0.865 & 0.995 & 0.149 & 0.800 \\ & Pre-trained & 0.967 & 0.999 & 0.582 & 0.210 & 0.996 & 1.000 & 0.860 & 0.004 \\ \hline \multicolumn{7}{l}{_Same Domain Shift_} \\ \hline & CE & 0.925 & 0.922 & 0.933 & 0.465 & 0.877 & 0.815 & 0.912 & 0.467 \\ NewsCategory-ID\(\rightarrow\) & TAPT & 0.918 & 0.917 & 0.924 & 0.513 & 0 the OOD detection performance worsens as the number of fine-tuning epochs increases, highlighting that early stopping is the key to strong OOD detection performance. For example, on 20NewsGroups (ID) vs. RTE (OOD), the model trained with TAPT for 1 epoch yields an AUROC of 95.5% (with Mahalanobis), which declines to 91.9% after 10 epochs of fine-tuning. To the best of our knowledge, we are the first to show the importance of early stopping on fine-tuning language models for distance-based OOD detection. **(2)** Irrespective of the fine-tuning objectives, distance-based OOD detection methods consistently outperform output-based methods, particularly MSP using softmax confidence Hendrycks and Gimpel (2017) and energy score using logits Liu et al. (2020). **(3)** Under semantic shift, out-of-domain detection using any of the three fine-tuning objectives displays similar performance on most ID-OOD pairs, bearing a large gap _w.r.t._ the pre-trained language model. Linear Probing is SuboptimalTo perform classification while preserving the OOD detection performance of a pre-trained model, one possible solution is linear probing Alain and Bengio (2016), _i.e.,_ fine-tuning the classification head to the downstream task, while keeping the weights of the pre-trained model backbone unchanged. However, in Figure 6 (Appendix), we show that linear probing does not yield competitive classification performance. In particular, we observe the strongest fine-tuning objective (TAPT) only obtains an ID accuracy of 61% after 100 epochs of fine-tuning, compared to full network fine-tuning where an accuracy of 86% is achieved in 10 epochs. 
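For reference, the detection metrics reported in the tables and figures can be computed from the ID and OOD scores as in the following sketch (using scikit-learn; the synthetic scores and variable names are illustrative only).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    """AUROC and FPR95, with the convention that higher scores mean more ID."""
    y_true = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    y_score = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # FPR95: false-positive rate at the threshold where 95% of ID data is accepted.
    # (AUPR can be obtained analogously with average_precision_score.)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]
    return auroc, fpr95

# Example with synthetic, well-separated scores (stands in for S_Maha or S_kNN).
rng = np.random.default_rng(0)
print(ood_metrics(rng.normal(1.0, 0.3, 500), rng.normal(-1.0, 0.3, 500)))
```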
### Investigation on same-domain data shifts In this subsection, we further investigate a more challenging type of data shift, where the test samples are from the _same domain_ and thus can be distributionally very close to the ID data. This is in contrast to our evaluations in Sections 5.1 and 5.2, where the OOD samples are from different domains. To simulate same-domain shifts, we split the NewsCategory dataset into two sets with disjoint classes: one for ID, and another for OOD. The domain for both sets of classes is identical, while the semantic label sets are different. The allocation of classes is described in Table 5 (Appendix A). Figure 2 (bottom) shows the effect of fine-tuning for detection in this challenging setup of same-domain shifts. A salient observation is that fine-tuning consistently improves OOD detection performance, across all training objectives. To better understand why the pre-trained model underperforms in this case, in Figure 3, we plot feature representations, before and after fine-tuning, respectively. As seen in the left of Figure 3, when both ID and OOD data are sampled from the same domain, their embeddings are highly overlapping. This explains the suboptimal performance of directly employing embeddings from the pre-trained language model. In contrast, fine-tuning creates stronger separability between ID and OOD data. Table 3 quantitatively confirms that fine-tuning leads to stronger ID-OOD separability (_c.f._ Equation 2). in Ming et al. (2023): (1) inter-class dispersion, which is the average cosine similarity among pairwise class centroids, (2) intra-class compactness, which measures the average cosine similarity between each feature embedding and its corresponding class centroid, and (3) ID-OOD separability, which functions as a measure of domain gap between ID and OOD. Formally, \[\begin{split}\text{Disp.}(\uparrow)&=\frac{1}{C} \sum_{i=1}^{C}\frac{1}{C-1}\sum_{j=1}^{C}\mathbf{\mu}_{i}\cdot\mathbf{\mu}_{j}\mathbbm{1 }\{i\neq j\}\\ \text{Comp.}(\downarrow)&=\frac{1}{C}\sum_{j=1}^{C} \frac{1}{N}\sum_{i=1}^{N}\mathbf{z}_{i}\cdot\mathbf{\mu}_{j}\mathbbm{1}\{y_{i}=j\} \\ \text{Sep.}(\uparrow)&=\frac{1}{|\mathcal{D}_{\text{ out}}^{\text{test}}|}\sum_{\mathbf{x}^{\prime}\in\mathcal{D}_{\text{out}}^{\text{test}} }\max_{j\in\mathcal{Y}}\mathbf{z}_{\mathbf{x}^{\prime}}\cdot\mathbf{\mu}_{j}\\ &-\frac{1}{|\mathcal{D}_{\text{in}}^{\text{test}}|}\sum_{ \mathbf{x}\in\mathcal{D}_{\text{in}}^{\text{test}}}\max_{j\in\mathcal{Y}} \mathbf{z}_{\mathbf{x}}\cdot\mathbf{\mu}_{j},\end{split} \tag{2}\] where \(\mathbf{\mu}_{i}\) is the average of embeddings for samples in class \(i\), and \(\mathbf{z}\) is the \(L_{2}\) normalized embedding. Figure 3: Comparison of data representations in the penultimate layer of pre-trained vs. fine-tuned models for _same-domain_ data shifts. Here we split the NewsCategory dataset into two parts with disjoint classes: one for ID, and another for OOD. ID data is shown in blue, while OOD data is in yellow. **Left**: Pre-trained model. **Right**: Fine-tuned with cross-entropy loss. Fine-tuning encourages the model to separate the embeddings into individual class clusters. \begin{table} \begin{tabular}{l l} \hline \hline **Training** & **ID-OOD Separability \(\uparrow\)** \\ \hline CE & 12.235 \\ TAPT & 12.489 \\ SupCon & 7.549 \\ Pre-trained & 0.138 \\ \hline \hline \end{tabular} \end{table} Table 3: Effect of fine-tuning on ID-OOD separability, for same-domain (SD) shift with the NewsCategory dataset. 
Fine-tuning for a single epoch helps separate overlapping ID and OOD data into dispersed clusters. Figure 2: Effect of fine-tuning on ID accuracy and OOD detection performance, across different objectives and detection methods. From left to right: (1) ID Accuracy, AUROC with (2) CE, (2) TAPT, and (3) SupCon losses. From top to bottom: OoD semantic shift, OoD background shift, and same-domain (SD) shift. The X-axis shows the number of fine-tuning epochs, with ’0’ indicating the pre-trained model. The Y-axis shows either the ID accuracy or the AUROC. Actual values can be found in Appendix D. Table 4 shows us that fine-tuning encourages the model to embed the data into well-separated class clusters with high inter-class dispersion (measured in angular degrees). In contrast, the pre-trained model represents the entire domain as a homogeneous cluster containing data from all classes. Interestingly, the pre-trained model displays the strongest compactness, indicating the closeness among ID data points in the original representation space. Note that the ID accuracy is random for the pre-trained model, which is expected. Dispersion and compactness monotonically improve through fine-tuning, further indicating that fine-tuning encourages the model to project the data into well-separated and compact class-wise clusters. However, Figure 4 shows us that while fine-tuning improves ID-OOD separability for the same-domain shift, it has less impact on out-of-domain shifts. (Actual values and results for other objectives can be found in Appendix D.) This trend also echos our previous observations in Section 5.2 and Section 5.3, on OOD detection performance. ## 6 Related Work The problem of OOD detection is different from domain adaptation (Ramponi and Plank, 2020), where a model is trained to generalize to a known target domain with the same label space. It is also different from selective prediction where a model abstains only when its confidence is low, irrespective of domain (El-Yaniv et al., 2010; Geifman and El-Yaniv, 2017; Kamath et al., 2020). OOD Detection MethodsA popular baseline is the calibration method Maximum Softmax Probability (MSP) (Hendrycks and Gimpel, 2017), that directly uses maximum class probability produced by the logits of a trained classifier. However, predictive confidence has been shown to be undesirably high for OOD samples, making MSP ineffective (Nguyen et al., 2015; Wei et al., 2022; Shen et al., 2021). Liu et al. (2020) propose using energy score for OOD detection, which better distinguishes in- and out-of-distribution samples than softmax scores. ReAct (Sun et al., 2021) improves the energy score by introducing a rectified activation, which reduces model overconfidence in OOD data. Sun and Li (2022) utilize logit sparsification to enhance the vanilla energy score. More recently, detection methods that utilize distances of samples in representation space, have risen as a promising class of OOD detection methods in both the vision (Mandelbaum and Weinshall, 2017; Lee et al., 2018; Sun et al., 2022; Ming et al., 2023) and multi-modal (Ming et al., 2022) regimes. OOD Detection in NLPIn the realm of NLP, model confidence using sentence embeddings has been shown to be a strong baseline with pre-trained transformers (Hendrycks et al., 2020; Desai and Durrett, 2020). 
Contrastive learning (Khosla et al., 2020; Gao et al., 2021; Jin et al., 2022) minimizes intra-class variance, leading to stronger OOD detection, especially in low data regimes (Zeng et al., 2021), and with Mahalanobis distance (Zhou et al., 2021; Podolskiy et al., 2021). Detection performance has also been strengthened using data aug \begin{table} \begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{**ID**} & \multicolumn{1}{c}{**Objective**} & \multicolumn{1}{c}{**ID Accuracy \(\uparrow\)**} & \multicolumn{1}{c}{**Disoperation \(\uparrow\)**} & \multicolumn{1}{c}{**Compactness \(\downarrow\)**} \\ & & & (in degree) & (in degree) \\ \hline \multirow{3}{*}{**20NewsGroups**} & CE & 0.791 & 90.994 & 19.575 \\ & TAPT & **0.807** & **91.753** & 18.902 \\ & SupCon & 0.763 & 89.354 & 21.987 \\ & Pre-trained & 0.053 & 1.514 & **4.326** \\ \hline \multirow{3}{*}{**IMDB**} & CE & 0.938 & 87.041 & 21.787 \\ & TAPT & **0.940** & 76.871 & 15.894 \\ & SupCon & 0.928 & **135.859** & 19.245 \\ & Pre-trained & 0.500 & 0.636 & **6.058** \\ \hline \multirow{3}{*}{**NewsCategory**} & CE & 0.745 & **88.701** & 33.878 \\ & TAPT & **0.756** & 88.216 & 33.509 \\ & SupCon & 0.667 & 63.392 & 30.793 \\ & Pre-trained & 0.050 & 3.086 & **9.210** \\ \hline \hline \end{tabular} \end{table} Table 4: Quality of ID embeddings generated by pre-trained and fine-tuned models, quantified by accuracy on the ID test set, inter-class dispersion, and intra-class compactness. The fine-tuned models show well-separated and compact class clusters, while the pre-trained model shows a single domain cluster, a sub-optimal setting for downstream classification. Fine-tuned models are trained for a single epoch. Figure 4: Effect of fine-tuning (w/ SupCon loss) on the ID-OOD separability. The X-axis shows the number of fine-tuning epochs, and the Y-axis shows ID-OOD separability (in angular degrees). mentation (Chen and Yu, 2021; Rawat et al., 2021), discriminative training (Zhan et al., 2021), mutual information maximization (Nimah et al., 2021), ensembles (Li et al., 2021) and prototypical networks in the few-shot setup (Tan et al., 2019). While most previous works perform fine-tuning on the ID data, we provide a comprehensive understanding on _directly using the pre-trained model for zero-shot OOD detection_. **Pre-trained vs Fine-tuned** Pre-trained language models have been shown to learn implicit sentence representations, forming unsupervised domain clusters (Aharoni and Goldberg, 2020). Andreassen et al. (2021) and Kumar et al. (2021) showed that fine-tuning distorts pre-trained features, worsening accuracy on OOD generalization. However, to the best of our knowledge, we are the first to explore the effect of directly using pre-trained language models for _OOD detection_. Related to our work, Ming et al. (2022) show that pre-trained models can be used for zero-shot OOD detection. Different from ours, they perform OOD detection in the multi-modal space and calculate distances between the visual and textual representations. ## 7 Conclusion In this paper, we explore the simple and effective setting of zero-shot OOD detection with pre-trained langage models. Our work departs from prior literature that typically requires fine-tuning on the ID data. Extensive evaluations demonstrate that pre-trained models are near-perfect for OOD detection when the test data comes from a different domain. 
We additionally investigate the effect of fine-tuning on OOD detection, and identify strategies to achieve both strong OOD detection performance and ID accuracy. We perform both qualitative and quantitative analysis on the embedding characteristics, explaining the strong performance of our method. We hope our work will inspire future work to the strong promise of using pre-trained models for OOD detection. ## Ethical Considerations Our project aims to improve the reliability and safety of large language models, which can be fragile under distribution shift (Ribeiro et al., 2020) and incur great costs (Ulmer et al., 2020; Zhang et al., 2021). By properly flagging anomalous data, our method can lead to direct benefits and societal impacts, particularly for safety-critical applications. From a user's perspective, our method can help improve trust in the language models. Our study does not involve any human subjects or violation of legal compliance. We do not anticipate any potentially harmful consequences to our work. As detailed in Appendix A, all of our experiments are conducted using publicly available datasets. Our code has been released for reproducibility. Through our study and releasing our code, we hope to raise stronger research and societal awareness toward the problem of out-of-distribution detection in natural language processing. ## Limitations We provide a comprehensive study on the efficacy of leveraging pre-trained language models for zero-shot OOD detection. Our method is thus limited to the setting of abstaining from prediction on all OOD data. This is more conservative than selective prediction, where the model must make predictions over as many ID & OOD points as possible while maintaining high accuracy. Despite this, OOD detection has lower risks to high-risk and safety-critical applications, where rare and anomalous data is more reasonably flagged to the expert. We believe our work provides new values and insights to the research community, especially on safe handling of distributional shifts when deploying pre-trained language models. As discussed in our Ethical Considerations, the OOD detection problem is of significant use in high-risk settings, and should be incorporated into production-level pipelines. However, for the same reason, the OOD detection models must be also reliable to avoid any risk to the downstream applications. ## Acknowledgements Li is supported in part by the AFOSR Young Investigator Award under No. FA9550-23-1-0184; UL Research Institutes through the Center for Advancing Safety of Machine Intelligence; Philanthropic Fund from SFF; and faculty research awards from Google, Meta, and Amazon. Hu is supported in part by a gift fund from ProtagoLabs. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements either expressed or implied, of the sponsors. We would like to thank Yifei Ming and the anonymous reviewers for helpful comments.
2302.07250
The effective inflationary potential of constant-torsion emergent gravity
Constant-torsion emergent gravity (CTEG) has a Lagrangian quadratic in curvature and torsion, but without any Einstein--Hilbert term. CTEG is motivated by a unitary, power-counting renormalisable particle spectrum. The timelike axial torsion adopts a vacuum expectation value, and the Friedmann cosmology emerges dynamically on this torsion condensate. We show that this mechanism -- and the whole background cosmology of CTEG -- may be understood through the effective potential of a canonical single scalar field model. The effective potential allows for hilltop inflation in the early Universe. In the late Universe, the Hubble friction overdamps the final quadratic approach to the effective minimum at the condensate, where the value of the potential becomes the cosmological constant. We do not consider particle production through spin-torsion coupling, or running of Lagrangian parameters. The model must be completed if reheating and a separation of inflationary and dark energy scales are to be understood. It is suggested that the divergence of the potential at large values of the scalar is inconsistent with the linearised propagator analysis of CTEG around zero-torsion Minkowski spacetime. This background may therefore be a strongly coupled surface in CTEG.
C. Rew, W. E. V. Barker
2023-02-14T18:46:01Z
http://arxiv.org/abs/2302.07250v1
# The effective inflationary potential of constant-torsion emergent gravity ###### Abstract Constant-torsion emergent gravity (CTEG) has a Lagrangian quadratic in curvature and torsion, but without any Einstein-Hilbert term. CTEG is motivated by a unitary, power-counting renormalisable particle spectrum. The timelike axial torsion adopts a vacuum expectation value, and the Friedmann cosmology emerges dynamically on this torsion condensate. We show that this mechanism - and the whole background cosmology of CTEG - may be understood through the effective potential of a canonical single scalar field model. The effective potential allows for hilltop inflation in the early Universe. In the late Universe, the Hubble friction overdamps the final quadratic approach to the effective minimum at the condensate, where the value of the potential becomes the cosmological constant. We do not consider particle production through spin-torsion coupling, or running of Lagrangian parameters. The model must be completed if reheating and a separation of inflationary and dark energy scales are to be understood. It is suggested that the divergence of the potential at large values of the scalar is inconsistent with the linearised propagator analysis of CTEG around zero-torsion Minkowski spacetime. This background may therefore be a strongly coupled surface in CTEG. pacs: 04.50.Kd, 04.60.-m, 04.20.Fy, 98.80.Cq ## I Introduction The early and late Universe are both characterised by accelerated expansion with a nearly constant Hubble number. These regimes may be realised in general relativity (GR) by means of a slowly-rolling inflaton field and, in the full cosmic concordance (LCDM) model [1; 2], with a small positive external cosmological constant \(\Lambda_{\rm LCDM}\approx 2.846\times 10^{-122}\,M_{\rm Pl}\)[3; 4]. It is then interesting to consider whether _modified_ gravity models can furnish such driving mechanisms from within the gravitational sector itself. #### i.1.1 Poincare gauge theory GR may be naturally extended by augmenting the Levi-Civita connection \(\Gamma^{\mu}_{\ \ \ \nu\sigma}=C^{\mu}_{\ \ \ \nu\sigma}\equiv\frac{1}{2}g^{\mu\rho} \partial_{\nu}\,g_{\rho\sigma}+...\) with an antisymmetric _contortion_ correction. Compton directly encodes spacetime _torsion_\(\mathcal{T}^{\mu}_{\ \ \nu\sigma}\equiv 2\Gamma^{\mu}_{\ \ \ \ \ [\nu\sigma]}\), whose presence may eventually be indicated by a breaking of the equivalence principle within the fermionic sector [5], depending on the detailed (and entirely unknown) nature of the matter coupling1. Torsion theories can be motivated within the traditional GR framework by promoting the spin connection to an independent gauge field of the Lorentz group. This procedure generates the celebrated Poincare gauge theory of gravity (PGT) [7; 8; 9], in which the eighteen independent non-gauge components of the spin connection may propagate alongside the graviton as scalar \(0^{\pm}\), vector \(1^{\pm}\) or tensor \(2^{\pm}\) particles [10; 11; 12], depending on the specific balance of quadratic curvature (\(\mathcal{R}^{2}\)) invariants present in the low energy expansion. The external masses of these particles are contingent on the Einstein-Hilbert (\(\mathcal{R}\)) and quadratic torsion (\(\mathcal{T}^{2}\)) coupling constants; the low energy theory up to _even_ parity _quadratic_ invariants of curvature and torsion is denoted PGT3. 
Footnote 1: Note that some authors have also suggested that non-Riemannian spacetime geometries may, for specific gravity models, be distinguished by the geodesic or autoparallel character of particle trajectories [6]. Stringent constraints on the PGT3 establish that only the scalar \(0^{\pm}\) modes may propagate without exciting ghosts in the fully nonlinear theory [13; 14]. However, these constraints are only known to apply to theories which _modify_ GR, for instance, as low energy effective theories \(L\sim\mathcal{R}+\mathcal{R}^{2}+\mathcal{T}^{2}+L_{\rm M}\) which modify the Einstein-Cartan (EC) model \(L\sim\mathcal{R}+L_{\rm M}\), where the matter Lagrangian is \(L_{\rm M}\). Recent work has shown that there is a discrete collection of PGT3+ actions whose (linearised) particle spectra are free from ghosts and tachyons [15], and which actually appear renormalisable by a power counting [16]. It is hard to see how renormalisable, unitary gravity models can be consistently obtained as 'extra-particle' modifications of GR, and indeed these external actions are _purely quadratic in curvature and torsion_, entirely lacking an Einstein-Hilbert term \(L\sim\mathcal{R}^{2}+\mathcal{T}^{2}+L_{\rm M}\). It is remarkable that from this motivation in the UV, the nonlinear, classical phenomenology of these models, when coupled minimally to matter, can still admit a wholly viable background cosmology [17; 18; 19]. In the general PGT3+, it follows from the isotropy of the Friedmann-Lemaitre-Robertson-Walker (FLRW) model that only the \(0^{\pm}\) modes do not vanish at homogeneous scales [20]. For PGT3+ theories which _modify_ GR, these scalars may play a natural role in inflationary and dynamical dark energy models, by immediate analogy to scalar-tensor theories of gravity. Indeed, those 'permitted' low-energy PGT3+s in which the massive \(0^{\pm}\) modes alone are propagating, are dynamically equivalent to scalar-tensor gravity without torsion [10; 21; 19]. The CTEG model The new unitary, putatively renormalisable \(L\sim\mathcal{R}^{2}+\mathcal{T}^{2}+L_{\text{M}}\) PGT\({}^{\text{+}}\) is most fully developed in the _constant-torsion emergent gravity_ (CTEG) model [17; 18; 19]. The Lagrangian for this theory is \[L_{\text{CTEG}} =-\frac{4{M_{\text{Pl}}}^{2}}{9}{\mathcal{T}}_{\mu}\ \mathcal{T}^{\mu}-\frac{\mu}{6}\left[\lambda\mathcal{T}_{\mu\nu\sigma}\ \left(\mathcal{T}^{\mu\nu\sigma}-2\mathcal{T}^{\nu\mu\sigma}\right)+ \mathcal{R}_{\mu\nu}\big{(}\mathcal{R}^{[\mu\nu]}-12\mathcal{R}^{\mu\nu}\big{)} -2\mathcal{R}_{\mu\nu\sigma\rho}\big{(}\mathcal{R}^{\mu\nu\sigma\rho}-4 \mathcal{R}^{\mu\sigma\nu\rho}-5\mathcal{R}^{\sigma\rho\mu\nu}\big{)}\right] \tag{1}\] \[\quad+2\nu\mathcal{R}_{[\mu\nu]}\mathcal{R}^{[\mu\nu]}-{M_{\text{ Pl}}}^{2}\Lambda+L_{\text{M}},\] in which \(\mathcal{R}_{\mu\nu\sigma\rho}\) is the Riemann-Cartan (i.e. torsionful curvature, with its reduced symmetries) tensor2, and \(\mathcal{T}_{\mu\nu\sigma}\) is the torsion tensor, with \(\mathcal{T}_{\nu}\equiv\mathcal{T}^{\mu}_{\ \ \nu\mu}\). The modified gravitational sector is assumed to be minimally coupled to the standard model through the conventional matter Lagrangian \(L_{\text{M}}\). 
Footnote 2: Our convention for the torsion-free Riemann tensor will be \(R^{\mu}_{\ \ \nu\sigma}\equiv 2\partial_{[\sigma]}C^{\mu}_{[\mu\nu]}+\dots\), with Ricci scalar \(R\equiv R^{\mu\nu}_{\ \ \mu\nu}\), and for the Riemann–Cartan tensor \(R^{\mu}_{\ \ \nu\sigma}\equiv 2\partial_{[\sigma]}[\Gamma^{\mu}_{\ Open problems There are strong indications that the CS is stable against perturbations at the background level [17; 19]. Deviations in \(\psi\) decay most slowly towards the condensate \(\psi_{\rm C}\) during the radiation-dominated epoch, allowing (1) to modify the LCDM thermal history according to a single parameter: the initial value of \(\psi\) at the end of reheating. If initially \(\psi\lesssim\psi_{\rm C}\), the net effect is equal to that of so-called 'dark radiation' models, which boost the early-Universe expansion rate. Such models have been proposed in order to alleviate the Hubble tension. It is not yet clear _how_ the CS is formed in the first place. The CTEG particle spectrum was initially obtained [16] on the non-expanding Minkowski background, with vanishing torsion \(\psi=\phi=H=0\). There was moreover no external cosmological constant or matter source present, so \(\Lambda=L_{\rm M}=0\), though the spectra of CTEG both with and without the coupling \(\lambda\) were computed: the spectra differ by the massive \(0^{-}\) mode \(\psi\), which becomes non-propagating if \(\lambda=0\) and tachyonic if \(\lambda<0\), consistent with (2). Both these versions of free CTEG contain two massless polarizations of unknown spin and parity, which we consider to be the _gravitons_. It is currently understood that the CS is an inherently _nonlinear_ feature of the dynamics, which is not captured by the perturbative propagator analysis: this is evidenced by the fact that the CS can form as a vacuum expectation value \(\psi=\psi_{\rm C}\) of the \(0^{-}\) mode with both \(\lambda>0\) and \(\lambda=0\) versions of CTEG, i.e. _regardless of whether \(\psi\) is even supposed to be propagating_. This picture is disconcertingly familiar from the theory of strongly coupled surfaces [23; 24]. Indeed, a preliminary Hamiltonian analysis of the CTEG in [19] indicated the presence of strongly coupled modes around the original \(\psi=\phi=H=\Lambda=L_{\rm M}=0\) background, though these results were not fully conclusive. Even if the \(\psi=\phi=H=\Lambda=L_{\rm M}=0\) background is strongly coupled, CTEG may yet remain viable if the CS itself is (i) not strongly coupled and (ii) also furnished with an equally attractive particle spectrum. The focus then turns to inhomogeneous, anisotropic cosmological perturbations around (4) -- including perturbations of the remaining sixteen polarisation components of the spin connection -- and the propagator structure in that environment. This is an area of current investigation [19; 25], but there is already some suggestion that the Newtonian limit of local overdensities can be recovered [19]. The need to avoid strong coupling of the torsion condensate brings us back to the question of _how_ the CS may be formed, i.e. whether it can be reached dynamically as a non-singular surface in the phase space. If it can be formed, and if that process of formation occurs early enough in the history of the Universe, then it would be quite advantageous if the 'condensation' process happened to produce 50-60 e-folds of inflation as a by-product. #### i.2.5 Results of this work This work relates the CTEG Lagrangian (1) with the'reduced' model, to be expressed finally in Eqs. 
(57) and (58). This model fully preserves the background cosmological dynamics entailed by Eqs. (5) and (6). We will start with a recapitulation of the work conducted in [18], with particular focus on the bi-scalar-tensor analogue Lagrangian for the general PGT9\({}^{+}\) and specific CTEG theories. Then we will show the equivalence of the \(\Lambda=0\) scalar-tensor analogue with a non-minimally coupled single scalar field Lagrangian in Section III. This will be followed by a rigorous dynamical systems analysis of the new Lagrangian in Section IV; starting with the construction of the phase space in Section IV.1. Using this we will show the stability of the CS in Section IV.2, thus concluding the analysis of the theory in the Jordan frame. Following the analysis of the theory in the Jordan frame, we will move the focus to the Einstein frame formulation of the theory, by performing a conformal transformation in Section V. We will repeat the dynamical systems analysis in the Einstein frame, and show that both of the theories have stable points at the CS. Finally, we will look at the effect of adding an external cosmological constant \(\Lambda\) in the original Jordan frame scalar-tensor analogue. In this section we will give some discussion on the range of values the external cosmological constant \(\Lambda\) can take from a dynamical systems point of view, as well as a look at the effects of \(\Lambda\) on the potential for inflation. We will show that there are values that the two cosmological constants of the theory (\(\Lambda\) and \(\lambda\)) can take such that there is an inflationary hilltop regime which produces at least 50 e-folds of inflation in Section VI. Whilst the shape of the potential is suitable, the scale of inflation is insufficient when the late-time Hubble number is imposed. The resulting model is incomplete without an understanding of the particle theory, but serves as a qualitative test of inflation in CTEG. Conclusions follow in Section VII, and we reiterate that this final section contains Eqs. (57) and (58) which encode our main result. We will use the metric signature \((+,-,-,-)\) in the rest of the work. A list of nonstandard acronyms is provided in Table 1. ## II The Bi-scalar-tensor theory The restriction of attention to the scalar \(0^{\pm}\) modes (5) when considering the cosmological background invites the construction of a torsion-free, scalar-tensor \begin{table} \begin{tabular}{c|l} \hline \hline PGT\({}^{++}\) & Parity-preserving, quadratic Poincaré gauge theory \\ CTEG & Constant-torsion emergent gravity, defined in (1) \\ CS & Correspondence solution, i.e. torsion condensate \(\psi=\psi_{\rm C}\) \\ e.o.s & Equation of state \\ e.o.m & Equation of motion \\ \hline \hline \end{tabular} \end{table} Table 1: Nonstandard acronyms used in this work. replicate the cosmological background of general PGT\({}^{\text{q+}}\). This scalar-tensor analogue theory was identified in [18]: it has in general an Einstein-Hilbert term, non-minimally coupled to a pair of scalar fields \(\phi\) and \(\psi\) which emulate the dynamics of (5) at the background level, and whose kinetic terms may be non-canonical. Curiously, the Einstein-Hilbert term in the analogue does not directly translate to the Einstein-Hilbert term in the PGT\({}^{\text{q+}}\). 
In the limiting cases, the zero-curvature (\(R=0\)) _teleparallel equivalent of GR_ (TEGR) has an analogue \(L_{\text{TEGR}}=-\frac{1}{2}{M_{\text{Pl}}}^{2}R+L_{\text{M}}\) of pure GR without any scalars, whilst the conservative Einstein-Cartan (EC) model has an analogue of a pure, massive _Cuscuton_ field3\(L_{\text{EC}}=-{M_{\text{Pl}}}^{2}\sqrt{|2X^{\phi\phi}|}+\frac{3}{4}{M_{ \text{Pl}}}^{2}\phi^{2}+L_{\text{M}}\), with \(X^{\phi\phi}\equiv\frac{1}{2}\tensor{g}{{}^{\mu\nu}}_{\dot{\mu}}\phi\partial_{ \nu}\phi\). Even without any Ricci scalar, the quadratic Cuscuton still supports the Friedmann equations at the background level4. Footnote 3: See [26] for an introduction to the Cuscuton. Footnote 4: To see this, substitute the \(\phi\)-equation into the very simple \(g^{\mu\nu}\)-equation to recover the Friedmann constraint equation. The scalar-tensor analogue action from [18] corresponding to the CTEG in (1) is more complex. We will begin with the case \(\Lambda=0\), and re-introduce the external cosmological constant only from Section VI.2 onwards. The scalar-tensor analogue may be brought to a minimally coupled frame via a conformal transformation \(g_{\mu\nu}\equiv\hat{\Omega}(\psi)^{2}\hat{g}_{\mu\nu}\), followed by field redefinitions \(\phi\equiv\phi\left(\hat{\phi},\hat{\psi}\right)\) and \(\psi\equiv\psi\left(\hat{\psi}\right)\), where \[\hat{\Omega}(\psi)^{2} \equiv 3\left(4-\frac{\psi^{2}}{{\psi_{\text{C}}}^{2}}\right)^{-1}, \tag{7a}\] \[\phi(\hat{\phi},\hat{\psi}) \equiv\hat{\phi}\sqrt{\frac{8a}{3}\left[3\cosh\left(\sqrt{\frac{2 }{3}}\frac{\hat{\psi}}{{M_{\text{Pl}}}}\right)-5\right]}\] \[\times\text{sech}\left(\frac{\hat{\psi}}{\sqrt{6}{M_{\text{Pl}}} }\right),\] (7b) \[\psi(\hat{\psi}) \equiv 2\psi_{\text{C}}\tanh\left(\frac{\hat{\psi}}{\sqrt{6}{M_{ \text{Pl}}}}\right), \tag{7c}\] where we keep track of the sign \[\alpha\equiv\text{sgn}\left[3\cosh\left(\sqrt{\frac{2}{3}}\frac{\hat{\psi}}{{ M_{\text{Pl}}}}\right)-5\right]. \tag{8}\] It is important to note that we will refer to the frame \(\hat{g}_{\mu\nu}\) as the _Jordan_ frame, because it is only minimally coupled before the field \(\hat{\phi}\) has been integrated out in Section III. A non-minimal coupling between \(\hat{\omega}\left(\hat{\psi}\right)\) and \(\hat{R}\) will then be induced. In Section V a _second_ conformal transformation will be introduced to decouple the scalar, and only this final frame -- two conformal transformations removed from the physical frame \(g_{\mu\nu}\) of Eq. (1) -- will be referred to as the _Einstein_ frame. Note from Eqs. (7a) and (7c), the conformal transformation only admits the range \(\psi\in\left(-2\psi_{\text{C}},2\psi_{\text{C}}\right)\), which fills the whole range \(\hat{\psi}\in\left(-\infty,\infty\right)\). We do not need to consider values of the axial torsion _significantly_ above the condensate level in this work, and within the physical range we may conclude \[\alpha=\begin{cases}-1,&|\psi|<\psi_{\text{C}},\\ 1,&\psi_{\text{C}}<|\psi|.\end{cases} \tag{9}\] Using Eqs. 
(7a) to (9), the bi-scalar-tensor equivalent of (1) in the new frame becomes \[\hat{L}_{\text{CTEG}} =-\frac{1}{2}{M_{\text{Pl}}}^{2}\hat{R}+\hat{X}^{\hat{\psi}\hat{\psi}}+\beta{M_{\text{Pl}}}^{2}\hat{\omega}(\hat{\psi})^{3}\sqrt{\left|\hat{X}^{\hat{\phi}\hat{\phi}}\right|}-\hat{U}(\hat{\psi})+\frac{3}{4}\alpha{M_{\text{Pl}}}^{2}\hat{\omega}(\hat{\psi})^{4}\hat{\phi}^{2}+\hat{L}_{\text{M}}, \tag{10a}\] \[\hat{U}(\hat{\psi}) \equiv\lambda{M_{\text{Pl}}}^{2}\left(1+\alpha\frac{\hat{\omega}(\hat{\psi})^{2}}{2}\right)\left(1+\alpha\frac{\hat{\omega}(\hat{\psi})^{2}}{8}\right), \tag{10b}\] \[\hat{\omega}(\hat{\psi}) \equiv\alpha\sqrt{\left|3\cosh\left(\sqrt{\frac{2}{3}}\frac{\hat{\psi}}{{M_{\text{Pl}}}}\right)-5\right|}, \tag{10c}\] where the kinetic terms are \(\hat{X}^{\hat{\phi}\hat{\phi}}\equiv\frac{1}{2}\hat{g}^{\mu\nu}\partial_{\nu}\hat{\phi}\partial_{\mu}\hat{\phi}\), \(\hat{X}^{\hat{\psi}\hat{\psi}}\equiv\frac{1}{2}\hat{g}^{\mu\nu}\partial_{\nu}\hat{\psi}\partial_{\mu}\hat{\psi}\), with \(\beta\equiv\text{sgn}\left(\hat{X}^{\hat{\phi}\hat{\phi}}\right)\). The \(\hat{\psi}\) field has the hallmarks of a canonical scalar field, whereas the \(\hat{\phi}\) field is a quadratic Cuscuton field. The new treatment of this work is to not only recalculate the dynamics, but also to take into account values of the torsion above and below the correspondence solution; this is achieved by the appearance of the \(\alpha\) and \(\beta\) terms in Eqs. (10a) to (10c). The \(\alpha\) and \(\beta\) terms track, respectively, the sign of the term inside the modulus in Eq. (10c) and the sign of the Cuscuton velocity \(\dot{\hat{\phi}}\). The presence of the Cuscuton field in the Lagrangian Eq. (10a) is a point of concern, as square roots of kinetic energy-like terms are physically questionable and difficult to motivate. The Cuscuton field, as outlined in [27; 26], is a non-dynamical field with infinite speed of sound. The simplicity of scalar field models of dark energy is very appealing, as is reducing Eq. (10a) down to a single scalar field model, without the phenomenologically interesting, but nonetheless irksome, Cuscuton. Emphasis is placed on the fact that the model is only valid at the background level, and at this point it is useful to discuss the idea of the correspondence solution (CS). This is an attractive feature of the theory as it is the point at which the scalar-tensor analogue exactly matches the Lagrangian of GR with a cosmological constant. The stability of the CS, which is shown in this work, is an important feature of the theory, as at the background level GR is a good model of the Universe at the current accelerating dark energy dominated epoch. The physical motivation for reformulating this theory comes from inspection of the degrees of freedom (d.o.f) for the Lagrangian in Eq. (10a). From [27; 26] the Cuscuton field \(\hat{\phi}\) has no propagating d.o.f; rather it acts as a constraint field and has the unusual property that the kinetic term in the Lagrangian does not contribute to the energy density of the field. Therefore, in the interests of being able to analyse the dynamics of the system using the powerful dynamical systems framework, it is convenient to reduce this two field theory to a single scalar field model, with the corresponding single d.o.f. Variation of Eq. 
(10a) with respect to \(\hat{\mathbf{\psi}}\), \(\hat{\mathbf{\phi}}\) and \(\hat{\mathbf{g}}^{\mu\nu}\) gives the following field equations for the bi-scalar-tensor system (where an overdot represents derivative w.r.t cosmic time in the new frame and \({}^{\prime}\) denotes a derivative w.r.t the \(\hat{\mathbf{\psi}}\) scalar field) \[3\hat{H}^{2}\left(1+\alpha\frac{\hat{\omega}^{2}}{2}\right)=\frac {1}{{M_{\mathrm{Pl}}}^{2}}\left(\frac{1}{2}\hat{\mathbf{\psi}}^{2}+\hat{U}(\hat{ \mathbf{\psi}})\right)\] \[\quad+\alpha\left(-3\hat{H}\hat{\omega}\hat{\omega}^{\prime}- \frac{3}{2}\hat{\mathbf{\psi}}^{2}\hat{\omega}^{\prime 2}\right), \tag{11a}\] \[\left(2\hat{H}+3\hat{H}^{2}\right)\left(1+\alpha\frac{\hat{ \omega}^{2}}{2}\right)=-\frac{1}{{M_{\mathrm{Pl}}}^{2}}\left(\frac{1}{2}\hat {\mathbf{\psi}}^{2}-\hat{U}(\hat{\mathbf{\psi}})\right)\] \[+\alpha\left(-2\hat{H}\hat{\omega}\hat{\omega}^{\prime}\hat{\mathbf{ \psi}}+\frac{1}{2}\hat{\mathbf{\psi}}^{2}\hat{\omega}^{\prime 2}-\hat{\omega}\hat{ \omega}\hat{\omega}^{\prime}\hat{\mathbf{\psi}}-\hat{\omega}\hat{\omega}^{\prime \prime}\hat{\mathbf{\psi}}^{2}\right),\] (11b) \[\ddot{\hat{\mathbf{\psi}}}+3\hat{H}\dot{\hat{\mathbf{\psi}}}+\hat{U}^{ \prime}(\hat{\mathbf{\psi}})=\alpha{M_{\mathrm{Pl}}}^{2}\Big{(}6\hat{H}^{2}\hat{ \omega}\hat{\omega}^{\prime}+3\hat{H}\hat{\omega}\hat{\omega}^{\prime}\] \[\quad+9\hat{H}\dot{\hat{\mathbf{\psi}}}\hat{\omega}^{\prime 2}+3\hat{ \omega}^{\prime 2}\ddot{\hat{\mathbf{\psi}}}+3\hat{\mathbf{\psi}}^{2}\hat{\omega}^{\prime }\hat{\omega}^{\prime\prime}\Big{)}. \tag{11c}\] These equations are found by eliminating \(\hat{\mathbf{\phi}}\) from the system by substituting in the equation of motion, found by variations of Eq. (10a) w.r.t \(\hat{\mathbf{\phi}}\) \[\hat{\omega}^{2}\left(\sqrt{2}\beta\hat{\omega}^{\prime}\dot{\hat{\mathbf{\psi}}}+ \sqrt{2}\beta\hat{\omega}\hat{H}-\alpha\hat{\omega}^{2}\hat{\mathbf{\phi}}\right)=0. \tag{11d}\] Interestingly, upon substitution for the Cuscuton into the field equations, the prefactor of the Cuscuton kinetic term \(\beta\) only appears as \(\beta^{2}=1\), so the treatment by most of the literature with regards to the Cuscuton kinetic term's sign not being a necessary part of one's treatment of the system is at least justified for our present case. ## III Reduction from a bi-scalar-tensor theory to single scalar field model The starting point for removing the \(\hat{\mathbf{\phi}}\) field follows [27] where the authors mention that the Cuscuton is a minimal modification to GR at the background level, and showed the Cuscuton was equivalent to a renormalisation of the Planck mass. This motivates an attempt to remove the explicit Cuscuton field through the single field Lagrangian \[\hat{L}_{\mathrm{CTEG}} =-\frac{1}{2}{M_{\mathrm{Pl}}}^{2}\hat{F}(\hat{\mathbf{\psi}})\hat{R} +\hat{X}^{\hat{\mathbf{\psi}}\hat{\mathbf{\psi}}}-\hat{U}(\hat{\mathbf{\psi}})\] \[\quad-\alpha\frac{3}{2}{M_{\mathrm{Pl}}}^{2}\hat{g}^{\mu\nu} \partial_{\nu}\hat{\omega}\hat{\omega}(\hat{\mathbf{\psi}})\partial_{\mu}\hat{ \omega}(\hat{\mathbf{\psi}})+\hat{L}_{\mathrm{M}}, \tag{12}\] where \(\hat{F}(\hat{\mathbf{\psi}})\) is at this point some function of the \(\hat{\mathbf{\psi}}\) field; this will be defined explicity upon the field redefinition Eq. (14). The field equations resulting from Eq. 
(12) are \[3\hat{H}^{2}\hat{F} =\frac{1}{{M_{\mathrm{Pl}}}^{2}}\left(\frac{1}{2}\dot{\hat{\psi}}^{2}+\hat{U}(\hat{\psi})\right)+\alpha\left(-3\hat{H}\hat{F}^{\prime}\dot{\hat{\psi}}-\frac{3}{2}\dot{\hat{\psi}}^{2}\hat{\omega}^{\prime 2}\right)+\hat{\rho}_{\mathrm{M}}, \tag{13a}\] \[\left(2\dot{\hat{H}}+3\hat{H}^{2}\right)\hat{F} =-\frac{1}{{M_{\mathrm{Pl}}}^{2}}\left(\frac{1}{2}\dot{\hat{\psi}}^{2}-\hat{U}(\hat{\psi})\right)+\alpha\left(-2\hat{H}\dot{\hat{\psi}}\hat{F}^{\prime}-\dot{\hat{\psi}}^{2}\hat{F}^{\prime\prime}-\hat{F}^{\prime}\ddot{\hat{\psi}}+\frac{3}{2}\dot{\hat{\psi}}^{2}\hat{\omega}^{\prime 2}\right), \tag{13b}\] \[\ddot{\hat{\psi}}+3\hat{H}\dot{\hat{\psi}}+\hat{U}^{\prime}(\hat{\psi})=\alpha\left(3{M_{\mathrm{Pl}}}^{2}\hat{F}^{\prime}\left(2\hat{H}^{2}+\dot{\hat{H}}\right)+9\hat{H}\hat{\psi}^{2}\hat{\omega}^{\prime 2}+3\hat{\psi}\hat{\omega}^{\prime 2}\ddot{\hat{\psi}}+3\hat{\psi}^{2}\hat{\omega}^{\prime}\hat{\omega}^{\prime\prime}\right). \tag{13c}\] From a comparison of Eqs. (13a) to (13c) and Eqs. (11a) to (11c) it is clear that the field equations are equivalent, with the appropriate choice of \(\hat{F}(\hat{\psi})=1+\left(\alpha\hat{\omega}(\hat{\psi})^{2}/2\right)\). The Lagrangian can be further simplified to a standard scalar-tensor gravity form by substituting for \(\hat{\psi}\equiv\hat{\psi}(\hat{\omega})\), i.e. the inverse of Eq. (10c) \[\hat{\psi}(\hat{\omega})\equiv\sqrt{\frac{3}{2}}{M_{\mathrm{Pl}}}\mathrm{arccosh}\left(\frac{\hat{\omega}^{2}+5\alpha}{3\alpha}\right). \tag{14}\] This field redefinition from \(\hat{\psi}\) to \(\hat{\omega}\) means that the \(\hat{\omega}\) field will inherit the \({}^{\prime}\) superscript notation from \(\hat{\psi}\), i.e. \({}^{\prime}\) will now be used to denote a derivative w.r.t the \(\hat{\omega}\) field. At this point it is useful to also recall the sign \(\alpha\). From Eq. (9) we notice \[\alpha=\begin{cases}-1,&\hat{\omega}<0,\\ 1,&0<\hat{\omega}.\end{cases} \tag{15}\] This gives the \(\hat{\omega}\) field an injective correspondence with the \(\hat{\psi}\) field (and through the \(\hat{\psi}\) field to the corresponding root torsion theory of the scalar-tensor analogue). This substitution then reduces the Lagrangian to the generalised form of a scalar field non-minimally coupled to the Ricci scalar, in which \(\hat{\omega}\) carries the single extra dynamical d.o.f \[\hat{L}_{\mathrm{CTEG}}=-\frac{\hat{F}(\hat{\omega}){M_{\mathrm{Pl}}}^{2}}{2}\hat{R}+\frac{\hat{B}(\hat{\omega})}{2}\hat{X}^{\hat{\omega}\hat{\omega}}-\hat{U}(\hat{\omega})+\hat{L}_{\mathrm{M}}, \tag{16}\] with \(\hat{X}^{\hat{\omega}\hat{\omega}}\equiv\frac{{M_{\mathrm{Pl}}}^{2}}{2}\hat{g}^{\mu\nu}\partial_{\nu}\hat{\omega}\partial_{\mu}\hat{\omega}\) and the functions \[\hat{F}(\hat{\omega})\equiv 1+\alpha\frac{\hat{\omega}^{2}}{2},\quad\hat{B}(\hat{\omega})\equiv-\frac{3\left(16\alpha+8\hat{\omega}^{2}+\alpha\hat{\omega}^{4}\right)}{\left(\alpha\hat{\omega}^{2}+2\right)\left(\alpha\hat{\omega}^{2}+8\right)}. 
\tag{17}\] At this point we will assume that a function, unless otherwise stated, is a function of \(\hat{\omega}\), so the \(\hat{\omega}\) dependence of \(\hat{F}=\hat{F}(\hat{\omega})\) is implicit. The field equations for this Lagrangian Eq. (16) are, upon variation w.r.t the metric and \(\hat{\omega}\) field, \[3\hat{H}^{2}\hat{F}=\frac{\hat{B}\dot{\hat{\omega}}^{2}}{2}+\hat{U}-3\alpha\hat{H}\hat{\omega}\dot{\hat{\omega}}+\hat{\rho}_{\text{M}}, \tag{18a}\] \[\left(3\hat{H}^{2}+2\dot{\hat{H}}\right)\hat{F}=-\frac{\hat{B}\dot{\hat{\omega}}^{2}}{2}+\hat{U}+\alpha\left(-2\hat{H}\hat{\omega}\dot{\hat{\omega}}-\dot{\hat{\omega}}^{2}-\hat{\omega}\ddot{\hat{\omega}}\right), \tag{18b}\] \[\hat{B}\ddot{\hat{\omega}}+3\hat{H}\,\hat{B}\dot{\hat{\omega}}+\frac{\hat{U}^{\prime}}{2}=3\alpha{M_{\text{Pl}}}^{2}\dot{\hat{\omega}}\left(2\hat{H}^{2}+\dot{\hat{H}}\right)-\frac{\hat{B}^{\prime}\dot{\hat{\omega}}^{2}}{2}, \tag{18c}\] where we have used \(\hat{\rho}_{\text{M}}\) and \(\hat{P}_{\text{M}}\) as the energy density and pressure for normal matter. For dust \(\hat{P}_{\text{M}}=0\), and the continuity equation is \[\dot{\hat{\rho}}_{\text{M}}+3\hat{H}\,\hat{\rho}_{\text{M}}=0. \tag{19a}\] The field equations are now in the form where the dynamics of the system can be studied and compared with the literature. For ease of calculation, and as it fits with the choice of most dynamical systems analysis in cosmology, we set the e.o.s parameter for all matter to \(w=0\); this choice is purely arbitrary and further analysis can be easily extended to include matter and radiation separately.
## IV Dynamical systems analysis and stability
### Constructing the phase space
To be able to analyse the dynamics of the system, and the nature of the critical points, we will employ dynamical systems analysis. This is particularly important in showing that the CS is a stable point. Following the standard method of dynamical systems applied to cosmology [28], we write the Friedmann equations Eqs. (18a) and (18b) in the following form \[1 =\frac{\hat{B}\dot{\hat{\omega}}^{2}}{6\hat{H}^{2}\hat{F}}+\frac{\hat{U}}{3\hat{H}^{2}\hat{F}}-\frac{\alpha\hat{\omega}\dot{\hat{\omega}}}{\hat{H}\hat{F}}+\frac{\hat{\rho}_{\text{M}}}{3\hat{H}^{2}\hat{F}}, \tag{20a}\] \[1+\frac{2\dot{\hat{H}}}{3\hat{H}^{2}}=-\frac{\hat{B}\dot{\hat{\omega}}^{2}}{6\hat{H}^{2}\hat{F}}+\frac{\hat{U}}{3\hat{H}^{2}\hat{F}}+\alpha\Big{(}-\frac{2\hat{\omega}\dot{\hat{\omega}}}{3\hat{H}\hat{F}}-\frac{\dot{\hat{\omega}}^{2}}{3\hat{H}^{2}\hat{F}}-\frac{\hat{\omega}\ddot{\hat{\omega}}}{3\hat{H}^{2}\hat{F}}\Big{)}, \tag{20b}\] where \(\hat{\rho}_{\text{M}}\) represents the energy density of the barotropic matter. Eq. (20a) is the Friedmann energy constraint equation, and we will introduce the relative energy densities for dust and the scalar field \(\hat{\omega}\) as \[\hat{\Omega}_{\text{M}}\equiv\frac{\hat{\rho}_{\text{M}}}{3\hat{H}^{2}\hat{F}},\quad\hat{\Omega}_{\omega}\equiv\frac{\hat{B}\dot{\hat{\omega}}^{2}}{6\hat{H}^{2}\hat{F}}+\frac{\hat{U}}{3\hat{H}^{2}\hat{F}}-\alpha\frac{\hat{\omega}\dot{\hat{\omega}}}{\hat{H}\hat{F}}. \tag{21}\] With these definitions of the relative energy densities in Eq. (21), the Friedmann constraint reads \[1=\hat{\Omega}_{\text{M}}+\hat{\Omega}_{\omega}, \tag{22}\] by assuming \(0\leq\hat{\Omega}_{\text{M}}\leq 1\), or equivalently that \[0\leq\hat{\Omega}_{\omega}\leq 1. 
\tag{23}\] Denoting by \({}^{\prime}\) the derivative \(\mathrm{d}/\mathrm{d}N\) w.r.t the number of e-folds \(N\equiv\ln(a)\) (note for clarity: this \({}^{\prime}\) notation applies only to the phase space variables \(\hat{x},\hat{y},\hat{z}\)), and choosing the dynamical variables \[\hat{x}\equiv\frac{\dot{\hat{\omega}}}{\hat{H}},\quad\hat{y}\equiv\frac{1}{\hat{H}}\sqrt{\frac{\hat{U}}{3\hat{F}}},\quad\hat{z}\equiv\hat{\omega}, \tag{24}\] the Friedmann constraint Eq. (22) reduces to \[0\leq\hat{x}^{2}\frac{\hat{B}}{6\hat{F}}+\hat{y}^{2}-\alpha\frac{\hat{x}\hat{z}}{\hat{F}}\leq 1. \tag{25}\] Note that in this work the construction of the phase space for the theory is the main aim, and we neglect considerations of energy conditions in scalar field cosmology, in particular with regards to phantom scalar fields. We restrict ourselves to the construction of the phase space, as this is the most concise way to analyse the dynamical properties of the system. The phase space region is given by \(\hat{\omega}>-\sqrt{2}\); this lower bound comes from the breakdown of the theory, as the prefactor in front of the Ricci scalar, \(\hat{F}=1+\left(\alpha\hat{\omega}^{2}/2\right)\), vanishes as \(\hat{\omega}\rightarrow-\sqrt{2}\). To construct the dynamical system from the dynamical variables, it is necessary to manipulate the system into the standard dynamical systems form \[\mathbf{X}^{\prime}\left(N\right)=\mathbf{F}\left(\mathbf{X}\left(N\right)\right)=\mathbf{F}\left(\mathbf{X}\right), \tag{26}\] where \(\mathbf{X}\) is a state vector of the system dependent on a single variable \(N\) (number of e-folds), and \(\mathbf{F}\) is a vector field. This is the defining characteristic of an autonomous ODE: the system of ODEs does not have an explicit dependence on \(N\). This means that the stationary solutions are time invariant: if a system started at a point in the phase space \(\mathbf{X}_{0}\), such that \(\mathbf{X}_{0}^{\prime}(N)=\mathbf{F}\left(\mathbf{X}_{0}\right)=0\), then the system would stay at the point \(\mathbf{X}_{0}\) for any transformation in \(N\), such as \(N\to N+\delta N\). By substituting the dynamical variables Eq. (24) into Eqs. (20a) to (20b) we obtain \[\hat{x}^{\prime}(N)=\frac{3\left(\hat{B}^{2}\hat{x}^{3}+\hat{B}\hat{x}\left(\alpha\hat{x}(2\hat{x}-5\hat{z})-6\hat{F}\left(\hat{y}^{2}+1\right)\right)+6\alpha\hat{z}\left(3\hat{F}\hat{y}^{2}+\hat{F}-\alpha\hat{x}^{2}\right)\right)-\left(2\hat{F}+\alpha\hat{x}\hat{z}\right)\left(3\hat{B}\hat{x}^{2}\lambda_{\hat{B}}+\hat{y}^{2}\lambda_{\hat{U}}\right)}{6\left(2\hat{B}\hat{F}+3\alpha^{2}\hat{z}^{2}\right)}, \tag{27a}\] \[\hat{y}^{\prime}(N) =\frac{\hat{y}}{6}\left[\frac{3\hat{B}\left(\hat{B}\hat{x}^{2}-6\hat{F}\left(\hat{y}^{2}-1\right)\right)-\alpha\hat{z}\left(3\hat{B}\hat{x}^{2}\lambda_{\hat{B}}+\hat{y}^{2}\lambda_{\hat{U}}\right)+6\alpha\hat{B}\hat{x}(\hat{x}-\hat{z})+36\alpha^{2}\hat{z}^{2}}{2\hat{B}\hat{F}+3\alpha^{2}\hat{z}^{2}}-\frac{3\alpha\hat{x}\hat{z}}{\hat{F}}+3\hat{x}\lambda_{\hat{U}}\right], \tag{27b}\] \[\hat{z}^{\prime}(N) =\hat{x}, \tag{27c}\] with \(\lambda_{\hat{U}}\equiv\frac{1}{\hat{U}}\frac{d\hat{U}}{d\hat{\omega}}\) and \(\lambda_{\hat{B}}\equiv\frac{1}{\hat{B}}\frac{d\hat{B}}{d\hat{\omega}}\). The equations Eqs. (27a) to (27c) form a closed dynamical system. 
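The linear-stability recipe applied to this closed system in the next subsection can be sketched as follows (in Python with sympy; the paper itself uses no code). The right-hand sides below are deliberately simple placeholders standing in for the full expressions of Eqs. (27a) to (27c), chosen only so that the workflow -- critical points from \(\mathbf{F}(\mathbf{X}_{\rm c})=0\), Jacobian, eigenvalues -- runs end to end.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# Placeholder autonomous system X'(N) = F(X) with critical points at
# (0, +-1, 0) and (0, 0, 0).  In the actual analysis the full right-hand
# sides of Eqs. (27a)-(27c) would be entered here instead.
P = -3*x - z*(2 - y**2)             # stands in for x'(N), Eq. (27a)
Q = sp.Rational(3, 2)*y*(1 - y**2)  # stands in for y'(N), Eq. (27b)
R = x                               # z'(N) = x, exactly as in Eq. (27c)

F = sp.Matrix([P, Q, R])
X = sp.Matrix([x, y, z])

# Critical points: solve F(X_c) = 0.
critical_points = sp.solve(list(F), [x, y, z], dict=True)

# Jacobian (stability matrix) of Eq. (29), evaluated at each critical point.
J = F.jacobian(X)
for pt in critical_points:
    evals = list(J.subs(pt).eigenvals())
    stable = all(complex(ev).real < 0 for ev in evals)
    print(pt, evals, '-> stable node' if stable else '-> not asymptotically stable')
```

With the genuine Eqs. (27a) to (27c) substituted for the placeholders, the same few lines reproduce the eigenvalue test used at the CS below.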
### Stability and study of critical points To study the critical points of this system, we employ the use of linear stability theory (see [29], or one of the many review articles on dynamical systems applied to cosmology [28]). We start by defining \[f(\mathbf{\hat{x}})\equiv\begin{pmatrix}P\\ Q\\ R\end{pmatrix}, \tag{28}\] and the Jacobian (or stability matrix) [28] of the system to be \[J\equiv\begin{pmatrix}\partial_{\hat{x}}P&\partial_{\hat{y}}P&\partial_{\hat{ z}}P\\ \partial_{\hat{x}}Q&\partial_{\hat{y}}Q&\partial_{\hat{z}}Q\\ \partial_{\hat{x}}R&\partial_{\hat{y}}R&\partial_{\hat{z}}R\end{pmatrix}, \tag{29}\] where we have defined \(P\equiv\hat{x}^{\prime}\), \(Q\equiv\hat{y}^{\prime}\) and \(R\equiv\hat{z}^{\prime}\) from Eqs. (27a) to (27c). To identify the critical points of the system we must find the points \(\mathbf{\hat{x}_{c}}\) that satisfy \(f\left(\mathbf{\hat{x}_{c}}\right)=0\) from Eq. (28). For the range of \(\hat{\omega}>-\sqrt{2}\) the critical points are \[\mathbf{\hat{x}_{\pm}}\equiv\begin{pmatrix}0\\ \pm 1\\ 0\end{pmatrix},\quad\mathbf{\hat{x}_{2}}\equiv\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}. \tag{30}\] The CS point is to be found at \(f\left(\mathbf{\hat{x}_{\pm}}\right)\) (note the \(\pm\) is from the fact that variable \(\hat{y}\) is defined as the square root of the potential: the dynamics are the same for both signs we assume that the potential is always real). To analyse the stability of the correspondence solution, the eigenvalues of the Jacobian Eq. (29) need to be evaluated at the point \(\mathbf{\hat{x}_{\pm}}\). In Eq. (31) it can be seen that all the eigenvalues of \(J\left(\mathbf{\hat{x}_{\pm}}\right)\) have negative real parts [28] \[J(\mathbf{\hat{x}_{\pm}})=\begin{pmatrix}-3\\ \frac{1}{12}(-18-\sqrt{46})\\ \frac{1}{12}(-18+\sqrt{46})\end{pmatrix}. \tag{31}\] Thus, we find that the CS point in the phase space, corresponding to \(\hat{x}=0\), \(\hat{y}=1\) and \(\hat{z}=0\), is a stable point. Translating this back into the physical quantities of the \(\hat{\omega}\) scalar field, with the definitions in Eq. (24), the first Friedmann equation reads \[3\hat{H}^{2}=\hat{x}^{2}\frac{\hat{B}}{6\hat{F}}+\hat{y}^{2}-\frac{\alpha \hat{z}\hat{x}}{\hat{F}}+\hat{\rho}_{\text{M}}=\lambda+\hat{\rho}_{\text{M}}. \tag{32}\] This shows that the late time behaviour of the theory is exactly that of GR with a cosmological constant. With \(\hat{y}=\pm 1\) and the other terms being zero, the potential has found its minimum value \(\hat{\omega}=0\) at the CS, and thus the only quantity left in the Friedmann equations from the \(\hat{\omega}\) field is the \(\lambda\) constant which mimics the role of the cosmological constant of LCDM cosmology. This is the motivation for referring to the \(\lambda\) coupling in CTEG Eq. (1) as an emergent cosmological constant, we recall that the true _external_ cosmological constant \(\Lambda\) is not yet included in the model, but will be re-introduced in Section VI.2. This confirms the findings of [18], in which the correspondence solution was only studied graphically, but in our formulation of the Lagrangian it is possible to use the rigorous toolkit of linear stability analysis to study the correspondence solution. The dynamics of the phase space can be presented in a pictorial manner by using a phase space plot, as in Fig. 1. This plot shows the flow of the autonomous system Eqs. (27a) to (27c). As can be seen in Fig. 
1, the point \(\mathbf{\hat{x}_{\pm}}\) (marked in blue) acts as an asymptotically stable sink, and denotes the correspondence solution at which the theory equates to GR with a cosmological constant. The CS solution \(\mathbf{\hat{x}_{\pm}}\) is classified as a stable node, as the eigenvalues of the Jacobian Eq. (29) all have negative real parts. The critical point \(\mathbf{\hat{x}_{2}}\) is an unstable node.
## V Einstein frame dynamics
We delineate between quantities in the Jordan frame of the preceding section, denoted with a hat, and quantities defined in the Einstein frame using a tilde. To move from the Jordan frame to the Einstein frame, we must perform a conformal transformation [30]. This conformal transformation will take the form of \(g_{\mu\nu}=\tilde{\Omega}(\hat{\omega})^{2}\tilde{g}_{\mu\nu}\) relative to the original (physical) frame of Eq. (1), such that, through its definition relative to the preceding frame, it removes the curvature prefactor \[\frac{\tilde{\Omega}(\hat{\omega})^{2}}{\hat{\Omega}(\hat{\omega})^{2}}=\hat{F}(\hat{\omega})=1+\left(\alpha\hat{\omega}^{2}/2\right). \tag{33}\] This transformation will take the Lagrangian Eq. (16) from the Jordan frame to the Einstein frame Lagrangian \[\tilde{L}_{\text{CTEG}}={M_{\text{Pl}}}^{2}\left(-\frac{1}{2}\tilde{R}+2\tilde{B}(\hat{\omega})\tilde{X}^{\hat{\omega}\hat{\omega}}\right)-\tilde{U}(\hat{\omega})+\tilde{L}_{\text{M}}, \tag{34}\] with \(\tilde{X}^{\hat{\omega}\hat{\omega}}\equiv\frac{1}{2}\tilde{g}^{\mu\nu}\partial_{\mu}\hat{\omega}\partial_{\nu}\hat{\omega}\), and \(\tilde{U}(\hat{\omega})\) denoting the potential in the Einstein frame, and \(\tilde{B}(\hat{\omega})\) representing the new non-canonical factor in front of the kinetic term. It should be noted that the matter Lagrangian \(\tilde{L}_{\text{M}}\) will have picked up a further coupling to \(\hat{\omega}\) through the conformal transformation Eq. (33); this coupling will be from the two conformal transformations that have been made, with the first from Eq. (7a), and the second from Eq. (33), thus giving the combined coupling of \[\tilde{\Omega}(\hat{\omega})^{2}=\left(1+\left(\alpha\hat{\omega}^{2}/2\right)\right)\left(1+\left(\alpha\hat{\omega}^{2}/8\right)\right). \tag{35}\] Meanwhile, \(\tilde{B}(\hat{\omega})\) and \(\tilde{U}(\hat{\omega})\) are related to the original \(\hat{B}(\hat{\omega})\) and \(\hat{U}(\hat{\omega})\) by \[\tilde{U}(\hat{\omega}) =\frac{\hat{U}(\hat{\omega})}{\hat{F}(\hat{\omega})^{2}}=\lambda\left(\frac{3}{2\alpha\hat{\omega}^{2}+4}+\frac{1}{4}\right), \tag{36a}\] \[\tilde{B}(\hat{\omega}) =2\left(\frac{\hat{B}(\hat{\omega})}{2\hat{F}(\hat{\omega})}+\frac{3\hat{\omega}^{2}}{4\hat{F}(\hat{\omega})^{2}}\right)=-\frac{96\alpha}{(2+\alpha\hat{\omega}^{2})^{2}(8+\alpha\hat{\omega}^{2})}. \tag{36b}\] The field equations for the Lagrangian Eq. (34) are \[3\tilde{H}^{2} =\frac{1}{2}\tilde{B}\dot{\hat{\omega}}^{2}+\tilde{U}+\tilde{\rho}_{\text{M}}, \tag{37a}\] \[3\tilde{H}^{2}+2\dot{\tilde{H}} =-\frac{1}{2}\tilde{B}\dot{\hat{\omega}}^{2}+\tilde{U}, \tag{37b}\] \[\tilde{B}\ddot{\hat{\omega}}+3\tilde{H}\tilde{B}\dot{\hat{\omega}}+\tilde{U}^{\prime} =-\frac{1}{2}\tilde{B}^{\prime}\dot{\hat{\omega}}^{2}, \tag{37c}\] where the superscript \({}^{\prime}\) denotes a derivative w.r.t the \(\hat{\omega}\) field, i.e. \(\mathrm{d}/\mathrm{d}\hat{\omega}\).
### Dynamical systems analysis in the Einstein frame
From the field equations Eqs. 
(37a) to (37c) a dynamical systems analysis similar to the analysis in the Jordan frame can be performed. First, we divide Eqs. (37a) to (37b) by \(3\tilde{H}^{2}\) to get the following set of equations \[1 =\frac{\hat{B}\hat{\omega}^{2}}{6\tilde{H}^{2}}+\frac{\tilde{U}}{ 3\tilde{H}^{2}}+\frac{\tilde{\rho}_{\text{M}}}{3\tilde{H}^{2}}, \tag{38a}\] \[1+\frac{2}{3}\frac{\hat{H}}{\tilde{H}^{2}} =-\frac{\tilde{B}\hat{\omega}^{2}}{6\tilde{H}^{2}}+\frac{\tilde{U }}{3\tilde{H}^{2}}. \tag{38b}\] From this set of equations, we can define the set of dynamical variables that will describe the evolution of the system \[\tilde{x}\equiv\frac{\hat{\omega}}{\tilde{H}},\quad\tilde{y}\equiv\frac{1}{ \tilde{H}}\sqrt{\frac{\tilde{U}}{3}},\quad\tilde{z}\equiv\hat{\omega}. \tag{39}\] The Friedmann constraint Eq. (37a), with the definition of the relative matter density \(\tilde{\Omega}_{\text{M}}\equiv\tilde{\rho}_{\text{M}}/3\,\tilde{H}^{2}\), becomes \[1-\tilde{\Omega}_{\text{M}}=\frac{\tilde{B}}{6}\tilde{x}^{2}+\tilde{y}^{2}. \tag{40}\] Using these dynamical variables, the phase space can be set up, as in the previous section detailing the use of dynamical systems techniques in the Jordan frame, by getting the equations into the form in Eq. (26). The set of autonomous differential equations describing Eqs. (38a) to (38b) is given by \[\tilde{x}^{\prime}(N)=-\frac{24\alpha\tilde{x}^{3}}{\left(\alpha\tilde{z}^{2} +2\right)^{2}\left(\alpha\tilde{z}^{2}+8\right)}+\frac{3\alpha\tilde{x}^{2} \tilde{z}\left(\alpha\tilde{z}^{2}+6\right)}{\left(\alpha\tilde{z}^{2}+2 \right)\left(\alpha\tilde{z}^{2}+8\right)}-\frac{3}{2}\tilde{x}\left(\tilde{y} ^{2}+1\right)-\frac{3}{8}\tilde{y}^{2}\tilde{z}\left(\alpha\tilde{z}^{2}+2 \right), \tag{41a}\] Figure 1: Plot of phase space of system Eqs. (27a) to (27c), the blue marker is the CS point at \((\hat{x},\hat{y},\hat{z})=(0,1,0)\). The plots are slices of the \(\hat{x}-\hat{z}\) plane, with constant \(\hat{y}\) \[\tilde{L}_{\text{CTEG}}=\begin{cases}-\frac{{M_{\text{{\rm{Pl}}}}}^{2}}{2} \tilde{R}+\tilde{X}^{\dot{\omega}\dot{\omega}}-\tilde{U}(\tilde{\omega})+\tilde{ L}_{\text{M}},\\ \\ -\frac{{M_{\text{{\rm{Pl}}}}}^{2}}{2}\tilde{R}-\tilde{X}^{\dot{\omega}\dot{ \omega}}-\tilde{U}(\tilde{\omega})+\tilde{L}_{\text{M}},\end{cases} \tag{46}\] with \(\tilde{X}^{\dot{\omega}\dot{\omega}}\equiv\frac{{M_{\text{{\rm{Pl}}}}}^{2}}{2} \tilde{g}^{\mu\nu}\partial_{\mu}\tilde{\omega}\partial_{\nu}\tilde{\omega}\). A plot of the solution found for \(\tilde{\omega}\) is included as it is useful to understand the nature of the system. As can be seen from Fig. 3 the \(\tilde{\omega}\) function has 2 distinct regions. One point to note is that the original \(\hat{\omega}\) field and the redefined \(\tilde{\omega}\) field both have a the same crossing of the origin at \(\tilde{\omega}(\hat{\omega}=0)=0\), so the CS point is still at the origin for the redefined field \(\tilde{\omega}\). In Fig. 3 there can be seen two different asymptotic regions of the redefined field \(\tilde{\omega}\). As \(\hat{\omega}\to-\sqrt{2}\) the prefactor of the Ricci scalar in Eq. (16) vanishes \(\hat{F}(\hat{\omega})\to 0\); this point is seen as the limit \(\tilde{\omega}\to-\infty\). Also, as \(\hat{\omega}\to\infty\) the re-canonicalised field approaches a finite limit \(\tilde{\omega}\rightarrow\left(\sqrt{8}\pi\right)/3\). One important feature of the potential in Fig. 4 is that for \(\tilde{\omega}>0\) the field is a phantom scalar. 
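This sign structure can be checked directly from Eqs. (36a) and (36b). The following minimal sketch (Python; the sample points and the normalisation \(\lambda={M_{\text{Pl}}}=1\) are illustrative choices of ours) evaluates the Einstein-frame kinetic prefactor and potential on either side of the origin.

```python
def alpha(w):
    """Sign alpha of Eq. (15): -1 below the condensate (w < 0), +1 above."""
    return -1.0 if w < 0.0 else 1.0

def B_tilde(w):
    """Kinetic prefactor of Eq. (36b)."""
    a = alpha(w)
    return -96.0 * a / ((2.0 + a * w**2)**2 * (8.0 + a * w**2))

def U_tilde(w, lam=1.0):
    """Einstein-frame potential of Eq. (36a), in units lambda = M_Pl = 1."""
    a = alpha(w)
    return lam * (3.0 / (2.0 * a * w**2 + 4.0) + 0.25)

# Sample either side of the correspondence solution (omega_hat = 0), staying
# above the breakdown point omega_hat = -sqrt(2) of the Jordan-frame theory.
for w in (-1.2, -0.5, -0.1, 0.1, 0.5, 1.2):
    print(f"omega_hat = {w:+.1f}:  B~ = {B_tilde(w):+8.3f},  U~ = {U_tilde(w):.3f}")
# B~ > 0 (canonical) for omega_hat < 0 and B~ < 0 (phantom) for omega_hat > 0,
# matching the two branches of Eq. (46).
```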
We expect this phantom behaviour to manifest as a field 'rolling up' its potential, whereas for \(\tilde{\omega}\leq 0\) the field is a standard canonical scalar field rolling down its potential.
### Bare cosmological constant and potential for inflation
Now that the system introduced in Eq. (10a) has been simplified to the form Eq. (46), we can include a brief discussion of the addition of an external cosmological constant \(\Lambda\), and the inflationary implications therein. Note that \(\Lambda\) is not a conformal term in the physical frame Eq. (1), and so it will pick up scalar couplings with each conformal transformation in Eqs. (7a) and (33). Tracing these through, this has the effect of changing the form of the potential \(\tilde{U}\) to \[\tilde{U}\left(\hat{\omega}\right)=\frac{8\alpha\hat{\omega}^{2}(5\lambda+2\Lambda)+\hat{\omega}^{4}(4\lambda+\Lambda)+64(\lambda+\Lambda)}{16\left(\alpha\hat{\omega}^{2}+2\right)^{2}}, \tag{47}\] Figure 4: Plot of the potential of the scalar field, with the orange line representing the phantom (‘roll up-hill’) region, and the blue line representing the standard canonical scalar field region. Figure 3: Plot of the solution of the field redefinition Eq. (45), with the two asymptotic regions shown. As the redefined field passes through the origin, the CS point remains at \(\hat{\omega}=\tilde{\omega}=0\). Figure 2: Plots of the Einstein frame phase space, with the blue marker representing the CS (correspondence solution), with the slices of the phase space being in the \(x-z\) plane at constant \(y\) values indicated above each slice. The form of the potential \(\tilde{U}(\tilde{\omega})\), once the field redefinition in Section VI.1 has been applied, reads \[\tilde{U}(\tilde{\omega}) =\frac{1}{2}\,{M_{\text{Pl}}}^{2}\cosh^{2}\left(\frac{\tilde{\omega}}{\sqrt{8}}\right)\times\left[2\lambda+\Lambda+\Lambda\cosh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)\right], \tag{48}\] where we have restricted the scope of this section to looking at values of \(\tilde{\omega}\leq 0\). With this new potential, we can use the dynamical systems framework to find the conditions for stability with the addition of the parameter \(\Lambda\). The dynamical systems analysis follows the same method as Section IV and Section V, and we will not repeat all the steps here. The stability condition we impose is the requirement that all the real parts of the eigenvalues for the Jacobian of the dynamical system Eq. (29) are less than zero. This condition is agnostic about the nature of the stable point (i.e. spiral or sink), requiring only the asymptotic stability of the point. With this in mind, we find that the region of validity for the two parameters \(\lambda\) and \(\Lambda\) reads \[-2\nleq\frac{\lambda}{\Lambda}\nleq-1. \tag{49}\] An interesting regime that could serve as an inflationary potential in this theory is for the values of \(\Lambda<0\) and \(\lambda>0\). This regime has a _negative external cosmological constant_, but allows for the formation of a hilltop in the potential. 
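As a quick illustration of this hilltop regime, the sketch below evaluates the potential of Eq. (48) for the representative values \(\lambda\approx 1.75\), \(\Lambda\approx-0.75\) quoted in the caption of Fig. 6, in units where \(\Lambda_{\text{LCDM}}={M_{\text{Pl}}}^{2}=1\); the grid and ranges are our own illustrative choices.

```python
import numpy as np

lam, Lam = 1.75, -0.75   # emergent and external cosmological constants;
                         # lam/Lam ~ -2.33 lies outside the interval excluded
                         # by the stability condition (49).

def U_tilde(w):
    """Einstein-frame potential of Eq. (48), restricted to w <= 0."""
    return 0.5 * np.cosh(w / np.sqrt(8.0))**2 * (
        2.0 * lam + Lam + Lam * np.cosh(w / np.sqrt(2.0)))

w = np.linspace(-8.0, 0.0, 4001)
U = U_tilde(w)

i_top = np.argmax(U)     # hilltop: local maximum of the potential for w < 0
print("hilltop at  w ~ %.3f,  U/Lambda_LCDM ~ %.3f" % (w[i_top], U[i_top]))
print("CS value    U(0) = lam + Lam =", U_tilde(0.0))   # total cc at the CS
```

The printed output shows a shallow maximum at some \(\tilde{\omega}<0\) and the value \(\lambda+\Lambda\) at the correspondence solution, which is the structure exploited for hilltop inflation below.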
The slow-roll parameters for inflation are defined by [31] \[\epsilon \equiv\frac{1}{2}\left(\frac{1}{\tilde{U}(\tilde{\omega})}\frac{ \mathrm{d}\tilde{U}(\tilde{\omega})}{\mathrm{d}\tilde{\omega}}\right)^{2}\] \[=\frac{2\left(\lambda\tanh\left(\frac{\tilde{\omega}}{\sqrt{8}} \right)+\Lambda\sinh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)\right)^{2}}{ \left(2\lambda+\Lambda\cosh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+ \Lambda\right)^{2}}, \tag{50a}\] \[\eta \equiv\left(\frac{1}{\tilde{U}(\tilde{\omega})}\frac{\mathrm{d}^{ 2}\tilde{U}(\tilde{\omega})}{\mathrm{d}\tilde{\omega}^{2}}\right)\] \[=\left|4-(6\lambda+5\Lambda)\left[2\lambda+\Lambda+\Lambda\cosh \left(\frac{\tilde{\omega}}{\sqrt{2}}\right)\right]^{-1}\right.\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad- \left.\left[\cosh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+1\right]^{-1} \right|. \tag{50b}\] We consider inflation between a point near \(\epsilon=0\) at the top of the hill in Fig. 5, and the inflection point at \(\eta=0\). This region is stable, as shown in the preceding sections, and evolves towards the CS (note that this stability applies at the background level: the potential is only valid at the background level and the possibility of quantum effects, e.g tunnelling, are not considered). The heuristic constraints placed on the potential include the positive total cosmological constant seen at the CS at \(\tilde{\omega}=0\), and the stability bounds of the CS given in Eq. (49). These two conditions lead to \[\Lambda+\lambda=\Lambda_{\text{LCDM}}\] \[-2\nleq\frac{\lambda}{\Lambda}\nleq-1 \tag{51}\] The number of e-folds, \(\mathrm{N_{e}}\) for slow-roll inflation is given in terms of the slow-roll parameter \(\epsilon\), and is defined as [31] \[\mathrm{N_{e}}\equiv\int_{t_{l}}^{t_{f}}\tilde{H}\,\mathrm{d}t\approx\int_{ \tilde{\omega}_{l}}^{\tilde{\omega}_{f}}\frac{\mathrm{d}\tilde{\omega}}{\sqrt {2\epsilon}}, \tag{52}\] where the subscripts \(i,f\) denote the beginning and end of inflation respectively. These limits are shown in Fig. 5 by the shaded region, and they correspond to the top of the hill at \(\tilde{\omega}_{i}\), and the point where \(\mathrm{d}\epsilon/\mathrm{d}\lambda=0\). Note we have added a small shift from the hill top of the form \(\tilde{\omega}+\delta\), where \(\delta\) is a small positive perturbation from the hilltop, so that the field will roll down to the CS point. From [32], the value of the cosmological constant is \(\Lambda_{\text{LCDM}}=2.846\times 10^{-122}\,{M_{\text{Pl}}}^{2}\). To find this in terms of the two parameters of the theory \(\Lambda\) and \(\lambda\), an application of Eq. (51) yields \[\lambda =1.752\times\Lambda_{\text{LCDM}}=4.987\times 10^{-122}\,{M_{ \text{Pl}}}^{2}, \tag{53a}\] \[\Lambda =\Lambda_{\text{LCDM}}-\lambda=-2.141\times 10^{-122}\,{M_{ \text{Pl}}}^{2}. \tag{53b}\] We will return to these values in Section VII. Figure 5: Plot of the potential with the parameters \(\Lambda,\lambda\) chosen such that there are 50 e-folds of inflation. The shaded region shows the region integrated over in Eq. (52). We have normalised the potential such that the final value of 1 at the CS corresponds to the potential having the value of \(\Lambda_{\text{LCDM}}\). ### Inflationary phase space Following from the dynamical systems phase portraits, we can also construct a phase space with inflationary views in mind. Using the method outlined in [33], we can construct a phase space that eliminated the Hubble number \(H\) from the system. 
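Before constructing that phase space, the e-fold count of Eq. (52) can be estimated numerically from the slow-roll parameter of Eq. (50a), as in the sketch below; the hilltop displacement \(\delta\), the search brackets and the quoted ratio \(\lambda/\Lambda_{\text{LCDM}}=1.752\) from Eq. (53a) are the only inputs, and the printed value of \(\mathrm{N_{e}}\) depends on the choice of \(\delta\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

lam, Lam = 1.752, -0.752   # in units of Lambda_LCDM, so lam + Lam = 1 (Eq. 51)

def epsilon(w):
    """Slow-roll parameter of Eq. (50a)."""
    num = 2.0 * (lam * np.tanh(w / np.sqrt(8.0))
                 + Lam * np.sinh(w / np.sqrt(2.0)))**2
    den = (2.0 * lam + Lam * np.cosh(w / np.sqrt(2.0)) + Lam)**2
    return num / den

def minus_U(w):
    """Negative of the potential of Eq. (48) (for locating the hilltop)."""
    return -(0.5 * np.cosh(w / np.sqrt(8.0))**2
             * (2.0 * lam + Lam + Lam * np.cosh(w / np.sqrt(2.0))))

# Hilltop (epsilon = 0) and the stationary point of epsilon before the CS.
w_top = minimize_scalar(minus_U, bounds=(-4.0, -0.2), method='bounded').x
w_end = minimize_scalar(lambda w: -epsilon(w), bounds=(w_top, 0.0),
                        method='bounded').x

delta = 1e-2                                   # small shift off the hilltop
N_e, _ = quad(lambda w: 1.0 / np.sqrt(2.0 * epsilon(w)), w_top + delta, w_end)
print(f"w_top ~ {w_top:.3f}, w_end ~ {w_end:.3f}, N_e ~ {N_e:.1f}")
```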
Using the Friedmann equations, and the Klein Gordon equation for the Lagrangian in Eq. (46), which read \[3\tilde{H}^{2}=\frac{-\alpha}{2}\hat{\omega}+\tilde{U}(\hat{\omega}),\quad\ddot {\tilde{\omega}}+3\tilde{H}^{2}\dot{\tilde{\omega}}-\alpha\tilde{U}(\tilde{ \omega})^{\prime}=0, \tag{54}\] where the \({}^{\prime}\) superscript denotes derivative w.r.t \(\tilde{\omega}\), and \(\alpha=\text{sgn}(\hat{\omega})\). Now the phase space can be constructed for the \(\left(\tilde{\omega},\hat{\omega}\right)\) phase space, such that \[\mathbf{x}\equiv\left(\tilde{\omega},\hat{\omega}\right) =\left(\hat{\omega},-3\tilde{H}^{2}\hat{\omega}+\alpha\tilde{U}( \tilde{\omega})^{\prime}\right)\] \[=\left(\hat{\omega},\alpha\tilde{U}(\tilde{\omega})^{\prime}- \hat{\omega}\sqrt{3}\sqrt{\hat{U}(\tilde{\omega})-\alpha\frac{\hat{\omega}^{2 }}{2}}\right), \tag{55}\] and the full phase space, upon substituting the external cosmological constant with \(\Lambda=1-\lambda\) (where we have set \(\Lambda_{\text{LCDM}}=1\)), is of the form \[\hat{\mathbf{x}}=\left\{\begin{pmatrix}\hat{\omega},\frac{(\lambda-1)\sinh \left(\sqrt{2}\hat{\omega}\right)}{4\sqrt{2}}-\sqrt{\frac{3}{8}}\hat{\omega} \sqrt{\lambda-(\lambda-1)\cosh\left(\sqrt{2}\hat{\omega}\right)+4\cosh\left( \frac{\hat{\omega}}{\sqrt{2}}\right)+4\hat{\omega}^{2}+3}-\frac{\sinh\left( \frac{\hat{\omega}}{\sqrt{2}}\right)}{\sqrt{8}}\end{pmatrix},&\tilde{\omega} \leq 0,\\ \left(\hat{\omega},\frac{(\lambda-1)\sin\left(\sqrt{2}\hat{\omega} \right)}{4\sqrt{2}}-\sqrt{\frac{3}{8}}\hat{\omega}\sqrt{\lambda-(\lambda-1) \cos\left(\sqrt{2}\hat{\omega}\right)+4\cos\left(\frac{\hat{\omega}}{\sqrt{2} }\right)-4\hat{\omega}^{2}+3}-\frac{\sin\left(\frac{\hat{\omega}}{\sqrt{2}} \right)}{\sqrt{8}}\right),&0<\tilde{\omega}<\frac{\sqrt{8}\pi}{3},\end{pmatrix}\right. \tag{56}\] The phase space is plotted from the hill-top at \(\epsilon=0\), up to \(\tilde{\omega}=\sqrt{8}\pi/3\) which corresponds to the upper limit of the field redefinition of \(\tilde{\omega}\). From Fig. 6, the colour function tracks the value of \(\left[3\tilde{H}^{2}\tilde{\omega}-\text{sgn}(\tilde{\omega})\tilde{U}^{ \prime}(\tilde{\omega})\right]\); this being the deviation from slow roll of the system. As can be seen from the plot, the phase space effectively picks out the inflationary attractor, as the system progresses towards the CS point at the origin. ## VII Conclusions In this work we demonstrated that the entire cosmology of the CTEG theory, which we defined in (1), may be encoded in a single potential function. The CTEG is a non-Riemannian theory with curvature and torsion, which contains no Einstein-Hilbert term and therefore has no a priori connection to the IR limit of GR [17; 18; 19]. The CTEG is independently motivated by a unitary, power-counting renormalisable particle spectrum [15; 16]. #### vi.3.1 Summary of the model Our main result is as follows. The CTEG is formulated in (1) with conventional minimally-coupled matter, and a metric \(g_{\mu\nu}\) which is FLRW on homogeneous, isotropic scales according to (6). This is consistent with the LCDM model in which the cosmological constant \(\Lambda+\lambda\) is the sum of external \(\Lambda\) and emergent \(\lambda\) components, which are both couplings in (1). On these scales the torsion tensor contains only the scalars \(\phi\) and \(\psi\) in (5), so that one may view the model as a torsion-free (i.e. Riemannian) scalar-tensor theory. 
Because \(\phi\) drops out entirely as an algebraically determined quantity in the CTEG field equations, we can focus on a conformal transformation of the physical metric to \(g_{\mu\nu}\equiv\tilde{\Omega}(\tilde{\omega})^{2}\tilde{g}_{\mu\nu}\), where \(\psi\equiv\psi(\tilde{\omega})\) is defined in terms of a dimensionless parameterised field \(\tilde{\omega}\). This is the metric of an emphatically _non-physical_ but nonetheless _convenient_ conformal frame, in which the Riemann curvature tensor is \(\tilde{R}^{\mu}{}_{\nu\rho\sigma}\) and the theory takes the reduced single-field form of Eq. (57), in which \(\tilde{U}(\tilde{\omega})\) is a potential which encodes all the phenomenology; as in Eq. (46), the kinetic term of \(\tilde{\omega}\) flips sign for \(\tilde{\omega}>0\), where the field is a ghost. The possibility of this ghost should not be over-interpreted: the validity of the model is confirmed only at the level of the cosmological background, and so there can be no meaningful notion of particle production. The inflection and resulting concavity of the potential can then ensure that the background evolution is qualitatively similar on both sides of the origin. Due to the change in conformal frame, note that the matter Lagrangian \(\tilde{L}_{\text{M}}\) will inevitably acquire some dependence on \(\tilde{\omega}\). Accordingly, it is most meaningful to use (57) in scenarios where the matter content of the Universe may be neglected. In this work we have focussed on two such regimes: hilltop inflation as the new scalar rolls slowly through some range of \(\tilde{\omega}<0\) in the early Universe, and finally the emergence of the late asymptotic de Sitter Universe as \(\tilde{\omega}\to 0\) from below. How does the model (57) map to the CTEG in (1), and why consider these values of \(\tilde{\omega}\)? The conformal shift and field redefinition required to reach the theory in Eq. (57) are respectively \[\tilde{\Omega}(\tilde{\omega})=\begin{cases}\frac{9}{2}\left[\cosh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+1\right]\left[2\cosh\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+1\right]^{-2},&\tilde{\omega}\leq 0,\\ \\ \frac{9}{2}\left[\cos\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+1\right]\left[2\cos\left(\frac{\tilde{\omega}}{\sqrt{2}}\right)+1\right]^{-2},&0<\tilde{\omega},\end{cases} \tag{58}\] The modulus is expected in (58) because the CTEG cosmological field equations are not sensitive to \(\text{sgn}(\psi)\). 
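A quick numerical sketch of the cosh branch of Eq. (58) (the sample points are our own) confirms the behaviour used in the discussion that follows: the conformal shift tends to 1 at the condensate \(\tilde{\omega}=0\) and vanishes as \(\tilde{\omega}\to-\infty\).

```python
import numpy as np

def conformal_shift(w):
    """Cosh branch of Eq. (58), valid for the physical range omega_tilde <= 0."""
    c = np.cosh(w / np.sqrt(2.0))
    return 4.5 * (c + 1.0) / (2.0 * c + 1.0)**2

for w in (0.0, -1.0, -5.0, -20.0):
    print(f"omega_tilde = {w:6.1f}:  conformal shift = {conformal_shift(w):.6f}")
# The value 1 at omega_tilde = 0 is the statement that the conformal frames
# coincide at the condensate; the factor decays towards zero at -infinity.
```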
From the transformation in (58), we identify the negative semidefinite range \(\tilde{\omega}\in(-\infty,0]\) monotonically with \(\psi\in\left(0,\psi_{\text{C}}\right]\), wherein the background axial torsion magnitude grows from the neighbourhood of zero up to its critical condensate value \(\psi_{\text{C}}\) defined in (4). Following [17] and as discussed in Section I, we speculate that there is an extended pause at some \(\psi\lesssim\psi_{\text{C}}\) during the radiation-dominated epoch5. Since the final approach to \(\psi\rightarrow\psi_{\text{C}}\) is known to be over-damped by the Hubble friction [17; 18; 19], it is reasonable to assume that the Universe inhabits this range throughout its whole history6. Figure 6: Phase space portrait of the inflationary potential \(\tilde{U}\), with the attractor solutions (satisfying slow roll such that \(z=0\), with \(z\) defined in (27)) shown by the orange dotted line and the blue dot representing the CS point, and \(\Lambda\approx-0.75\), \(\lambda\approx 1.75\). The colour plot shows the values of the deviation from the slow roll condition \(z\). In this range, the scalar \(\tilde{\omega}\) appears with a positive kinetic energy in (57): it is a conventional scalar whose phenomenology is wholly determined by its potential \(\tilde{U}(\tilde{\omega})\). As \(\psi\) increases from zero towards \(\psi_{\rm C}\), so the conformal shift increases through the range \(\tilde{\Omega}(\tilde{\omega})\in(0,1]\). It is suggestive that the conformal frames are _equivalent_ at the condensate, where \(\tilde{\omega}\) vanishes. #### iv.2.2 Strong coupling from dark energy? The particle spectrum of CTEG was initially computed without matter, or any external cosmological constant \(\Lambda=\tilde{L}_{\rm M}=0\)[15; 16]. For torsion below the level of the condensate, the potential then takes the very natural form \(\tilde{U}(\tilde{\omega})={M_{\rm Pl}}^{2}\lambda\cosh^{2}\left(\tilde{\omega}/\sqrt{8}\right)\). This empty Universe inevitably evolves towards the condensate \(\psi\to\psi_{\rm C}\) at the bottom of the potential \(\tilde{\omega}\to 0\), which drives an asymptotically de Sitter expansion in the asymptotically coincident conformal frames \(\tilde{H}\to H\to\sqrt{\lambda/3}\). This is perfectly consistent with the previous nonlinear analyses [18; 19]. However, we recall that the motivating particle spectrum analysis assumed a background value of _zero_ torsion. The limit \(\psi\to 0\) corresponds to the limit \(\tilde{\omega}\to-\infty\), which sends us to the 'top' of the hyperbolic cosine potential in Fig. 4. This is an unsettling result. Around the zero-torsion Minkowski background, the first-order perturbations to the PGT\({}^{\text{q+}}\) Lagrangian vanish (or are pure surface terms), confirming this background to be a solution of the nonlinear field equations. The second-order perturbations give rise to a unitary, power-counting renormalisable particle spectrum [16]. It is not however clear what perturbative treatment can be valid near the divergent potential at negative infinity. The existence of this potential was only revealed in the current work, using inherently non-perturbative methods which study the bulk nonlinear phase space of the theory. A possible interpretation is that the zero-torsion Minkowski background is a _strongly coupled_ surface of the theory [23; 24]. Many \(\text{PGT}^{+}\) Lagrangians are well known to be spoiled by the strong coupling effect [13; 19; 25]. 
Near such surfaces, perturbative methods cannot apply, and they yield a particle spectrum which belongs to a 'fictitious' theory. The effect is especially dangerous because no indication of strong coupling can arise at the perturbative order used in the propagator analysis: the method 'fails silently'. A thorough investigation into the current case will take into account not only the divergent potential, but also the vanishing conformal factor (58) in the limit \(\tilde{\omega}\to-\infty\). For the moment we note from (58) that, in the absence of other non-minimally coupled matter, a non-divergent potential strictly requires \(\lambda=\Lambda=0\). In this case \(\tilde{\omega}\) becomes shift-symmetric, and no shadow is cast by the current work on the validity of the zero-torsion particle spectrum: as a phenomenological consequence all forms of dark energy are, however, forfeit. In this context it may be appropriate to relax the interpretation of \(\lambda\) and \(\Lambda\) as bare couplings in the theory. These might run, or be anomalously acquired in an effective field theory framework, in a way which should be shown to be consistent with the perturbative QFT. We leave this investigation to future work. #### iv.2.3 Hilltop inflation Dynamically adjusted (or renormalised) \(\lambda\) and \(\Lambda\) may also be necessary for inflation. A key result in the current treatment, which resolves some speculation in previous work [17; 18; 19], is that a concrete inflationary model emerges entirely within the gravitational sector of CTEG. As illustrated in Fig. 4, if \(\Lambda<0\) and \(\lambda>0\) then \(\tilde{U}(\tilde{\omega})\) can become a hilltop potential for \(\tilde{\omega}<0\). As part of this work, we demonstrated in Figs. 1 and 2, using various conformal frames, that the torsion condensate is a late-Universe attractor provided \(-2\not\lesssim\lambda/\Lambda\not\lesssim-1\). In this case the asymptotic de Sitter expansion in the late Universe is \(\tilde{H}\to H\to\sqrt{(\lambda+\Lambda)/3}\). In principle, values \(\lambda=4.987\times 10^{-122}{M_{\rm Pl}}^{2}\) and \(\Lambda=-2.141\times 10^{-122}{M_{\rm Pl}}^{2}\), as in Eq. (53), can be found which generate 50 e-folds of inflation in the hilltop slow-roll regime, whilst remaining consistent with current estimates of the accelerated expansion at late times [4]. Yet these parameters are not legitimate for the following reason. As can be seen from Fig. 4, the scale of inflation with these parameters is inseparable from that of the current Hubble number: the early- and late-Universe phenomena cannot be 'resolved'. For the moment therefore, the slow-roll dynamics demonstrated in Fig. 6 are encouraging, but should be viewed _qualitatively_. Equipped with the new reduced theory in Eqs. (57) and (58), our understanding of the classical background phenomena of CTEG in Eq. (1) is unlikely now to be improved by further study. Progress instead rests on the interpretation of CTEG as an effective quantum theory in which torsion and matter are coupled. _"The great revelation perhaps never did come. Instead there were little daily miracles, matches struck unexpectedly in the dark; here was one."_ (To The Lighthouse, Virginia Woolf, 1927) #### iv.2.4 Acknowledgments We are grateful for insightful discussions with Anthony Lasenby, Mike Hobson and Will Handley. C.R. is grateful for the opportunity of the summer internship with the Institute of Astronomy (IoA) which facilitated this work. W.E.V.B. 
is grateful for the kind hospitality of Leiden University and the Lorentz Institute, and the support of Girton College, Cambridge.
2304.05911
Understanding oscillons: standing waves in a ball
Oscillons are localised long-lived pulsating states in the three-dimensional $\phi^4$ theory. We gain insight into the spatio-temporal structure and bifurcation of the oscillons by studying time-periodic solutions in a ball of a finite radius. A sequence of weakly localised {\it Bessel waves} -- nonlinear standing waves with the Bessel-like $r$-dependence -- is shown to extend from eigenfunctions of the linearised operator. The lowest-frequency Bessel wave serves as a starting point of a branch of periodic solutions with exponentially localised cores and small-amplitude tails decaying slowly towards the surface of the ball. A numerical continuation of this branch gives rise to the energy-frequency diagram featuring a series of resonant spikes. We show that the standing waves associated with the resonances are born in the period-multiplication bifurcations of the Bessel waves with higher frequencies. The energy-frequency diagram for a sufficiently large ball displays sizeable intervals of stability against spherically-symmetric perturbations.
N. V. Alexeeva, I. V. Barashenkov, A. A. Bogolubskaya, E. V. Zemlyanaya
2023-04-12T15:29:38Z
http://arxiv.org/abs/2304.05911v1
# Understanding oscillons: standing waves in a ball ###### Abstract Oscillons are localised long-lived pulsating states in the three-dimensional \(\phi^{4}\) theory. We gain insight into the spatio-temporal structure and bifurcation of the oscillons by studying time-periodic solutions in a ball of a finite radius. A sequence of weakly localised _Bessel waves_ -- nonlinear standing waves with the Bessel-like \(r\)-dependence -- is shown to extend from eigenfunctions of the linearised operator. The lowest-frequency Bessel wave serves as a starting point of a branch of periodic solutions with exponentially localised cores and small-amplitude tails decaying slowly towards the surface of the ball. A numerical continuation of this branch gives rise to the energy-frequency diagram featuring a series of resonant spikes. We show that the standing waves associated with the resonances are born in the period-multiplication bifurcations of the Bessel waves with higher frequencies. The energy-frequency diagram for a sufficiently large ball displays sizeable intervals of stability against spherically-symmetric perturbations. ## I Introduction Repeated expansions and contractions of spherically-symmetric vacuum domains were observed [1; 2] in computer simulations of the \(\phi^{4}\) equation, \[\Phi_{tt}-\Delta\Phi-\Phi+\Phi^{3}=0. \tag{1}\] More accurate numerical studies [3] revealed the formation of long-lived pulsating structures of large amplitude and nearly unchanging width. These structures -- dubbed oscillons in Ref [4] -- have turned out to be of interest in several cosmological contexts, including the dynamics of inflationary reheating, symmetry-breaking phase transitions, and false vacuum decay [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Oscillons have been discovered in the planar Abelian Higgs theory [23; 24], Einstein-Klein-Gordon equations [25; 26; 27; 28; 29; 30], axion models [31; 32; 33; 34; 35], string phenomenology [36; 37; 38] and bosonic sector of the standard model [39; 40; 41; 42]. The oscillon's quantum radiation was evaluated in [43; 44] and the impact of fermionic corrections was considered in [45]. Oscillatory localised structures (known as \(\mathcal{I}\)-balls in that context) feature prominently in studies of the adiabatic invariant in theories without electric or topological charge [46; 47; 48; 49]. Considerable progress in the understanding of the oscillon properties was achieved through the state-of-the-art computer simulations [50; 51; 4; 5] and numerical Fourier analysis [50; 52]. Most importantly, the authors of Ref [52] demonstrated the existence of periodic solutions with frequencies filling the entire \((0,\omega_{0})\) interval. (Here \(\omega_{0}\) is the frequency of spatially uniform small-amplitude oscillations about the vacuum.) The solutions in question have exponentially localised cores and oscillatory tails, with the tail amplitudes decaying in proportion to \(r^{-1}\). The authors of Ref [52] have interpreted the evolution of oscillons as an adiabatic motion in the parameter space of those "quasibreathers". At the same time, theoretical arguments produced estimates for the oscillon's energy, radius, frequency, core amplitude, and lifetime [53; 54]. These were based on a heuristic combination of linear radiation analysis and a single-mode variational model [53; 54; 55; 5]. A refined perturbation expansion of the small-amplitude oscillons [56] is also worth to be mentioned. 
The aim of the present study is to shed further light on the structure and resonant properties of the oscillon by examining periodic standing waves in a ball of a large but finite radius. To make it more precise, let \(\Phi(r,t)\) be a spherically-symmetric solution of equation (1) approaching \(\Phi_{0}=-1\) (one of two vacuum solutions) as \(r\to\infty\). The difference \[\phi=\Phi-\Phi_{0}\] obeys \[\phi_{tt}-\phi_{rr}-\frac{2}{r}\phi_{r}+2\phi-3\phi^{2}+\phi^{3}=0.\] (2a) Instead of searching for solutions of the equation ( 2a ) vanishing at infinity, we consider solutions satisfying the boundary conditions \[\phi_{r}(0,t)=\phi(R,t)=0\] (2b) with a large \[R\]. (The first condition in ( 2b ) ensures the regularity of the Laplacian at the origin.) One more boundary condition stems from the requirement of periodicity with some \[T\] : \[\phi(r,T)=\phi(r,0). \tag{2c}\] The periodic standing waves are characterised by their energy \[E=4\pi\int_{0}^{R}\left(\frac{\phi_{t}^{2}}{2}+\frac{\phi_{r}^{2}}{2}+\phi^{ 2}-\phi^{3}+\frac{\phi^{4}}{4}\right)r^{2}dr \tag{3}\] and frequency \[\omega=\frac{2\pi}{T}. \tag{4}\] If the solution with frequency \(\omega\) does not change appreciably as \(R\) is increased -- in particular, if the energy (3) does not change -- this standing wave provides a fairly accurate approximation for the periodic solution in an infinite space. In what follows, we present results of numerical and asymptotic analysis of the boundary-value problem (2). Numerically, we employed a predictor-corrector algorithm with a newtonian iteration to continue solutions in \(\omega\)[57]. To classify the stability of the resulting standing waves against spherically-symmetric perturbations we considered the linearised equation \[y_{tt}-y_{rr}-\frac{2}{r}y_{r}-y+3(\phi-1)^{2}y=0 \tag{5}\] with the boundary conditions \(y_{r}(0,t)=y(R,t)=0\). The solution \(\phi(r,t)\) is deemed stable if all its Floquet multipliers lie on the unit circle \(|\zeta|=1\) and unstable if there are multipliers outside the circle [58; 59]. The monotonically growing instability is associated with a pair of real multipliers, \(\zeta\) and \(1/\zeta\); the oscillatory instability is characterised by a complex quadruplet: \(\zeta,1/\zeta,\zeta^{*},1/\zeta^{*}\). The paper is organised into five sections. In the next section we establish the existence of a sequence of standing waves with \(n-1\) nodes (\(n=1,2,...\)) and no clearly defined core. These Bessel-like patterns are nonlinear descendants of linear standing waves in the ball. The subsequent asymptotic analysis (section III) focusses on the evolution of the \(n=1\) Bessel wave as its frequency is decreased to below the frequency of the spatially uniform oscillations. Further frequency reduction is carried out using numerical continuation; the resulting resonant energy-frequency diagram is presented in section IV. We consider the spatiotemporal structure of the resonant solutions and demonstrate that they are born in the period-doubling bifurcations of the \(n>0\) Bessel waves. Stability of the standing waves is classified in the same section. Finally, section V summarises results of this study. ## II Birth of the Bessel Wave We start our analysis by considering the emergence of a standing wave from the zero solution of equation (2a). 
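The Floquet classification described above can be sketched as follows; the matrix \(A(t)\) below is a small placeholder (not the radial discretisation of Eq. (5)) and serves only to illustrate how the monodromy matrix and the multipliers \(\zeta\) are obtained in practice.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2.0 * np.pi            # period of the linearised system

def A(t):
    # Placeholder 2x2 T-periodic matrix (a Mathieu-type example); in the
    # actual computation this would be the discretised operator of Eq. (5).
    return np.array([[0.0, 1.0], [-(1.0 + 0.3 * np.cos(t)), 0.0]])

def rhs(t, y):
    Y = y.reshape(2, 2)
    return (A(t) @ Y).ravel()

# Integrate the identity matrix over one period to obtain the monodromy matrix.
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(M)

# Stable if all multipliers lie on the unit circle |zeta| = 1 (within tolerance).
stable = np.all(np.abs(multipliers) <= 1.0 + 1e-6)
print("Floquet multipliers:", multipliers, "-> stable" if stable else "-> unstable")
```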
The small-amplitude standing wave can be constructed as a power series \[\phi=\epsilon\phi_{1}+\epsilon^{2}\phi_{2}+\epsilon^{3}\phi_{3}+..., \tag{6}\] where the coefficients \(\phi_{n}\) are functions of \(x\) and a hierarchy of time scales \(\mathcal{T}_{0}=t\), \(\mathcal{T}_{1}=\epsilon t\), \(\mathcal{T}_{2}=\epsilon^{2}t\),.... In the limit \(\epsilon\to 0\) the time scales become independent; hence \[\frac{\partial^{2}}{\partial t^{2}}=\frac{\partial^{2}}{\partial\mathcal{T}_{ 0}^{2}}+2\epsilon\frac{\partial}{\partial\mathcal{T}_{0}}\frac{\partial}{ \partial\mathcal{T}_{1}}+\epsilon^{2}\left(\frac{\partial^{2}}{\partial \mathcal{T}_{1}^{2}}+2\frac{\partial}{\partial\mathcal{T}_{0}}\frac{\partial }{\partial\mathcal{T}_{2}}\right)+...\] Substituting the above expansions in (2a) we set to zero coefficients of like powers of \(\epsilon\). The solution to the order-\(\epsilon\) equation, satisfying the boundary conditions \(\partial_{r}\phi_{1}(0,t)=\phi_{1}(R,t)=0\), is \[\phi_{1}=\left(Ae^{i\Omega^{(n)}\mathcal{T}_{0}}+c.c.\right)f_{1}^{(n)}(r), \tag{7}\] where \[\Omega^{(n)}=\sqrt{\omega_{0}^{2}+\left(k^{(n)}\right)^{2}}, \quad\omega_{0}=\sqrt{2}, \tag{8}\] \[f_{1}^{(n)}=\frac{\sin(k^{(n)}r)}{r},\quad k^{(n)}=\frac{\pi}{R }n, \tag{9}\] \(n=1,2,...\), and \(c.c.\) stands for the complex conjugate of the immediately preceding term. The amplitude \(A\) is slowly changing in time: \(A=A(\mathcal{T}_{1},\mathcal{T}_{2},...)\). Since the localised mode (9) has the form of the spherical Bessel function, we will be referring to solutions branching off the zero solution at \(\omega=\Omega^{(n)}\) as "Bessel waves". In equation (8), \(\omega_{0}\) demarcates the endpoint of the continuous spectrum of frequencies in the ball of an infinite radius. This endpoint defines a natural frequency scale that will regularly occur in the following analysis. The order-\(\epsilon^{2}\) solution, satisfying \(\partial_{r}\phi_{2}(0,t)=\phi_{2}(R,t)=0\), is given by \[\phi_{2}=\left(3A^{2}e^{2i\Omega^{(n)}\mathcal{T}_{0}}+c.c.\right)f_{2}^{(n)}( r)+6|A|^{2}g_{2}^{(n)}(r), \tag{10}\] where \[f_{2}^{(n)}=\frac{1}{\kappa^{(n)}r}\left(p^{(n)}(r)-\frac{\sin( \kappa^{(n)}r)}{\sin(\kappa^{(n)}R)}\,p^{(n)}(R)\right),\] \[g_{2}^{(n)}=\frac{1}{\sqrt{2}r}\left(q^{(n)}(r)-\frac{\sinh( \sqrt{2}r)}{\sinh(\sqrt{2}R)}\,q^{(n)}(R)\right),\] \[p^{(n)}(r)=\int_{0}^{r}\sin\left[\kappa^{(n)}(r^{\prime}-r) \right]\frac{\sin^{2}(k^{(n)}r^{\prime})}{r^{\prime}}dr^{\prime},\] \[q^{(n)}(r)=\int_{0}^{r}\sinh\left[\sqrt{2}(r^{\prime}-r)\right] \frac{\sin^{2}(k^{(n)}r^{\prime})}{r^{\prime}}dr^{\prime} \tag{11}\] and \[\kappa^{(n)}=\sqrt{6+4(k^{(n)})^{2}}.\] The solution (10) exists provided the amplitude satisfies the nonsecurity constraint \(\partial A/\partial\mathcal{T}_{1}=0\). We are also assuming that \(\kappa^{(n)}\neq k^{(m)}\), \(m=1,2,...\). Finally, the order \(\epsilon^{3}\) gives an equation for \(\phi_{3}\): \[\left(\frac{\partial^{2}}{\partial\mathcal{T}_{0}^{2}}-\nabla^{2}+2\right) \phi_{3}=-2\frac{\partial^{2}\phi_{1}}{\partial\mathcal{T}_{0}\partial \mathcal{T}_{2}}+6\phi_{1}\phi_{2}-\phi_{1}^{3}. \tag{12}\] The solvability condition is \[i\Omega^{(n)}R\frac{\partial A}{\partial\mathcal{T}_{2}}+3\sigma^{(n)}|A|^{2}A =0, \tag{13}\] where \[\sigma^{(n)}=\int_{0}^{R}\left[(f_{1}^{(n)})^{2}-12g_{2}^{(n)}-6f_{2}^{(n)} \right]\left(f_{1}^{(n)}r\right)^{2}dr, \tag{14}\] and we have used \(\partial A/\partial{\cal T}_{1}=0\). The values of the integral (14) with varied \(n\) are presented graphically in Fig 1. 
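The sign structure reported in Fig 1 can be checked by direct quadrature of the integral (14) through the auxiliary functions (11), as in the sketch below; the radius \(R=5\) and the grid are illustrative choices (larger \(R\), as in Fig 1(a), requires care with the growing hyperbolic terms), and the quoted formula for \(n_{s}(R)\) is not assumed.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

def sigma(n, R=5.0, npts=40001):
    """Direct quadrature of the integral (14) via the functions of Eq. (11)."""
    r = np.linspace(1e-8, R, npts)
    k = np.pi * n / R
    kappa = np.sqrt(6.0 + 4.0 * k**2)
    w = np.sin(k * r)**2 / r                         # weight sin^2(k r')/r'

    # p(r) = int_0^r sin[kappa(r'-r)] w dr', split by the angle-addition rule.
    S = cumulative_trapezoid(np.sin(kappa * r) * w, r, initial=0.0)
    C = cumulative_trapezoid(np.cos(kappa * r) * w, r, initial=0.0)
    p = np.cos(kappa * r) * S - np.sin(kappa * r) * C

    # q(r) = int_0^r sinh[sqrt(2)(r'-r)] w dr', same trick with sinh/cosh.
    Sh = cumulative_trapezoid(np.sinh(np.sqrt(2.0) * r) * w, r, initial=0.0)
    Ch = cumulative_trapezoid(np.cosh(np.sqrt(2.0) * r) * w, r, initial=0.0)
    q = np.cosh(np.sqrt(2.0) * r) * Sh - np.sinh(np.sqrt(2.0) * r) * Ch

    f1 = np.sin(k * r) / r
    f2 = (p - np.sin(kappa * r) / np.sin(kappa * R) * p[-1]) / (kappa * r)
    g2 = (q - np.sinh(np.sqrt(2.0) * r) / np.sinh(np.sqrt(2.0) * R) * q[-1]) \
         / (np.sqrt(2.0) * r)

    return trapezoid((f1**2 - 12.0 * g2 - 6.0 * f2) * (f1 * r)**2, r)

# sigma^(n) changes sign at n = n_s(R), cf. Fig 1(b); print a few values.
for n in (1, 3, 6, 9, 12):
    print(n, sigma(n))
```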
The quantity \(\sigma^{(n)}\) is determined to be negative for all \(n\leq n_{s}\) and positive for \(n>n_{s}\), where \(n_{s}\) is an integer dependent on \(R\). When \(R\) is large enough, a fairly accurate approximation for \(n_{s}(R)\) is given by the integer part of \((\sqrt{30}/\pi)R\). The general solution of the amplitude equation (13) is \[A=\exp\left(i\frac{3\sigma^{(n)}}{\Omega^{(n)}R}{\cal T}_{2}\right), \tag{15}\] where the initial value was set equal to 1. (There is no loss of generality in setting \(A(0)\) to 1 as it enters \(\phi_{n}\) only in combination \(\epsilon A(0)\), where \(\epsilon\) is free to vary.) Thus, the fundamental frequency of the Bessel wave with amplitude \(\epsilon\), branching off the trivial solution \(\phi=0\) at the point \(\omega=\Omega^{(n)}\), is \[\omega=\Omega^{(n)}+\frac{3\sigma^{(n)}}{\Omega^{(n)}R}\epsilon^{2}+.... \tag{16}\] Note that the nonlinear frequency shift is negative (\(\omega<\Omega^{(n)}\)) for all \(n\leq n_{s}\) and positive for \(n>n_{s}\). The relation (16) implies that our \(\epsilon\)-expansion is, in fact, an expansion in powers of the detuning from the resonant frequency, \(|\omega-\Omega^{(n)}|\). The energy (3) of the series solution (6) is \[E^{(n)}(\epsilon)=4\pi R\left(\Omega^{(n)}\right)^{2}\epsilon^{2}+O(\epsilon ^{4}). \tag{17}\] Eliminating \(\epsilon^{2}\) between (16) and (17) we can express the energy of the Bessel wave as a function of its frequency: \[E^{(n)}(\omega)=\frac{4\pi R^{2}\left(\Omega^{(n)}\right)^{3}}{3\sigma^{(n)}} \left(\omega-\Omega^{(n)}\right)+O\left(\left(\omega-\Omega^{(n)}\right)^{2} \right). \tag{18}\] This is an equation of a ray emanating from the point \((\Omega^{(n)},0)\) on the \((\omega,E)\), \(E>0\), half-plane. The slope of the ray is negative for all \(n\leq n_{s}(R)\) and positive for \(n>n_{s}(R)\). All solutions of equation (13) are stable. (Trajectories form concentric circles on the \((\mbox{Re}A,\mbox{Im}A)\) phase plane.) The asymptotic construction of the Bessel wave is corroborated by the numerical analysis of the boundary-value problem (2). A numerically-continued branch starting with the trivial solution \(\phi=0\) at \(\omega=\Omega^{(n)}\) consists of standing waves with \(n-1\) nodes inside the interval \((0,R)\). An important feature of these solutions is their weak localisation. Even when the energy of the Bessel wave is high -- that is, even when the solution is far from its linear limit (9) -- the wave does not have an exponentially localised core and the amplitude of the damped sinusoid \(\phi(r,t)\) remains of order \(R^{-1}\) as \(r\) approaches \(R\). (See Fig 2.) Consistently with the asymptotic considerations, the numerically continued Bessel waves are stable near their inception points and only lose stability as their energies become high enough. (For details of the corresponding period-doubling bifurcation see section IV.1.) The continuation starting at the lowest of the resonance values, \(\omega=\Omega^{(1)}\), produces a stable branch with a steep negative slope (Fig 3). The steep growth of the energy is due to the small absolute value of \(\sigma^{(1)}\) in (18) while the negativity of \(dE^{(1)}/d\omega\) is due to \(n_{s}(100)\) being greater than 1. As the solution is continued to lower values of \(\omega\), the function \(E(\omega)\) reaches a maximum and starts decreasing. Not unexpectedly, the asymptotic expansion in powers of the small detuning \(|\omega-\Omega^{(1)}|\) does not capture the formation of the energy peak. 
Before turning to an asymptotic expansion about a different frequency value, we make a remark on the nomenclature of numerical solutions. Assume that the computation interval \((0,T)\) includes an integer number of fundamental periods of a solution of the boundary-value Figure 1: (a) The integral (14) for \(R=40\) and \(R=100\). The function \(\sigma^{(n)}\) is negative for all \(n\leq n_{s}(R)\) and positive for \(n>n_{s}(R)\). (b) The integer \(n_{s}\) (marked by circles) for a sequence of \(R\) values. For \(R\geq 2\), the function \(n_{s}(R)\) is well approximated by \(n_{s}=(\sqrt{30}/\pi)R\) (shown by the straight line). The inset blows up the interval \(2\leq R<10\). problem (2): \(T=mT_{\rm f}\), \(m>1\). Equation (4) gives then a formal frequency \(\omega=\omega_{\rm f}/m\), where \(\omega_{\rm f}=2\pi/T_{\rm f}\) is the fundamental frequency of the wave. In this case the periodic solution \(\phi(r,t)\) will be referred to as the \(1/m\) undertone of the standing wave. It is important to emphasise that the only difference between a standing wave and its undertone is the length of the interval \((0,T)\) that we use to determine the respective solution -- and hence its formal frequency (4). For example, the \(n\)-th Bessel wave is born with the frequency \(\omega=\Omega^{(n)}\) while its \(1/2\) undertone is born with \(\omega=\Omega^{(n)}/2\). Basically, the \(1/2\) undertone of the periodic oscillation \(\phi(r,t)\) is the oscillation itself, where we skip every other beat. ## III Small-amplitude wave in a large ball ### Inverse radius as a small parameter In order to account for the energy peak in Fig 3 and track the \(E(\omega)\) curve over the point of maximum analytically, we need an asymptotic expansion of a different kind. Instead of assuming the proximity to the resonant frequency \(\omega=\Omega^{(1)}\), we will zoom in on the neighbourhood of the frequency \(\omega_{0}\) corresponding to the uniform oscillations in an infinitely large ball. Our approach is a relative of the Lindstedt-Poincare method utilised in the context of the infinite space in Ref [60; 61] and elucidated in [56]. (The method was pioneered in the one-dimensional setting [62; 63].) We construct the small-amplitude solution in a ball of a large -- yet finite -- radius. Instead of the techniques used in [60; 61; 62; 63; 56], we employ a multiple scale expansion. This approach affords information on the spectrum of small perturbations of the standing wave, in addition to the standing wave itself. The inverse radius \(\epsilon=R^{-1}\) provides a natural small parameter. We expand \(\phi\) as in (6), introduce the sequence of slow times \(\mathcal{T}_{n}\) and, in addition, define a hierarchy of spatial scales \(\mathbf{X}_{n}=\epsilon^{n}\mathbf{x}\). Hence \[\nabla=\nabla_{0}+\epsilon\nabla_{1}+\epsilon^{2}\nabla_{2}+...,\quad\nabla_{ n}=\frac{\partial}{\partial\mathbf{X}_{n}}.\] These expansions are substituted in the equation (2a) where, for ease of computation, we drop the requirement of spherical symmetry: \[\phi_{tt}-\nabla^{2}\phi+2\phi-3\phi^{2}+\phi^{3}=0. \tag{19}\] At the order \(\epsilon^{1}\), we choose a spatially homogeneous solution \[\phi_{1}=Ae^{i\omega_{0}\mathcal{T}_{0}}+c.c. \tag{20}\] In (20), the amplitude \(A\) does not depend on \(\mathbf{X}_{0}\) or \(\mathcal{T}_{0}\) but may depend on the "slower" variables \(\mathbf{X}_{1},\mathbf{X}_{2},...\) and \(\mathcal{T}_{1},\mathcal{T}_{2},...\). 
The order \(\epsilon^{2}\) gives \[\phi_{2}=3|A|^{2}-\frac{1}{2}A^{2}e^{2i\omega_{0}\mathcal{T}_{0}}+c.c., \tag{21}\] and we had to impose the constraint \(\partial A/\partial\mathcal{T}_{1}=0\). Pro Figure 2: A snapshot of the Bessel wave with high energy. This solution was obtained by the numerical continuation of the trivial solution from \(\omega=\Omega^{(114)}\), in a ball with \(R=150\). (For the entire Bessel branch see Fig 7.) The wave is depicted by a solid line while its dashed envelope highlights the absence of a well-defined core. Note that only a portion of the \((0,150)\) interval is shown. Figure 3: The \(E(\omega)\) dependence near the point of inception of the standing wave in a ball with \(R=100\). Blue (thick) curve: result of numerical continuation. Brown (thin) curve: asymptotic approximation exploiting \(R^{-1}\) as a small parameter. Stable solutions are marked by the solid and unstable ones by the dashed lines. ceeding to the cubic order in \(\epsilon\) we obtain \[\left(\frac{\partial^{2}}{\partial\mathcal{T}_{0}^{2}}-\nabla_{0}^{2 }+2\right)\phi_{3}=-4A^{3}e^{3i\omega_{0}\mathcal{T}_{0}}+c.c.\] \[+ \left(\nabla_{1}^{2}A-2i\omega_{0}\frac{\partial A}{\partial \mathcal{T}_{2}}+12|A|^{2}A\right)e^{i\omega_{0}\mathcal{T}_{0}}+c.c. \tag{22}\] Setting to zero the secular term in the second line of (22), we arrive at the amplitude equation \[-2i\omega_{0}\frac{\partial A}{\partial\mathcal{T}_{2}}+\nabla_{1}^{2}A+12|A| ^{2}A=0.\] (23a) The boundary condition \[\phi(R,t)=0\] translates into \[A(\mathbf{X}_{1},\mathcal{T}_{2})|_{|\mathbf{X}_{1}|=1}=0. \tag{23b}\] ### Schrodinger equation in a finite ball A family of spherically-symmetric solutions of (23) is given by \[A=e^{i\omega_{2}\mathcal{T}_{2}}\mathcal{R}_{\mu}(r_{1}), \tag{24}\] where \(r_{1}=\sqrt{\mathbf{X}_{1}^{2}}\) and \(\mathcal{R}_{\mu}(\rho)\) solves the boundary-value problem \[\mathcal{R}^{\prime\prime}+\frac{2}{\rho}\mathcal{R}^{\prime}+ \mu\mathcal{R}+12\mathcal{R}^{3} =0, \tag{25a}\] \[\mathcal{R}^{\prime}(0)=\mathcal{R}(1) =0, \tag{25b}\] with \(\mu=2\omega_{0}\omega_{2}\). (In (25), the prime stands for the derivative with respect to \(\rho\).) In what follows we confine our attention to the nodeless (everywhere positive) solution \(\mathcal{R}_{\mu}(\rho)\) (Fig 4). Of particular importance will be its norm squared, \[N(\mu)=\int_{0}^{1}\mathcal{R}_{\mu}^{2}(\rho)\rho^{2}d\rho. \tag{26}\] The nodeless solution \(\mathcal{R}_{\mu}(\rho)\) exists for all \(\mu\) with \(-\infty<\mu<\pi^{2}\). As \(\mu\to\pi^{2}\), a perturbation argument gives \[\mathcal{R}_{\mu}(\rho) =\alpha\sqrt{\pi^{2}-\mu}\,\frac{\sin(\pi\rho)}{\rho}+O\left(( \pi^{2}-\mu)^{\frac{3}{2}}\right), \tag{27a}\] \[\alpha^{2} =\frac{1}{12\pi}\frac{1}{2\mathrm{Si}(2\pi)-\mathrm{Si}(4\pi)}=1.973\times 10^{-2}, \tag{27b}\] so that the norm decays to zero: \[N(\mu)=\frac{\alpha^{2}}{2}(\pi^{2}-\mu)+O\left(\left(\pi^{2}-\mu\right)^{2} \right).\] As \(\mu\to-\infty\), we have \[\mathcal{R}_{\mu}(\rho)\to\sqrt{-\mu}\,S\left(\sqrt{-\mu}\rho\right), \tag{28}\] where \(S(\rho)\) is the nodeless solution of the boundary value problem \[S^{\prime\prime}+\frac{2}{\rho}S^{\prime}-S+12S^{3} =0, \tag{29a}\] \[S^{\prime}(0)=S(\infty) =0. \tag{29b}\] Accordingly, the norm (26) decays to zero in the latter limit as well: \[N(\mu)\to\frac{1}{\sqrt{-\mu}}\int_{0}^{\infty}S^{2}(\rho)\rho^{2}d\rho\quad \text{as }\mu\to-\infty.\] The numerical analysis of the problem (25) verifies that \(N(\mu)\) has a single maximum, at \(\mu_{c}=-0.225\). 
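One way to obtain \(\mathcal{R}_\mu(\rho)\) and the norm \(N(\mu)\) numerically is a standard shooting method. The sketch below is a minimal illustration rather than the procedure used for the figures: it assumes that the first sign change of \(\mathcal{R}(1;a)\) as the initial amplitude \(a=\mathcal{R}(0)\) is increased corresponds to the nodeless branch, and it scans a coarse grid of \(\mu\) values to locate the maximum of \(N(\mu)\) (reported in the text as \(\mu_c=-0.225\)).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

RHO0 = 1e-6                               # start slightly off the origin

def integrate(a, mu, dense=False):
    def rhs(rho, y):
        Rv, Rp = y
        return [Rp, -2.0 * Rp / rho - mu * Rv - 12.0 * Rv**3]   # equation (25a)
    return solve_ivp(rhs, (RHO0, 1.0), [a, 0.0], rtol=1e-8, atol=1e-11,
                     dense_output=dense)

def endpoint(a, mu):
    return integrate(a, mu).y[0, -1]      # R(1; a)

def norm_nodeless(mu, amax=3.0, steps=120):
    """Shoot on the initial amplitude a and return N(mu) for the nodeless solution."""
    aa = np.linspace(1e-3, amax, steps)
    vals = [endpoint(a, mu) for a in aa]
    for a1, a2, v1, v2 in zip(aa[:-1], aa[1:], vals[:-1], vals[1:]):
        if v1 * v2 < 0.0:                 # first sign change: nodeless branch (assumed)
            a = brentq(endpoint, a1, a2, args=(mu,))
            rho = np.linspace(RHO0, 1.0, 2001)
            Rv = integrate(a, mu, dense=True).sol(rho)[0]
            return np.trapz(Rv**2 * rho**2, rho)               # equation (26)
    return np.nan

mus = np.linspace(-2.0, 9.0, 23)
norms = np.array([norm_nodeless(m) for m in mus])
print("N(mu) is largest near mu =", mus[np.nanargmax(norms)])
```

The brentq refinement assumes that the bracketing grid in \(a\) is fine enough to isolate the first zero of \(\mathcal{R}(1;a)\); a finer scan in \(\mu\) would be needed to resolve \(\mu_c\) beyond the grid spacing used here.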
Thus we have constructed an asymptotic standing-wave solution of equation (2), parametrised by its frequency \(\omega\): \[\phi=\frac{2}{R}\cos(\omega t)\mathcal{R}_{\mu}\left(\frac{r}{R}\right)+O(R^ {-2}),\] (30a) where \[\mu=2\omega_{0}R^{2}(\omega-\omega_{0}). \tag{30b}\] Substituting (30a) in (3) we obtain the corresponding energy: \[E(\omega)=16\pi RN(\mu)+O(R^{-1}). \tag{31}\] The dependence (31) is shown by the thin line in Fig 3. Unlike the expansion in powers of the frequency detuning (section II), the expansion in powers of \(R^{-1}\) is seen to reproduce the energy peak. The peak of the curve \(E(\omega)\) is a scaled version of the peak of \(N(\mu)\). Finally, we note that the function \(\mathcal{R}_{\mu}(\rho)\) with negative \(\mu\) has an exponentially localised core, with the width of the order \(\frac{1}{\sqrt{-\mu}}\). By contrast, solutions with \(\mu>0\) approach zero at a nearly uniform rate (Fig 4). ### Stability of small-amplitude standing wave By deriving the amplitude equation (23) the analysis of stability of the time-periodic standing wave has been reduced to the stability problem for the stationary solution Figure 4: \(\mathcal{R}_{\mu}(\rho)\): the nodeless solution of the boundary-value problem (25). As \(\mu\) changes from negative to positive values, the exponentially localised solution gives way to a function without a clearly defined core. of the 3D nonlinear Schrodinger equation. The leading order of a linear perturbation to the solution (30) is given by \[\delta\phi=\left\{e^{i\omega_{0}\left(1+\frac{\rho}{4R^{2}}\right)t} \left[\mathcal{F}\left(\frac{r}{R}\right)+i\mathcal{G}\left(\frac{r}{R}\right) \right]+c.c.\right\}\] \[\times\exp\left(\frac{\lambda}{2\omega_{0}R^{2}}t\right), \tag{32}\] where \(\mathcal{F}=\mathcal{F}(\rho)\) and \(\mathcal{G}=\mathcal{G}(\rho)\) are two components of an eigenvector of the symplectic eigenvalue problem \[L_{0}\mathcal{G}=-\lambda\mathcal{F},\quad L_{1}\mathcal{F}=\lambda\mathcal{G}. \tag{33}\] In (33), \(L_{0}\) and \(L_{1}\) are a pair of radial operators \[L_{0}=-\frac{d^{2}}{d\rho^{2}}-\frac{2}{\rho}\frac{d}{d\rho}- \mu-12\mathcal{R}_{\mu}^{2}(\rho),\] \[L_{1}=-\frac{d^{2}}{d\rho^{2}}-\frac{2}{\rho}\frac{d}{d\rho}- \mu-36\mathcal{R}_{\mu}^{2}(\rho), \tag{34}\] with the boundary conditions \[\mathcal{F}^{\prime}(0)=\mathcal{G}^{\prime}(0)=\mathcal{F}(1)=\mathcal{G}(1)=0. \tag{35}\] The lowest eigenvalue of the Schrodinger operator \(L_{0}\) is zero, with the associated eigenfunction given by \(\mathcal{R}_{\mu}(\rho)\). Numerical analysis reveals that the operator \(L_{1}\) has a single negative eigenvalue. This is the case of applicability of the Vakhitov-Kolokolov criterion [64; 65; 66]. The criterion guarantees the stability of the solution (24) if \(dN/d\mu<0\) and instability otherwise. Numerical methods confirm that in the region \(\mu_{c}<\mu<\pi^{2}\), the eigenvalue problem (33)-(35) does not have any real eigenvalues apart from a pair of zeros resulting from the U(1) invariance of (23a). (We remind that \(\mu_{c}\) is the point of maximum of the curve \(N(\mu)\); \(\mu_{c}=-0.225\).) As \(\mu\) is decreased through \(\mu_{c}\), a pair of opposite pure-imaginary eigenvalues \(\pm\lambda_{0}(\mu)\) converges at the origin and diverges along the positive and negative real axis. As \(\mu\to-\infty\), the scaling (28) gives \(\lambda_{0}(\mu)\to-5.50\mu\), where \(5.50\) is the symplectic eigenvalue associated with the solution of the infinite domain problem (29). 
The upshot of our asymptotic analysis is that there is a continuous family of standing-wave solutions in the ball of a large radius \(R\), with frequencies \(\omega\) extending down from \(\Omega^{(1)}\). The function \(E(\omega)\) features a sharp peak at \(\omega_{c}=\omega_{0}+\mu_{c}(2\omega_{0}R^{2})^{-1}\), with the standing waves to the right of the peak (where \(dE/d\omega<0\)) being stable and those to the left (where \(dE/d\omega>0\)) unstable. (See the thin curve in Fig 3.) ### Continuation over the energy peak The large-\(R\) perturbation expansion with \(\omega\) close to \(\omega_{0}\) was validated by the numerical study of the boundary-value problem (2). We continued the periodic solution \(\phi(r,t)\) to lower \(\omega\) and used the linearised equation (5) to evaluate the associated monodromy matrix. In agreement with the asymptotic considerations, a pair of real Floquet multipliers (\(\zeta\) and \(\zeta^{-1}\)) was seen to leave the unit circle as \(\omega\) passed through the point of maximum energy. Consequently, the left slope of the energy peak in Fig 3 does indeed correspond to unstable standing waves. Fig.5 documents the solution as it is continued from \(\Omega^{(1)}\) over the energy peak. Consistently with the asymptotic expression (30), the peripheral field values \(\phi(r_{p},t)\), where \(r_{p}\sim R\), oscillate at the same frequency \(\omega\) as the amplitude at the origin, \(\phi(0,t)\). This agreement is recorded on either side of the energy peak; see panel pairs (b) and (c), (e) and (f), (h) and (i). As \(\omega\) is reduced below the point of maximum energy, the Bessel-function profile (30), (27) gives way to an exponentially localised shape. This metamorphosis agrees with the evolution of the asymptotic profile \(\mathcal{R}_{\mu}(\rho)\) as \(\mu\) is taken from positive to negative values. The difference in the type of decay is clearly visible in panels (a), (d) and (g) of Fig 5. Lowering \(\omega\) even further sees the formation of a small-amplitude undulating tail (Fig 5(j)). At the same time, the oscillation frequency in the peripheral region switches from the frequency of the core of the standing wave to its second harmonic (compare panel (l) to (k)). Figure 5: Top to bottom row: standing wave as it is continued from \(\Omega^{(1)}\) to lower \(\omega\) in Fig 3. (The ball radius \(R=100\).) Left column: spatial profile at a particular time, \(\phi(r,0)\). Middle and right column: temporal behaviour at the central and a peripheral point, \(\phi(0,t)\) and \(\phi(90,t)\). In the panel legends, \(\nu\) is the normalised frequency: \(\nu=\omega/\omega_{0}\). The same notation is used in Figs 8, 10, 11 It may seem that the presence of the second-harmonic tail is at variance with the uniformly-first harmonic pattern (30). There is no contradiction, in fact. As we take \(\omega\) far enough from \(\omega_{0}\), the assumption \(\phi=O(R^{-1})\) becomes invalid and the expression (30) stops providing any accurate approximation to the solution \(\phi(r,t)\). Why does the formation of the second-harmonic tail require taking \(\omega\) far from \(\omega_{0}\)? 
The reason is that when \(\omega\) is close to \(\omega_{0}\), the core of the exponentially-localised standing wave is much wider than the wavelength of the second-harmonic radiation: \[\frac{1}{\sqrt{2\omega_{0}(\omega_{0}-\omega)}}\gg\frac{2\pi}{\sqrt{3\omega_{0 }^{2}}}.\] (Here we took advantage of the fact the characteristic width of the bell-shaped function \(\mathcal{R}_{\mu}(\rho)\) is \(1/\sqrt{-\mu}\) and used (30b) to express \(\mu\).) As a result, the radiation coupling to the core is weak and its amplitude is exponentially small. Thus when \(\omega\) is close to \(\omega_{0}\), we can simply not discern the amplitude of the second harmonic against the first-harmonic oscillation. ### Small-amplitude wave in the infinite space It is instructive to comment on the \(R\to\infty\) limit for which the small-amplitude solution is available in the earlier literature [56; 60; 61]. In the case of the infinitely large ball our asymptotic expansion remains in place but \(\epsilon\) becomes a formal expansion parameter, not tied to \(R\). Without loss of generality, we can let \(\mu=-1\) in equation (25a) while the boundary condition \(\mathcal{R}(1)=0\) should be replaced with \(\mathcal{R}(\infty)=0\). In agreement with [60; 61; 56], the asymptotic solution (6) acquires the form \[\phi=2\epsilon\cos\left[\omega_{0}\left(1-\frac{\epsilon^{2}}{4}\right)t \right]S(\epsilon r)+O(\epsilon^{2}), \tag{36}\] where \(\mathcal{S}(\rho)\) is a nodeless solution of the boundary value problem (29). (For solutions of (29) see [56; 67].) As \(\omega\to\omega_{0}\) (i.e. as \(\epsilon\to 0\)), the energy of the asymptotic solution (36) tends to infinity: \[E=\frac{16\pi}{\epsilon}\int_{0}^{\infty}S^{2}(\rho)\rho^{2}d\rho=\frac{16 \pi}{\epsilon}\times 0.1253. \tag{37}\] Stability or instability of the infinite-space solution is decided by eigenvalues of the symplectic eigenvalue problem (33)-(34) with \(\mu\) set to \(-1\), \(\mathcal{R}_{\mu}(\rho)\) replaced with \(S(\rho)\), and the boundary conditions (35) substituted with \(\mathcal{F}(\infty)=\mathcal{G}(\infty)=0\). The numerical analysis verifies that the resulting symplectic problem has a (single) pair of opposite real eigenvalues \(\lambda=\pm 5.50\). Hence the solution (36) is unstable for any sufficiently small \(\epsilon\). ## IV Resonances in the ball ### Energy-frequency diagram The numerical continuation beyond the peak in Fig 3, from right to left, produces an \(E(\omega)\) curve with what looks like a sequence of spikes. Fig 6(a) depicts this curve for \(R=100\). It also shows an envelope of the family of spikes -- a U-shaped arc that coincides with the \(E(\omega)\) curve everywhere except the neighbourhoods Figure 6: (a) Energy of the standing wave with frequency \(\omega\) in the ball of radius \(R=100\) (a), \(R=40\) (b) and \(R=150\) (c). The \(E(\omega)\) curve features a sequence of sharp spikes that have complex fine structure indiscernible in the figure. Although some branches were only continued to moderate energies, we expect all spikes to extend to the top of the panels. The vertical dashed lines in (a) mark the points \(\omega=\Omega^{(n)}/2\) where \(\Omega^{(n)}\) are the frequencies of the newborn Bessel waves (defined by equation (8)). The fraction next to a spike indicates the order of the Bessel-wave undertone that this spike’s slopes approach (but not necessarily join) at a larger \(E\). (The Bessel undertones are not shown in the figure.) 
In all three panels, the red dashed arc underlying the \(E(\omega)\) curve is the envelope of the family of spikes. For visual clarity, it has been shifted down by a tiny amount from its actual position. of the spikes. In the neighbourhood of each spike, the envelope bounds it from below. Figs 6(b) and (c) compare the density of spikes in the diagrams with different values of \(R\). (Either panel focusses on the right end of the respective diagram where spikes are thin and nonoverlapping.) The number and positions of the spikes are seen to be \(R\)-sensitive. In contrast, the U-shaped envelope does not change appreciably as the radius of the ball is varied. Regardless of \(R\), the U-shaped curve has a single minimum, at \[\omega_{\rm min}=0.967\,\omega_{0}. \tag{38}\] The U-shaped envelope agrees with the energy curve of periodic infinite-space solutions with exponentially localised cores and small-amplitude tails decaying slowly as \(r\to\infty\)[52]. The energy of those nanopterons is defined as the integral (3) where \(R\) is a radius of the core. The nanopteron's energy has a minimum at \(\omega=0.9652\,\omega_{0}\)[52] which is close to our \(\omega_{\rm min}\) in (38). A sequence of vertical dashed lines drawn at \(\omega=\frac{1}{2}\Omega^{(n)}\) in Fig 6(a) is seen to match the sequence of spikes. The correspondence between the two sequences suggests some relation between the spikes and the Bessel waves born at \(\omega=\Omega^{(n)}\). ### Bifurcation unpacked Zooming in on one of the distinctly separate spikes near the right end of the diagram reveals that it is not a mere peak, or projection, on the \(E(\omega)\) curve. As in a proper peak, there are two energy branches that rise steeply from the U-shaped arc but instead of joining together, the left and right "slopes" connect to another curve. This curve turns out to be a Bessel branch -- more precisely, the 1/2 undertone of the Bessel branch emerging from \(\phi=0\) at the frequency \(\omega=\Omega^{(n)}/2\) with some large \(n\) (Fig 7). To appreciate details of the bifurcation, we follow the curve corresponding to the left slope of the "spike" (the blue curve in Fig 7). A standing wave with \((\omega,E)\) located at the base of the "spike" has an exponentially localised core and an oscillatory tail with the amplitude decaying in proportion to \(r^{-1}\) (Fig 8(a)). The \(\phi\)-value at \(r=0\) performs nearly-harmonic oscillations with the fundamental frequency \(\omega=2\pi/T\) (panel (b)) while the tail oscillates at the frequency \(2\omega\) (panel (c)). Moving up the blue curve in Fig 7, the contribution of the second harmonic to the oscillation of the core increases (Fig 8(e)). Eventually, when the curve is about to join the branch of the 1/2 Bessel undertones (shown by the dashed magenta in Fig 7), \(\phi(0,t)\) completes two nearly-identical cycles over the interval \(T=2\pi/\omega\) (Fig 8(h)). The solution does not have any well-defined core (panel (g)), with the central and peripheral values oscillating at the same fundamental frequency \(2\omega\) (panels (h) and (i)). This is exactly the spatio-temporal behaviour of the 1/2 Bessel undertone. Note that the merger of the blue and magenta curves in Fig 7 can be seen as the period-doubling bifurcation of the Bessel wave. As we observed in section II, the \(n\)-th Figure 7: A fragment of the \(E(\omega)\) diagram in the vicinity of a 1:2 resonance in the ball with \(R=150\). The blue and brown curves are two slopes of the “spike”. 
The dashed magenta arc emerging from \(E=0\) at \(\omega=\Omega^{(114)}/2\) is the 1/2 undertone of the \(n=114\)-th Bessel wave, (That is, a point \((\omega,E)\) on this branch represents the Bessel wave with frequency \(2\omega\).) The insets zooming in on the lower sections of the “spike” and Bessel branch aim to emphasise the difference in the origins of the two branches. Figure 8: Top to bottom row: spatial and temporal behaviour of the solution to the problem (2) as it is continued along the blue slope of the “spike” in Fig 7, from the underlying U-arc towards the Bessel curve. Left column: snapshot of \(\phi(r,t)\) at a particular moment of time (\(t=0\)); central column: behaviour of the central value \(\phi(0,t)\); right column: evolution of an asymptotic value \(\phi(r_{p},t)\) with \(r_{p}=140\). All solutions satisfy the boundary condition \(\phi(150,t)=0\). Bessel wave (\(n=1,2,...\)) is stable when its frequency is close enough to \(\Omega^{(n)}\), its inception point. Our numerical analysis indicates that the Bessel wave loses its stability once its energy \(E\) has grown above the period-doubling bifurcation value. A quadruplet of complex Floquet multipliers leaves the unit circle at this point signifying the onset of instability against an oscillatory mode with an additional frequency. While most of the clearly distinguishable, nonoverlapping, spikes result from the 1:2 resonances with the Bessel waves, some correspond to the 1:3, 1:4 or 1:6 resonances. Similar to the 1:2 spikes, an exponentially localised solution at the base of a 1:3, 1:4 or 1:6 projection has a core oscillating at the frequency \(\omega=2\pi/T\) and its second-harmonic tail. As this solution is continued up the slope of its spike, the contribution of higher harmonics to the oscillation of the core and tail increases. Eventually the standing wave switches to the uniform regime where its core and tail oscillate at the same frequency -- \(3\omega\), \(4\omega\) or \(6\omega\). The change of the temporal pattern is accompanied by the transformation of the spatial profile of the wave, from the "core-and-tail" composition to a slowly decaying structure with no clearly defined core. It would be natural to expect this weakly localised solution to merge with the 1/3, 1/4 or 1/6 undertone of a Bessel wave, implying the period multiplication of the latter. Numerically, we do observe the bifurcations with \(m=4\) and 6 while the period-tripling of a Bessel wave is yet to be discovered. ### Higher resonances Fig 9 zooms in on the neighbourhood of \(\omega=\Omega^{(64)}/2\) in the ball of radius \(R=100\). Besides the primary spike pattern recognisable from our earlier Fig 7, the diagram features several thinner vertical projections. These secondary, or "baby", spikes result from resonances with higher harmonics. The magenta curve in Fig 9 comprises the 1/2 undertones of the Bessel wave. This branch and two needle-like secondary projections sprouting up from it represent standing waves without clearly defined cores; see Fig 10. The top row in Fig 10 corresponds to a solution occurring between the two baby spikes; it consists of a pair of identical cycles on the interval \((0,2\pi/\omega)\). The middle row of Fig 10 exemplifies standing waves found on either slope of the "lower" baby spike (spike centred on \(\omega/\omega_{0}=0.86834\)). These include six repeated cycles. 
As \(E\) grows, both slopes of the "lower" spike merge with the branch of the 1/6 undertones of another Bessel branch extending from \(E=0\) (not shown in Fig 9). Finally, in the bottom row of Fig 10 we display a solution that belongs to the secondary projection appearing higher on the Bessel curve (spike centred on \(\omega=0.86741\)). This coreless standing wave oscillates at the frequency \(10\,\omega\). We note that solutions on both slopes of each of the two baby spikes emerging from the Bessel branch are stable. Fig 11 documents standing waves found on the left slope of the primary spike (the blue curve in Fig 9) and secondary spikes emerging from it. The top row illustrates the solution at a point of the primary curve near its merger with the Bessel branch. The structure of this solution is similar to that in the bottom row of Fig 8. The wave does not have a clearly defined core while its central value \(\left.\phi\right|_{r=0}\) and a slowly decaying tail oscillate at the same frequency \(2\,\omega\). The middle and bottom rows in Fig 11 describe solutions on the left and right baby spikes jutting out from the primary curve. These have a large-amplitude \(12\,\omega\)- and \(9\,\omega\)-component, respectively. ### Stability of standing waves With the stability of the Bessel waves classified earlier in this paper, we turn to the exponentially localised solutions comprising the \(E(\omega)\) curve in Fig 6. As we demonstrated in sections III.3 and III.4, the monodromy matrix acquires a pair of real eigenvalues (\(\zeta_{1}>1\) and \(\zeta_{2}=1/\zeta_{1}\)) as the solution is continued over the peak at \(\omega_{c}=\omega_{0}+\mu_{c}(2\omega_{0}R^{2})^{-1}\) (the rightmost peak in Fig 6) in the direction of lower frequencies. The numerical analysis indicates that another real pair (\(\zeta_{3}>1\) and \(\zeta_{4}=1/\zeta_{3}\)) leaves the unit circle as \(\omega\) is reduced past the local energy minimum between the peak at \(\omega_{c}\) and the next spike on its left. Regardless of the choice of \(R\), real or complex unstable Floquet multipliers persist over the entire interval \(\omega_{\rm min}<\omega<\omega_{c}\), where \(\omega_{\rm min}\) is the point of minimum of the U Figure 9: The bifurcation diagram in the neighbourhood of \(\omega=\Omega^{(64)}/2\) in the ball of radius \(R=100\). The inset zooms in on a tiny segment of the left slope of the primary peak (blue curve) that hosts two baby spikes and merges with the Bessel branch (shown in magenta). shaped envelope of the family of spikes. For low energies, the instability is due to the real multipliers, \(\zeta_{1}\) and \(\zeta_{3}\). As the solution "climbs" up the energy slope, the real multipliers \(\zeta_{1}\), \(\zeta_{3}\), \(1/\zeta_{1}\), \(1/\zeta_{3}\) merge, pairwise, and form a complex quadruplet. The quadruplet dissociates as the solution descends along the other slope of the same spike. Stability properties in the region \(\omega<\omega_{\min}\) prove to be sensitive to the choice of \(R\). The case of a small radius is exemplified by the ball of \(R=40\). Fig 6(b) depicts the corresponding \(E(\omega)\) diagram in an interval of frequencies adjacent to \(\omega_{0}\). (Note that the frequency \(\omega_{\min}\) is close to the position of the second spike from the right in Fig 6(b).) All frequencies between each pair of spikes in Fig 6(b) correspond to unstable solutions, with one or two pairs of real Floquet multipliers off the unit circle. 
The second spike from the right (spike centred on \(\omega\approx 0.97\)) is also entirely unstable. The only intervals of stability in Fig 6(b) are found at the base of the third and forth spike (centred on \(\omega\approx 0.94\) and \(\omega\approx 0.92\), respectively). Fig 12(a) illustrates stability of several branches associated with the third spike. Turning to a larger ball radius (\(R=100\)), the stability domain expands considerably. As \(\omega\) is reduced below \(\omega_{\min}\) in that case, two pairs of real multipliers form a complex quadruplet which, on further reduction, converges to two points on the unit circle. The value of \(\omega\) at which the multipliers join the circle marks the beginning of a sizeable interval of stable frequencies (Fig 12(b)). A continued reduction of \(\omega\) sees an intermittent appearance and disappearance of one or several complex quadruplets separating stability from instability intervals. ## V Concluding remarks A linear standing wave in a ball results from the interference of an expanding spherical wavetrain of infinitesimal amplitude and the wavetrain reflected from the ball's surface. When continued to finite amplitudes, the resulting nonlinear solution does not have a well-defined core and retains the \(r\)-dependence similar to the spherical Bessel function \(j_{0}(r)=\frac{\sin r}{r}\). The total energy associated with this configuration in a ball of radius \(R\) is a multiple of \(R^{2}\). A different type of nonlinear standing wave in a ball is characterised by an exponentially localised pulsating core. The core is a fundamentally nonlinear feature; the nonlinearity shifts its frequency below the linear spectrum and this frequency shift ensures the core's exponential localisation. The core pulsating at the frequency \(\omega\) radiates spherical waves with higher-harmonic frequencies \(m\omega\), \(m=2,3,...\). The standing pattern arises as a result of the interference of the expanding and reflected radiation wavetrains. As the radiation frequency \(m\omega\) comes near one of the linear eigenfrequencies, the solution approaches the corresponding Bessel-like pattern. The amplitude of the radiation increases and the total energy in the ball of radius \(R\) shoots up to values \(O(R^{2})\). By contrast, when \(\omega\) is not near a resonant value, the radiation from the core is weak. The standing wave in that case may serve as an approximation to an _oscillon_ -- a long-lived localised pulsating structure in the infinite space -- at the nearly-periodic Figure 11: Top row: solution of frequency \(2\omega\) on the blue side of the primary “peak” shown in Fig 9. Middle respectively bottom row: solutions of frequency \(12\,\omega\) respectively \(9\,\omega\) found on the left respectively right baby spike stemming out from the primary peak. (The two spikes are clearly visible in the inset to Fig 9). All three standing waves are coreless due to the proximity to the Bessel branch. Figure 10: Solutions on the Bessel branch and its two offshoots in Fig 9. Top row: the wave of frequency \(2\omega\) found on the Bessel curve between the two baby spikes. Middle row: solution of frequency \(6\,\omega\) represented by the right-hand offshoot. Bottom row: solution of frequency \(10\,\omega\) corresponding to the left baby spike. stage of its evolution. Nonlinear standing waves provide information on the oscillon's energy-frequency relation and stability as well as topology of the nearby regions of the phase space. 
We examined the energy-frequency diagram of the standing wave and scrutinised the associated spatio-temporal transformation of the periodic solution. Results of this study can be summarised as follows. 1. We have demonstrated the existence of a countable set of standing waves ("Bessel waves") in a ball of a finite radius. The \(n\)-th (\(n=1,2,...\)) Bessel wave is a solution of the boundary-value problem (2) with \(n-1\) internal nodes in the interval \((0,R)\) and the envelope decaying in proportion to \(r^{-1}\) as \(r\to R\). The Bessel wave branches off the zero solution at \(\omega=\Omega^{(n)}\); we have constructed it as an expansion in powers of the frequency detuning \(\omega-\Omega^{(n)}\). The Bessel wave remains stable in an interval of frequencies adjacent to \(\Omega^{(n)}\). 2. The nodeless (\(n=1\)) Bessel wave is amenable to asymptotic analysis in a wider frequency range. The pertinent asymptotic expansion is in powers of \(R^{-1}\) and the resulting solution is valid in a neighbourhood of \(\omega_{0}\), the frequency of spatially-uniform oscillations. This neighbourhood is found to be wide enough to include \(\Omega^{(1)}\), the Bessel branch's inception point, and \(\omega_{c}\) (\(\omega_{c}<\Omega^{(1)}\)) -- the frequency at which the energy curve \(E(\omega)\) has a maximum. The \(n=1\) Bessel wave remains stable in the entire interval \(\omega_{c}\leq\omega<\Omega^{(1)}\) but loses its stability as \(\omega\) is reduced below \(\omega_{c}\). 3. The numerical continuation of the \(n=1\) Bessel wave to values of \(\omega\) below \(\omega_{c}\) produces an \(E(\omega)\) curve with a sequence of spikes near the undertone points \(\omega=\Omega^{(n)}/2\) with some large \(n\). The left and right slope of the spike adjacent to \(\frac{1}{2}\Omega^{(n)}\) result from a period-doubling bifurcation of the \(n\)-th Bessel wave. In addition to the primary sequence \(\frac{1}{2}\Omega^{(n)}\), there are also thinner spikes near the \(\frac{1}{3}\Omega^{(n)}\), \(\frac{1}{4}\Omega^{(n)}\) and other undertones. Slopes of the spikes in the primary sequence host secondary projections corresponding to higher resonances. Away from the neighbourhoods of the spikes, the \(E(\omega)\) curve follows a U-shaped arc with a single minimum at \(\omega_{\rm min}=0.967\omega_{0}\); the arc bounds all spikes from below. The arc is unaffected by the ball radius variations, as long as \(R\) remains large enough. This envelope curve describes the energy-frequency dependence of the nearly-periodic oscillons in the infinite space. 4. Standing waves with energies lying on the envelope curve and at the base of the spikes have an exponentially localised core and a small-amplitude slowly decaying second-harmonic tail. We have classified stability of these solutions against spherically-symmetric perturbations. Specifically, we focused on the interval \(0.91\omega_{0}<\omega<\Omega^{(1)}\) and considered two values of \(R\): \(R=40\) and \(R=100\). The ball of radius \(R=40\) has only short stability intervals, located at the base of two spikes in its \(E(\omega)\) diagram. By contrast, the standing waves in the ball of \(R=100\) have long stretches of stable frequencies. Finally, it is appropriate to draw parallels with resonance patterns observed in other systems. The authors of Ref [68] carried out numerical continuations of breather solutions in a one-dimensional necklace of Morse oscillators. 
Their \(E(\omega)\) diagram features resonances similar to those reported in section IV of the present paper. Standing waves residing on the slopes of the spikes in our Figs 6, 7, 9 and 12(a) are akin to the phonobreathers of Ref [68] while solutions represented by the U-arc in our Fig 6 correspond to their "phantom breathers". Figure 12: (a) The fine structure of the third spike from the right in Fig 6(b). (Here \(R=40\).) The spike is, in fact, a doublet; it consists of two separate projections. The inset zooms in on a figure-eight shaped isola occurring at the bottom of the left “subspike”. (b) Stability of standing waves in the ball of \(R=100\), near the right end of its energy-frequency diagram (Fig 6(a)). In (a) and (b) the magenta respectively blue lines demarcate stable respectively unstable standing waves. A more recent Ref [69] is a numerical study of the circular-symmetric breathers in the sine-Gordon equation posed in a disc of a finite radius. The \(E(\omega)\) diagram produced in that publication displays projections due to the odd-harmonic resonances. We note that neither Ref [68] nor [69] observes a period-doubling transmutation of phonon waves into breathers. ## Acknowledgments AB and EZ are grateful to the HybriLIT platform team for their assistance with the _Govorun_ supercomputer computations. This research was supported by the bilateral collaborative grant from the Joint Institute for Nuclear Research and National Research Foundation of South Africa (grant 120467).
2303.07179
Data-Driven Classifications of Video Game Vocabulary
As a novel and fast-changing field, the video game industry does not have a fixed and well-defined vocabulary. In particular, game genres are of interest: No two experts seem to agree on what they are and how they relate to each other. We use the user-generated tags of the video game digital distribution service Steam to better understand how players think about games. We investigate what they consider to be genres, what comes first to their minds when describing a game, and more generally what words do they use and how those words relate to each other. Our method is data-driven as we consider for each game on Steam how many players assigned each tag to it. We introduce a new metric, the priority of a Steam tag, that we find interesting in itself. This allows us to create taxonomies and meronomies of some of the Steam tags. In particular, in addition to providing a list of game genres, we distinguish what tags are essential or not for describing games according to players. Furthermore, we provide a small group of tags that summarise all information contained in the Steam tags.
Nicolas Grelier, Stéphane Kaufmann
2023-03-13T15:19:09Z
http://arxiv.org/abs/2303.07179v1
# Data-Driven Classifications of Video Game Vocabulary ###### Abstract As a novel and fast-changing field, the video game industry does not have a fixed and well-defined vocabulary. In particular, game genres are of interest: No two experts seem to agree on what they are and how they relate to each other. We use the user-generated tags of the video game digital distribution service Steam to better understand how players think about games. We investigate what they consider to be genres, what comes first to their minds when describing a game, and more generally what words do they use and how those words relate to each other. Our method is data-driven as we consider for each game on Steam how many players assigned each tag to it. We introduce a new metric, the priority of a Steam tag, that we find interesting in itself. This allows us to create taxonomies and meronomies of some of the Steam tags. In particular, in addition to providing a list of game genres, we distinguish what tags are essential or not for describing games according to players. Furthermore, we provide a small group of tags that summarise all information contained in the Steam tags. taxonomy, meronomy, video game genres, Steam tags, game analytics
The authors of [6] advocate the use of stylised facts in mobile game analytics (although we believe that the concept is essential to game analytics in general) in order to better connect industry and academia. However, they argue that the available knowledge and data are too fragmented and incomplete to establish stylised facts. Thus, they introduce two "proto-stylised fact concepts which describe situations where less empirical validation is available: _beliefs_ and _hypothetical stylised facts_" [6]. The aim is to "provide a roadmap for structuring current knowledge and building towards a situation where stylised facts can be generated and validated". Beliefs are statements supported by virtually no empirical evidence, whereas hypothetical stylised facts are supported by some empirical evidence. However, they are still not stylised facts, since "while there maybe some available empirical evidence supporting these hypothetical stylised facts, it is clearly not enough to rigorously, generally, support them".
Establishing hypothetical stylised facts helps in obtaining stylised facts, which is essential for the research field (see [6] for a detailed argumentation). In this paper, we provide empirical evidence for several hypothetical stylised facts, some of which we introduce, while others were already beliefs in the industry, for instance that genre is one of the most crucial pieces of information when describing a video game. We aim to present our findings in a quantitative manner, and to provide an interpretation of the metrics we introduce. ### Taxonomies and meronomies In Section 3, we present a data-driven taxonomy of all Steam tags. This consists of a hierarchical classification, where we divide the tags into categories and subcategories. Those are referred to as "taxa". Each tag is put into one of the taxa of lower rank, which are themselves grouped into taxa of higher rank. Our taxonomy has a tree-like structure, but others may be more network-like, with one child having several parents. We also provide in Section 4 two data-driven meronomies of some Steam tags. A meronomy differs from a taxonomy as it deals with part-whole relationships, whereas a taxonomy is a classification of discrete elements. Using mathematical notation, a meronomy can be thought of as a partially ordered set where the elements are sets and the relation considered is inclusion (we have \(A\leq B\) if \(A\) is a subset of \(B\)). Here, our aim is to better understand how some tags relate to each other. Using a data-driven approach, we establish some relations between tags, for instance that _2D Platformer_ is a subtag of _Platformer_, and that _Sailing_ is a subtag of _Naval_. ### A focus on game genres A part of our study concerns game genres. In particular, we are interested in finding a complete list of game genres, and in understanding how much genre matters to players. Arsenault observes that many websites allow the user to search through video games while filtering by genre [3]. However, no two websites seem to agree on what the list of genres is. For instance, he mentions GameSpot's system (at the time of his writing), which organises the genres into a taxonomy: genres are subdivided into subgenres, thus creating 157 categories. He gives some examples: * _Action \(\rightarrow\) Shooter \(\rightarrow\) First-Person \(\rightarrow\) Fantasy_, * _Miscellaneous \(\rightarrow\) Puzzle \(\rightarrow\) Action_, * _Strategy \(\rightarrow\) Real-Time \(\rightarrow\) Fantasy_. Arsenault notes that "while every category is unique and independent in principle, some of the lower-tiered descriptors appear as sub-branches of multiple genres" [3]. He adds that "the levels or branches themselves are not named, which means there is no basis on which to compare them", and "comparing all 2nd-level classes would probably amount to comparing apples to oranges to shoes to faith". He concludes: "One thing that these different taxonomies highlight is the fluidity and impreciseness of the concept of genre itself, and how it is used in actually describing games". The same issues are noted by Heintz and Law [10]: * Genres are not clearly or consistently defined, * The relation between genres is unknown, * Definitions are based on completely different aspects, * Different sources use different sets of genres. Steam provides a set of tools for developers and publishers named Steamworks. They provide a taxonomy of Steam tags [27].
Tags are divided into several taxa: Genres, Visual Properties, Themes & Moods, Features, Players, Other Tags, Software, Assessments, Ratings etc, Hardware/Input and Funding etc. The taxon Genres, which is our main focus here, is split into three taxa: Super-Genre, Genre and Sub-Genre. This taxonomy is to be understood with the broad definition of a taxonomy, in which a child may have several parents. For instance, _Heist_ is a sub-genre and also a theme. _Flight_ is a sub-genre and also a feature. _Experimental_ is a super-genre and a genre. One objective of this paper is to challenge Steam's taxonomy and investigate whether players concur with it. Classifying video games into genres is valuable for studying and comparing meaningful categories. For instance, in the literature we can find studies on what genres are the most successful [8, 22] (and also [24], although the study is about board games instead of video games), how considering games from the same genre is important for deciding on a release date [7, 9], how genre relates to players' motivations, gender and localisation [16, 23, 29, 30], or what their usability profiles are [20]. All these studies use a list of genres, but do not discuss much (or at all) whether it makes sense to talk about genres, whether the list is comprehensive, and so on. A more industry-oriented reason for establishing a comprehensive list of game genres, along with definitions and relationships between the genres, comes from trailer making. Lieu, who was involved in the creation of the trailers of _Half-Life: Alyx_, _Among Us_ and _The Long Dark_ among others, asserts that a trailer of a video game must first clearly establish its genre [14]. Therefore, it is crucial to have an exact understanding of what a genre is. Moreover, Lieu details what a trailer should show according to the game genre [13]. Similarly, Carless, who is the founder of GameDiscoverCo, a video game discoverability consultancy firm, believes that game genre is "what [their] users want to see information about". In a database meant for people working in the video game industry, Carless "care[s] most that the main genres are in _Genre_ and are easy to see and sort" ([Carless, personal communication]). ### The Steam tags In this paper, we rely solely on Steam data. More precisely, we study the user-generated tags on Steam. There exists a total of 427 tags that players have assigned to the 50757 games in Steam. Those tags, consisting of groups of words or acronyms like _Action RPG_, _MOBA_ or _Open World_, are freely assigned to games by players, who can also invent new ones. There exist many papers relying on Steam data, for instance Steam player reviews [15, 19, 21, 26, 33] or Steam user accounts [4, 5, 17, 25]. Some of these studies use Steam tags, but they are only used as tools for filtering games, and are not the topic of study. To the best of our knowledge, there are only three papers that truly study Steam tags per se. First, Windleharth _et al._ carry out in [32] a conceptual analysis of Steam tags. They discuss and classify Steam tags into categories, some containing many elements, like Gameplay genre, and some containing only one element, like Relationships, which contains only the tag _Remake_. Windleharth _et al._ rely on their own expertise to determine what tags are genre tags. In this paper, we use a data-driven approach with the same goal in mind. A second paper that studies Steam tags is [12], in which Li and Zhang examine how genre tags are related to each other.
Finally, in [11], Li uses correlation between Steam tags related to gameplay to obtain a list of 29 "factors". Those factors are not necessarily Steam tags but rather groups of highly correlated Steam tags. For instance, the factor _Rogue_ contains the Steam tags _Dungeon Crawler_, _Rogue-like_ and _Rogue-lite_. We compare our method to theirs in Section 2, and argue why ours is more reliable. We also compare our results with theirs in Sections 3 and 4. All players are free to assign to games some of the already-existing Steam tags, or even to add new ones. For a game, Steam shows "tags that were applied to the product by the most users". Steam shows at most 20 tags for any game, sorted in decreasing order of the number of players who assigned the tag. The exact number of players who assigned a tag is not directly shown on Steam, but can be found in the source code of the page. To obtain the data of how many players assigned a tag to a game, we used SteamSpy's API. There are 13848 games with 20 tags shown on Steam, and therefore \(50757-13848=36909\) games with fewer than 20 tags. One can try adding a new tag to a game with fewer than 20 tags, and be surprised to find that the newly added tag is not shown on Steam. The reason for that is that Steam only shows tags that were assigned by at least 5 players. Steam does not explicitly mention this, but we could check it by going over all games. Since anyone can add any set of nonsensical words as a tag, this is probably for the best. It is important to note that Steam does not offer a definition of the tags, one sufficient reason being that tags are freely added and invented by players (as can be seen in the inconsistent capitalisation and hyphenation of the tags). Therefore, players interpret by themselves the meanings of the tags. For our study, this comes with pros and cons. Since "genres are not clearly or consistently defined" [10], we may, thanks to Steam tags, find consensuses about what a given genre is. We believe that there exists some common understanding of what, say, an adventure game is. By an equivalent of the law of large numbers, Steam tags can help decipher what the definition is. However, there are also cases when players are clearly referring to distinct things while using the same tag, which perturbs our analysis. One easily spotted example is the tag _Football_, which sometimes refers to soccer, and sometimes to American football. Another example is the tag _Fighting_. For some games, it seems that the tag is used to state the genre of the game, whereas in other games it may be used to merely inform that there is some fighting in the game. This motivates why some tags have several parents in Steamworks' taxonomy [27]. Several of our results rely on computing the Pearson correlation coefficient between tags. We point out another difficulty we face, which stems from the nature of the database. One would expect some pairs of tags to have an extremely low correlation (around \(-1\)), for instance: _2D_ and _3D_, _First-Person_ and _Third Person_, _Singleplayer_ and _Multiplayer_, and so on. Indeed, a priori, a game is either in 2D or 3D, either has a first-person or a third-person viewpoint, and is either a singleplayer or a multiplayer game. To compute the correlation, we create a matrix with one row per game, and one column per tag. There is a value of 1 if the tag is assigned to the corresponding game, and 0 otherwise.
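As a small illustration of this construction, the snippet below builds a toy game-by-tag indicator matrix and computes the Pearson correlation between the _2D_ and _3D_ columns. The games and tag assignments are entirely hypothetical and only serve to show how mixed tags such as _2.5D_ pull the correlation away from \(-1\), which is the effect described below.

```python
import numpy as np

tags = ["2D", "3D", "2.5D"]
# One row per (hypothetical) game, one column per tag: 1 if the tag is assigned.
M = np.array([
    [1, 0, 0],   # a purely 2D game
    [0, 1, 0],   # a purely 3D game
    [1, 1, 1],   # a 2.5D game to which players also applied both 2D and 3D
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
])

def corr(i, j, rows=slice(None)):
    """Pearson correlation between two tag columns, optionally restricted to some games."""
    return np.corrcoef(M[rows, i], M[rows, j])[0, 1]

print(f"corr({tags[0]}, {tags[1]}) over all games     :", round(corr(0, 1), 2))
print(f"corr({tags[0]}, {tags[1]}) without 2.5D games :", round(corr(0, 1, M[:, 2] == 0), 2))
```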
We compute a correlation of \(-0.17\) for _2D_ and _3D_, 0.04 for _First-Person_ and _Third Person_, and 0.10 for _Singleplayer_ and _Multiplayer_. We investigated the cause of these surprisingly high correlation coefficients. There are games that use 2.5D or isometric representations. Those are mixes of 2D and 3D features. We found that generally, whenever the tag _2.5D_ or _Isometric_ is assigned to a game, the tags _2D_ and _3D_ are also (wrongly) assigned to the game by players. This is why the correlation is closer to 0 than to \(-1\). Concerning the viewpoint, there are many shooter games using the third-person view that switch to a first-person view when the player is aiming. In those games, we found that both tags _First-Person_ and _Third Person_ are assigned. Similarly, if a game contains a singleplayer campaign and a multiplayer mode, then both tags _Singleplayer_ and _Multiplayer_ are usually assigned. In conclusion, one has to be very careful when exploiting pairs of tags with low correlation, since one might miss a few relevant pairs.

Steam does not provide the information of how many players assigned tags to a given game. However, the number of players who assigned the most assigned tag of a game \(G\) gives us a lower bound on the number of players who assigned tags to \(G\). Using this method, we know that there are at least 15843 games for which at least 100 players assigned some tags.

We said that Steam shows at most 20 tags for a game. This is not exactly what the source code shows. Indeed, there are a few games, 235 to be exact, with 21 tags. They all share a common property: the 21st tag is _VR Only_, and it is assigned by precisely one player. This is probably a feature added by Steam. We chose to remove this tag from the database, since it was not added by real players.

### Our contributions

In Section 2 we present a new concept: the priority of a tag for a game. We show that this is a useful notion, for we establish as a hypothetical stylised fact that tags with the highest priority are what players think about first when describing the game (see Section 2 for a formal definition of priority and a formal statement of the hypothetical stylised fact). On the other hand, tags with low priority are what players deem interesting about the game, but which only come later to their minds. Using this concept of priority, we give empirical evidence in Section 3 for the following hypothetical stylised fact:

**Hypothetical stylised fact 1.** _Game genre is what players think about first when describing a game._

Consequently, we believe that this is also what they want to hear about first when discovering a new game, confirming the soundness of Lieu's practices in trailer making [14]. On a related note, using a data-driven approach we establish a list, as comprehensive as possible, of game genres. Our aim here is to encompass all words that players may consider as game genres. The list can be found in Appendix A.

We provide in Section 4 a meronomy of the tags with the broadest meanings (not necessarily genre tags). In particular, we find a set of seven tags with the broadest meanings, which we name the "capital tags". There are four properties that we want the capital tags to satisfy. First, they should form a rather short list. As we have 7 capital tags out of the 427 Steam tags, this is clearly satisfied. Secondly, for any game, at least one of these tags should be assignable to the game. Thirdly, those tags should not be redundant.
Finally, any other tag should be a more refined version of some of the capital tags. We show in Section 4 that the capital tags satisfy these last three properties. This implies that the capital tags summarise all information contained in Steam tags. We provide a second meronomy in Section 4, which aims at reducing the number of Steam tags without losing information, by merging synonymous tags together. We propose a method for doing so, as well as for finding a representative tag for each group of merged tags.

## 2 The priority of Steam tags

In [12], Li and Zhang define a graph that aims to accurately represent the Steam tags. There is one vertex per tag, and two tags are connected by an edge if they are both assigned to the same game. The edges are weighted according to how many games have both tags assigned to them. The authors discard some edges by using the "Narrow by tag" feature in Steam. We find the authors' results difficult to interpret due to their chosen methodology. First, it is not clear what the Steam "Narrow by tag" feature does. Additionally, the authors connect two tags \(T\) and \(T^{\prime}\) with an edge even if, for the game \(G\) to which they are both assigned, the tag \(T\) is the most assigned tag to \(G\) and \(T^{\prime}\) is the least assigned tag to \(G\), without discussing the rationale behind this decision. It seems wrong to us to connect the two tags with an edge, since \(T\) is apparently essential for describing \(G\) whereas \(T^{\prime}\) is much less relevant.

In [11], Li computes correlations between tags. The analysis is more precise than in [12], since when a tag \(T\) is assigned to a game, a weight is associated with \(T\). If for a given game \(G\), the tag \(T\) is the tag most assigned to \(G\), then \(T\) obtains a weight of 20. If \(G\) has 20 tags and \(T\) is the least assigned, it obtains a weight of 1. More generally, the \(n\)-th most assigned tag obtains a weight of \(20-n+1\). We point out two issues. First, Li does not discuss whether it makes sense to use these weights, and how to interpret them. Secondly, the distribution of the number of players who assigned each tag varies greatly across games. For instance, the two most assigned tags to the game _HITMAN 2_ are _Stealth_ (10373 players) and _Assassin_ (819 players). The ratio of the numbers of players for these tags is therefore \(819/10373\approx 0.08\). If we do the same with the game _Golf It!_, we find that the most assigned tag is _Multiplayer_ (249 players) and the second most assigned is _Mini Golf_ (243 players). This time, the ratio is \(243/249\approx 0.98\). With Li's method, those ratios would be \(19/20=0.95\) for both games [11].

In this section, we define, interpret and study a new metric that will be used in all later sections for the taxonomies and meronomies. It is a more refined version of what was proposed by Li [11]. For each Steam tag, we define what we call its _priority_ for a given game.

**Definition 2.** _The priority of a tag \(T\) for a given game \(G\) is a score that ranges from 0 to 1. Let \(t_{T}\) denote the number of players who assigned to the game \(G\) the tag \(T\). Similarly, let \(t_{\max}\) denote the maximum over all tags assigned to \(G\) of the number of players who assigned the tag. The priority of \(T\) for \(G\) is equal to \(t_{T}/t_{\max}\). A fictional example is shown in Table 1._

In particular, any game with at least one tag has a tag with priority 1: the tag that was assigned to it by the largest number of players. The short sketch below illustrates this computation.
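Definition 2 translates directly into a small computation. The sketch is ours, not the authors'; it assumes a hypothetical dict `tag_counts` mapping each game to `{tag: number of players who assigned it}`, restricted to the (at most 20) tags shown on the store page.

```python
def priorities(tag_counts: dict) -> dict:
    """Priority of each shown tag: its player count divided by the count of
    the game's most-assigned tag. Tags that are not shown implicitly get 0."""
    result = {}
    for game, counts in tag_counts.items():
        if not counts:
            continue  # a game with no tags has no priorities defined
        t_max = max(counts.values())
        result[game] = {tag: n / t_max for tag, n in counts.items()}
    return result

# The fictional game of Table 1:
# priorities({"Example": {"Adventure": 1000, "Puzzle": 750, "2D": 500, "Atmospheric": 100}})
# -> {"Example": {"Adventure": 1.0, "Puzzle": 0.75, "2D": 0.5, "Atmospheric": 0.1}}
```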
If a tag was not among the 20 most assigned tags to a given game, then its priority is 0 (recall that Steam only shows the 20 most assigned tags). We choose this metric as we are interested in identifying how players think about games.

| tag | number of players | priority |
|---|---|---|
| _Adventure_ | 1000 | 1 |
| _Puzzle_ | 750 | 0.75 |
| _2D_ | 500 | 0.5 |
| _Atmospheric_ | 100 | 0.1 |
| _3D_ | 0 | 0 |

Table 1: A fictional example of tags assigned to a game, with their priorities.

Let us consider all positive priorities over all games and tags. We compute a mean of \(0.60\), a median of \(0.61\) and a standard deviation of \(0.28\). This shows that the priority contains some information, as the standard deviation is quite high. There are two compatible reasons as to why not all tags have the same priority for a given game \(G\). First, it may be that some players assign far fewer tags than other players. This would imply that the tags with highest priority correspond to what players think about first when describing \(G\). The second reason would be that players do not perfectly agree on which tags are best suited to describe \(G\). This second reason would imply that the tags with highest priority are what players agree upon for describing \(G\), and that their opinions diverge on the tags with low (but non-zero) priority. We argue that the first reason is the most important, and present the following hypothetical stylised fact:

**Hypothetical stylised fact 3.** _For a given game \(G\), the tags with highest priority correspond to what players think about first when describing \(G\)._

We provide empirical evidence supporting Hypothetical Stylised Fact 3. The fact is hypothetical as defined in [6] because our methods for supporting it cannot establish it for good. This could be done if we had access to a database of tag assignments per player, but this is not publicly available. We consider two tags: _Shooter_ and _FPS_. "FPS" stands for "first-person shooter", and is a subgenre of shooter games [1, 31]. Therefore, anyone who thinks that a given game \(G\) is an FPS would also agree that it is a shooter game. We consider all of the 1867 games to which both tags _Shooter_ and _FPS_ are assigned, with _FPS_ having the highest number of players.

Figure 1: Some empirical evidence for Hypothetical Stylised Fact 3.

If Hypothetical Stylised Fact 3 were wrong, for each game we should have very similar priorities for both tags. Indeed, all players who assigned the tag _FPS_ would also assign the tag _Shooter_. In contrast, we believe that some players assign only a few tags, corresponding to what comes first to their mind. Under this hypothesis, we would expect some players to only assign the tag _FPS_, but not the tag _Shooter_, either because it is less important to them or because they consider it to be already implied by the tag _FPS_. The histogram in Figure 1(a) illustrates, for each of these games, the ratio of the priority of _Shooter_ to that of _FPS_. We observe a long tail phenomenon; also, the mean is equal to 0.76 and the median to 0.83. This implies that for half of the considered games, less than 83% of those players who assigned the tag _FPS_ also assigned the tag _Shooter_. We apply the same method for the tags _Online Co-Op_ and _Co-op_2. Obviously, a game with an online co-op mode thereby contains a co-op mode.
However, when we consider the 1171 games to which both tags _Online Co-Op_ and _Co-op_ are assigned, with _Online Co-Op_ having the highest number of players, we observe again a long tail phenomenon in Figure 1(b). The mean is equal to 0.73 and the median to 0.80. The natural explanation for this long tail phenomenon and for these low means and medians is Hypothetical Stylised Fact 3: many players only assign a few tags to a game, the ones that come first to their mind. We have shown here the results for only two pairs of tags, for which the numbers of corresponding games were large (1867 and 1171, respectively). We tested other pairs and obtained similar results for all of them, but due to lack of space we do not show them here; also, the numbers of corresponding games were significantly lower. Some examples of those pairs are _Top-Down Shooter_ and _Top-Down_ (655 games), _Football_ and _Sports_ (165 games), and _Traditional Roguelike_ and _Rogue-like_3 (102 games).

Footnote 2: Steam tags are case sensitive. We follow the Steam capitalisation, which may differ from tag to tag.

Footnote 3: Once again we follow the spelling of Steam, where rogue-like is not always written with a hyphen.

In Figure 1, we observed histograms with a peak around a priority of 0.9 and a long tail. Recall that we were considering pairs of tags where the first tag implies the second. We argued that this implication was not perfectly reflected in the tag distributions, because of the long tail phenomenon. However, the position of the peak still indicates that there is a high correlation. As a comparison, let us do the same for two tags which are a priori not correlated. In Figure 2, we applied the same method as before to the tags _RPG_ and _Multiplayer_. Those are intuitively not correlated, since there are multiplayer RPGs (for instance MMORPGs), but also many singleplayer RPGs. We observe that the histogram is rather flat; there is no strong peak around 1. This is a first hint that useful information might be contained in the correlation between tag priorities. We use this idea of considering the correlation for our meronomies in Section 4.

Figure 2: Histogram of the ratios of the priority of _RPG_ and _Multiplayer_ for the 1124 games whose _RPG_ priority is higher than the one of _Multiplayer_.

We point out an issue that we discovered concerning the priority. Although we still believe the notion to be of interest and to bring useful information, we think that it could be refined and improved in future work. It seems that, for some reason unknown to us, the priority of some tags behaves erratically. Let us consider for instance the tag _Free to Play_. We consider three free to play games: _Dota 2_, _Team Fortress 2_ and _Counter-Strike: Global Offensive_. For the first two games, _Free to Play_ has a priority of 1. However, the second tag with highest priority for _Dota 2_ is _MOBA_ with a priority of 0.33, whereas for _Team Fortress 2_ it is _Hero Shooter_ with a priority of 0.99. It is quite intriguing to us how the priorities of the second highest-priority tags can be so different. This phenomenon occurs regularly with the tag _Free to Play_. We do not understand why the priority of the second most assigned tag can differ so much, when both games are equally free to play. We fear that for _Dota 2_, the tag _Free to Play_ pushes all other priorities to very low values, thereby introducing noise in the data. Concerning _Counter-Strike: Global Offensive_, the priority of _Free to Play_ is 0.
The reason for this can be easily understood: the game was released in 2012 and made free to play only in 2018. We believe that the tags were mostly assigned soon after the release of the game, and that is why _Free to Play_ is not amongst its 20 most assigned tags. This further motivates refining the method, with particular attention to the tag _Free to Play_.

## 3 A taxonomy of Steam tags

In this section, we categorise all of the Steam tags through a taxonomy. In Section 2, we defined the notion of priority of a tag for a game, and established that it contains some information. In particular, we gave empirical evidence for Hypothetical Stylised Fact 3, stating that for a given game \(G\), tags with high priorities correspond to what players think about first when describing \(G\). In this section, we consider tags one by one, and for each tag we look at its priority distribution over all games for which the priority is non-zero. This allows us to define the taxa of higher rank into which we classify the Steam tags.

### The taxa of higher rank

At the highest level, we have three taxa: High priority tags, Medium priority tags and Low priority tags. Let us start with the taxon Low priority tags.

#### Low priority tags

According to Hypothetical Stylised Fact 3, those tags correspond to what few players deem interesting for describing a game, and to what they do not think about first. We define this taxon as the set of tags whose priority median is at most 0.45. There are 80 tags in this taxon. We chose this threshold using our own expertise of the video game industry: tags with higher priority medians seem to us to come to players' minds significantly faster when describing a game. We present in Table 2 the tags with the lowest priority medians, along with the number of games to which they are assigned. One could argue that we do not have enough data to deal with tags that are assigned to very few games. However, for the taxon of Low priority tags, we think that if a tag was seldom assigned, this supports even more the claim that the tag is not what players think about first when describing a game. Nonetheless, among the Low priority tags, there are some that are assigned to thousands of games. We show them in Table 3. We find those tags especially interesting. Players do not use them as their primary descriptors for a game, yet they are still assigned to many games. This gives us an interesting insight: having a low priority does not mean that a tag is not important. Indeed, it is important for players to know whether a game has singleplayer or multiplayer modes; it is just not the most essential thing for describing a game. In Subsection 4.2, we delve deeper into the tags _Singleplayer_ and _Multiplayer_.

#### High priority tags

Before defining the taxon of High priority tags, let us first consider the priority histograms of tags that clearly should be in that taxon. Three are depicted in Figure 3. We observe that these histograms roughly look like Dirac functions centred at 1. We want to define our High priority tags taxon as the set of tags whose histograms look similar to those three.
| tag | priority median | number of games |
|---|---|---|
| _Masterpiece_ | 0.18 | 6 |
| _Epic_ | 0.20 | 106 |
| _TrackIR_ | 0.20 | 31 |
| _Vikings_ | 0.22 | 45 |
| _Reboot_ | 0.23 | 11 |
| _Mod_ | 0.24 | 62 |
| _Addictive_ | 0.24 | 409 |
| _Cult Classic_ | 0.25 | 305 |
| _Kickstarter_ | 0.26 | 181 |

Table 2: Tags with the lowest priority medians.

| tag | priority median | number of games |
|---|---|---|
| _Singleplayer_ | 0.36 | 23376 |
| _Multiplayer_ | 0.42 | 6371 |
| _Retro_ | 0.43 | 4748 |
| _VR_ | 0.44 | 4696 |
| _Difficult_ | 0.39 | 4404 |
| _Great Soundtrack_ | 0.31 | 4245 |
| _Co-op_ | 0.39 | 3153 |
| _Controller_ | 0.40 | 2469 |
| _Combat_ | 0.40 | 2135 |

Table 3: Low priority tags that are most assigned to games.

Figure 3: Histograms of the priorities of _RPG_, _Racing_ and _Roguelike Deckbuilder_.

To allow for some wiggle room, we decided to define the taxon as the set of tags whose priority median is sufficiently high, at least 0.574803, and whose maximum value in the histogram is obtained for a priority of at least 0.765644. Both thresholds were decided using our own expertise of video games. The median threshold was chosen so that the tag _Twin Stick Shooter_ would be in that taxon, as its priority median is exactly that value, but not _Underground_ and _Conversation_, which came next. The maximum value threshold was chosen such that _Tabletop_ would be a High priority tag, but not _2.5D_ and _Lore-Rich_, which came next. With this definition, we have 155 High priority tags. By looking at these tags, it seems that nearly all of them are about genre. We discuss this in further detail in Subsection 3.2.

#### Medium priority tags

We have one last taxon of higher rank for all of the remaining 192 tags. When looking at the histograms, we distinguish three types of shapes; Figure 4 depicts them. We have curves that look like Gaussian curves, as with _2D_, curves that look rather flat, like the one of _Open World_, and curves with two peaks, like the one of _PvE_. However, it seems that these types of shapes do not carry any insightful meaning. Indeed, one can see in Figure 5 three histograms of other Medium priority tags. Observe that they are all similar to the one of _2D_. However, we do not see any similarity between _2D_, _1990's_, _Military_ and _Robots_ beyond the fact that they are Medium priority tags. It seems to us that in this case, the shape of the histogram does not bring us any further insight.

Figure 4: Histograms of the priorities of _2D_, _Open World_ and _PvE_.

Figure 5: Histograms of the priorities of _1990's_, _Military_ and _Robots_.

### The taxa of lower rank

In this subsection, we divide the taxa of higher rank into several taxa. Our most interesting comments concern High priority tags, thus we focus on their study.

#### The taxon of High priority tags

Interestingly, all High priority tags related to gameplay are genre names: _Adventure_, _RPG_, _Farming Sim_ and so on. Furthermore, nearly all tags commonly understood as genre tags are in the High priority tags taxon. We consider this finding to be one of the main contributions of this paper. We can now define "genre tags" as "gameplay related tags with high priority". A minimal sketch of the three-way classification described in Subsection 3.1 is given below.
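The sketch is our own illustration, not the authors' code. It assumes a hypothetical dict `tag_priorities` mapping each tag to the array of its positive priorities over all games; the 20-bin histogram and the reading of "the maximum value in the histogram" as the left edge of the fullest bin are our own assumptions.

```python
import numpy as np

def classify_tag(priors: np.ndarray, bins: int = 20) -> str:
    """Low if the priority median is at most 0.45; High if the median is at
    least 0.574803 and the histogram peak lies at a priority of at least
    0.765644; Medium otherwise."""
    median = np.median(priors)
    if median <= 0.45:
        return "Low"
    counts, edges = np.histogram(priors, bins=bins, range=(0.0, 1.0))
    peak = edges[np.argmax(counts)]  # left edge of the fullest bin
    if median >= 0.574803 and peak >= 0.765644:
        return "High"
    return "Medium"

# taxa = {tag: classify_tag(np.asarray(p)) for tag, p in tag_priorities.items()}
```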
This finding also provides empirical evidence for Hypothetical Stylised Fact 1: genre is indeed what players think about first when describing a game.

Some tags are assigned to fewer than 100 games. We believe that this is too few games for qualifying those tags as "genres". We create a taxon for those tags that have a high priority median but that are assigned to too few games. Those are: _BMX_, _Cycling_, _Motocross_, _Skateboarding_, _Tennis_, _Baseball_, _Lemmings_, _Wrestling_, _Mini Golf_, _Social Deduction_, _Warhammer 40K_, _Roguevania_, _Medical Sim_, _LEGO_, _Outbreak Sim_, _Pinball_, _Spelling_, _Golf_, _Jet_ and _Action RTS_.

Steam products are not limited to video games, strictly defined. Other products are also available, and their corresponding tags are all in the High priority tags taxon. This is unsurprising: if these products were deemed to be games, then the following High priority tags would undoubtedly qualify as game genres: _Utilities_, _Audio Production_, _Video Production_, _Animation & Modeling_, _Design & Illustration_, _Photo Editing_, _Software Training_ and _Web Publishing_. On a related note, we also have the tags _Game Development_ and _Programming_, which are more about making than playing video games. Those tags we put in a separate taxon. In Subsection 4.3, we provide further evidence to support our decision to treat those tags as an independent taxon.

In total, we divide the taxon of High priority tags into four taxa: game genres (which are gameplay related tags), tags assigned to too few games, tags related to products that are not video games, and the miscellaneous tags. In this last taxon we find: _Free to Play_, _Indie_, _Early Access_, _Massively Multiplayer_, _e-sports_, _Sexual Content_, _Nudity_, _LGBTQ+_, _Dinosaurs_, _Mechs_, _Cats_, _Experimental_, _Noir_, _Lovecraftian_ and _Western_. We list the lessons we draw from these tags being High priority tags. Unsurprisingly, players care about whether a game is free. We indeed expect players to mind how much they have to pay for a game, which is why _Free to Play_ is a High priority tag. Secondly, the tags _Indie_, _Early Access_, _Massively Multiplayer_ and _e-sports_ provide a lot of information about the type of game, even though they are not genres. Players learn how much money was invested into the game, whether the game is entirely finished, whether it involves a lot of interaction with other players, and whether it is made for e-sports. This last tag is in itself full of information: it implies that the game is hard to master, that it has a multiplayer mode, and that it is competitive. The presence of the tags _Sexual Content_ and _Nudity_ is also not surprising, as they can be important filters for players deciding whether they want to play a game. Next, we have the tags _LGBTQ+_, _Dinosaurs_, _Mechs_ and _Cats4_, which are themes that players consider so important that they are worth mentioning right away when describing a game. Similarly, _Experimental_, _Noir_, _Lovecraftian_ and _Western_ are ambiances of high importance for players when describing a game. We believe that game developers can benefit from knowing that these themes and ambiances matter particularly to players.

Footnote 4: Interestingly, players are more interested in cats than in dogs. Indeed _Cats_ has a median priority of 0.60 with a maximum value in the histogram at 0.96, whereas for the tag _Dog_ those values are 0.56 and 0.45, respectively.
#### The taxon of Medium priority tags

As we said, our most interesting comments concern High priority tags. In particular, we showed in Figure 5 that the priority histogram is not sufficient for classifying the Medium priority tags into meaningful taxa. Thus, for the rest of the section, we only discuss why a few tags widely regarded as genres were put into the Medium priority tags taxon. Those tags are: _FPS_, _Shooter_, _Fighting_, _Stealth_, _Hack and Slash_, _Survival_, _Survival Horror_, _Horror_, _MOBA_, _4X_, _RTS_, _Grand Strategy_, _Trading Card Game_, _Match 3_, _Hidden Object_ and _MMORPG_. We still believe that those tags should be considered as genres, and explain why they were misclassified.

Concerning _FPS_, _Shooter_, _Fighting_, _Stealth_, _Hack and Slash_, _Survival_, _Survival Horror_ and _Horror_, they have relatively low priority medians because they are often used by players to say that some elements from these genres appear in a game. For instance, if it is possible to fight in a game, then the tag _Fighting_ will surely appear with medium priority. _Fighting_ is not the genre of the game, but it corresponds here to a gameplay element. Let us take another example with _FPS_ and _Shooter_. The most assigned tags to the game _Fallout 4_ are _Open World_, _Post-apocalyptic_, _Exploration_ and _RPG_. In our opinion, those tags indeed describe this game well. With a lower priority, we also find the tags _Shooter_ and _FPS_. It is true that there are FPS elements in the game, which occur for instance when the player is aiming; therefore it is understandable that the tag appears with a medium priority. The takeaway is that tags may have different purposes and connotations. Some tags might be used as genres, but also sometimes used to merely describe some gameplay elements. If it were common practice to distinguish the two things and have a tag _FPS - genre_ and a tag _FPS - gameplay_, we believe that the former would be put into the High priority tags taxon while the latter would be rightly classified as a Medium priority tag.

It was most surprising to us to discover that _MOBA_ is in that taxon. We expected it to only appear as a High priority tag. However, its priority median is quite low, with a value of 0.48, making it even closer to Low priority tags than to High priority tags. After investigation, it appears that the tag is often assigned with a medium priority to team-based multiplayer games, even when they do not belong to the MOBA genre. It seems that players and the industry have a different understanding of the tag.

Concerning the remaining tags, _4X_, _RTS_, _Grand Strategy_, _Trading Card Game_, _Match 3_, _Hidden Object_ and _MMORPG_, they seem to be overshadowed by other tags preferred by players. For instance _4X_, _RTS_ and _Grand Strategy_ are overshadowed by _Strategy_, _Turn-Based_ and _Real-Time_. _Trading Card Game_ often appears after _Card Game_ and _Card Battler_. _Match 3_ and _Hidden Object_ are overshadowed by _Puzzle_. Likewise, _MMORPG_ is regularly chosen only after _RPG_, _Multiplayer_ and _Massively Multiplayer_. This is why, although they have a high priority median, their occurrence peak in the histogram is not close to a priority value of 1.

### List of genres comparison

Using our data-driven approach, we have now obtained a list of genre tags. It can be found in Appendix A. Those are mainly in the High priority taxon, and we presented in Paragraph 3.2.2 a list of tags that should still be considered as genre tags, although they were classed as Medium priority tags.
We compare our list to Li's [11] and to the one in Steamworks [27], as they are both made from Steam tags. It would be interesting to compare ours to all genre lists; however, for the sake of brevity, we concentrate solely on those two.

Li's list, although also built with a data-driven approach, serves a different purpose than ours [11]. His aim was to extract from Steam tags a reasonably small list of genre tags that would allow one to characterise all games, whereas our goal is to find an extensive list of genres. Consequently, Li's list contains 29 genres, and ours contains 127 tags. Still, the comparison is interesting, as there are a few tags which are in Li's list but not in ours. Those are: _Soccer_, _Resource Management_, _Music_ and _Classic_. For each genre, Li associates a group of correlated gameplay tags. Concerning _Soccer_, the group contains _Soccer_, _Football_ and _Sports_. This choice seems strange to us, for then, according to Li's list, all sports games would be considered as soccer games. In our genre list, we have the tag _Sports_ instead of _Soccer_, which seems more reasonable. The same happens with _Resource Management_, which we would replace by _Management_, and with _Music_, which we would replace by _Rhythm_. Concerning _Classic_ in Li's list, its presence follows from the fact that Li considers _Classic_ and _Cult Classic_ to be related to gameplay. We disagree with that statement, although we cannot substantiate our belief with data.

Similar to ours, Steamworks' list of genre tags strives for comprehensiveness [27]. Our list contains 127 tags whereas theirs contains 140. The two lists share 112 tags in common, which indicates that they mostly agree with each other. Let us start with the 15 tags that are in our list but not in Steamworks'. It is worth noting that despite Steamworks appearing to list all of the Steam tags (whether genre-related or not), some tags are actually absent. The genre tags in our list that do not even appear on Steamworks are: _Cooking_, _Creature Collector_, _Party Game_, _Puzzle-Platformer_ and _Roguelike Deckbuilder_. More interestingly, there are a few tags that are High priority tags related to gameplay (which we thus consider as genre tags) but which are not considered as genre tags by Steamworks. We provide the list here: _Archery_, _Automation_, _Boxing_, _Deckbuilding_, _Fishing_, _Hunting_, _Naval Combat_, _Vehicular Combat_, _Otome_ and _Parkour_. All those tags were classed as Features in Steamworks instead of genres, except for _Otome_ and _Parkour_, which are classed as Themes & Moods. We think that the fact that those are High priority tags supports the idea of considering them as genre tags.

We now present the 28 tags that are in Steamworks' list of genre tags but not in ours. Some are High priority tags about gameplay that we removed in Paragraph 3.2.1 because they were assigned to fewer than 100 games. Indeed, it seems to us that tags assigned to too few games cannot really define a genre. Those tags are: _BMX_, _Baseball_, _Cycling_, _Golf_, _Medical Sim_, _Mini Golf_, _Motocross_, _Outbreak Sim_, _Roguevania_, _Pinball_, _Skateboarding_, _Spelling_, _Tennis_ and _Wrestling_. There are also a few High priority tags that we did not put into our list of genre tags because we deem that they are not clearly enough related to gameplay. Those are _Programming_, _e-sports_ and _Experimental_. The remaining tags are all Medium priority tags, and are therefore not part of our genre list.
Our findings indicate that those are not words that come first to players' minds when describing a game, which supports the idea of not considering them as genres. The tags are: _Basketball_, _Bowling_, _Football_, _Hacking_, _Hockey_, _Open World_, _Skating_, _Skiing_, _Snowboarding_, _Soccer_ and _Investigation_.

## 4 Two meronomies of Steam tags

In this section, we propose two meronomies of some Steam tags. In Section 3, we classified the tags into taxa, where two tags in the same taxon share some properties (related to their priority distributions). In Section 4, we use entirely different methodologies, for different goals. In the resulting meronomies, we say that a tag \(A\) is a part of a broader tag \(B\), or equivalently that \(A\) is a meronym of \(B\). The first meronomy we present focuses on the tags with the broadest meanings. We aim at finding the most general tags, and at seeing how they relate to each other. In particular, we focus on the tags at the top of the meronomy: the ones that are parts of no others, but of which all other tags are parts. We name them the "capital tags". We claim that the capital tags are interesting because they form the smallest set of words that one needs to describe all video games (without going into detail). The second meronomy aims at finding synonymous tags. We believe that one can reduce the number of Steam tags without losing too much information, as many tags share similar meanings. Our aim is to find those groups of tags, and for each one to find one representative.

### The method

The method starts in the same way for both meronomies. The idea is to look at pairs of tags \(A\) and \(B\) that are correlated enough, and then check whether \(A\) or \(B\) appears more often. If \(B\) appears more often, we declare that \(A\) is a meronym of \(B\). Intuitively, \(A\) should indeed be a part of \(B\), as they are correlated and \(B\) appears more often, ergo has a broader meaning. For each game \(G\), we have an entry with 427 values (the number of Steam tags), where each value is the priority of the corresponding tag for the game \(G\). As there are at most 20 tags with positive priority for any given game, at least 407 of these values are equal to 0. We compute the Pearson correlation coefficient for all pairs of tags. A natural idea would be to take all pairs with correlation above some threshold and be done with it. The issue is that many tags are assigned to very few games. Thus, their priority is 0 for virtually all games. This induces a bias, and two such tags will have a strong correlation even though they are hardly related. To tackle this issue, we consider all pairs of tags \(A\) and \(B\) with correlation above 0.1, and then compute what we call their _local Pearson correlation coefficient_. It is equal to the Pearson correlation coefficient, but computed only on the games to which at least one of \(A\) or \(B\) is assigned. For the first meronomy, we keep only the pairs with local Pearson correlation coefficient above \(-0.7\), over all pairs with (global) Pearson correlation above 0.1. For the second meronomy, we take all pairs with local Pearson coefficient above \(-0.6\), while keeping a global Pearson correlation threshold of 0.1. Let us motivate the choice of these thresholds; a sketch of the two-stage computation is given below. For the global threshold, the lower it is, the better for the soundness of the results. However, lowering it greatly increases the number of pairs of tags for which we have to compute the local Pearson correlation.
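The sketch below is our reconstruction, not the authors' code; it assumes a hypothetical games-by-tags pandas DataFrame `prio` of priorities, with 0 meaning that the tag is not assigned to the game.

```python
from itertools import combinations
import pandas as pd

def correlated_pairs(prio: pd.DataFrame, global_thr: float = 0.1, local_thr: float = -0.7):
    """Pairs with global Pearson correlation above `global_thr` whose local
    correlation (restricted to games where at least one tag is assigned)
    is above `local_thr`."""
    global_corr = prio.corr(method="pearson")
    kept = []
    for a, b in combinations(prio.columns, 2):
        if global_corr.loc[a, b] <= global_thr:
            continue
        sub = prio.loc[(prio[a] > 0) | (prio[b] > 0), [a, b]]
        local = sub[a].corr(sub[b])
        if local > local_thr:
            kept.append((a, b, local))
    return kept
```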
As we have 428 tags, there are \(\binom{428}{2}=91378\) pairs to consider. This is computationally quite expensive, which is why we only consider pairs with correlation above 0.1. There are 2197 such pairs. As mentioned earlier, our concern with the global correlation approach is that it may produce some false positives (pairs with high correlation that are not closely related). However, we do not anticipate the occurrence of false negatives, that is, pairs of tags with similar meanings but low correlation.

For the local threshold of the first meronomy, the value \(-0.7\) may seem extremely low. However, as we do the computation only on the games to which at least one of the two tags is assigned, this drastically diminishes the correlation value: there are proportionally many more games to which only one of the two tags is assigned, since we removed all games to which neither is assigned. Thus, with a local threshold of \(-0.7\), we are actually considering only 652 pairs of tags out of the 2196 that have global correlation above 0.1. Lowering the threshold further would make us consider pairs that are actually not closely related, according to our own video game expertise. We present the result in Subsection 4.2. We study what happens when we set the local threshold to \(-0.6\) in Subsection 4.3. There are only 361 pairs with local correlation above this threshold. The choice of the value was again made using our own video game expertise: we chose the lowest value such that the retained pairs were indeed synonymous in our opinion.

### The capital tags

Now that we have a list of 652 pairs of tags with high correlation, we compute for each pair \(A\), \(B\) whether \(A\) or \(B\) appears more often. We construct an oriented graph with the tags as vertices and an edge from tag \(A\) to tag \(B\) if \(B\) occurs more often than \(A\). This graph is unreadable, as it has too many vertices and edges. However, we are mainly interested in the vertices that have the most incoming edges, as they correspond to tags with the broadest meanings. Figure 6 shows the subgraph induced by the vertices \(V\) with at least 9 incoming edges (that is, the subgraph whose vertex set is \(V\), in which we show only edges between vertices of \(V\)). We observe that there are seven vertices with no outgoing edges, which we name the _capital tags_. One might observe that the graph shown in Figure 6 does not perfectly correspond to a meronomy. We tackle this issue at the end of the subsection, as we first want to focus on the capital tags.

The seven capital tags are _Multiplayer_, _Singleplayer_, _Action_, _Casual_, _Adventure_, _Strategy_ and _Anime_. They correspond to tags that are correlated with a lot of other tags, and that occur more often than them. The name is a reference to the seven capital sins from Christian teachings. Thomas Aquinas uses the word "capital" for the following reason:

> Accordingly a capital vice is so called, in the first place, from "head" taken in the proper sense, and thus the name "capital" is given to a sin for which capital punishment is inflicted. It is not in this sense that we are now speaking of capital sins, but in another sense, in which the term "capital" is derived from head, taken metaphorically for a principle or director of others. In this way a capital vice is one from which other vices arise, chiefly by being their final cause [2].
We posit that a similar phenomenon exists with tags, where the capital tags serve as the directors of the other Steam tags, much like the capital vices act as the head of the other vices. The seven capital tags encompass all others, as stated in the following hypothetical stylised fact:

**Hypothetical stylised fact 5.** _All tags are parts of the capital tags._

By construction, our seven capital tags are tags that are correlated with a lot of other tags, and that occur more often than them. This is already some justification for Hypothetical Stylised Fact 5. We add some more evidence at the end of the subsection, when we define a meronomy of the tags in Figure 6.

Figure 6: The oriented subgraph induced by the tags with at least 9 incoming edges. For each tag, we write its name and the number of incoming edges in the original graph.

Before that, we state and establish another hypothetical stylised fact. First, we need to define the following concept: we say that a game \(G\) is _covered_ by a set of tags \(\mathcal{T}\) if at least one of the tags in \(\mathcal{T}\) is assigned to \(G\).

**Hypothetical stylised fact 7.** _The capital tags form a very small group of tags such that nearly all games are covered by this set._

Hypothetical Stylised Fact 7, although being quantitative in essence, is not perfectly mathematically defined. One reason for this is that the Steam tags are assigned by players, so all rules and observations will suffer from a few counterexamples, as the data is significantly noisy. On another note, it is computationally too hard to check Hypothetical Stylised Fact 7 exhaustively: if one wanted to check all sets of seven tags, for instance, that would be \(\binom{428}{7}\approx 4.97\cdot 10^{14}\) sets of tags to test.

One can see in Table 4 that our choice of capital tags does cover nearly all games. We show the percentage of games covered by each of the capital tags. We also show the percentage for the set consisting of the tag on that line and all the ones above it. The tags are sorted by decreasing percentage of games covered. We also do the same when considering only games with 20 tags. Games with fewer tags can be misleading, as it may simply be that not enough players assigned tags; if more players did, then surely some of them would have assigned a capital tag. We observe that Table 4 strongly supports Hypothetical Stylised Fact 7.

As argued, it is not computationally possible to test all sets of seven tags. Still, we present another set that could have been a fitting candidate: _2D_, _3D_, _2.5D_, _Isometric_, _Third Person_, _First-Person_ and _Top-Down_. All games are either in 2D, in 3D, or in some mix of the two: 2.5D or isometric. We even add _Third Person_ and _First-Person_, which imply that the game is in 3D but give more information about the representation. The same holds with _Top-Down_ and _2D_. Therefore, one might expect to cover all games with these seven tags. This is not the case, as only 47% of the games are covered, and only 85% of the games with 20 tags are covered. This again supports Hypothetical Stylised Fact 7, as it substantiates the claim that the capital tags are very few while covering essentially all games.

For the sake of curiosity, we extended the capital tags with as few tags as possible in order to cover even more games. This was done by hand, by looking at which games were not covered yet and finding which tags were assigned to most of them; a sketch of the coverage computation and of this extension step is given below.
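The sketch is again our own illustration, not the authors' script; `game_tags` is the hypothetical dict mapping games to their sets of assigned tags used earlier.

```python
from collections import Counter

def coverage(game_tags: dict, tag_set: set) -> float:
    """Fraction of games to which at least one tag of `tag_set` is assigned."""
    covered = sum(1 for tags in game_tags.values() if tags & tag_set)
    return covered / len(game_tags)

def best_extension(game_tags: dict, tag_set: set) -> str:
    """Tag assigned to the largest number of games not yet covered by `tag_set`."""
    uncovered = [tags for tags in game_tags.values() if not (tags & tag_set)]
    counts = Counter(tag for tags in uncovered for tag in tags)
    return counts.most_common(1)[0][0]

# capital = {"Multiplayer", "Singleplayer", "Action", "Casual",
#            "Adventure", "Strategy", "Anime"}
# coverage(game_tags, capital); best_extension(game_tags, capital)
```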
By adding the tags _Arcade_, _RPG_, _Simulation_, _Survival_, _Puzzle_ and _Horror_ to the capital tags, we cover 96.73% of the games, and 99.78% of the games with 20 tags. Unsurprisingly, several of those new tags can be found in Figure 6: _Puzzle_, _RPG_ and _Simulation_.

We make a few side remarks that we deem interesting. First, we observe that for six of the capital tags, the percentage of games covered over all games is lower than the percentage over games with exactly 20 tags. The only one for which it decreases is _Casual_, which goes from 43% to 42%.

| tag | % covered, all games | % covered, games with 20 tags | cumulative % covered, all games | cumulative % covered, games with 20 tags |
|---|---|---|---|---|
| _Singleplayer_ | 46 | 67 | 46 | 67 |
| _Action_ | 45 | 54 | 70 | 85 |
| _Casual_ | 43 | 42 | 84 | 92 |
| _Adventure_ | 42 | 51 | 90 | 95 |
| _Strategy_ | 21 | 26 | 93 | 98 |
| _Multiplayer_ | 13 | 22 | 93 | 98 |
| _Anime_ | 9 | 14 | 94 | 98 |

Table 4: How the capital tags cover nearly all games. The cumulative columns give the percentage of games covered by the tag on that line together with all the tags above it.

Moreover, the increase for the six others is quite different from tag to tag. The biggest increase in terms of scale factor happens with _Multiplayer_ (scale factor of 1.69). Intuitively, the games with 20 tags are the most mainstream. This supports the idea that mainstream games on Steam more often offer multiplayer modes, and that they tend to be less casual. We also note that the second biggest increase is with _Singleplayer_ (scale factor of 1.46). This tells us that even though _Multiplayer_ and _Singleplayer_ are classified in the Low priority taxon (see Subsection 3.1), they are actually assigned to many games, and would perhaps be assigned to all games if all games had 20 tags assigned to them. Therefore, low priority tags should not be thought of as less important. They are not what players think about first when describing a game, but may still be a feature that should be mentioned at a later step.

Recall that a meronomy deals with part-whole relationships. Mathematically, a meronomy can be seen as a partially ordered set (poset). A partial order, denoted by \(\leq\), must satisfy three properties:

1. Transitivity: if \(A\leq B\) and \(B\leq C\), then \(A\leq C\);
2. Reflexivity: we have \(A\leq A\);
3. Antisymmetry: if \(A\leq B\) and \(B\leq A\), then \(A=B\).

We can see in Figure 6 that the relation depicted is not perfectly a partial order. Indeed, let us say that \(A\) is a part of \(B\), denoted by \(A\leq B\), if there is an arrow from \(A\) to \(B\). The graph in Figure 6 looks a lot like a Hasse diagram. A Hasse diagram of a poset \(\leq\) is a representation in which there is an edge from \(A\) to \(B\) if \(A\leq B\) and there is no third element \(C\) such that \(A\leq C\leq B\). This is a useful and concise way of representing posets. For the graph in Figure 6 to be a Hasse diagram, one simply has to remove the edges from _Puzzle_, _Atmospheric_, _Story Rich_ and _Fantasy_ to _Singleplayer_. Thus, we are implying that we consider _Sports_ to be a part of _Simulation_, which itself is a part of _Strategy_. Likewise, we say that _Fantasy_ is a part of _RPG_, which itself is a part of _Adventure_.
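One programmatic way to obtain such a Hasse diagram, which we suggest here (the redundant edges are removed by hand in our analysis), is a transitive reduction: an edge \(A\to B\) is kept only if there is no intermediate \(C\) with \(A\to C\to B\). The sketch below uses networkx, which is our own assumption, and requires the graph to be acyclic.

```python
import networkx as nx

def hasse_diagram(edges):
    """Transitive reduction of a DAG of 'is a part of' edges."""
    g = nx.DiGraph(edges)
    assert nx.is_directed_acyclic_graph(g), "remove cycle-creating edges first"
    return nx.transitive_reduction(g)

# Hypothetical example: the redundant edge Fantasy -> Adventure is removed.
# hasse = hasse_diagram([("Fantasy", "RPG"), ("RPG", "Adventure"),
#                        ("Fantasy", "Adventure"),
#                        ("Sports", "Simulation"), ("Simulation", "Strategy")])
```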
Let us mention some other relationships that are not depicted in Figure 6 but are present in the larger graph (which we did not draw, as it has too many edges to be readable). We have arrows from _Co-op_ to _Multiplayer_ and _Action_. There is an arrow from _Cute_ to _Colorful_, itself being connected to _2D_ and _Casual_. _Management_ is connected to _Singleplayer_ and _Simulation_. Finally, we want to mention _PvP_ being connected to _Multiplayer_. As all these relations are quite sensible, this supports our idea of looking at that graph as the Hasse diagram of some poset, or equivalently of establishing a meronomy of those tags. We now have evidence for Hypothetical Stylised Fact 5: all tags are parts of the capital tags. By only adding or removing a few edges, we have obtained a meronomy for the tags of Figure 6 that could be extended further to the other tags. In this meronomy, we have seen that the capital tags are the ones at the top: all other tags are meronyms of the capital tags.

Finally, we make a concluding remark about the capital tags. Four of them are game genres: _Action_, _Adventure_, _Strategy_ and _Casual_ (although it is debatable whether _Casual_ is a genre, this is how it is considered in Steamworks' genre list and in ours [27]). Two other tags give important information about the game, even though they are not genres: _Singleplayer_ and _Multiplayer_. We suspect that the remaining tag, _Anime_, is actually an error. First, it is the tag among the capital tags that covers the smallest number of games. Secondly, the 11 tags that have an edge directed to it in the graph are: _JRPG_, _Visual Novel_, _Otome_, _Dating Sim_, _Romance_, _Female Protagonist_, _Sexual Content_, _NSFW_, _Hentai_, _Nudity_ and _Mature_. We observe that the last five of these tags refer to sexual content and share a similar meaning. This might induce a bias, giving too much weight to tags related to sexual content and therefore wrongly making _Anime_ a tag with many incoming edges in the graph.

### The synonymous tags

As detailed in Subsection 4.1, we now use a local correlation threshold of \(-0.6\). We deem the pairs of tags whose local correlation is that high to be synonymous. In Subsection 4.2, all edges were oriented from one tag to another: there was an arrow from \(A\) to \(B\) if \(B\) appears much more often than \(A\). In this subsection, we are also interested in knowing when two tags appear mostly together. Indeed, such pairs would be the best examples of synonymous tags. When tags appear mostly together, we draw a _mutual_ edge: an arrow that points towards both ends. To decide which kind of edge to draw, we do the following. For the tags \(A\) and \(B\), we compute \(X_{A}\), \(X_{B}\), \(X_{A\cup B}\) and \(X_{A\cap B}\), which denote how many times \(A\) was assigned to a game but not \(B\), how many times \(B\) was assigned to a game but not \(A\), how many times at least one of \(A\) or \(B\) was assigned, and how many times both were assigned, respectively. We define \(r_{A}:=X_{A}/X_{A\cup B}\), \(r_{B}:=X_{B}/X_{A\cup B}\) and \(r_{A\cap B}:=X_{A\cap B}/X_{A\cup B}\). If the maximum of these three ratios is \(r_{A}\), then we draw an arrow from \(B\) to \(A\). If it is \(r_{B}\), then we draw an arrow from \(A\) to \(B\). If it is \(r_{A\cap B}\), then we draw a mutual arrow between \(A\) and \(B\). A small sketch of this rule is given below.
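In this sketch (ours, not the authors' code), `games_a` and `games_b` are hypothetical sets of the games to which tags \(A\) and \(B\) are assigned; ties are broken arbitrarily.

```python
def orient(games_a: set, games_b: set) -> str:
    """Edge drawn between tags A and B according to r_A, r_B and r_{A and B}."""
    union = games_a | games_b
    r_a = len(games_a - games_b) / len(union)   # A assigned without B
    r_b = len(games_b - games_a) / len(union)   # B assigned without A
    r_ab = len(games_a & games_b) / len(union)  # both assigned together
    if r_ab >= max(r_a, r_b):
        return "mutual"  # the tags mostly appear together
    # the arrow points towards the tag that more often appears without the other
    return "B -> A" if r_a > r_b else "A -> B"
```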
Before running the program on the tags, we remove a few of them: tags which are correlated with a lot of other tags and which are assigned to many games, namely the ones we mentioned in Subsection 4.2. We remove the capital tags in addition to _2D_, _Shooter_, _Puzzle_, _Atmospheric_, _Simulation_, _Story Rich_ and _Fantasy_ (which are amongst the other tags with the highest number of edges oriented towards them in Figure 6). Keeping those tags would distort the result and only show correlations between those tags and a few others. Also, we use the information we obtained in Subsection 3.1. Recall that we classified the tags into three taxa of higher rank: High priority tags, Medium priority tags and Low priority tags. In the oriented graph of this subsection, we draw dotted red arrows for edges whose two tags are not from the same taxon. Indeed, not being in the same taxon may be a hint that these tags are not synonymous after all, since they are not used in the same way by players despite being highly correlated.

Some synonymous tags are shown in Figure 7. Due to lack of space, we only depict the examples we find most meaningful. As a first simple example, we have an edge from _Crowdfunded_ to _Kickstarter_. As intended, the meanings of those two tags are very close. Let us consider the subgraph induced by _Naval Combat_, _Naval_ and _Sailing_. The arrow from _Sailing_ to _Naval_ is natural, as the meanings are close and _Sailing_ implies _Naval_. However, _Naval Combat_ and _Naval_ are not really synonymous. Indeed, one is understood as a game genre, whereas the other merely indicates that a game involves ships. This is why the information from Subsection 3.1, depicted by the dotted red arrow, does matter. Although it is inevitable that _Naval Combat_ and _Naval_ be correlated, it seems wrong to treat them as synonyms. Likewise, the arrow from _Turn-Based_ to _Turn-Based Strategy_ is treacherous, as one is a gameplay element and the other is a game genre. More generally, we believe that no red arrow should be taken into account. We make one exception for the edge from _Trading Card Game_ to _Card Game_. Indeed, we argued in Subsection 3.2 that _Trading Card Game_ was one of the few tags that are misclassified by our method, and that it should actually be treated as a genre. On another note, we have a group of tags related to non-games, which we mentioned in Subsection 3.2. All those tags can be considered as one group of tags related to products offered by Steam that are not games.

When looking at Figure 7, a natural idea is to give a name to each of these groups of synonymous tags. This name would be a representative of the group, the one that is most general. For instance, the group (_Crowdfunded_, _Kickstarter_) would be represented by _Kickstarter_. Similarly, the group of platformer tags would be represented by _Platformer_. But how would one deal with the group that contains card game tags and rogue-like tags? Intuitively, this group should be split in two. Likewise, what about the group of tags not related to games? It would seem natural to have most of those tags represented by _Utilities_, but it is not clear what should happen to _Education_, as the only edge incident to it is incoming, not outgoing. We detail in the next paragraph a formal way of dealing with these situations, which results in a grouping that is in accordance with our intuition.

Let \(G\) be a directed graph with labeled vertices. First merge all mutual edges of \(G\). For a mutual edge \(e=\{u,v\}\), keep arbitrarily one of the two labels \(u\) or \(v\).
Let \(H\) denote the same graph as \(G\) with the orientations of the edges removed, resulting in an unoriented graph. Let us denote by \(W_{1},\ldots,W_{k}\) the vertex sets of the \(k\) connected components of \(H\), and by \(V_{1},\ldots,V_{k}\) the same vertex sets considered in \(G\). Let us consider the subgraph \(G_{i}\) of \(G\) induced by \(V_{i}\), for some \(1\leq i\leq k\). We say that \(G_{i}\) is _well-oriented_ if there exists a unique vertex \(v\in V_{i}\) such that for each \(u\neq v\) in \(V_{i}\), there exists a directed path from \(u\) to \(v\) in \(G_{i}\). By a slight abuse of notation, we say that \(G\) is _well-oriented_ if each \(G_{i}\) is well-oriented.

Going back to the example in Figure 7, we see that most connected components are well-oriented. Moreover, for the group of platformer tags, _Platformer_ is the unique tag that can be reached via directed paths from all other tags in the group. However, the group with card game tags and rogue-like tags is not well-oriented. What we suggest for splitting into well-oriented groups is the following: given the graph \(G\), remove as few edges as possible to make it well-oriented. To the best of our knowledge, this problem has not been previously studied. However, we believe it is interesting both from a theoretical perspective and for practical applications: an efficient algorithm would be needed for naming the synonym groups. We do not know of any fast algorithm for solving the problem, and wonder whether it is NP-hard. Nonetheless, on the small example we are considering here, we solve the problem by hand.

Figure 7: Some synonymous tags. Arrows are dotted red if the tags belong to different taxa of higher rank.

In Figure 7, it suffices to remove the edges from _Roguelike Deckbuilder_ to _Rogue-lite_ and _Rogue-like_ to obtain a well-oriented graph. We therefore obtain the group (_Card Battler_, _Deckbuilding_, _Trading Card Game_, _Roguelike Deckbuilder_, _Card Game_) represented by _Card Game_, and the group (_Roguevania_, _Action Roguelike_, _Traditional Roguelike_, _Rogue-lite_, _Rogue-like_) represented by either _Rogue-lite_ or _Rogue-like_. Similarly, the group of tags not related to video games is not well-oriented. To make it well-oriented, one has to remove the edge from _Software Training_ to either _Education_ or _Utilities_. Although in this case we solved the problem by hand, it would be very useful to have efficient algorithms for solving this problem on general directed graphs. This would allow us to deal with the synonymous tags on a larger scale.

Finally, we observe that the graph is very similar to the graph of a poset, as was the case in Figure 6. One could remove the dotted red edges, merge the vertices connected by a mutual edge, and remove the edges we mentioned to make the graph well-oriented. It then suffices to remove a few edges that give only redundant information, like the one from _Photo Editing_ to _Utilities_ or the one from _Roguevania_ to _Rogue-lite_/_Rogue-like_. Our approach yields a meronomy of synonymous tags, with the topmost tags serving as representatives of their respective groups.

## 5 Conclusion

We proposed several hypothetical stylised facts, such as the notion that players primarily associate games with their genre. To support this claim, we introduced the concept of priority of Steam tags and demonstrated its relationship to players' perceptions of games. Moreover, our approach enabled us to create a data-driven taxonomy that provides a comprehensive list of genres.
We found a set of seven tags, the capital tags, which roughly summarise all information contained in the Steam tags. Those are _Multiplayer_, _Singleplayer_, _Action_, _Casual_, _Adventure_, _Strategy_ and _Anime_, although we argued why the presence of this last tag in the list is dubious at best. We found the tags _Multiplayer_ and _Singleplayer_ extremely interesting, as they are capital tags assigned to thousands of games but are nonetheless Low priority tags. We showed how some tags can be merged without losing too much information. We proposed a criterion for finding a representative of a merged group, and asked whether there exists an efficient algorithm for applying this method.

We list some further improvements that could be made to our study. We stated that the notion of priority might need to be refined in order to be more consistent, using the example of the tag _Free to Play_. Although considering priority histograms allowed us to define an interesting taxonomy, this method has some limits. In particular, for Medium priority tags, we could not find a meaningful way of subdividing this taxon into taxa (see Figure 5). Maybe some other ideas could allow one to establish a meaningful data-driven classification of Medium priority tags. Secondly, our method is biased towards games available on Steam, and we acknowledge that this may limit the generalisability of our findings. Also, we note that our computations assume an equal impact for all games. However, according to Orland [18], in 2014 over a quarter of registered games on Steam had never been played. Thus, we believe it would be beneficial to consider player engagement metrics, such as the number of players or total hours played, and to weight games accordingly in our computations. We noticed that some Low priority tags are assigned to thousands of games, like _Multiplayer_ and _Singleplayer_. We believe that they should be investigated further, to better understand how players think about them. Using the knowledge of capital tags, we developed a method for identifying synonymous tags. It would be intriguing to apply this method to a database in which synonymous tags are intelligently merged. Our expectation is that, in such a scenario, _Anime_ would no longer be considered a capital tag.
2305.03091
Confidence-Based Skill Reproduction Through Perturbation Analysis
Several methods exist for teaching robots, with one of the most prominent being Learning from Demonstration (LfD). Many LfD representations can be formulated as constrained optimization problems. We propose a novel convex formulation of the LfD problem represented as elastic maps, which models reproductions as a series of connected springs. Relying on the properties of strong duality and perturbation analysis of the constrained optimization problem, we create a confidence metric. Our method allows the demonstrated skill to be reproduced with varying confidence level yielding different levels of smoothness and flexibility. Our confidence-based method provides reproductions of the skill that perform better for a given set of constraints. By analyzing the constraints, our method can also remove unnecessary constraints. We validate our approach using several simulated and real-world experiments using a Jaco2 7DOF manipulator arm.
Brendan Hertel, S. Reza Ahmadzadeh
2023-05-04T18:13:59Z
http://arxiv.org/abs/2305.03091v3
# Confidence-Based Skill Reproduction Through Perturbation Analysis ###### Abstract Several methods exist for teaching robots, with one of the most prominent being Learning from Demonstration (LfD). Many LfD representations can be formulated as constrained optimization problems. We propose a novel convex formulation of the LfD problem represented as elastic maps, which models reproductions as a series of connected springs. Relying on the properties of strong duality and perturbation analysis of the constrained optimization problem, we create a confidence metric. Our method allows the demonstrated skill to be reproduced with varying confidence level yielding different levels of smoothness and flexibility. Our confidence-based method provides reproductions of the skill that perform better for a given set of constraints. By analyzing the constraints, our method can also remove unnecessary constraints. We validate our approach using several simulated and real-world experiments using a Jaco2 7DOF manipulator arm. ## I Introduction One of the most efficient methods for teaching robots new skills is Learning from Demonstration (LfD) where a teacher shows a skill to a robot in order to enable it to reproduce the skill under new constraints [1]. Many LfD representations can be formulated as an optimization problem, where some cost function is formulated and minimized to find a reproduction while satisfying any set of given constraints. Some representations explicitly minimize a predefined cost function [2, 3]. For example, _Trajectory Learning via Failed and Successful Demonstrations_ (TLFSD) [2] minimizes distance from successful demonstrations while maximizing distance from failed ones. Similarly _Multi-Coordinate Cost Balancing_[3] optimizes between several differential coordinate representations of demonstrations to find the optimal reproduction. Other representations are not formulated as optimization problems, but can be interpreted as such. _Laplacian Trajectory Editing_[4] transforms the demonstrations into curvature space, applies constraints, and then inverts the transform back into Cartesian space. This algorithm formulates the LfD problem as a least squares optimization. Alternatively, _Dynamic Movement Primitives_ (DMPs) [5] use a dynamical system to model and execute the reproduction. Solving this dynamical system can also be formulated as an optimization problem involving Hilbert norm minimization [6]. Optimization problems, particularly convex optimization problems, have certain properties which to our knowledge have not yet been extensively exploited for LfD [7]. One property of interest is _perturbation analysis_ that focuses on how the optimal value changes when constraints are perturbed. In this paper, we propose a new convex formulation of the LfD problem using elastic maps [8]. Then we derive the dual problem to use duality conditions and perturbation analysis to exploit knowledge about the reproductions, creating a confidence metric for each reproduction. This method presents several advantages over previous methods. First is that our method provides confident reproductions of the skill for a specific set of constraints. Confident reproductions, such as the one shown in Fig. 1, perform better at the intended skill for the given constraints. Additionally, since more confident reproductions have tighter constraints, which does not allow for variability and smoothness in reproductions, the confidence in a reproduction may be tuned as shown in Fig. 
2 to allow for reproductions of different smoothness and variability. Unlike other methods that consider static constraints, another advantage of our method is that it analyzes constraints and considers perturbations of these constraints. Through constraint analysis, our method can remove unnecessary constraints using duality and properties of the dual values of constraints. We validate our approach in two simulated and one real-world experiment using a Jaco 7DOF manipulator arm. Fig. 1: Reproduction of a door opening task using a level of confidence which results in success. ## II Related Work Learning from Demonstration covers a large variety of techniques, which can be in the form of dynamical [5], statistical [8], geometric [4, 9, 10], or probabilistic methods [11, 12]. Each of these different types of methods present different advantages when reproducing trajectories [13]. While some methods rely on explicit optimization [2, 6, 8], others can be reformulated as optimization problems [4, 5]. Many explicit optimization formulations use a cost function and rely on tradeoffs between elements of the cost function to reproduce demonstrations. When optimizing, using different norms results in different properties of the reproduction, such as different velocity or jerk [6]. The Jerk-Accuracy method uses a parameter to trade off between converging to a given demonstration and minimizing jerk [14]. Other methods do not use tradeoff values, instead optimizing without parameters. Rana et al. proposed a method which incorporates both LfD and motion planning techniques [15]. This method uses a _maximum a posteriori_ inference to create trajectories, using effective optimization strategies designed for motion planning [16]. Alternatively, multimodal approaches combine several optimization approaches to create a family of reproductions which are optimal for the skill [17]. Other LfD representations may not explicitly use optimization but can be formulated as an optimization problem. As shown in [6], DMPs [5] can be formulated as optimizing trajectory reproduction using a special case of Hilbert norm minimization. While these previous works use optimization either explicitly or implicitly, they do not take advantage of certain properties associated with optimization such as perturbation analysis. Additionally, not all works formulate convex optimization problems, which can be solved efficiently and have strong duality. Non-convex problems must be solved by an optimal search algorithm, and do not provide valuable information regarding constraints. Various methods have been used to increase confidence in the execution of a robot's movements. In [13], several LfD methods are evaluated using a similarity metric, and the highest similarity reproduction is selected. In [18], the authors propose a method to avoid unsuccessful reproductions in which a reproduction achieving the highest "feasibility score" for the imitator is performed. Other methods focused on avoiding failure have been utilized, such as ergodic imitation or safety credit assignment. Ergodic imitation [19] relies on reproducing demonstrations with maximum ergodicity, resulting in a successful reproduction. Safety credit assignment [20] learns a safety barrier from demonstrations, which can then be used with a controller to prevent unsafe executions. Alternatively, confidence-based autonomy [21] promotes more efficient learning from demonstrations by requesting demonstrations when confidence in reproductions is low. 
This method also incorporates skill refinement, which allows a user to correct portions of the reproduction. Skill refinement allows users to increase the likelihood of success in reproductions by adjusting reproductions until they are suitable for the skill [22]. Our perturbation analysis approach is similar to these methods as it uses spatial properties of the demonstrations to measure confidence in the reproduction, but also allows for reproductions of varying confidence levels, as sometimes reproductions of lower confidence may be more desirable. Perturbation analysis also allows for visualizing confidence across the reproduction space. Fig. 2: Demonstration and reproductions with different confidence factors of opening a real-world box. As confidence in a reproduction increases, the constraints tighten. Left: a low-confidence reproduction does not successfully open the box. Center and right: higher confidence reproductions successfully complete the task with different features. ## III Background ### _Elastic Maps_ Elastic maps are a tool for nonlinear dimensionality reduction. They can be used to approximate a lower-dimensional manifold of given data [23]. Elastic maps have been used for data analysis and visualization across several fields, including bioinformatics [23], political science, and social science [24]. Maps are represented by nodes, which are interconnected by edges, and adjacent edges form a rib. Elastic maps have a mechanical interpretation inspired by springs. Nodes connect to the data and other nodes through springs. By minimizing three energies associated with the elastic map and its spring-like structure, an optimal representation of the data is found. The three energies are (i) the approximation energy \(U_{\mathcal{X}}\) which penalizes a bad fit to the data, (ii) the stretching energy \(U_{E}\) which penalizes high distance between adjacent nodes, and (iii) the bending energy \(U_{R}\) which penalizes the curvature of nodes. These energies may be formulated differently depending upon the structure of the elastic map. ### _Elastic Maps for Trajectory Reproduction_ We have previously developed an approach using elastic maps for trajectory reproduction in [8]. Of particular use for trajectories are polyline elastic maps, where each node has only two edges except for two terminal nodes. The energy terms in a polyline elastic map can be defined as follows: \[U_{\mathcal{X}} =\frac{1}{\sum_{\zeta_{j}}w_{j}}\sum_{i=1}^{N}\sum_{\zeta_{j}\in k_{i}}w_{j}||\zeta_{j}-x_{i}||_{2}^{2}, \tag{1}\] \[U_{E} =\alpha\sum_{i=1}^{N-1}||x_{i+1}-x_{i}||_{2}^{2}, \tag{2}\] \[U_{R} =\beta\sum_{i=1}^{N-2}||x_{i}-2x_{i+1}+x_{i+2}||_{2}^{2}, \tag{3}\] where \(\boldsymbol{\zeta}=[\zeta_{1},\zeta_{2},...,\zeta_{M}]\) is the demonstration data, \(w_{j}\) is the weight of data \(\zeta_{j}\), \(\boldsymbol{x}=[x_{1},x_{2},...,x_{N}]\) are the nodes in the elastic map, \(k_{i}\) is the cluster of data for node \(x_{i}\), \(||\cdot||_{n}\) is the \(L^{n}\)-norm, and \(\alpha\) and \(\beta\) are the stretching and bending constants, respectively. Optimizing these three energies results in a series of nodes which has properties particularly well-suited to a robot trajectory: a smooth, evenly-spaced reproduction fitted to the demonstrations which adheres to all given constraints. Performance using elastic maps compared to other contemporary methods is shown in [8]. 
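To make the three energies concrete, the following NumPy sketch evaluates (1)-(3) for a polyline map, assigning each datum to its nearest node (one common clustering choice); the toy demonstration, the uniform weights, and the values of \(\alpha\) and \(\beta\) are illustrative assumptions, not values from the paper.

```python
# Sketch of the polyline elastic-map energies (1)-(3); data and constants
# below are illustrative only.
import numpy as np

def elastic_energies(nodes, demo, weights, alpha, beta):
    # (1) approximation energy: each datum is clustered to its nearest node.
    dists = np.linalg.norm(demo[:, None, :] - nodes[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    U_X = np.sum(weights * np.sum((demo - nodes[nearest]) ** 2, axis=1)) / np.sum(weights)
    # (2) stretching energy: squared distances between adjacent nodes.
    U_E = alpha * np.sum(np.diff(nodes, axis=0) ** 2)
    # (3) bending energy: squared second differences of the nodes.
    U_R = beta * np.sum((nodes[:-2] - 2 * nodes[1:-1] + nodes[2:]) ** 2)
    return U_X, U_E, U_R

t = np.linspace(0.0, 1.0, 50)
demo = np.stack([t, np.sin(2 * np.pi * t)], axis=1)   # toy 2-D demonstration
nodes = demo[::5].copy()                              # initial map nodes
print(elastic_energies(nodes, demo, np.ones(len(demo)), alpha=0.01, beta=0.1))
```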
Additionally, elastic maps can be used with any number of demonstrations and/or constraints, which provides flexibility in its applications for robot trajectories. In this paper we utilize elastic maps using convex optimization techniques and analyze how perturbations of constraints can affect the optimal solution to understand confidence in elastic map reproductions of robot trajectories. Elastic maps are used in favor of other methods due to their flexibility with the number of demonstrations and constraints. ## IV Methodology ### _Problem Formulation: Derivation of the Primal_ To find an optimal elastic map representation of demonstration data, the following optimization problem is solved: \[\underset{\boldsymbol{x}}{\text{minimize}}\ f_{0}(\boldsymbol{x}) =U_{\mathcal{X}}+U_{E}+U_{R}\] (4) subject to \[f_{i}(\boldsymbol{x}) =||y-x_{j}||_{1}-r\leq 0\] where \(f_{i}(\boldsymbol{x})\) is the \(i\)th constraint, taking the form of constraining a point on the reproduction \(x_{j}\) to some point in space \(y\) within radius \(r\). The form of these constraints is discussed further in Sec. IV-D. To use convex minimization efficiently, the energy formulations are changed as follows: \[U_{\mathcal{X}} =\gamma||\boldsymbol{I}\boldsymbol{x}-\boldsymbol{K}\boldsymbol{ \zeta}||_{2}^{2} \tag{5}\] \[U_{E} =\alpha||\boldsymbol{E}\boldsymbol{x}||_{2}^{2}\] (6) \[U_{R} =\beta||\boldsymbol{R}\boldsymbol{x}||_{2}^{2}, \tag{7}\] where \(\gamma=(\sum_{\zeta_{j}}w_{j})^{-1}\) is the normalization factor for the data weights, \(\boldsymbol{I}\) is the identity matrix, \(\boldsymbol{K}\) is a clustering matrix which incorporates the individual weights of data points, \(\boldsymbol{E}\) and \(\boldsymbol{R}\) define the edges and ribs such that \[\boldsymbol{E} =\begin{bmatrix}-1&1&0&\cdots&0\\ 0&-1&1&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&-1&1\end{bmatrix},\] \[\boldsymbol{R} =\begin{bmatrix}1&-2&1&0&\cdots&0\\ 0&1&-2&1&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1&-2&1\end{bmatrix}\] which results in a convex minimization that can be solved efficiently. The use of \(\boldsymbol{I}\) in the above equations is only permissible when \(\boldsymbol{x}\) and \(\boldsymbol{\zeta}\) are the same size and uniform weights are used. In all other cases, a diagonal weighted matrix must be used instead. Note that unlike [8], constraints will not be included in \(\boldsymbol{\zeta}\). Constraints included in \(\boldsymbol{\zeta}\) are forced to be met exactly, while convex constraints include inequalities. This allows for further flexibility in reproductions, and makes properties such as obstacle avoidance easier to implement. Without the convex optimization, obstacle avoidance must be used with a specific via-point or set of via-points. With convex formulation, obstacle avoidance can be implemented automatically without applying hard constraints to the reproduction. ### _Problem Formulation: Derivation of the Dual_ We derive the Lagrangian dual problem for the primal in (4). 
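For concreteness before the dual is derived, here is a minimal cvxpy sketch of the primal (4) with the quadratic energies (5)-(7) and a single point constraint of the form \(||y-x_{j}||_{1}-r\leq 0\). The sizes, constants, and via-point are illustrative assumptions; the clustering matrix \(\boldsymbol{K}\) is taken to be the identity, which, as noted above, is only valid when \(\boldsymbol{x}\) and \(\boldsymbol{\zeta}\) have the same size and uniform weights are used.

```python
# Illustrative cvxpy sketch of the primal (4) with energies (5)-(7).
import cvxpy as cp
import numpy as np

N, d = 50, 2
t = np.linspace(0.0, 1.0, N)
zeta = np.stack([t, np.sin(np.pi * t)], axis=1)    # toy demonstration (M = N)

E = np.diff(np.eye(N), axis=0)                     # (N-1) x N first differences
R = np.diff(np.eye(N), n=2, axis=0)                # (N-2) x N second differences
alpha, beta, gamma = 0.01, 0.1, 1.0                # illustrative constants

x = cp.Variable((N, d))
objective = (gamma * cp.sum_squares(x - zeta)      # U_X with K = I
             + alpha * cp.sum_squares(E @ x)       # U_E
             + beta * cp.sum_squares(R @ x))       # U_R

y, j, r = np.array([0.5, 0.2]), N // 2, 0.05       # hypothetical via-point
constraints = [cp.norm1(y - x[j]) - r <= 0]

prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()
print(prob.value, constraints[0].dual_value)       # p* and the dual value of the constraint
```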
We first expand (4) as follows: \[f_{0}(\boldsymbol{x}) =\gamma||\boldsymbol{I}\boldsymbol{x}-\boldsymbol{K}\boldsymbol{ \zeta}||_{2}^{2}+\alpha||\boldsymbol{E}\boldsymbol{x}||_{2}^{2}+\beta|| \boldsymbol{R}\boldsymbol{x}||_{2}^{2}\] \[=\gamma(\boldsymbol{x}^{\top}\boldsymbol{I}^{\top}\boldsymbol{I} \boldsymbol{x}+(\boldsymbol{K}\boldsymbol{\zeta})^{\top}(\boldsymbol{K} \boldsymbol{\zeta})-2(\boldsymbol{K}\boldsymbol{\zeta})^{\top}(\boldsymbol{I }\boldsymbol{x}))\] \[+\alpha\boldsymbol{x}^{\top}\boldsymbol{E}^{\top}\boldsymbol{E} \boldsymbol{x}+\beta\boldsymbol{x}^{\top}\boldsymbol{R}^{\top}\boldsymbol{R} \boldsymbol{x}, \tag{8}\] which can be simplified to \[f_{0}(\boldsymbol{x}) =\boldsymbol{x}^{\top}(\gamma\boldsymbol{I}^{\top}\boldsymbol{I} +\alpha\boldsymbol{E}^{\top}\boldsymbol{E}+\beta\boldsymbol{R}^{\top} \boldsymbol{R})\boldsymbol{x}\] \[-\gamma(\boldsymbol{K}\boldsymbol{\zeta})^{\top}(2\boldsymbol{I} \boldsymbol{x}-\boldsymbol{K}\boldsymbol{\zeta}). \tag{9}\] The Lagrangian then can be calculated by combining (9) and the inequality constraints in (4) as follows: \[\mathcal{L}(\boldsymbol{x},\boldsymbol{\lambda})= \boldsymbol{x}^{\top}(\gamma\boldsymbol{I}^{\top}\boldsymbol{I} +\alpha\boldsymbol{E}^{\top}\boldsymbol{E}+\beta\boldsymbol{R}^{\top} \boldsymbol{R})\boldsymbol{x}\] \[-\gamma(\boldsymbol{K}\boldsymbol{\zeta})^{\top}(2\boldsymbol{I} \boldsymbol{x}-\boldsymbol{K}\boldsymbol{\zeta})\] \[+\boldsymbol{\lambda}^{\top}(||\boldsymbol{y}-\boldsymbol{x}||_{1} -\boldsymbol{r}). \tag{10}\] Finally, the dual problem can be written as: \[\underset{\boldsymbol{\lambda}}{\text{maximize}}\ g(\boldsymbol{ \lambda}) =\underset{\boldsymbol{x}}{\text{inf}}\Bigg{\{} \tag{11}\] \[\boldsymbol{x}^{\top}(\gamma\boldsymbol{I}^{\top}\boldsymbol{I}+ \alpha\boldsymbol{E}^{\top}\boldsymbol{E}+\beta\boldsymbol{R}^{\top} \boldsymbol{R})\boldsymbol{x}\] \[-\gamma(\boldsymbol{K}\boldsymbol{\zeta})^{\top}(2I\boldsymbol{x }-\boldsymbol{K}\boldsymbol{\zeta})\] \[+\boldsymbol{\lambda}^{\top}(||\boldsymbol{y}-\boldsymbol{x}| |_{1}-\boldsymbol{r})\Bigg{\}}\] subject to \[\boldsymbol{\lambda}\succeq 0,\ \alpha,\beta,\gamma\geq 0,\] which can be solved to find the optimal dual values \(\boldsymbol{\lambda}^{*}\). The use of these dual values is explained in Sec. IV-D. ### _Interpretations_ The \(\boldsymbol{E}\) and \(\boldsymbol{R}\) matrices used here have multiple interpretations, some of which include: Edges and ribs in an elastic map, finite derivatives of functions, Tangent and Laplacian coordinates, smoothing regularization in optimization, and interpolation between points. In this work, they represent the edges and ribs of the elastic map, which are "spring" connections between consecutive points. By increasing or decreasing the stretching and bending stiffness of these springs, a different tuning of the elastic map can be found. Another interpretation of these matrices involves the first and second finite derivatives of a function. Alternatively, [3] uses these matrices to transform demonstrations from Cartesian coordinates to Tangent and Laplacian coordinates. These different coordinate systems represent different properties of the Cartesian demonstrations. The Tangent coordinates represent the tangent vectors along the demonstration, and Laplacian coordinates represent the curvature of the demonstration. In optimization, these terms can be used for smoothing regularization [7]. To penalize variation in a solution, one or more regularization terms are added to the objective function. 
These regularization terms penalize the difference from a solution term to its neighboring solution terms, promoting a more uniform and smoother solution. Using the matrix we have denoted as \(\boldsymbol{R}\) can also be used for interpolation of noise-free data [25]. Given some data points, the unknown points between them can be assumed as an average of the neighbors in addition to some noise. Composing this assumption for all points results in the second-order finite difference matrix. ### _Perturbation Analysis_ The optimization problem in (4), and in general any optimization problem written in the standard form (considered the _primal_), can be used to form a Lagrangian _dual_ function. One of the main advantages of the dual problem is that it is concave, even when the primal is not convex. Another important property of this formulation is that the optimal value of the Lagrangian dual problem, denoted as \(d^{*}\), is the best lower bound on the primal optimal value, denoted \(p^{*}\). In our formulation, the optimal duality gap (the distance between \(d^{*}\) and \(p^{*}\)) is zero, so we say that _strong duality_ holds. This is correct because our primal problem is convex. For each constraint in an optimization problem, there is an associated optimal dual value related to the dual problem, with the optimal dual solution \(\lambda_{i}^{*}\) relating to the \(i\)th constraint. These dual solutions are related to how the optimal value, denoted \(p^{*}\), changes when that constraint is perturbed. Perturbing a constraint changes the constraint by some small value \(u\). For example, the constraint \(x_{1}-x_{2}\leq 0\) would be perturbed to \(x_{1}-x_{2}\leq u\). The optimal value for the original and perturbed constraint is denoted as \(p^{*}(0)\) and \(p^{*}(u)\), respectively. The following rules apply when perturbing constraints [7]: * If \(\lambda_{i}^{*}\) is large and the \(i\)th constraint is tightened (\(u<0\)) then the optimal value \(p^{*}(u)\) increases greatly. * If \(\lambda_{i}^{*}\) is small and the \(i\)th constraint is loosened (\(u>0\)) then the optimal value \(p^{*}(u)\) will not decrease greatly. * If \(\lambda_{i}^{*}\) is 0 then the \(i\)th constraint has no effect on the optimal value. For all problems with strong duality when Slater's condition is satisfied, the following global inequality holds for all \(u\)[7]. \[p^{*}(u)\geq p^{*}(0)-\lambda^{*}u. \tag{12}\] If the optimal value changes slightly, then the optimal solution also changes slightly. This can be leveraged when reproducing demonstrations to measure confidence in the reproduction in the presence of perturbations. Additionally, if the optimal value does not change at all, then the optimal solution does not change either. Therefore any constraints applied with \(\lambda_{i}^{*}=0\) are unnecessary and can be removed. ### _Constraints Formulation_ We consider constraints for initial, final, or via-points and/or obstacle avoidance. These are formulated as inequalities, which allows for flexibility in reproductions up to a given bound. We define a point constraint as \[f_{i}(x)=||y-x_{j}||_{1}-r, \tag{13}\] where \(y\) is the point to constrain to, and \(r\) is some safe radius around that point (see Fig. 3). This forces a node \(x_{j}\) to be within some radius \(r\) around a specified point \(y\). For \(r=0\) this can be made an exact constraint. This formulation can work for initial, final, or via-point constraints. The perturbed version of this constraint is \(f_{i}(x)\leq u\), where \(u\geq-r\). 
This perturbs the safety radius around the constrained point, either tightening (\(u\leq 0\)) or loosening (\(u\geq 0\)) the constraint. For obstacle avoidance constraints, we assume that the demonstrations have successfully avoided the obstacle. If this is not the case, new demonstrations can generated using methods such as TLFSD [2]. Under this assumption, we create safety regions around the original demonstrations which avoid the obstacle. This is formulated as a point constraint, where \(y=\zeta_{j}\), the closest point on the demonstration which has avoided the obstacle, and \(r=||\zeta_{j}-b||_{1}\), a safety radius from the demonstration to the obstacle, where \(b\) is the closest obstacle. The full constraint is written as \[f_{i}(x)=||\zeta_{j}-x_{j}||_{1}-||\zeta_{j}-b||_{1}, \tag{14}\] This constraint is perturbed in the same way as point constraints, with the bounds \(-||\zeta_{j}-b||_{1}\leq u\leq 0\). Additionally, we can use the property that if constraint \(f_{i}(\mathbf{x})\) has \(\lambda_{i}^{*}=0\), it does not affect the optimal value or solution. Therefore, for obstacle avoidance constraints, we apply an obstacle avoidance constraint to each point along the reproduction, solve equation (4), and then remove all constraints with \(\lambda_{i}^{*}=0\). This leaves only the constraints which affect the reproduction and change the reproduction when perturbed. Note that this method works for convex and non-convex obstacles. ### _Confidence Measurement_ We wish to leverage the properties of perturbing constraints to measure confidence in reproductions. Given a reproduction generated under certain constraints, loosening one of those constraints would increase the confidence in that reproduction in the new environment, and vice versa for tightening constraints. To establish a scale for confidence, the tightest a perturbation can be is the lower bound, which is \(-r\). As an upper bound for point constraints, there exists a region for which loosening the constraint no longer significantly decreases the optimal value, an example of which is shown in Fig. 3. That is, \(p^{*}(u)\approx p^{*}(v)\) for \(v>u\). We denote the start of this region \(u_{upper}\), and establish a scale for confidence \(\sigma_{c}=(u+r)/(u_{upper}+r)\), such that \(0\leq\sigma_{c}\leq 1\). For obstacle constraints, we consider the upper bound the original \(u=0\), as this is the loosest constraint which guarantees obstacle avoidance. We can measure the confidence using this scale of perturbations and given how \(p^{*}\) changes with \(\lambda_{i}^{*}\). For obstacle avoidance constraints, there may be multiple constraints which need to be perturbed (i.e., in the case where two or more different points along a reproduction are in near collision with an obstacle. In this case, we apply a confidence factor \(\sigma_{s}\) to all obstacle avoidance constraints, where \(0\leq\sigma_{s}\leq 1\). A higher confidence factor tightens the constraints and increases confidence in reproductions, whereas a lower confidence factor loosens the constraints. Using this factor, \(u\) values are generated such that \(u=-\sigma_{s}||\zeta_{j}-b||_{1}\). This confidence factor is selected according to the user. ## V Experiments ### _Experimental Setup_ We validate our approach using several simulated and real-world experiments. We analyze different constraints, the perturbation of constraints, and how these perturbations relate to confidence in reproductions. 
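The perturbation sweep and the confidence scale \(\sigma_{c}\) described above can be prototyped in a few lines. The sketch below reuses the names (objective, x, y, j, r) from the illustrative cvxpy primal given earlier; the flatness tolerance used to locate \(u_{upper}\) is an assumption for illustration, not a value from the paper.

```python
# Sketch: sweep the perturbation u of a point constraint f_i(x) <= u and map
# it to sigma_c = (u + r)/(u_upper + r). Reuses objective, x, y, j, r from the
# illustrative primal sketch above.
import cvxpy as cp
import numpy as np

def confidence_sweep(objective, x, y, j, r, n_steps=30, tol=1e-6):
    """Return perturbations u, optimal values p*(u), dual values, and sigma_c."""
    us = np.linspace(-0.9 * r, 5.0 * r, n_steps)
    vals, duals = [], []
    for u in us:
        con = cp.norm1(y - x[j]) - r <= u
        prob = cp.Problem(cp.Minimize(objective), [con])
        prob.solve()
        vals.append(prob.value)
        duals.append(float(con.dual_value))
    vals = np.array(vals)
    # u_upper: first u beyond which loosening no longer decreases p*(u).
    flat = np.where(np.abs(np.diff(vals)) < tol * max(1.0, abs(vals[0])))[0]
    u_upper = us[flat[0]] if len(flat) else us[-1]
    sigma_c = np.clip((us + r) / (u_upper + r), 0.0, 1.0)
    return us, vals, np.array(duals), sigma_c

# Constraints whose dual value is (numerically) zero do not affect the optimum
# and can be removed, as noted above.
```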
All experiments are performed using code1 written in Python 3.8 utilizing the cvxpy library [26]. Uniform weights (i.e., \(w_{1}=w_{2}=...=w_{n}\)) are used for the approximation energy, and the bending and stretching constants are tuned manually. Footnote 1: [https://github.com/brenhertel/LfD-Perturbations](https://github.com/brenhertel/LfD-Perturbations) ### _Perturbing a Via-Point Constraint_ We first validate our approach in a simulated reaching environment. In this experiment, shown in Fig. 3 (left), a demonstration is given that reaches from an initial point towards a specified endpoint, but a via-point is given with tight upper and lower limits. In this experiment, we only perturb around a single axis as shown in Fig. 3 (right), but multiple axes can be used for perturbations. Additionally, the bounded region is tight but non-zero.Using \(r=0\) generates values for \(\lambda_{i}^{*}\) which are inaccurate. A reproduction is generated using this set of constraints, shown in red in Fig. 3 (left). We wish to analyze how the reproduction may change if the constraint is perturbed. Multiple \(u\) values are generated which loosen the constraint. The result of these perturbations on the optimal value is shown with cyan dots in Fig. 3 (center). Loosening the constraint decreases the cost of the optimal reproduction, resulting in smoother reproductions. These costs remain above the lower bound set by inequality in (12), shown in black. The constraint is loosened until increasing \(u\) would no longer decrease \(p^{*}(u)\). These perturbed reproductions are plotted in Fig. 3 (right), where opacity corresponds to confidence. Reproductions found under tighter constraints are considered more confident than those with looser constraints. The reproduction found under the tightest constraints is considered to have a confidence of 1, whereas a reproduction generated under the loosest constraints (when loosening the constraint no longer decreases the optimal value) has a confidence of 0. This experiment shows how confidence in reproductions can be measured by perturbing a constraint. ### _Perturbing Obstacle Avoidance Constraints_ In this experiment, we show how constraints can be applied to obstacle avoidance and how confidence can be measured in obstacle avoidance skills. This experiment, shown on the left in Fig. 4, involves a demonstration (black) reaching around an obstacle (green). Simply generating a reproduction does not successfully avoid the obstacle (blue), so obstacle avoidance constraints are applied, resulting in a reproduction which narrowly avoids the obstacle (red). Several confidence factors ranging from 0 to 1 are shown on the right of Fig. 4, with the confidence factor shown in opacity. It can be seen that as the confidence factor increases, the reproduction avoids the obstacle by a larger margin. The confidence factor directly correlates to confidence in the reproduction, with higher factors more similar to the demonstration, but more constrained. Some reproductions of very high confidence contain jagged corners, high-jerk features which may be undesirable in robot reproductions. ### _Perturbing Reproductions in the Real World_ Finally, we use a real-world Kinova Jaco2 7DOF manipulator arm in a door opening skill. An electrical box (shown in Figs. 1, 2, and represented in brown in Fig. 5) is placed in front of the arm which must be opened. 
A human demonstrates the task through kinesthetic teaching, where the arm is guided towards the edge of the box and pulled back to open the door.2 The demonstration successfully completes the skill, but is jagged and suboptimal. Therefore, we wish to establish a level of confidence which is able to successfully complete the skill while incorporating features of the LfD representation such as smoothness and generalization. We apply obstacle avoidance constraints and generate three reproductions with confidence factors 0, 0.5, and 1, shown in Fig. 5. All reproductions are computed a priori and executed in the same real-world environment in which the demonstration was recorded, where the reproductions with confidence levels of 0.5 and 1 successfully reproduce the task. Of these successful reproductions, the lower confidence level is more desirable, as it is smoother than the reproduction with a confidence factor of 1. Higher confidence levels, while more successful at completing tasks, are tightly constrained, which may cause undesirable features in reproductions, such as the high-jerk movements seen in the \(\sigma_{s}=1\) reproduction shown in Fig. 5. This shows how different confidence levels can be used to find tradeoffs between desired features in the reproduction and demonstration while successfully completing the intended skill. Footnote 2: See accompanying video: [https://youtu.be/IQxbhEiNbk](https://youtu.be/IQxbhEiNbk) Fig. 3: The perturbation analysis process shown in a simulated environment over a via-point constraint, with several reproductions of varying confidence calculated. Left: solution to the original problem with endpoint and via-point constraints. Center: perturbation analysis of the via-point constraint. The optimal value decreases when the constraint is loosened, leading to a smoother but less confident reproduction. Right: several reproductions of varying confidence levels. Confidence is shown with opacity. Fig. 4: Left: optimal reproduction for an obstacle avoidance task with (red) and without (blue) constraints for obstacle avoidance. Right: reproductions of varying levels of confidence, where confidence is shown with opacity. Fig. 5: Demonstration and reproductions with different confidence factors of opening a real-world box. Individual x and y dimensions are shown to highlight differences in reproductions. Higher confidence factors generalize less, but are more confident in the ability for the reproduction to successfully complete the skill. ## VI Conclusions and Future Work We have presented a method for finding confidence in reproductions through perturbation analysis. We have shown how the estimated confidence values can be used to inform users or future algorithms about various properties of reproductions and the safety of trajectory execution. Additionally, we have validated the utility of this technique through several experiments, including using via-point generalization and obstacle avoidance constraints. Beyond what we have shown here, there are several opportunities for future work. Firstly, we measure confidence by comparing a reproduction to the demonstration, assuming the demonstration to be the "most confident." This may not always be the case because demonstrations may be noisy or otherwise sub-optimal. Instead, information from the surrounding environment could be used to find a better measure of confidence. Another avenue for future work is online adaptation of trajectories based on confidence. 
This paper considers stationary environments with trajectories computed a priori and executed with a low-level non-reactive controller. Real-world environments, however, are non-stationary and challenging, and require robots to adapt on-the-fly to the dynamic changes. ## Acknowledgements This work was supported by the U.S. Office of Naval Research (N00014-21-1-2582). Additional thanks to Ryan Donald for his assistance.
2302.08163
Holographic dual of extended black hole thermodynamics
By respecting the conformal symmetry of the dual CFT, and treating the conformal factor of the AdS boundary as a thermodynamic parameter, we formulate the holographic first law that is exactly dual to the first law of extended black hole thermodynamics with variable cosmological constant but fixed Newton's constant.
Moaathe Belhaj Ahmed, Wan Cong, David Kubizňák, Robert B. Mann, Manus R. Visser
2023-02-16T09:21:06Z
http://arxiv.org/abs/2302.08163v2
# Holographic dual of extended black hole thermodynamics ###### Abstract By respecting the conformal symmetry of the dual CFT, and treating the conformal factor of the AdS boundary as a thermodynamic parameter, we formulate the holographic first law that is exactly dual to the first law of extended black hole thermodynamics with variable cosmological constant but fixed Newton's constant. _Extended black hole thermodynamics_, also known as _black hole chemistry_[1; 2], is one of the major developments in classical black hole thermodynamics in recent years. The idea stems from reconsidering the thermodynamics of asymptotically Anti-de Sitter (AdS) black holes in the context of a variable cosmological constant \(\Lambda\)[3; 4], which is treated as a thermodynamic pressure according to the following (perfect fluid) prescription: \[P=-\frac{\Lambda}{8\pi G_{N}}=\frac{(d-1)(d-2)}{16\pi G_{N}L^{2}}\,, \tag{1}\] where \(G_{N}\) is Newton's constant, \(L\) is the AdS radius, and \(d\) stands for the number of (bulk) spacetime dimensions. This identification allows one to define the black hole volume \(V\) and introduces the standard pressure-volume term into black hole thermodynamics. Namely, we now have the extended first law, together with the corresponding generalized Smarr relation [5]: \[\delta M =T\delta S+V\delta P+\Phi\delta Q+\Omega\delta J\,, \tag{2}\] \[M =\frac{d-2}{d-3}(TS+\Omega J)+\Phi Q-\frac{2}{d-3}PV\,, \tag{3}\] with the two being related by a dimensional scaling argument (resulting in \(d\)-dependent factors in the Smarr relation). The key result to emerge from the black hole chemistry approach is the realization that AdS black holes exhibit phase transitions that are fully analogous to those of ordinary thermodynamic systems. In particular, one observes phase transitions a la Van der Waals [6; 7], reentrant phase transitions [8], isolated critical points [9; 10], superfluid like behavior [2], and multicritical points [11]. Very recently a mechanism for the higher-dimensional origin of a dynamical cosmological constant was proposed [12]. However, the _holographic interpretation_ of extended thermodynamics remained unclear for many years. The first attempts [13; 14; 15; 16; 17] suggested that, according to the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence [18], the \(V\delta P\) term should be related to a \(\mu\delta C\) term in the dual CFT, where \(C\) is the central charge and \(\mu\) the thermodynamically conjugate chemical potential. For holographic CFTs dual to Einstein gravity the dictionary for the central charge is \[C\propto\frac{L^{d-2}}{G_{N}}\,, \tag{4}\] where the proportionality constant depends on the normalization of \(C\), which is irrelevant for the discussion below. The situation is not that simple, however, as it is also standard to identify the curvature radius of the spatial geometry on which the CFT is formulated with the AdS radius \(L\)[19]. Namely, the boundary metric of the dual CFT is obtained by the conformal completion of the bulk AdS spacetime and reads [20; 21] \[ds^{2}=\omega^{2}\Big{(}-dt^{2}+L^{2}d\Omega_{k,d-2}^{2}\Big{)}\,, \tag{5}\] where \(\omega\) is an 'arbitrary' dimensionless conformal factor, a function of boundary coordinates, that reflects the conformal symmetry of the boundary theory. 
For \(k=1\)\(d\Omega_{k,d-2}^{2}\)is the metric on a unit \((d-2)\)-dimensional sphere, for \(k=0\) it is the dimensionless metric \(\frac{1}{L^{2}}\sum_{i}dx_{i}^{2}\) on the plane, and for \(k=-1\) it is the unit metric on hyperbolic space \(du^{2}+\sinh^{2}(u)d\Omega_{k=1,d-3}^{2}\). The standard choice is to set \(\omega=1\), in which case the CFT volume \(\mathcal{V}\) is proportional to \(L^{d-2}\). Consequently a variation of the cosmological constant in the bulk induces also a variation of the CFT volume \(\mathcal{V}\). Hence a pressure-volume work term, \(-p\delta\mathcal{V}\), should be present on the CFT side. This implies that either (i) the corresponding CFT first law is _degenerate_ as the \(\mu\delta C\) and \(-p\delta\mathcal{V}\) terms are not truly independent (leaving the CFT interpretation of black hole chemistry a bit obscure), or (ii) apart from varying the cosmological constant we also have to vary Newton's gravitational constant \(G_{N}\) so that the variations of \(\mathcal{V}\) and \(C\) are independent [19]. Alternatively, in the spirit of the second option, the authors of [22] have proposed the so-called _restricted phase space_ (RPS) formalism, where the CFT volume \(\mathcal{V}\propto L^{d-2}\) is kept fixed. This leaves only the \(\mu\delta C\) term on the CFT side coming from a variable \(G_{N}\) in the bulk. The resultant holographic thermodynamics thus has nothing to do with the original black hole chemistry. In this note we generalize the approach developed in [23] to find the holographic first law that is dual to (2) while avoiding both of the above mentioned problems. Namely, in order to capture the above rescaling freedom of the CFT, while in the setting of equilibrium thermodynamics, in what follows we shall treat \(\omega\) as a (dimensionless) thermodynamic parameter (similar to the horizon radius or AdS radius), rather than a function of the boundary coordinates. This will make the volume and central charge independent variables. This is not without precedent. For the \(k=0\) planar AdS black brane case, variations of volume \(\mathcal{V}\) and central charge \(C\) are clearly independent; varying the former corresponds to changing the number of points in the system, whereas varying the latter corresponds to varying the number of degrees of freedom at each point. Since the planar case can be reached as a limit of the \(k=1\) spherical case, it is reasonable to expect this independence to extend to non-planar cases. Consequently, rather than using the standard choice \(\omega=1\), we regard \(\omega\) as another variable. This effectively amounts to changing the CFT volume, which is now proportional to \[\mathcal{V}\propto(\omega L)^{d-2}\,. \tag{6}\] In [23; 24] we considered the choice \(\omega=R/L\) with \(R\) being a constant boundary curvature radius, but here we allow the conformal factor to be a generic parameter which need not depend on \(L\). For the Einstein-Maxwell Lagrangian density \(\mathcal{L}=\frac{1}{16\pi G_{N}}(R-2\Lambda)-\frac{1}{4}F^{2}\) this results in the following generalized dictionary between the bulk (without tildes) and dual CFT (with tildes) thermodynamic quantities: \[\tilde{S}=S=\frac{A}{4G_{N}}\,,\quad\tilde{E}=\frac{M}{\omega} \,,\quad\tilde{T}=\frac{T}{\omega}\,,\quad\tilde{\Omega}=\frac{\Omega}{ \omega}\,,\] \[\tilde{J}=J,\quad\tilde{\Phi}=\frac{\Phi\sqrt{G_{N}}}{\omega L} \,,\quad\tilde{Q}=\frac{QL}{\sqrt{G_{N}}}\,. 
\tag{7}\] If we now allow the bulk curvature radius \(L\) to vary, while _holding \(G_{N}\) fixed_, the variation of the central charge \(C\), (4), is then purely induced by variations of \(L\). Analogously to the calculation in [23] (see also [24; 25]), it is straightforward to show using (3) that the extended first law (2) can be rewritten as follows: \[\delta\Big{(}\frac{M}{\omega}\Big{)} =\frac{T}{\omega}\delta\Big{(}\frac{A}{4G_{N}}\Big{)}+\frac{\Omega}{\omega}\delta J+\frac{\Phi\sqrt{G_{N}}}{\omega L}\delta\Big{(}\frac{QL}{\sqrt{G_{N}}}\Big{)}\] \[+\Big{(}\frac{M}{\omega}-\frac{TS}{\omega}-\frac{\Omega J}{\omega}-\frac{\Phi Q}{\omega}\Big{)}\frac{\delta(L^{d-2}/G_{N})}{L^{d-2}/G_{N}}\] \[-\frac{M}{\omega(d-2)}\frac{\delta(\omega L)^{d-2}}{(\omega L)^{d-2}}, \tag{8}\] or simply as \[\delta\tilde{E}=\tilde{T}\delta S+\tilde{\Omega}\delta J+\tilde{\Phi}\delta\tilde{Q}+\mu\delta C-p\delta\mathcal{V}\,, \tag{9}\] where, using (4), (6) and (7), \[\mu =\frac{1}{C}(\tilde{E}-\tilde{T}S-\tilde{\Omega}J-\tilde{\Phi}\tilde{Q})\,, \tag{10}\] \[p =\frac{\tilde{E}}{(d-2)\mathcal{V}}\,. \tag{11}\] The CFT first law (9) is no longer degenerate, as both \(\mathcal{V}\) and \(C\) can now be independently varied. Together with relations (10) and (11), it is exactly dual to the first law of extended black hole thermodynamics. As is obvious from the derivation, the variation of \(\Lambda\) does not only enter in the variation of the central charge, but it also appears in the dictionary for the spatial volume and electric charge. The variation of \(\Lambda\) (the \(V\delta P\) term in (2)) has thus been split into several pieces and is related to the variation of the volume, electric charge and central charge of the CFT. Eq. (11) is the equation of state for conformal theories, which is derivable from the scaling symmetry of the CFT. Moreover, (10) is the Euler relation for holographic CFTs, which can be derived on the CFT side from the proportionality of the thermodynamic quantities with the central charge, \(\tilde{E},\tilde{S},\tilde{J},\tilde{Q}\propto C\), which occurs in the deconfined phase (that is dual to an AdS black hole geometry). We note the absence of a \(-p\mathcal{V}\) term in the Euler relation, which reflects the fact that the internal energy is not an extensive variable on compact spaces at finite temperature in the deconfined phase. This is not an issue, as claimed in [22], but rather a feature of holographic CFTs. In the high-temperature or large-volume regime, i.e. \(\omega L\tilde{T}\gg 1\), the \(\mu C\) term becomes equal to \(-p\mathcal{V}\), and hence the energy becomes extensive. As explained in [19; 23], the Euler relation (10) is dual to the Smarr formula (3) for AdS black holes. The latter relation contains dimension dependent factors, whereas the former does not. We can understand this by expressing the \(PV\) term in the Smarr formula in terms of a partial derivative of the CFT energy \[-2PV=L\left(\frac{\partial M}{\partial L}\right)_{A,J,Q,G_{N}}=L\omega\left(\frac{\partial\tilde{E}}{\partial L}\right)_{A,J,Q,G_{N}}\,. \tag{12}\] 
Hence, the dictionary (4), (6) and (7) implies that \[\left(\frac{\partial\tilde{E}}{\partial L}\right)_{A,J,Q,G_{N}}=\frac{1}{L}(\tilde{\Phi}\tilde{Q}+(d-2)\mu C-(d-2)p\mathcal{V}) \tag{13}\] \[=\frac{1}{L}((d-3)(\tilde{E}-\tilde{\Phi}\tilde{Q})-(d-2)(\tilde{\Omega}J+\tilde{T}S))\,,\] where we inserted the Euler relation and the equation of state to obtain the last equality. By combining (12) and (13), and using the holographic dictionary we recover the Smarr relation. Since the Euler relation and equation of state follow from the scaling of the thermodynamic quantities with \(C\) and with \(\mathcal{V}\), respectively, one can eliminate some of the terms in the first law (9) by rescaling the CFT quantities. In particular, using (11) and rescaling some of the CFT quantities by \(\omega L\) we can eliminate the \(-p\delta V\) term, to obtain the laws \[\delta\hat{E} =\hat{T}\delta S+\hat{\Omega}\delta J+\tilde{\Phi}\delta\tilde{Q}+\hat{\mu}\delta C\,, \tag{14}\] \[\hat{E} =\hat{T}S+\hat{\Omega}J+\tilde{\Phi}\tilde{Q}+\hat{\mu}C\,, \tag{15}\] for the rescaled (dimensionless) quantities: \[\hat{E} =\omega L\tilde{E}\,,\quad\hat{T}=\omega L\tilde{T}\,,\quad\hat{\Omega}=\omega L\tilde{\Omega}\,,\] \[\hat{\Phi} =\omega L\tilde{\Phi}\,,\quad\hat{\mu}=\omega L\mu\,. \tag{16}\] The advantage of (16) is that all thermodynamic quantities are now scale invariant, and so the thermal description respects the symmetries of the CFT. While the laws in (14) and (15) are formally the same as those of the recently proposed RPS [22], their physical interpretation is different. In our case the CFT lives on a geometry with an arbitrary curvature radius \(\omega L\), distinct from the AdS radius \(L\). At the same time we allow the central charge \(C\) to vary. Contrary to the RPS approach, this variation is induced by a variable cosmological constant in the bulk, rather than a variable Newton's constant. Perhaps more interesting is to employ the Euler relation (10) to eliminate the \(\mu\delta C\) term from (9), yielding \[\delta\bar{E} =\tilde{T}\delta\bar{S}+\tilde{\Omega}\delta\tilde{J}+\tilde{\Phi}\delta\bar{Q}-\bar{p}\delta\mathcal{V}\,, \tag{17}\] \[\bar{E} =(d-2)\bar{p}\mathcal{V}\,, \tag{18}\] with the rescaled quantities: \[\bar{E}=\frac{\tilde{E}}{C}\,,\quad\bar{S}=\frac{S}{C}\,,\quad\bar{J}=\frac{J}{C}\,,\quad\bar{Q}=\frac{\tilde{Q}}{C}\,,\quad\bar{p}=\frac{p}{C}\,. \tag{19}\] These quantities are no longer proportional to \(C\), i.e., they are \(\mathcal{O}(C^{0})\). The advantage of these laws is that all thermodynamic quantities keep their correct dimensionality. Moreover, for fixed \(C\) one recovers the 'standard' thermodynamic first law, with \(\bar{E}\) interpreted as internal energy. Of course, the rescalings in (16) and (19) can be combined together, to obtain a dimensionless CFT law for the rescaled quantities without \(-p\delta\mathcal{V}\) and \(\mu\delta C\) terms. However, both the central charge \(C\) and the CFT volume \(\mathcal{V}\) can remain dynamical quantities in this first law. To summarize, we have established an exact duality between the extended black hole thermodynamics and the CFT description. 
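As a quick check of the claim that combining (12) and (13) reproduces the Smarr relation, substituting (13) into (12) and using the dictionary (7) (so that \(\omega\tilde{E}=M\), \(\omega\tilde{\Phi}\tilde{Q}=\Phi Q\), \(\omega\tilde{\Omega}J=\Omega J\) and \(\omega\tilde{T}S=TS\)) gives \[-2PV=\omega\Big{[}(d-3)(\tilde{E}-\tilde{\Phi}\tilde{Q})-(d-2)(\tilde{\Omega}J+\tilde{T}S)\Big{]}=(d-3)(M-\Phi Q)-(d-2)(TS+\Omega J)\,,\] which rearranges to \(M=\frac{d-2}{d-3}(TS+\Omega J)+\Phi Q-\frac{2}{d-3}PV\), i.e. the Smarr formula (3); this elementary step is spelled out here only for the reader's convenience.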
Whereas the bulk first law (2) has one extra (\(V\delta P\)) term and is accompanied by a single Smarr relation (3), to reflect the scaling symmetry of the dual CFT, the corresponding CFT first law (9) has two extra terms and is accompanied by two relations: the Euler equation (10) and the equation of state (11). For this reason, the variation of \(\Lambda\) in the bulk corresponds to both changing the CFT central charge \(C\) and the CFT volume \(\mathcal{V}\). Furthermore, the corresponding \(\mu\delta C\) and \(-p\delta\mathcal{V}\) terms can be further eliminated by using (10) and (11). In this way one formally recovers the laws of the RPS, or the striking (17) - but we emphasize that our approach differs from [22] in that we keep \(G_{N}\) fixed but allow the AdS radius \(L\) and conformal factor \(\omega\) to independently vary. Finally, it would be interesting to generalize the holographic dual of extended black hole thermodynamics to other geometries, e.g., de Sitter spacetime, and other gravitational theories, e.g., higher curvature gravity. ## Acknowledgements We would like to thank Roberto Emparan and David Mateos for useful discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. D.K. is grateful for support from GACR 23-07457S grant of the Czech Science Foundation. M.R.V. is supported by SNF Postdoc Mobility grant P500PT-206877 "Semi-classical thermodynamics of black holes and the information paradox".
2303.06267
An alternate proof of Payan's theorem on cubelike graphs
A cubelike graph is a Cayley graph on the product $\mathbb{Z}_2\times\cdots\times\mathbb{Z}_2$ of the integers modulo $2$ with itself finitely many times. In 1992, Payan proved that no cubelike graph can have chromatic number $3$. The authors of the present paper previously developed a general matrix method for studying chromatic numbers of Cayley graphs on abelian groups. In this note, we apply this method of Heuberger matrices to give an alternate proof of Payan's theorem.
Jonathan Cervantes, Mike Krebs
2023-03-11T01:10:01Z
http://arxiv.org/abs/2303.06267v1
# An alternate proof of Payan's theorem on cubelike graphs ###### Abstract A cubelike graph is a Cayley graph on the product \(\mathbb{Z}_{2}\times\cdots\times\mathbb{Z}_{2}\) of the integers modulo \(2\) with itself finitely many times. In 1992, Payan proved that no cubelike graph can have chromatic number \(3\). The authors of the present paper previously developed a general matrix method for studying chromatic numbers of Cayley graphs on abelian groups. In this note, we apply this method of Heuberger matrices to give an alternate proof of Payan's theorem. _Keywords--_ graph, chromatic number, abelian group, Cayley graph, cube-like graph, Payan's theorem ## 1 Introduction Given a finite set \(A\), we take \(\mathcal{P}(A)\) to be the power set of \(A\). We have that \(\mathcal{P}(A)\) is an abelian group under the operation of symmetric difference, that is, \(X\vartriangle Y=(X\smallsetminus Y)\cup(Y\smallsetminus X)\). A _cubelike graph_ is a Cayley graph whose underlying group is \(\mathcal{P}(A)\). Equivalently, writing \(A=\{x_{1},\ldots,x_{n}\}\) and identifying the set \(X\subset A\) with the \(n\)-tuple whose \(i\)th component is \(1\) if \(x_{i}\in X\) and is \(0\) otherwise, a cubelike graph can be regarded as a Cayley graph whose underlying group is an \(n\)-fold product \(\mathbb{Z}_{2}^{n}=\mathbb{Z}_{2}\times\cdots\times\mathbb{Z}_{2}\), where \(\mathbb{Z}_{2}\) is the group of the integers modulo \(2\) under addition. Chromatic numbers of cubelike graphs have been studied by many authors. One notable result is due to Payan [4], who proved that the chromatic number of a nonbipartite cubelike graph is always at least \(4\). That is, the chromatic number of a cubelike graph cannot equal \(3\). Publications with other results on chromatic numbers of cube-like graphs include [3] and [2, Section 9.7]. Payan's proof is rather clever. It is, however, somewhat _ad hoc_. The purpose of the present note is to furnish an alternate proof of Payan's theorem, one that may lend itself naturally to generalizations. Indeed, in [1], the authors put forward a general method for approaching the problem of finding the chromatic number of a Cayley graph on an abelian group. We show that Payan's theorem falls out quite naturally as a byproduct of this "method of Heuberger matrices." The other key ingredient in our proof is a special case of Payan's theorem due to Sokolova [5], who computed that even-dimensional cubes-with-diagonals (defined below) have chromatic number 4. The key idea of our proof of Payan's theorem is that if a cubelike graph is nonbipartite, then there is a graph homomorphism to it from an even-dimensional cube-with-diagonals. The Heuberger matrices make transparent the existence of this homomorphism. This note depends heavily on [1], which we will refer to frequently. The reader should assume that all notation, terminology, and theorems used but not explained here are explained there. ## 2 Payan's theorem In this section we prove the following theorem. **Theorem 2.1** ([4]).: _A cube-like graph cannot have chromatic number 3._ Throughout this section we take cube-like graph to be a Cayley graph on \(\mathbb{Z}_{2}^{n}\). A special case of Theorem 2.1 had previously been proven by Sokolova in [5]. We will derive Payan's theorem from Sokolova's theorem, and for that reason we begin by discussing the latter. 
For a positive integer \(n\), the _\(n\)-dimensional cube-with-diagonals graph_\(Q_{n}^{d}\) is defined by \[Q_{n}^{d}=\operatorname{Cay}(\mathbb{Z}_{2}^{n},\{e_{1},\ldots,e_{n},w_{n}\}),\] where \(e_{j}\) is the \(n\)-tuple in \(\mathbb{Z}_{2}^{n}\) with 1 in the \(j\)th entry and 0 everywhere else, and \(w_{n}\) is the \(n\)-tuple in \(\mathbb{Z}_{2}^{n}\) with 1 in every entry. We can visualize \(Q_{n}^{d}\) as a hypercube with edges (called "diagonals," hence the name and the superscript '\(d\)') added to join each pair of antipodal vertices. Sokolova proved that for \(n\) even, \(Q_{n}^{d}\) has chromatic number 4. We present here a condensed version of the proof in [5] of this result. **Theorem 2.2** ([5]).: _If \(n\) is even, then \(\chi(Q_{n}^{d})=4\)._ Proof.: First observe that \((x_{1},\ldots,x_{n})\mapsto(x_{1},x_{2}+\cdots+x_{n})\) defines a group homomorphism from \(\mathbb{Z}_{2}^{n}\) to \(\mathbb{Z}_{2}^{2}\) mapping \(\{e_{1},\ldots,e_{n},w_{n}\}\) to \(\{(1,0),(0,1),(1,1)\}\). So this defines a graph homomorphism from \(Q_{n}^{d}\) to \(Q_{2}^{d}\cong K_{4}\), the complete graph on 4 vertices. Hence \(\chi(Q_{n}^{d})\leq 4\). Next we show that \(Q_{n}^{d}\) is not properly 3-colorable. We do so by induction. For the base case \((n=2)\), we saw previously that \(Q_{2}^{d}\cong K_{4}\), which is not properly 3-colorable. Now assume that \(Q_{n}^{d}\) is not properly 3-colorable, and we will show that \(Q_{n+2}^{d}\) is not properly 3-colorable. Suppose to the contrary that \(c\colon\mathbb{Z}_{2}^{n+2}\to\mathbb{Z}_{3}\) is a proper 3-coloring. For two tuples \(v=(v_{1},\ldots,v_{j})\) and \(u=(u_{1},\ldots,u_{k})\), we define \(v*u=(v_{1},\ldots,v_{j},u_{1},\ldots,u_{k})\). Define \(c^{\prime}\colon\mathbb{Z}_{2}^{n}\to\mathbb{Z}_{3}\) by \(c^{\prime}(v)=k\) if \(\{c(v*(0,0)),c((v+w_{n})*(1,0))\}\) equals either \(\{k\}\) or \(\{k,k+1\}\). A straightforward case-by-case analysis shows that \(c^{\prime}\) is a proper \(3\)-coloring of \(Q_{n}^{d}\), which is a contradiction. _Remark 2.3_.: We briefly digress to remark that Sokolova's theorem can be restated as follows. In any (not necessarily proper) \(3\)-coloring of the vertices of an even-dimensional hypercube, there must exist two antipodal vertices, both of which are assigned the same color. Stated this way, it brings to mind various topological theorems such as the hairy ball theorem and the Borsuk-Ulam theorem. We wonder whether there might be some connection between Sokolova's combinatorial result and one or more of these facts from topology, perhaps along the lines of the connection between Sperner's lemma and the Brouwer fixed point theorem. Using Heuberger matrices, we will now see how Sokolova's theorem implies Payan's theorem. The key idea is to show that every nonbipartite cube-like graph contains a homomorphic image of an even-dimensional cube-with-diagonals graph. Proof of Theorem 2.1.: Let \(X=\operatorname{Cay}(\mathbb{Z}_{2}^{n},S)\) be a nonbipartite cube-like graph. Because \(2x=0\) for all \(x\in S\), there is a Heuberger matrix \(M_{X}\) associated to \(X\) whose last \(m\) columns are \(2e_{1},\ldots,2e_{m}\), where \(m=|S|\). That is, \(M_{X}\) has the form \((A\,|\,2I_{m})\) for some integer matrix \(A\). Here \(I_{m}\) is the \(m\times m\) identity matrix. Using column operations as in [1, Lemma 2.6], we have that \[(A\,|\,2I_{m})_{X}^{\operatorname{SACG}}\cong(A^{\prime}\,|\,2I_{m})_{X}^{ \operatorname{SACG}},\] for some matrix \(A^{\prime}\) whose entries are all in \(\{0,1\}\). 
Because \(X\) is nonbipartite, by [1, Lemma 2.11], some column \(y\) of \(A^{\prime}\) contains an odd number \(z\) of nonzero entries. Hence by [1, Lemma 2.10, parts (4) and (5)], we have homomorphisms \[(w_{z}^{t}\,|\,2I_{z})_{Y}^{\operatorname{SACG}}\mathrel{\mathop{\kern 0.0pt \Leftarrow}\limits_{\tau_{1}}}(y\,2e_{i_{1}}\,\cdots 2e_{i_{z}})^{ \operatorname{SACG}}\mathrel{\mathop{\kern 0.0pt\Leftarrow}\limits_{\tau_{2}}}(A^{ \prime}\,|\,2I_{m})_{X}^{\operatorname{SACG}}\] where \(i_{1},\ldots,i_{z}\) are the indices of the nonzero entries of \(y\), and \(w_{z}^{t}\) is a column vector of length \(z\) with a \(1\) in every entry. For \(\tau_{1}\), we insert zero rows as appropriate; for \(\tau_{2}\) we append the requisite columns. So \(\chi(Y)\leq\chi(X)\) by [1, Lemma 2.9]. If \(z=1\), then \(X\) has loops and is not properly colorable. So assume \(z\geq 3\). Observe that \(Y\cong Q_{z-1}^{d}\). An application of Theorem 2.2 then completes the proof.
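The two ingredients used above, namely the homomorphism \((x_{1},\ldots,x_{n})\mapsto(x_{1},x_{2}+\cdots+x_{n})\) from the proof of Theorem 2.2 and the fact that \(Q_{2}^{d}\cong K_{4}\) is not properly 3-colorable, can be verified mechanically on small cases. The brute-force sketch below (plain Python, no claim of efficiency) is illustrative only:

```python
# Check two small facts from the proofs above:
# (i) (x_1,...,x_n) -> (x_1, x_2+...+x_n) sends edges of Q_n^d to edges of Q_2^d,
# (ii) Q_2^d (= K_4) has no proper 3-coloring.
from itertools import product

def cube_with_diagonals(n):
    """Q_n^d: vertex set Z_2^n, connection set {e_1, ..., e_n, w_n}."""
    verts = list(product((0, 1), repeat=n))
    gens = [tuple(1 if k == i else 0 for k in range(n)) for i in range(n)]
    gens.append(tuple([1] * n))
    add = lambda u, s: tuple((a + b) % 2 for a, b in zip(u, s))
    edges = {frozenset((u, add(u, s))) for u in verts for s in gens}
    return verts, edges

def phi(x):
    """The group homomorphism (x_1, ..., x_n) -> (x_1, x_2 + ... + x_n) mod 2."""
    return (x[0], sum(x[1:]) % 2)

n = 4                                           # an even dimension, as in Theorem 2.2
verts, edges = cube_with_diagonals(n)
k4_verts, k4_edges = cube_with_diagonals(2)     # Q_2^d is K_4

# (i) phi maps every edge of Q_n^d to an edge of K_4 (graph homomorphism).
assert all(frozenset((phi(u), phi(v))) in k4_edges for u, v in edges)

# (ii) K_4 has no proper 3-coloring: exhaust all 3^4 colorings.
assert not any(all(c[k4_verts.index(u)] != c[k4_verts.index(v)] for u, v in k4_edges)
               for c in product(range(3), repeat=4))
print("checked: Q_%d^d -> K_4 is a homomorphism, and K_4 is not 3-colorable" % n)
```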
2303.01508
Fine-grained Emotional Control of Text-To-Speech: Learning To Rank Inter- And Intra-Class Emotion Intensities
State-of-the-art Text-To-Speech (TTS) models are capable of producing high-quality speech. The generated speech, however, is usually neutral in emotional expression, whereas very often one would want fine-grained emotional control of words or phonemes. Although still challenging, the first TTS models have recently been proposed that are able to control the voice by manually assigning emotion intensity. Unfortunately, due to the neglect of intra-class distance, the intensity differences are often unrecognizable. In this paper, we propose a fine-grained controllable emotional TTS that considers both inter- and intra-class distances and is able to synthesize speech with recognizable intensity differences. Our subjective and objective experiments demonstrate that our model exceeds two state-of-the-art controllable TTS models in controllability, emotion expressiveness and naturalness.
Shijun Wang, Jón Guðnason, Damian Borth
2023-03-02T09:09:03Z
http://arxiv.org/abs/2303.01508v2
# Fine-grained emotional control of text-to-speech: ###### Abstract State-of-the-art Text-To-Speech (TTS) models are capable of producing high-quality speech. The generated speech, however, is usually neutral in emotional expression, whereas very often one would want fine-grained emotional control of words or phonemes. Although still challenging, the first TTS models have been recently proposed that are able to control voice by manually assigning emotion intensity. Unfortunately, due to the neglect of intra-class distance, the intensity differences are often unrecognizable. In this paper, we propose a fine-grained controllable emotional TTS, that considers both inter- and intra-class distances and be able to synthesize speech with recognizable intensity difference. Our subjective and objective experiments demonstrate that our model exceeds two state-of-the-art controllable TTS models for controllability, emotion expressiveness and naturalness. Shijun Wang\({}^{1}\), Jon Guanason\({}^{2}\), Damian Borth\({}^{1}\)\({}^{1}\)University of St.Gallen, Switzerland \({}^{2}\)Reykjavik University, Iceland emotional TTS, emotion intensity control, speech emotion analysis ## 1 Introduction Recent end-to-end Text-To-Speech (TTS) models [1, 2, 3, 4, 5] have the capacity to synthesize high-quality speech with neutral emotion. These models are, however, limited when it comes to expressing paralinguistic information such as emotion. It is critical to address this issue because expressing emotion in speech is crucial in many applications such as audiobook generation or digital assistants. Moreover, an additional challenge of current TTS models is the lack of fine-grained controllability of emotion on words or phonemes. Such a drawback results in inflexible control of speech, and failure to meet the context or users' intentions. One straightforward strategy to express different emotions is by conditioning global emotion labels [6, 7]. However, synthesized speech from these models has monotonous emotional expression due to the condition of one global emotion representation. To achieve diverse emotion expression, models like GST [8] apply a token (a single vector) to represent the emotional style of a reference speech, then use this token to influence the synthesis. RFIacotron [9] is an extended work of GST. It uses a sequence of vectors instead of a single token to represent the emotion, which allows the improvement of the robustness and prosody control. Nevertheless, the nuance of references might be difficult to be captured by these models (e.g. one sad and one depressed reference might produce the same synthesized speech), due to a mismatch between the content or speaker of the reference and synthesized speech, which implies the inflexible controllability of these models. A better approach to achieve fine-grained controllable emotional TTS is by manually assigning intensity labels (such as strong or weak happiness) on words or phonemes, which provides a flexible and efficient way to control the emotion expression, even for subtle variations. In [10, 11, 12, 13], Rank algorithms are used to extract emotion intensity information, by following the assumptions: i) speech samples from the same emotion class have similar ranks, and ii) intensity of neutral emotion is the weakest and all other emotions are ranked higher than neutral. Despite the production of recognizable speech samples with different emotion intensity levels, intra-class distance is neglected in these models. 
Specifically, during the training, samples belonging to the same emotion class (for instance, the strongest and weakest happiness) are arbitrarily considered the same. In practice, confusion could happen when we compare a median-level intensity speech with a strong- or weak-level one. In this paper, we propose a TTS model, which outperforms the state-of-the-art fine-grained controllable emotional TTS models. The model is based on a novel Rank model, which is simple yet efficient for extracting emotion intensity information, by taking into account both inter- and intra-distance. Instead of performing rank on a non-neutral and a neutral sample, we use two samples augmented by Mixup [14]. Each augmented sample is a mixture from the same non-neutral and neutral speech. By applying different weights to non-neutral and neutral speech, one mixture contains more non-neutral components than the other one. In other words, one mixture's non-neutral intensity is stronger than that of the other. By learning to rank these two mixtures, our Rank model not only needs to determine the emotion class (inter-class distance), but also has to capture the amount of non-neutral emotion present in a mixed speech, i.e. intensity of non-neutral emotion (intra-class distance). We summarize our contributions as: 1) we propose a fine-grained controllable emotional TTS model based on a novel Rank model. 2) The proposed Rank model is simple and efficient to extract intensity information. 3) Our experimental results demonstrate that our TTS model outperforms two state-of-the-art fine-grained controllable emotional TTS models. Demo page can be found at [https://wshijun1991.github.io/ICASSP2023_DEMO/](https://wshijun1991.github.io/ICASSP2023_DEMO/). ## 2 Approach We train two models. One is a Rank model that aims to extract emotion intensity representations. The other is a backbone TTS model used to generate speech. ### Rank Model Our Rank model is shown in Fig. 1. It maps the speech into intensity representations, then outputs a rank score regarding the emotion intensity. Input \(\mathbf{X}\) is a concatenation of Mel-Spectrogram, pitch contour, and energy. \(\mathbf{X}_{neu}\) indicates an input from neutral class, while \(\mathbf{X}_{emo}\) represents an input from other non-neutral emotion classes. We then perform Mixup augmentation on the pair (\(\mathbf{X}_{neu}\), \(\mathbf{X}_{emo}\)): \[\begin{split}\mathbf{X}_{mix}^{i}&=\lambda_{i} \mathbf{X}_{emo}+(1-\lambda_{i})\mathbf{X}_{neu},\\ \mathbf{X}_{mix}^{j}&=\lambda_{j}\mathbf{X}_{emo}+( 1-\lambda_{j})\mathbf{X}_{neu},\end{split} \tag{1}\] where \(\lambda_{i}\) and \(\lambda_{j}\) are from Beta distribution \(Beta(1,1)\). The Intensity Extractor is then used to extract intensity representations. It first applies the same Feed-Forward Transformer (FFT) in [4] to process the input. We further add an emotion embedding to the output of FFT to produce intensity representations \(\mathbf{I}_{mix}^{i}\) and \(\mathbf{I}_{mix}^{j}\). This embedding is from a look-up table and depends on the emotion class of \(\mathbf{X}_{emo}\). The addition of emotion embedding is to provide information on emotion class, because intensity might vary differently in various emotion classes. From the intensity representations \(\mathbf{I}_{mix}^{i}\) and \(\mathbf{I}_{mix}^{j}\), we then average these two sequences into two vectors \(h_{mix}^{i}\) and \(h_{mix}^{j}\). 
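As a minimal sketch of the Mixup step in Eq. (1) and the pooling into \(h_{mix}^{i}\) and \(h_{mix}^{j}\) (our own PyTorch-style illustration, not the authors' implementation; `intensity_extractor` stands in for the FFT-plus-emotion-embedding module, and both inputs are assumed to be padded to a common length):

```python
import torch

def make_mixture_pair(x_emo, x_neu, emo_label, intensity_extractor):
    """Eq. (1): blend the same non-neutral and neutral features with two different
    Beta(1,1) weights, then average the Intensity Extractor outputs over time."""
    beta = torch.distributions.Beta(1.0, 1.0)
    lam_i, lam_j = beta.sample(), beta.sample()
    x_mix_i = lam_i * x_emo + (1.0 - lam_i) * x_neu
    x_mix_j = lam_j * x_emo + (1.0 - lam_j) * x_neu
    # Intensity representations I_mix (sequences) -> averaged vectors h_mix.
    h_i = intensity_extractor(x_mix_i, emo_label).mean(dim=0)
    h_j = intensity_extractor(x_mix_j, emo_label).mean(dim=0)
    return (h_i, lam_i), (h_j, lam_j)
```

The two mixing weights \(\lambda_{i}\) and \(\lambda_{j}\) double as the ground-truth ordering of intensity that the losses that follow are trained to respect.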
The original Mixup loss is further applied on them: \[\begin{split}\mathcal{L}_{mixup}=\mathcal{L}_{i}+\mathcal{L}_{j},\text{where}\\ \mathcal{L}_{i}=\lambda_{i}\text{CE}(h_{mix}^{i},y_{emo})+(1- \lambda_{i})\text{CE}(h_{mix}^{i},y_{neu}),\\ \mathcal{L}_{j}=\lambda_{j}\text{CE}(h_{mix}^{j},y_{emo})+(1- \lambda_{j})\text{CE}(h_{mix}^{j},y_{neu}),\end{split} \tag{2}\] and CE(\(\cdot,\cdot\)) represents Cross Entropy loss, \(y_{emo}\) indicates labels for non-neutral emotion, while \(y_{neu}\) indicates neutral. Despite the fact that Mixup has been demonstrated as an effective regularization method, there is little evidence showing it is sensitive to the intra-class distance. Thus, apart from \(\mathcal{L}_{mixup}\) (inter-class), we need to introduce another loss to capture intra-class information. Inspired by [15], we first use a Projector (linear layers) to map the pair (\(h_{mix}^{i}\), \(h_{mix}^{j}\)) to a scalar pair (\(r_{mix}^{i}\), \(r_{mix}^{j}\)), where \(r_{mix}\in\mathbb{R}^{1}\). \(r_{mix}\) is a score indicating the amount of non-neutral emotion present in speech, i.e. intensity. To force the model to correctly assign scores, we first feed the score difference into a Sigmoid function: \[p^{ij}=\frac{1}{1+e^{-(r_{mix}^{i}-r_{mix}^{j})}}, \tag{3}\] then we apply the rank loss on it: \[\mathcal{L}_{rank}=-\lambda_{diff}\text{log}(p^{ij})-(1-\lambda_{diff})\text{ log}(1-p^{ij}), \tag{4}\] where \(\lambda_{diff}\) is a normalized result of \(\lambda_{i}-\lambda_{j}\), which means if \(\lambda_{i}>\lambda_{j}\), then \(\lambda_{diff}\in(0.5,1)\); if \(\lambda_{i}<\lambda_{j}\), then \(\lambda_{diff}\in(0,0.5)\); if \(\lambda_{i}=\lambda_{j}\), then \(\lambda_{diff}=0.5\). As an example, if \(\lambda_{i}>\lambda_{j}\) (non-neutral emotion presents more in \(\mathbf{X}_{mix}^{i}\) compared to \(\mathbf{X}_{mix}^{j}\)), then \(\lambda_{diff}>0.5\), in this case, in order to decrease the rank loss in Eq. 4, the model needs to assign a bigger \(r_{mix}^{i}\) for \(\mathbf{X}_{mix}^{i}\), to enable the Sigmoid output in Eq. 3 to be bigger than 0.5. The intuition is forcing the model to correctly rank two samples that both contain non-neutral emotion. To achieve this, the intensity representation \(\mathbf{I}_{mix}\) must convey information that can indicate the intensity of non-neutral emotion. Lastly, we train our Rank model with the total loss: \[\mathcal{L}_{total}=\alpha\mathcal{L}_{mixup}+\beta\mathcal{L}_{rank}, \tag{5}\] where \(\alpha\) and \(\beta\) are the loss weights. ### TTS Model We use FastSpeech2 [4] to convert the phonemes to speech, given intensity information. We maintain the original model configuration, except our Intensity Extractor is combined to provide intensity information. The training of FastSpeech2 is shown in Fig. 2, and we only give a short description of each module here but refer the Figure 1: The training of our Rank model. \(\mathbf{X}_{emo}\) is a speech sample of non-neutral emotion classes, and \(\mathbf{X}_{neu}\) is neutral. The Intensity Extractor produces intensity representations \(\mathbf{I}_{mix}^{i}\) and \(\mathbf{I}_{mix}^{j}\) given mixtures \(\mathbf{X}_{mix}^{i}\) and \(\mathbf{X}_{mix}^{j}\). \(h_{mix}^{i}\) and \(h_{mix}^{j}\) are two averaged vectors. \(\mathcal{L}_{mixup}\) is a weighted cross entropy loss, while \(\mathcal{L}_{rank}\) is to rank \(r_{mix}^{i}\) and \(r_{mix}^{j}\), two scores, regarding the intensity of non-neutral emotion. readers to the original paper [4] for more in-depth description. 
The Phoneme Encoder is to process phoneme and position information. The speaker ID is mapped to speaker embedding to represent speaker characteristics. The Variance Adaptor aims to predict pitch, energy and duration (frame length of each phoneme) based on the input. The decoder generates the final Mel-Spectrogram. To incorporate intensity information, a pre-trained Intensity Extractor is frozen and integrated. And the Variance Adaptor uses phoneme, speaker information, and intensity representation \(\mathbf{I}\) to predict pitch, energy, and duration. We set intensity representations for neutral emotion to zero, since we assume there is no intensity variation for neutral speech. One thing to point is that since the length of \(\mathbf{I}\) is not equal to the phoneme length, we use Montreal Forced Aligner [16] to acquire intensity segments corresponding to each phoneme. Then the intensity segments are averaged to make the lengths of intensity representation and phoneme the same. ### Training and Inference **Training**: We first train our Rank model. Then the Intensity Extractor from the trained Rank model is frozen and combined during the training of FastSpeech2. **Inference**: During inference, we expect to use phonemes and manual intensity labels to control the emotion intensity of synthesized speech. However, our Intensity Extractor can only output intensity representation from speech. In order to achieve controlling intensity with manual labels, we use the following strategy: with a trained Intensity Extractor, we first collect all intensity representations and their intensity scores. Then, we bucket all scores into several bins, where each bin denotes one intensity level (e.g. in our work, we use Min, Median and Max intensity levels, which means we apply three bins). After that, intensity representations corresponding to each intensity level are averaged into a single vector. Finally, we can map manual intensity labels to intensity representations during inference. This strategy is applied to each emotion class, therefore, we can find individual intensity representations by feeding emotion and manual intensity labels. **Implementation Details**: We train the Rank model for 20k iterations with 1e-6 learning rate. For FastSpeech2, we use 250k iterations with 1e-4 learning rate. Adam optimizer is applied for both cases. In Eq. 5, \(\alpha\) and \(\beta\) are respectively set as 0.1 and 1.0. ## 3 Experiments ### Experimental Setup **Dataset**: EmoV-DB [17] is used as our dataset, it contains four speakers and five emotions (Amused, Angry, Disgusted, Neutral and Sleepy). The overall speech samples are around 7 hours and the sampling rate is 16KHz. **Data Preprocessing**: Since FastSpeech2 is used as our backbone TTS, we need to feed Mel-Spectrogram, pitch and energy as inputs. We use 50-millisecond window and a 50 percent overlap ratio to extract energy and Mel-Spectrogram with 80 Mel Coefficients. PyWorld1 is applied to extract pitch. Footnote 1: [https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder](https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder) **Baselines**: We use FEC [11] and RFTacotron [9] as our baselines. Similar to our model, FEC has a Rank model and allows you to assign emotion intensity to each phoneme as well. To ensure a fair comparison, we replace the original Tacotron2 [2] in FEC with our FastSpeech2. 
RFTacotron transfers emotional style from a reference into the synthesized speech, which can be used to control the emotion intensity by applying reference samples with different intensities. We keep using the original Tacotron2 because its attention mechanism is the key part for emotion transfer. **Evaluation Setup**: We use PWGAN [18] to convert generated Mel-Spectrograms to waveforms. To perform objective and subjective evaluations, for each emotion and speaker, we randomly select 5 unseen speech samples, by using their corresponding utterances, 90 utterances (speaker Josh has no data for Angry and Disgusted) are prepared for evaluation. 20 subjects participate in the subjective evaluations. ### Emotion Intensity Controllability In this section, we perform subjective evaluation to detect how recognizable of synthesized speech samples with different intensities (Min, Median or Max). Easily recognizable synthesized speech samples imply that we can efficiently control the emotion intensity by manually assigning intensity labels. Like in [19], for each utterance, we first synthesize three speech samples with three intensity levels, respectively. Then, three pairs (Min-Max, Min-Median and Median-Max) can be acquired and we ask subjects to select which one from a pair contains a stronger intensity. If the one that selected by subjects is the one synthesized with a stronger intensity, then it shows the emotion intensity can be appropriately controlled. In FEC, the intensity rank scores are normalized in \((0,1)\), thus, we refer to scores in \((0,0.33]\) as Min, \((0.33,0.66]\) as Figure 2: The training of FastSpeech2, a trained Intensity Extractor is combined to provide intensity representations. Median, and \((0.66,1.0)\) as Max. Since RFTacotron is unable to assign intensity scores, we use our Rank model (Sec. 2.1) to find the strongest, median and the weakest samples from the dataset as references (based on the rank score \(r\)). We believe it is a fair comparison, because if our Rank model fails, then both RFTacotron and our model should fail. As we can see from the results (Tab. 1), RFTacotron might not be efficient for performing intensity control, in some cases, the intensity difference is not easily perceivable. FEC improves a lot regarding the control of intensity, however, confusion happens when a median-level sample is in the pair. In other words, as we mentioned before, intra-class distance information might be partially lost in FEC. On the other side, our model performs the best compared with baseline models. It not only has the ability to synthesize Max- and Min-level samples, but also be capable of synthesizing recognizable Median-level speech samples. ### Emotion Expressiveness In this section, we conduct preference tests to evaluate whether models can express clear emotions. Since we don't consider intensity here, we only use synthesized samples with median-level intensity. For each utterance, we synthesize three median-level speech samples with our and two baseline models, respectively. Subjects are asked to select the one that conveys more clear emotion. If there is no detectable difference, they should choose "Same". As we can see from the results (Fig. 3), our model significantly outperforms RFTacotron and subjects are barely confused, which suggests that our model's emotion expression is more clear than RFTacotron's. 
FEC also performs well on emotion expressiveness, but our model is preferred despite the same FastSpeech2 is used for both, which implies the benefit is caused by the intensity representation of our Rank model. ### Quality and Naturalness Evaluation We further evaluate quality and naturalness of synthesized speech samples. Objective measurement Mean Cepstral Distortion (MCD) [20], and subjective measurement Mean Opinion Score (MOS) are conducted for this evaluation. Since we only focus on quality and naturalness here, and to be able to compare with ground truth speech, manual intensity labels are not used in this experiment. For FEC and our model, intensity representations are provided by their individual Rank models given ground truth speech. For RFTacotron, we use ground truth speech samples as references. We report MCD and MOS results in Tab. 2. According to MCD results, both FEC and our model outperform RFTacotron largely, this might be because: 1) as opposed to transferring emotion from a reference, directly assigning intensity representations is easier for the model to generate good quality speech. 2) FastSpeech2 requires less data then Tacotron2 for a high quality result. Despite using the same FastSpeech2, the MCD of our model is slightly better than FEC. This is because our intensity representations might also bring benefits for high-quality synthesis. MOS scores (with 95% confidence intervals) reveal a similar phenomenon, where FEC and our model surpass RFTacotron greatly, while our model is slightly better than FEC. ## 4 Conclusion In this paper, we propose a fine-grained controllable emotional TTS, based on a novel Rank model. The Rank model captures both inter- and intra-class distance information, and thus is able to produce meaningful intensity representations. We conduct subjective and objective tests to evaluate our model, the experimental results show that our model surpasses two state-of-the-art baselines in intensity controllability, emotion expressiveness and naturalness. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{Emotion} & \multirow{2}{*}{Models} & \multicolumn{3}{c}{Intensity Pairs} \\ \cline{3-5} & & Min & Median & Min \\ & & -Median & -Max & -Max \\ \hline \multirow{3}{*}{Amused} & RFTacotron & 0.38 & 0.52 & 0.59 \\ & FEC & 0.63 & 0.58 & 0.63 \\ & Ours & **0.66** & **0.60** & **0.74** \\ \hline \multirow{3}{*}{Angry} & RFTacotron & 0.53 & 0.59 & 0.60 \\ & FEC & 0.59 & 0.58 & 0.73 \\ & Ours & **0.65** & **0.67** & **0.75** \\ \hline \multirow{3}{*}{Sleepy} & RFTacotron & 0.39 & 0.45 & 0.52 \\ & FEC & 0.56 & 0.54 & 0.64 \\ & Ours & **0.65** & **0.73** & **0.83** \\ \hline \multirow{3}{*}{Disgusted} & RFTacotron & 0.48 & 0.51 & 0.53 \\ & FEC & 0.57 & 0.63 & 0.72 \\ & Ours & **0.72** & **0.67** & **0.75** \\ \hline \hline \multirow{3}{*}{Average} & RFTacotron & 0.45 & 0.52 & 0.56 \\ & FEC & 0.61 & 0.58 & 0.68 \\ \cline{1-1} & Ours & **0.67** & **0.67** & **0.77** \\ \hline \end{tabular} \end{table} Table 1: Rate Accuracy of Emotion Intensities. Min, Median and Max are three intensity levels. Subjects are asked to select the sample with the stronger intensity from a pair. \begin{table} \begin{tabular}{c|c|c} \hline Model & MCD (dB) & MOS \\ \hline Ground Truth & / & \(3.9\pm 0.05\) \\ \hline RFTacotron & 5.21 & \(3.49\pm 0.04\) \\ FEC & 4.79 & \(3.7\pm 0.04\) \\ Ours & **4.66** & \(\textbf{3.76\pm 0.03}\) \\ \hline \end{tabular} \end{table} Table 2: MCD and Naturalness MOS Figure 3: Preference test for emotion expressiveness.
2302.01621
Agreed and Disagreed Uncertainty
When agents' information is imperfect and dispersed, existing measures of macroeconomic uncertainty based on the forecast error variance have two distinct drivers: the variance of the economic shock and the variance of the information dispersion. The former driver increases uncertainty and reduces agents' disagreement (agreed uncertainty). The latter increases both uncertainty and disagreement (disagreed uncertainty). We use these implications to identify empirically the effects of agreed and disagreed uncertainty shocks, based on a novel measure of consumer disagreement derived from survey expectations. Disagreed uncertainty has no discernible economic effects and is benign for economic activity, but agreed uncertainty exerts significant depressing effects on a broad spectrum of macroeconomic indicators.
Luca Gambetti, Dimitris Korobilis, John Tsoukalas, Francesco Zanetti
2023-02-03T09:47:41Z
http://arxiv.org/abs/2302.01621v1
# Agreed and Disagreed Uncertainty+ ###### Abstract When agents' information is imperfect and dispersed, existing measures of macroeconomic uncertainty based on the forecast error variance have two distinct drivers: the variance of the economic shock and the variance of the information dispersion. The former driver increases uncertainty and reduces agents' disagreement (agreed uncertainty). The latter increases both uncertainty and disagreement (disagreed uncertainty). We use these implications to identify empirically the effects of agreed and disagreed uncertainty shocks, based on a novel measure of consumer disagreement derived from survey expectations. Disagreed uncertainty has no discernible economic effects and is benign for economic activity, but agreed uncertainty exerts significant depressing effects on a broad spectrum of macroeconomic indicators. **Keywords**: uncertainty, information frictions, disagreement, Bayesian vector autoregression (VAR), sign restrictions. **JEL Classification**: E20, E32, E43, E52. Introduction Since the seminal work of Bloom (2009), a large body of research shows that uncertainty has powerful recessionary effects on a broad spectrum of activity indicators.1 However, although recessions do coincide with heightened uncertainty, protracted and elevated uncertainty is not always associated with recessions. Examples include the stock market crash of October 1987, which resulted in enormous losses in stock returns, and the 2011 debt-ceiling crisis, which resulted in increase in U.S. government credit default swaps of 46 basis points without generating a contraction in real activity.2 This paper argues that the dispersion in consumer views about the state of the economy (thereafter consumer disagreement) conveys important information about the systematic effect of uncertainty on economic activity. By developing a new index of consumer disagreement about current and future economic conditions from survey data, we show that spikes in uncertainty during periods of high consumer agreement ("agreed uncertainty") have the standard depressing effects on activity indicators found in numerous studies (e.g., Bloom, 2009; Jurado et al., 2015; Caldara et al., 2016); however, equivalent spikes in uncertainty in periods of high consumer disagreement ("disagreed uncertainty"), have no discernible effects on economic activity. Footnote 1: See Bloom (2014) for a survey. Footnote 2: In November 1998, the Russian crisis and near collapse of the hedge fund LTCM lead various uncertainty proxies to spike above their levels in the 2001 recession without a concomitant slowdown in economic activity. The starting point of our analysis is a dispersed (and noisy) information framework. Mankiw and Reis (2002), Woodford (2003), Sims (2003), Mackowiak and Wiederholt (2009), Mankiw et al. (2004), and Okuda et al. (2021) argue in favour of information frictions manifested in models of sticky and noisy information. Coibion and Gorodnichenko (2012) and Coibion and Gorodnichenko (2015) (see Coibion et al., 2018 for a comprehensive survey) establish robust evidence in favor of information rigidities in agents' expectation formation, with the bulk of evidence supporting noisy information models. This framework allows us to formalize the distinction between agreed and disagreed uncertainty. 
In this framework, the observed uncertainty - measured by the conditional volatility of the forecast error - is a function of both innovations in the _volatility of fundamental disturbances_ and innovations in the _volatility of idiosyncratic noise_, inherent in the noisy signals that agents process. This finding opens up the possibility that innovations in both types of volatility may drive changes in the observed uncertainty. More precisely, the premise of our study is that innovations in the volatility of idiosyncratic noise may increase measured uncertainty, without a change in the volatility of exogenous fundamental disturbances, and the spike in uncertainty may not necessarily exert depressing effects on economic activity. We provide new evidence from the Michigan Survey of Consumers on the prevalence of dispersion of information manifested in our new index that captures the disparity of consumers' opinions about current and future economic conditions.3 We document several new facts. First, consumer disagreement is pervasive and applies to both _current_ and _future_ economic conditions. Second, it is pro-cyclical and negatively correlated with widely used measures of economic uncertainty. Third, the procyclicality of disagreement is time-varying: it increases in recessions and weakens in periods of robust economic activity. This result evinces the widening in the dispersion of consumer views during economic expansions that diminishes during recessions, leading to more homogeneous views concomitant to the decline in economic activity. In other words, consumers disagree less strongly about current and future conditions (low disagreement) when then economy is in a recession. Footnote 1: The _average_ of the idiosyncratic shock affects uncertainty in the model because the forecast error itself is a function of a key parameter – the signal-to-noise ratio – that controls the updating of forecasts agents make. The latter is inversely related to the variance of idiosyncratic noise. The core analysis proceeds in two steps. First, we develop a simple model with noisy and dispersed information that sheds light on the interplay between disagreement and uncertainty. We then use the predictions of the model to formulate simple sign restrictions in a Bayesian VAR model to identify shocks to agreed and disagreed uncertainty in the data. The empirical analysis based on our novel index of consumer disagreement sheds lights on important differences in the effects of agreed and disagreed uncertainty shocks in the data. In our simple model, agents receive idiosyncratic signals about a fundamental shock and form forecasts about the path of the latter by solving a signal extraction problem. The resulting dispersion of forecasts is proportional to the variance of the noise in the signal. We derive measures of uncertainty and disagreement in the model that are consistent with their empirical counterparts. Specifically, uncertainty in the model is the variance of the forecast errors made by the agents (the standard measure of uncertainty in Jurado et al., 2015 and several other studies). We show that the measure of uncertainty in the model is an increasing function of both the variance of the fundamental shock and the variance of the idiosyncratic noise.4 At the same time, the model disagreement index is an increasing function of the variance of noise, but a decreasing function of the variance of the fundamental shock. 
Therefore, an increase in either the variance of fundamental shock or the variance of the noise can increase uncertainty, but with _opposite_ shifts in the index of disagreement. More concretely, a rise in the variance of the fundamental shock increases uncertainty and _decreases_ the index of disagreement, but a rise in the variance of the noise increases both uncertainty and the index of disagreement. Thus, although uncertainty always raises when the variances of the fundamental shock or the noise rise, the opposite response of the index of disagreement allows us to identify shocks to agreed and disagreed uncertainty. These distinct predictions provide a set of minimal sign restrictions we use in a medium-scale VAR model, with U.S. monthly data from 1977 to 2020, to estimate the dynamic effects of agreed and disagreed uncertainty shocks. Footnote 4: The variance of the idiosyncratic shock affects uncertainty in the model because the forecast error itself is a function of a key parameter – the signal-to-noise ratio – that controls the updating of forecasts agents make. The latter is inversely related to the variance of idiosyncratic noise. The baseline empirical results can be summarized as follows. _Agreed uncertainty_ shocks, identified by a concomitant fall in disagreement and rise in uncertainty indicators, generates large, protracted, contractionary economic effects consistent with the standard negative impact of economic uncertainty on real activity, as reported in seminal studies by Bloom (2009), Jurado et al. (2015), and Ludvigson et al. (2021). Specifically, a positive innovation in agreed uncertainty is associated with large and persistent declines in industrial production, and employment. By contrast, _disagreed uncertainty_ shocks identified by the joint increase in disagreement and uncertainty indicators exhibit qualitatively different dynamic effects. Although the rise in uncertainty is strong, significant, and persistent, as in the case of agreed uncertainty shocks, economic activity indicators do not exhibit any depressing effects. A positive innovation in disagreed uncertainty generates a short-lived positive response of industrial production and employment, after which both activity indicators return to the pre-shock level. Finally, the contrasting dynamic effects of agreed and disagreed uncertainty shocks are robust to using various measures of consumer disagreement, uncertainty, and they are robust to VAR models that encompass a broader spectrum of macroeconomic activity indicators, as well as to VAR models that distinguish consumer disagreement by education and age cohorts. These empirical findings contribute to the growing literature on the macroeconomic effects of economic uncertainty, and we are the first study to link uncertainty and consumer disagreement. Our evidence sheds light on a new channel in the propagation of uncertainty to economic activity, showing that high consumer disagreement is a relevant indicator for the dampened effect of uncertainty in the economy. Bloom (2009), Jurado et al. (2015), and Ludvigson et al. (2021) show that uncertainty shocks are strongly contractionary on economic activity. Baker et al. (2016) develop an index of economic policy uncertainty and show that innovations in policy uncertainty exert a negative effect on employment, industrial production, and investment. Bachmann et al. (2013) use uncertainty measures from U.S. 
and German business survey data and find a significant negative effect of uncertainty in production and employment. Caldara et al. (2016) and Alessandri and Mumtaz (2019), stress the interaction between financial conditions and uncertainty, providing evidence that the negative impact of uncertainty shocks is amplified when financial conditions worsen. Fernandez-Villaverde et al. (2011), Fernandez-Villaverde et al. (2015), Fernandez-Villaverde et al. (2019, 2021), Fernandez-Villaverde et al. (2023), Mumtaz and Zanetti (2013), Born and Pfeifer (2014), Basu and Bundick (2017), Cascaldi-Garcia et al. (2022), Melosi et al. (2022), and several others show that uncertainty from different sources, such as fiscal and monetary policy, costs of borrowing, and future perceived uncertainty, results in reduced economic activity. We also relate to Caggiano et al. (2014), Leduc and Liu (2016), Theodoridis and Zanetti (2016), Schaal (2017), and Cascaldi-Garcia and Galvao (2021) which show a tight link between uncertainty, labor, and production markets. Earlier literature studies the cyclical effects of _first moment_ noise shocks (e.g., Lorenzoni, 2009; Blanchard et al., 2013; Forni et al., 2017). Our work contributes to this literature by identifying potential cyclical effects of second-moment noise shocks. Recent work also shows that episodes of high uncertainty may not have adverse economic effects. For example, Segal et al. (2015) distinguish between bad and good uncertainty, and their good uncertainty measure is benign for production and consumption.5 Berger et al. (2020) separate contemporaneous shocks in realized stock market volatility from news shocks that they interpret as forward-looking uncertainty, which are benign for economic activity. Footnote 5: Using measures of low and high uncertainty from quantile factor models, Korobilis and Schroder (2022) show that only high-uncertainty shocks cause a significant fall in industrial production. Aastveit et al. (2017) show that in periods of high uncertainty, monetary policy effects to output are dampened. The remainder of the paper is organized as follows. Section 2 derives our measures of consumer disagreement and studies the time-series properties of our index. Section 3 develops a stylized model to study the links between uncertainty and consumer disagreement. Section 4 uses predictions from the model that disentangle the dynamic effects of agreed and disagreed uncertainty. Section 5 explores robustness of the empirical results to alternative modeling assumptions. Section 6 concludes the paper. ## 2 Measuring consumer disagreement In this section we construct a new index of consumer disagreement using the University of Michigan Survey of Consumers. It is a parsimonious index that encapsulates the cross-sectional dispersion of consumer views from different survey questions, and it reveals consumers' information and beliefs on current and future economic conditions. We then study the cyclical properties of our disagreement index and focus on the link with economic activity and alternative measures of uncertainty. ### Consumer survey data The Michigan Survey of Consumers (hereafter MSC), is produced by the Survey Research Center at the University of Michigan. Each month, it conducts a minimum of 500 interviews, and consumers answer a questionnaire that contains 28 core questions and several subquestions. 
Survey questions are aggregated over respondents (consumers) to produce approximately 45 monthly and quarterly categorical time series.6 To formulate our index, we select questions that capture the views of consumers about current and future economic conditions, summarized in Table 1. Footnote 6: The only exception is the question that asks consumers to forecast a value for inflation one year and five years ahead, which results in a continuous variable. The samples for the Surveys of Consumers are statistically designed to be representative of all American households. For a detailed description of the survey, including questionnaires, see: [https://data.sca.isr.umich.edu/survey-info.php](https://data.sca.isr.umich.edu/survey-info.php). Consumer responses to the survey questions consist of three qualitative categories ("better/about the same/worse,"); the associated time-series measures the proportion of respondents in each category.7 Our benchmark measure is an index of _tail disagreement_, which reflects disagreement between the two polar categories in the distribution of responses. That is, the _tail disagreement_ index extracts disagreement from the "better/worse" (or "good time/bad time," or "favorable/unfavorable") responses. Formally, the definition of the disagreement index is: Footnote 7: Depending on the question, these answers can also take the form “favorable/no mention/unfavorable,” “good time/uncertain/bad time,” or “more/about the same/less.” \[T_{t}^{(j)}=1-\frac{|b_{t}^{(j)}-w_{t}^{(j)}|}{100}, \tag{1}\] where \(j=\) NEWS, BAGO, BEXP, BUS12, BUS5 indexes each of the five survey questions, \(b_{t}^{j}\) is the percentage of respondents in question \(j\) with a positive/optimistic answer, and \(w_{t}^{j}\) is the percentage of respondents with a negative/pessimistic answer. The disagreement index \(T_{t}^{(j)}\) takes values of 0 and 1 by construction. A value equal to zero, which occurs if either \(b_{t}^{(j)}\) or \(w_{t}^{(j)}\) is equal to 100, indicates all respondents have the same opinion or view about the current and future economic outlook and therefore no disagreement. On the other hand, a value equal to 1 indicates that consumers are evenly split between the two polar responses, reflecting sharp differences in opinions or views and consequently maximal disagreement. This indicator is intuitive but ignores information from the middle category of responses (e.g., "no mention," "same"). In section 5 we compute the Shannons' entropy (Shannon, 1948) measure of disagreement, which considers both the polar and "middle" category responses. The entropy can be a measure of uncertainty: consumers are more uncertain about economic conditions when the "middle" category has a non zero chance of occurring. We show that results are robust to this consideration. It is important to stress that the qualitative approach in the reporting of views suggests our measure of consumer disagreement refers to what we can loosely call "directional" disagreement. Thus, our concept of disagreement is different from disagreement among professional forecasters. In other words, our index does not convey information about the intensity of the responses (e.g., how much better relative to how much worse). 
The index also cannot capture disagreement within the proportion of consumers that report better (or \begin{table} \begin{tabular}{l l l} \hline Question & Mnemonic & Topic \\ Q23 & NEWS & News Heard of Recent Changes in Business Conditions \\ Q25 & BAGO & Current Business Conditions Compared with a Year Ago \\ Q26 & BEXP & Expected Change in Business Conditions in a Year \\ Q28 & BUS12 & Business Conditions Expected During the Next Year \\ Q29 & BUS5 & Business Conditions Expected During the Next 5 Years \\ \hline \end{tabular} \end{table} Table 1: Questions from the Michigan Survey of Consumers worse) economic prospects. ### Time-series properties of consumer disagreement We use monthly data spanning the period 1978M1 to 2020M12, and we derive distinct measures of disagreement by applying the formula in equation (1) to each of the five survey questions. We denote the singular disagreement measures related to each survey question in Table Table 1 by \(T^{NEWS}\), \(T^{BAGO}\), \(T^{BEXP}\), \(T^{BUS12}\), and \(T^{BUS5}\). The measures of disagreement based on the mnemonics "NEWS" and "BAGO" in Table 1 (i.e., \(T^{NEWS}\) and \(T^{BAGO}\), respectively) refer to _current_ business conditions and thus directly relate to the information that consumers receive and process about the past and present economic conditions. If all agents could perfectly access all information relevant for assessing current conditions, the degree of disagreement on _past and present_ conditions would be absent. The degree of disagreement on _future_ economic conditions will still be present, as agents need to make forecasts conditional on potentially different models of the economy. Thus, a good check to ascertain the degree of information dispersion is to focus on disagreement about _current_ economic conditions that would be absent if agents have full information on the state of the economy. Our indices \(T^{NEWS}\) and \(T^{BAGO}\) record substantial disagreement on present and past economic conditions, suggesting substantial disparity of views across consumers and evincing imperfect information about the state of the economy. To develop a parsimonious indicator of disagreement, we summarize the information in the five different measures by formulating a single, latent, consumer disagreement index using principal component analysis. In line with the literature on macroeconomic diffusion indexes (see for example, Stock and Watson, 2002), our latent index is the first principal component of the five individual disagreement series. The first principal component is a weighted average of all five series, where the weights (loadings) are such that the latent index maximizes the variance explained for each series.8 We refer to our latent index as DISAG, and we use it as the benchmark measure of consumer disagreement for the rest of the analysis. Footnote 8: In order to ensure that the first principal component describes the direction of maximum variance, we standardize the individual disagreement measures (and the index) to have a mean equal to zero and a variance equal to 1. This transformation does not affect the informational content of each series; rather, it affects the scale. However, disagreement as a concept is not an ordinal measure in the sense that an index value of, say, 0.5 implies that consumers disagree “twice as much” compared to a value of 0.25. For that reason, we prefer to work with an index that is standardized. 
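As a rough illustration of how DISAG is built (Eq. (1) applied question by question, followed by the first principal component of the standardized series), a sketch in Python might look as follows. This is our own illustration, not the authors' code, and column names such as `NEWS_better` are placeholders for however the MSC response shares are stored:

```python
import numpy as np
import pandas as pd

QUESTIONS = ["NEWS", "BAGO", "BEXP", "BUS12", "BUS5"]

def tail_disagreement(better_pct: pd.Series, worse_pct: pd.Series) -> pd.Series:
    """Eq. (1): T_t = 1 - |b_t - w_t| / 100, with b_t and w_t in percent."""
    return 1.0 - (better_pct - worse_pct).abs() / 100.0

def disag_index(shares: pd.DataFrame) -> pd.Series:
    """First principal component of the five standardized disagreement series."""
    T = pd.DataFrame({
        q: tail_disagreement(shares[f"{q}_better"], shares[f"{q}_worse"])
        for q in QUESTIONS
    })
    Z = (T - T.mean()) / T.std(ddof=0)              # standardize each series
    _, _, vt = np.linalg.svd(Z.values, full_matrices=False)
    pc1 = Z.values @ vt[0]                          # scores on the first component
    pc1 = (pc1 - pc1.mean()) / pc1.std()            # standardize the index itself
    if np.corrcoef(pc1, Z.values.mean(axis=1))[0, 1] < 0:
        pc1 = -pc1                                  # sign convention: positive loadings
    return pd.Series(pc1, index=shares.index, name="DISAG")
```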
The top four panels and the bottom left panel of Figure 1 show the estimate of the disagreement index (DISAG) against the individual measures of disagreement. A first finding is the large and significant time variation in the disagreement index that also characterizes the individual disagreement series.9 The figure shows that the comovement of the disagreement index with each individual series is high. The bottom right panel of Figure 1 shows the loadings of each series on the principal component. The values in the figure are the weights with which each individual series contributes to the estimate of our latent disagreement index. The values show that the DISAG index is evenly and strongly correlated with the individual disagreement indexes NEWS, BAGO, BUS12, BUS5, and it is less strongly correlated with the BEXP measure of disagreement. We proceed to study the cyclical properties of DISAG by focusing on the comovement of the disagreement index with two representative measures of economic activity: the monthly Industrial Production Index and Real Personal Consumption Expenditure. Over the entire sample period, DISAG is very weakly correlated with either industrial production (0.19) and real personal consumption growth (0.04); however, the correlation displays significant time variation. Figure 2 shows a rolling correlation between the disagreement index, industrial production, and real personal consumption growth: It demonstrates that the correlation coefficient is time varying, covering a wide range of values between -0.65 to 0.85 over the sample period.10 Starting Figure 1: Measures of disagreement and loading factors from the 1981 recession, the correlation between disagreement and either measure of real activity becomes predominantly positive and peaks during the five subsequent recessions (shown in shaded areas). In other words, the positive correlation between disagreement with economic activity indicators increases significantly during recessions, implying that disagreement falls sharply with declines in real activity. We next compare the disagreement index with empirical measures of uncertainty and measures of disagreement derived from business surveys. Jurado et al. (2015) develop uncertainty indicators from a large set of macroeconomic and financial time-series data using factor-augmented VAR methods. The top left panel of Figure 3 displays our disagreement index (solid line) together with the Jurado et al. (2015) measures of macroeconomic and financial uncertainty (JLN12 and JLNF12, respectively) obtained from a 12-month forecast horizon (dotted and dashed line). The uncertainty indicators are highly countercyclical and exhibit a strong negative comovement with our index of disagreement; the correlations of JLN12 and JLNF12 with the index of consumer disagreement are -0.62 and -0.57, respectively. The top right panel of Figure 3 compares our disagreement index with the business-level uncertainty index from the Philadelphia Fed Business Outlook Survey (BOS) that encapsulates the cross-sectional-forecast dispersion about six-month-ahead business activity in the manufacturing sector. Bachmann et al. (2013) shows that this index is a good proxy Figure 2: Time-varying correlations of DISAG with industrial production and real personal consumption growth for uncertainty. 
The correlation of our index with the business dispersion index exhibits a negative yet weak correlation equal to -0.1.11 The bottom left panel of Figure 3 compares our index with the CBOE S&P 100 volatility index (VXO) measure, the latter being a measure of uncertainty in many previous studies. VXO exhibits strong negative comovement with our disagreement index, with a correlation coefficient equal to -0.55. Last, the bottom right panel of Figure 3 compares our index with the measure of Economic Policy Uncertainty developed by Baker et al. (2016). As in the case of business dispersion, this indicator, capturing a different dimension of uncertainty, is not strongly negatively correlated with our disagreement indicator with a correlation coefficient equal to -0.36. The key finding from these comparisons is the _negative_ comovement of the different uncertainty indicators with consumer disagreement. In sum, consumer disagreement has fundamentally different cyclical properties compared to indicators of business-level uncertainty, stock market volatility, or uncertainty indicators from forecasts of Figure 3: Index of consumer disagreement and uncertainty indicators financial and macroeconomic indicators and economic policy uncertainty. ## 3 A simple model of information dispersion We develop a simple model with disagreement arising from imperfect and dispersed information, and uncertainty stemming from changes in the variance of a fundamental shock. We study the effect of information dispersion and the volatility of the fundamental shock on i) the variance of the forecast errors, the empirical proxy for uncertainty, and ii) the model index of disagreement congruous with our empirical measure of disagreement. The model allows us to separately identify shocks to information dispersion and shocks to uncertainty, and it provides simple sign restrictions that enable us to illustrate how disagreement is associated with the different concepts of uncertainty, which we use to identify the effect of agreed and disagreed uncertainty in the data (see Section 4). The economy is populated by a continuum, large number of \(N\) agents defined over the unit interval, indexed by \(i\). In each period \(t=1,2,...\), the economy experiences the realization of an exogenous process \(a_{t}\) (expressed in logs) whose growth rate (\(\Delta a_{t}=a_{t}-a_{t-1}\)) follows the invertible moving average (MA) process: \[a_{t}-a_{t-1}=\psi_{0}\varepsilon_{t}+\psi_{1}\varepsilon_{t-1}+\psi_{2} \varepsilon_{t-2}+...+\psi_{n}\varepsilon_{t-n}, \tag{2}\] where, \(\psi_{0}\), \(\psi_{1}\),..., \(\psi_{n}\) are the MA coefficients and \(\varepsilon_{t}\sim N(0,\sigma_{\varepsilon}^{2})\) is an i.i.d. fundamental shock with known variance \(\sigma_{\varepsilon}^{2}\).12 Footnote 12: The exogenous fundamental process can adopt a variety of interpretations (e.g., productivity or demand shocks that are relevant sources of macroeconomic fluctuations). Information is imperfect and dispersed. It is imperfect because agents cannot observe the current fundamental shock \(\varepsilon_{t}\) and the current exogenous process \(a_{t}\) during each period \(t\), while they observe the history \(\varepsilon_{t-1}\),..., \(\varepsilon_{t-n}\), and the past exogenous process \(a_{t-1}\). Information is dispersed because each agent \(i\) receives a different idiosyncratic signal about the fundamental shock: \[s_{it}=\varepsilon_{t}+v_{it}, \tag{3}\] where \(v_{it}\sim N(0,\sigma_{v_{i}}^{2})\) is an idiosyncratic, i.i.d. 
shock with known variance \(\sigma_{v_{i}}^{2}\). The idiosyncratic shock \(v_{it}\) blurs the realization of the fundamental shock and generates cross-sectional dispersion in the signals across agents. This formulation implies an innovation to the volatility of the idiosyncratic shock, \(\sigma_{v_{i}}^{2}\), which leads to a greater dispersion of information across agents. Agents care about the path of the fundamental shock \(\varepsilon_{t}\), and they solve a signal extraction problem to infer the fundamental shock from the signal \(s_{i}\). Each agent \(i\) solves this problem by conditioning on the history and volatilities as follows: \(\mathcal{I}_{it}\equiv\{a_{t-1-j},\varepsilon_{t-1-j},s_{it-j},\sigma_{ \varepsilon}^{2},\sigma_{v}^{2}\}_{j=0}^{\infty}\), where \(\mathcal{I}_{it}\) is the agent specific information set. Formally, each agent \(i\) uses equation (2) to form expectations about the growth rate of the exogenous process \(a\) in future periods \(t+1,...,t+n\), which yields: \[E\left(\Delta a_{t+k}|\mathcal{I}_{it}\right) = \psi_{k}E\left(\varepsilon_{t}|\mathcal{I}_{it}\right)+\psi_{k+1} \varepsilon_{t-1}+...+\psi_{n}\varepsilon_{t+k-n},\quad\text{for}\quad k=1,2,..., \tag{4}\] as well as in the current period, \[E\left(\Delta a_{t}|\mathcal{I}_{it}\right) = \psi_{0}E\left(\varepsilon_{t}|\mathcal{I}_{it}\right)+\psi_{1} \varepsilon_{t-1}+...+\psi_{n}\varepsilon_{t-n}, \tag{5}\] where \(E\) is the rational expectations operator, and \(E\left(\varepsilon_{t}|\mathcal{I}_{it}\right)\) is the expectation on the current fundamental shock \(\varepsilon_{t}\) conditional on the information set \(\mathcal{I}_{it}\), which can be represented as the linear projection of \(\varepsilon_{t}\) on \(s_{t}\) by solving the signal extraction problem. Equations (4) and (5) show that the presence of the idiosyncratic signal generates cross-sectional dispersion on current and future growth expectations of the exogenous process \(a\), reflected by the dependency of the conditional expectations \(E\left(\varepsilon_{t}|\mathcal{I}_{it}\right)\) on the agent-specific information set. By solving the signal extraction problem for agent \(i\), we rewrite equation (4) as: \[E\left(\Delta a_{t+k}|\mathcal{I}_{it}\right) = \psi_{k}\gamma_{i}s_{it}+\psi_{k+1}\varepsilon_{t-1}+...+\psi_{n }\varepsilon_{t+k-n}\] \[= \psi_{k}\gamma_{i}(\varepsilon_{t}+v_{it})+\psi_{k+1}\varepsilon _{t-1}+...+\psi_{n}\varepsilon_{t+k-n}\quad k=1,2,...\] and equation (5) as: \[E\left(\Delta a_{t}|\mathcal{I}_{it}\right) = \psi_{0}\gamma_{i}s_{it}+\psi_{1}\varepsilon_{t-1}+...+\psi_{n} \varepsilon_{t-n}\] \[= \psi_{0}\gamma_{i}(\varepsilon_{t}+v_{it})+\psi_{1}\varepsilon_{ t-1}+...+\psi_{n}\varepsilon_{t-n}\quad k=1,2,...\] where \[\gamma_{i}=\frac{\sigma_{\varepsilon}^{2}}{\sigma_{\varepsilon}^{2}+\sigma_{v _{i}}^{2}} \tag{8}\] is the agent-specific linear projection coefficient. Equations (6) and (7) show that the future and current expected growth rate of the exogenous process \(a\) depends on the agent-specific reaction to the signal, controlled by the coefficient \(\gamma_{i}\). The response of these expectations to the signal falls with the dispersion of information encapsulated by the variance of the idiosyncratic shock, \(\sigma_{v_{i}}^{2}\), and it increases with the variance of the fundamental shock, \(\sigma_{\varepsilon}^{2}\), as implied by equation (8). 
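A quick numerical illustration of Eq. (8), with arbitrary variances of our own choosing, makes this dependence explicit:

```python
def gamma(sigma_eps2, sigma_v2):
    """Signal-extraction weight in Eq. (8): the weight an agent places on her signal
    when forming the expectation of the current fundamental shock."""
    return sigma_eps2 / (sigma_eps2 + sigma_v2)

for sigma_eps2, sigma_v2 in [(1.0, 1.0), (1.0, 4.0), (4.0, 1.0)]:
    g = gamma(sigma_eps2, sigma_v2)
    print(f"sigma_eps^2 = {sigma_eps2}, sigma_v^2 = {sigma_v2} -> gamma = {g:.2f}")
# More idiosyncratic noise (sigma_v^2 up) lowers gamma, so the expectations in
# Eqs. (6)-(7) react less to the signal; a more volatile fundamental raises gamma.
```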
The dispersion of information decreases the content of information contained in the signal received by each agent \(i\), and it makes the conditional expectations in equations (6) and (7) less responsive to the signal. In the rest of the analysis, we simplify the analytical derivation of the system without loss of generality by assuming an identical variance of the idiosyncratic shock across agents (i.e., \(\sigma_{v_{i}}^{2}=\sigma_{v}^{2}\)). ### Interplay between uncertainty and information dispersion In this section, we study the interplay between uncertainty and information dispersion. We proxy uncertainty with the variance of the \(k\)-periods-ahead forecast errors for \(\Delta a_{t+k}\), and disagreement with an index derived from simulations of the model that is consistent with our measure of disagreement in Section 2. Our aim is to map the effect of dispersed information and the spread of the fundamental shock into the empirical proxies for disagreement and uncertainty. As a preliminary step, we derive the \(k\)-periods-ahead _aggregate expectations_ by averaging the different expectations of the single agents in equation (6) across the \(N\) agents in the economy: \[E\left(\Delta a_{t+k}|\mathcal{I}_{t}\right) = \frac{1}{N}\sum_{i=1}^{N}E\left(\Delta a_{t+k}|\mathcal{I}_{it}\right) \tag{9}\] \[= \psi_{k}\gamma\frac{1}{N}\sum_{i=1}^{N}(\varepsilon_{t}+v_{it})+ \frac{1}{n}\sum_{i=1}^{n}\left(\psi_{k+1}\varepsilon_{t-1}+...+\psi_{n} \varepsilon_{t+k-n}\right)\] \[= \psi_{k}\gamma\varepsilon_{t}+\frac{1}{N}\sum_{i=1}^{N}\left( \psi_{k+1}\varepsilon_{t-1}+...+\psi_{n}\varepsilon_{t+k-n}\right),\] where \(\frac{1}{N}\sum_{i=1}^{N}v_{it}\) converges to zero by the law of large numbers, and the average projection coefficient is equal to: \[\gamma=\frac{\sigma_{\varepsilon}^{2}}{\sigma_{\varepsilon}^{2}+\sigma_{v}^{ 2}}. \tag{10}\] #### 3.1.1 Variance of forecast error Our proxy for uncertainty is the variance of the forecast error \(k\)-periods ahead, which is equal to: \[var[\Delta a_{t+k}-E(\Delta a_{t+k}|\mathcal{I}_{t})] = var[\psi_{0}\varepsilon_{t+k}+\psi_{1}\varepsilon_{t+k-1}+\psi_{ 2}\varepsilon_{t+k-2}+...+\psi_{k}(1-\gamma_{t})\varepsilon_{t}] \tag{11}\] \[= \left[\psi_{0}^{2}+\psi_{1}^{2}+\psi_{2}^{2}+...+\psi_{k}^{2} \left(\frac{\sigma_{v}^{2}}{\sigma_{\varepsilon}^{2}+\sigma_{t}^{2}}\right)^ {2}\right]\sigma_{\varepsilon}^{2}\] \[+2\psi_{0}\sum_{j=1}^{k-1}\psi_{j}cov(\varepsilon_{t+k}, \varepsilon_{t+k-j})+2\psi_{1}\sum_{j=2}^{k-1}\psi_{j}cov(\varepsilon_{t+k-1},\varepsilon_{t+k-j})+...\] \[+2\psi_{k-2}\sum_{j=1}^{k-1}\psi_{j}cov(\varepsilon_{t+2}, \varepsilon_{t+k-j})+2\psi_{k}(1-\gamma)\sum_{j=0}^{k-1}\psi_{j}cov( \varepsilon_{t},\varepsilon_{t+j+1})\] \[= \left[\psi_{0}^{2}+\psi_{1}^{2}+\psi_{2}^{2}+...+\psi_{k}^{2} \left(\frac{\sigma_{v}^{2}}{\sigma_{\varepsilon}^{2}+\sigma_{v}^{2}}\right)^ {2}\right]\sigma_{\varepsilon}^{2},\] where the covariance terms are equal to zero because the shock \(\varepsilon_{t}\) is i.i.d. Equation (11) shows two important properties of the effect of uncertainty and information dispersion on the variance of the forecast error. First, the effect of a unitary change in uncertainty on the variance of the forecast error at time \(t+k\) is equal to:13 Footnote 13: Appendix A shows that, for an invertible MA process, the sign of equation (12) is always positive. 
\[\frac{\partial var[\Delta a_{t+k}-E(\Delta a_{t+k}|\mathcal{I}_{t})]}{\partial \sigma_{\varepsilon}^{2}}=\psi_{0}^{2}+\psi_{1}^{2}+\psi_{2}^{2}+...+\psi_{k} ^{2}\left(\frac{\sigma_{v}^{2}}{\sigma_{v}^{2}+\sigma_{\varepsilon}^{2}} \right)\left(\frac{\sigma_{v}^{2}-\sigma_{\varepsilon}^{2}}{\sigma_{v}^{2}+ \sigma_{\varepsilon}^{2}}\right)>0. \tag{12}\] This positive derivative underpins and supports the prevalent adoption of the variance of the forecast error as a proxy for uncertainty. Second, the variance of the forecast error increases with a unitary change in information dispersion:14 Footnote 14: This finding is consistent with the positive relationship between dispersion in beliefs and aggregate uncertainty, as outlined in Bianchi and Melosi (2016). \[\frac{\partial var[\Delta a_{t+k}-E(\Delta a_{t+k}|\mathcal{I}_{t})]}{ \partial\sigma_{v}^{2}}=2\psi_{0}^{2}\sigma_{\varepsilon}\left(\frac{\sigma_ {v}^{2}}{\sigma_{\varepsilon}^{2}+\sigma_{v}^{2}}\right)\left(\frac{\sigma_{ \varepsilon}^{2}}{(\sigma_{\varepsilon}^{2}+\sigma_{v}^{2})^{2}}\right)>0. \tag{13}\] Our model shows that the empirical proxy for uncertainty, measured by the variance of the forecast error, increases in _both_ innovations in the variance of the fundamental uncertainty shock, and in the variance of the idiosyncratic noise. These comovements, complemented by two further restrictions we derive in the next section, allow us to disentangle innovations to the variance of fundamental shocks (agreed uncertainty) and innovations to information dispersion (disagreed uncertainty). #### 3.1.2 Mapping information dispersion and volatility of fundamental shocks on disagreement We use the model to study the mapping from information dispersion to disagreement, and investigate the relation between disagreement and uncertainty. Because we cannot derive an analytical solution that shows the effect of information dispersion and the variance of fundamental shocks on disagreement, we compute the disagreement index from numerical simulations of the model consistent with our empirical index. In the model, the cross-sectional expectations of consumers about economic conditions are represented by: \[E(\Delta a_{t+k}|\mathcal{I}_{it}) = \psi_{k}\gamma_{it}(\varepsilon_{t}+v_{it})+\psi_{k+1}\varepsilon _{t-1}+...+\psi_{n}\varepsilon_{t+k-n}. \tag{14}\] In the MSC, the consumer responses to several questions about _future_ business conditions is the natural empirical concept corresponding to the forecast captured by equation (14). We therefore use the equation to generate artificial survey data consistent with the qualitative responses in MSC by defining the following indexes for individual answers: \[\text{Expected Conditions}:\left\{\begin{array}{lcl}b_{it}^{\text{Expected}}= 1&\text{if}&E(\Delta a_{t+k}|\mathcal{I}_{it})>0,\\ w_{it}^{\text{Expected}}=-1&\text{if}&E(\Delta a_{t+k}|\mathcal{I}_{it})<0. \end{array}\right. \tag{15}\] We apply the standard quantification method for qualitative survey data and code with \(b_{it}=1\) a positive forecast, and \(w_{it}=-1\) a negative forecast. This coding is equivalent to the responses ("better or worse," "good times or bad times") reported for the survey questions in the MSC summarized in Table 1. Consistent with the empirical disagreement index described in Section 2, we compute the index of tail disagreement as: \[\tilde{T}_{t}=1-\frac{1}{N}\left|\sum_{i=1}^{n}b_{it}-w_{it}\right| \tag{16}\] We simulate the model as follows. 
We assume a monthly time period consistent with the frequency in the MSC. We set the order of the MA process that governs \(\Delta a_{t}\) equal to \(n=12\), and we set \(k=12\), which corresponds to a forecast one year ahead.15 It is well known that the condition for invertibility of an MA process is the counterpart to the stationarity condition for an AR process. Thus, taking a stationary AR(1) process with AR coefficient equal to \(\beta\), we write the MA coefficients as \(\psi_{k}=\beta^{k}\). The variances of the idiosyncratic component, \(\sigma_{v}^{2}\), and of the fundamental shock, \(\sigma_{\varepsilon}^{2}\), are both allowed to vary in a discrete manner in the set \([1,5]\).16 We set the AR coefficient \(\beta=0.5\), although the results are quantitatively very similar to alternative values to this parameter. We set the number of agents to \(N=10000\). Using this calibration of the model, we compute the tail disagreement index \(\tilde{T}_{t}\) in equation (16). Footnote 15: For robustness we repeat this simulation exercise assuming an annual frequency and changing the order of the MA process to equal 5. Footnote 16: For each value in the set \([1,5]\) we generate random draws from a normal distribution to compute the forecast in (14) for each economic agent. Each dashed line in Figure 4 shows the tail disagreement index from simulating the model (y-axes) as a function of the variance of the idiosyncratic shock \(\sigma_{v}^{2}\) (x-axes), and we compute it for different values for the variance of the fundamental shock \(\sigma_{\varepsilon}^{2}\) equal to \(1,2,3,4\), and 5 (blue-dashed line). The figure illustrates that the disagreement index is an increasing function of information dispersion but decreases with the variance of the fundamental shock \(\sigma_{\varepsilon}^{2}\) for any given level of \(\sigma_{v}^{2}\). The intuition from our model is that the signal becomes more precise and agents downplay the idiosyncratic information content of the signal when \(\sigma_{\varepsilon}^{2}\) increases, thus agents update expectations more strongly in the direction of the signal and agree more (i.e., agents disagree less). Important to our analysis, the model establishes an inverse comovement between disagreement and uncertainty, consistent with the strong negative correlation between those indicators in the data, as documented in Section 2. ## 4 Empirical model **VAR inference and shock identification.** Our starting point is a Bayesian vector autoregressive (VAR) in the tradition of several recent studies on uncertainty (Jurado et al., 2015; Gilchrist et al., 2014). We use the identifying sign restrictions extracted from the simple model of information dispersion in order to disentangle the dynamic effects of agreed uncertainty and disagreed uncertainty shocks in the data using the VARs. The restrictions are (i) those in equations (12) and (13), and (ii) those implied by Figure 4. Table 2 summarizes our identifying restrictions, showing the response of the observed variables (i.e., uncertainty and disagreement) to agreed and disagreed uncertainty, in columns (1) and (2), respectively. 
\begin{table} \begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{Shock} \\ \cline{2-3} & (1) & (2) \\ Observed variable & \(\sigma_{\varepsilon}\) & \(\sigma_{v}\) \\ & Agreed Uncertainty & Disagreed Uncertainty \\ \hline Variance of the forecast error & \(+\) & \(+\) \\ Index of disagreement & \(-\) & \(+\) \\ \hline \hline \end{tabular} _Notes_: The entries show the impact response of the variance of the forecast error and the index of disagreement to the shock to agreed uncertainty (column 1) and disagreed uncertainty (column 2). \end{table} Table 2: Identifying restrictions Figure 4: Disagreement index in the model Column (1) in the table shows that an innovation to the fundamental shock \(\sigma_{\varepsilon}^{2}\) is associated with an increase in observed uncertainty (represented by the variance of the forecast error) and a decrease in the index of disagreement. This is our concept of _agreed uncertainty_. Instead, column (2) in the table shows that an innovation to information dispersion \(\sigma_{v}^{2}\) is associated with a simultaneous increase in observed uncertainty and the index of disagreement. This is our concept of _disagreed uncertainty_.17 Using these distinct comovements in observed uncertainty and the index of disagreement, we identify the two distinct concepts of uncertainty shocks in the data. Footnote 17: To rule out the autonomous effect of uncertainty shocks on economic activity that results in overstating the impact economic effects of agreed or disagreed uncertainty shocks, we impose a zero-impact response of activity indicators to the identified shocks. We tackle estimation of the VAR under these restrictions using the Bayesian Markov chain Monte Carlo (MCMC) algorithm developed in Korobilis (2022), which allows us to sample sign and zero restrictions in arbitrarily large VARs with high computational efficiency. For the \(n\times 1\) vector of time-series variables \(\mathbf{y}_{t}\), the VAR takes the multivariate regression form: \[\mathbf{y}_{t}=\mathbf{\Phi}\mathbf{x}_{t}+\mathbf{\varepsilon}_{t}, \tag{17}\] where \(\mathbf{y}_{t}\) is a \((n\times 1)\) vector of observed variables, \(\mathbf{x}_{t}=\left(1,\mathbf{y}_{t-1}^{\prime},...,\mathbf{y}_{t-p}^{\prime }\right)^{\prime}\) a \((k\times 1)\) vector (with \(k=np+1\)) containing a constant and \(p\) lags of \(\mathbf{y}\), \(\mathbf{\Phi}\) is an \((n\times k)\) matrix of coefficients, and \(\mathbf{\varepsilon}_{t}\) is a \((n\times 1)\) vector of disturbances distributed as \(N\left(\mathbf{0}_{n\times 1},\mathbf{\Omega}\right)\) with \(\mathbf{\Omega}\) an \(n\times n\) covariance matrix. We further assume the following factor decomposition of \(\mathbf{\varepsilon}_{t}\): \[\mathbf{\varepsilon}_{t}=\mathbf{\Lambda}\mathbf{f}_{t}+\mathbf{v}_{t}, \tag{18}\] where \(\mathbf{\Lambda}\) is an \(n\times r\) matrix of factor loadings, \(\mathbf{f}_{t}\sim N(\mathbf{0},\mathbf{I}_{r})\) is an \(r\times 1\) vector of factors, and \(\mathbf{v}_{t}\sim N(\mathbf{0},\mathbf{\Sigma})\) is an \(n\times 1\) vector of idiosyncratic shocks with \(\mathbf{\Sigma}\) an \(n\times n\) diagonal matrix. 
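To fix ideas before turning to the rationale for this structure, the short sketch below simulates disturbances with the factor decomposition of equation (18), encoding the sign and zero restrictions of Table 2 in the loadings. The variable ordering, the loading values, and the zero rows are illustrative assumptions for exposition only; they are not estimated quantities, and this is not the estimation algorithm of Korobilis (2022).

```python
import numpy as np

# Sketch of equation (18): eps_t = Lambda f_t + v_t, with sign/zero restrictions on Lambda.
rng = np.random.default_rng(1)
n, r, T = 8, 2, 500                                 # VAR variables, common shocks, periods
Lambda = rng.normal(0.0, 0.3, size=(n, r))          # columns: (agreed, disagreed) uncertainty shocks
Lambda[0, :] = [0.8, 0.6]                           # row 0 = uncertainty: positive on both shocks
Lambda[1, :] = [-0.5, 0.7]                          # row 1 = disagreement: negative / positive
Lambda[2:4, :] = 0.0                                # illustrative zero-impact rows for activity indicators
Sigma = np.diag(rng.uniform(0.05, 0.2, size=n))     # diagonal covariance of idiosyncratic shocks

f = rng.normal(size=(T, r))                         # f_t ~ N(0, I_r)
v = rng.multivariate_normal(np.zeros(n), Sigma, size=T)
eps = f @ Lambda.T + v                              # reduced-form disturbances eps_t
```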
The rationale behind the VAR in equations (17)-(18) is that the \(n\)-dimensional vector of VAR disturbances is decomposed into \(r\) common shocks \(\mathbf{f}_{t}\) (\(r<n\)) and \(n\) idiosyncratic shocks \(\mathbf{v}_{t}\).18 Because \(\mathbf{\Sigma}\) is diagonal, we consider only the \(r\) common shocks as structural and the \(n\) idiosyncratic shocks as nuisance shocks (e.g., due to measurement error or asymmetric information). Indeed, by left-multiplying the VAR using the generalized inverse of \(\mathbf{\Lambda}\), the implied structural VAR form is: Footnote 18: Gorodinchenko (2005) also exploits this formulation of the VAR for the identification of monetary policy shocks. \[\mathbf{y}_{t} = \mathbf{\Phi}\mathbf{x}_{t}+\mathbf{\Lambda}\mathbf{f}_{t}+\mathbf{v}_{t} \tag{19}\] \[\left(\mathbf{\Lambda}^{\prime}\mathbf{\Lambda}\right)^{-1}\mathbf{\Lambda}^{ \prime}\mathbf{y}_{t} = \left(\mathbf{\Lambda}^{\prime}\mathbf{\Lambda}\right)^{-1}\mathbf{\Lambda}^{ \prime}\mathbf{\Phi}\mathbf{x}_{t}+\mathbf{f}_{t}+\left(\mathbf{\Lambda}^{\prime}\mathbf{ \Lambda}\right)^{-1}\mathbf{\Lambda}^{\prime}\mathbf{v}_{t}\] (20) \[\mathbf{A}_{1}\mathbf{y}_{t} = \mathbf{B}_{1}\mathbf{x}_{t}+\mathbf{f}_{t}+\left(\mathbf{\Lambda}^{ \prime}\mathbf{\Lambda}\right)^{-1}\mathbf{\Lambda}^{\prime}\mathbf{v}_{t}, \tag{21}\] where \(\mathbf{A}_{1}=\left(\mathbf{\Lambda}^{\prime}\mathbf{\Lambda}\right)^{-1}\mathbf{\Lambda} ^{\prime}\) and \(\mathbf{B}_{1}=\mathbf{A}_{1}\mathbf{\Phi}\). As long as \(\mathbf{\Sigma}\) is diagonal the term \(\left(\mathbf{\Lambda}^{\prime}\mathbf{\Lambda}\right)^{-1}\mathbf{\Lambda}^{\prime} \mathbf{v}_{t}\) vanishes asymptotically, meaning that \(\mathbf{f}_{t}\) retains the interpretation of structural shocks. Therefore, the desired sign and zero restrictions required for identifying agreed and disagreed uncertainty can take the form of simple parametric restrictions imposed on the respective elements of \(\mathbf{\Lambda}\). Bayesian inference requires specification and tuning of prior distributions for all parameters, and estimation with iterative MCMC algorithms requires further assumptions and tuning parameters. Without dismissing the significance of such choices, we use default, automatic prior choices justified in detail in Korobilis (2022).19 We provide further technical details in the Appendix. Footnote 19: Our default choice is to iterate the algorithm 600,000 times, discard the first 100,000 iterations, and save every \(100^{th}\) draw from the parameter posterior. In all VARs of different sizes we estimate, this setting ensures low autocorrelation of posterior samples, as well as their good numerical properties. Data and specifications.Because fluctuations in measures of uncertainty and disagreement are short-lived, following Bloom (2009), Jurado et al. (2015), and Berger et al. (2020) our benchmark results rely on monthly U.S. macroeconomic data. The sample runs from 1978M1 to 2020M12, where the earliest date is dictated by the availability of the MSC data. Consumer disagreement is measured by our disagreement index DISAG described in Section 2. We adopt the 12-month-ahead macro uncertainty indicator developed by Jurado et al. (2015) as the benchmark measure of uncertainty. 
We compute this indicator from estimates of conditional volatilities of \(h\)-step ahead forecast errors using a monthly dataset of 134 macroeconomic time series and captures broad-based macroeconomic uncertainty.20 This indicator, being a conditional variance of a forecast error, is therefore the natural counterpart of the model concept. Moreover, because macro-uncertainty is about broad-based future economic conditions, it links more naturally to the concept of information dispersion we use in this paper, which is about dispersion of consumer views about economy-wide business conditions. The remaining monthly variables in our benchmark VAR specification are: real industrial production index (IP), real personal consumption expenditure index (CONS), total non-farm employment (EMPL), inflation rate based on the personal consumption expenditure price index (INFL), the S&P 500 index (SP500), and the federal funds effective rate (FEDFUNDS). We discuss robustness of our results in Section 5. The VAR models are estimated with 13 lags, and Appendix C describes the econometric methodology in detail. To conserve space we report IRFs to four key variables from the VAR specification, and the Appendix reports the remaining IRFs. Footnote 20: Methodologically, the indicator captures broad-based movements in economic uncertainty while filtering out variations in the conditional volatilities of the forecast errors. This procedure avoids accounting predictable movements in the economy as uncertainty, as shown in Ludvigson et al. (2021). Benchmark specification.The left panel in Figure 5 shows IRFs to a positive innovation in the variance of the fundamental shock \(\sigma_{\epsilon}\) - _agreed uncertainty_ - identified by imposing the sign restrictions in column (1) of Table 2 on the response of uncertainty and disagreement indicators in the first period after the shock. The JLN-12 uncertainty indicator rises immediately on impact and remains persistently elevated for approximately 15 months, but the DISAG indicator declines persistently in the short run and remains depressed for approximately 20 months. Beyond the initial period (where the response of activity indicators are restricted to zero), industrial production, and employment decline sharply and remain depressed even at the sixty month horizon. Our results echo recent findings in the literature (e.g., Jurado et al., 2015, Gilchrist et al., 2014, Ludvigson et al., 2021) using similar empirical methods; they emphasize a significant depressing effect of economic uncertainty on real activity indicators. The right panel in Figure 5 shows IRFs to a positive innovation in the variance of idiosyncratic noise, \(\sigma_{v}\) - _disagreed uncertainty_. It is identified by imposing the sign restrictions in column (2) of Table 2 on the response of uncertainty and disagreement indicators in the first period after the shock. The JLN-12 indicator shows a persistent rise that extends beyond the 30-month horizon, while disagreement displays a short-lived, persistent rise and stays elevated for about 10-months after the shock. Note that the increase in the JLN-12 indicator is stronger and more persistent than the increase in JLN-12 estimated in the left panel of the figure. Despite a stronger and more persistent rise in uncertainty, the responses of real activity indicators are qualitatively different than the responses under the agreed uncertainty shock. 
Specifically, industrial production exhibits a small positive and statistically significant short-lived response, in contrast to the persistent negative response estimated in the left panel; it then quickly reverts to the pre-shock level. Similarly, employment exhibits a small positive and statistically significant response that persists until month 30, in contrast to the negative persistent response estimated in the left panel. Thus, disagreed uncertainty shocks are characterized by dynamic effects that are broadly benign for economic activity in the short run, and they differ qualitatively from the strong, adverse, and long-lasting effects on economic activity in the aftermath of shocks to agreed uncertainty. Figure 5: Benchmark model. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. Appendix D reports the complete set of IRFs. To summarize, identified innovations in _agreed uncertainty_ and _disagreed uncertainty_ display sharp qualitative differences in the dynamic responses of real activity indicators. _Agreed uncertainty_ shocks are robustly contractionary and generate a sustained decline in industrial production and employment. By contrast, _disagreed uncertainty_ shocks are broadly benign; they are associated with a small and statistically significant positive response of real activity in the short run. We are the first study to show that disagreement in household views about current and future economic conditions that characterizes disagreed uncertainty is critical for the benign effects of uncertainty on real activity, but agreed uncertainty retains the standard adverse effect on real activity. **Forecast error variance decomposition**. We decompose the share of forecast error variance (FEVD) of the benchmark VAR variables into the two identified shocks. This is a useful check to ascertain whether these shocks are important drivers of the empirical indicators of uncertainty and disagreement. Figure 6 below shows the variance decomposition to the identified shocks of agreed uncertainty (green) and disagreed uncertainty (blue); we also report the residual variation, which is not attributed to any identified shock (yellow). Two important findings emerge. First, the share of FEVD in JLN-12 explained by the two shocks is significant, rising to over 70% after the seventh horizon. Interestingly, the share of FEVD in JLN-12 accounted for by the innovation in disagreed uncertainty alone exceeds the share of FEVD accounted for by the agreed uncertainty shock at all horizons, suggesting that the former is a significant driver of the variance in JLN-12. The disagreed uncertainty innovation accounts for 50% of FEVD in JLN-12 after the seven-month horizon and never drops below 20% of the same FEVD. Second, the two shocks combined account for the majority of the FEVD in disagreement, approximately 70% at all horizons. For disagreement, the share in FEVD accounted for by the agreed uncertainty shock exceeds the FEVD share accounted for by the disagreed uncertainty shock by a large margin. These findings establish that our two different uncertainty shocks jointly explain a significant share of the variation in the indexes of uncertainty and disagreement. Taken together, the results show the key role of innovations to the dispersion of information to explain movements in the different types of uncertainty, supporting the insight from our simple model in Section 3. 
Our exercise also suggests that the combination of two uncertainty shocks accounts for a large share of FEVDs in industrial production, rising from approximately 20% at the 10-month horizon to 40% at the 60-month horizon, and innovations to agreed uncertainty account for the majority of this total. These shocks, however, account for a relatively small share, which is approximately 10% after 10-months in the FEVD of employment, suggesting that other unidentified shocks are the major drivers of the variation in this variable.21 ## 5 Robustness analysis In this section we briefly discuss the robustness analysis intended to check the sensitivity of our results to alternative measures of consumer disagreement. We investigate the robustness of our findings when we control for different demographic characteristics of consumers, namely, age and education. In Appendix D we examine the robustness to alternative proxies to uncertainty used in related studies.22 We also consider VAR specifications that i) estimate the dynamic effects of the two shocks on a broad spectrum of macroeconomic (including labor market) and survey indicators, ii) replace the DISAG index with individual disagreement indices from specific survey questions, iii) estimate a VAR specification using a quarterly macro dataset. These results are also described in Appendix D. Figure 6: Variance Decomposition: Agreed (\(\sigma_{\varepsilon}\), green) vs. disagreed (\(\sigma_{v}\), blue) uncertainty. **Alternative disagreement indicators.** The tails disagreement employed in our benchmark specification is simple and intuitive, but does not fully utilize all the responses from MSC. Specifically, it only considers the two polar categories of responses (better/worse), while ignoring the middle category (depending on the question, this category relates to past/future conditions that are either the "same" or "uncertain"). For that reason we recompute the disagreement index using two alternative measures: "Entropy disagreement" using Shannon's (Shannon, 1948) entropy measure, and "Lacy disagreement" using the transformation proposed by Lacy (2006). These exploit all possible answers from consumers. The entropy disagreement is defined as23 Footnote 23: This measure we define as disagreement is called the “Shannon Index” in ecology and related sciences, and it is used to measure the diversity and distribution of types of species in a community; see Hill (1973). \[H_{t}^{j}=-\sum_{i=1}^{n}p(x_{i}^{j})\log p(x_{i}^{j})\] where \(x_{i}^{j}\) is option \(i\) of \(n\) possible answers for question \(j\), and \(p(x_{i}^{j})\) is the proportion of individuals answering \(x_{i}^{j}\). This index gives a measure of the cross-sectional uncertainty of consumers about the possible business outcomes that may occur, where \(p(x_{i}^{j})\) has an interpretation of probabilities.24 The higher the index the higher the uncertainty and the higher the disagreement. For example, if all consumers shared the same view about the prospects of the economy, the value of the index will be zero, which reflects a situation of zero uncertainty and disagreement. By contrast if consumers are equally divided between the three outcome categories ("better," "worse," "same"), the value of the index attains the maximum value. The second alternative disagreement measure, from Lacy (2006), describes how dispersed or concentrated ordinal data is without requiring further assumptions about inter-category distances. 
The Lacy disagreement is defined as: Footnote 24: We assume that consumers who have the same view about business conditions do so because they also agree on the probabilities about observing a specific outcome. \[D_{j}^{2}=\sum_{i=1}^{n-1}F_{i}\left(1-F_{i}\right),\] where \(F_{i}\) is the cumulative relative frequency for the \(i\)th category. Note that the sum excludes the last category, because \(F_{n}\) is always 1. This \(D_{j}^{2}\) measure ranges from 0 to \(\left(n-1\right)/4\). When the value of this measure is zero, all responses fall in the same category. The maximum value of \(\left(n-1\right)/4\) denotes a completely polarized distribution in which half of the responses are in category 1 and half are in category \(n\). Values between the minimum and the maximum indicate intermediate levels of dispersion. We re-estimate the VAR after replacing the disagreement indicator DISAG with the two alternative indicators one at a time, retaining all other variables in the benchmark specification. The results are reported in Figure 7 below. First, we note that the median IRFs displayed following a shock to agreed uncertainty (left panel) and disagreed uncertainty (right panel) are qualitatively and quantitatively similar when we use either the Lacy (DISAG-L, dashed-green line) or Entropy (DISAG-E, dashed-blue line) concept of disagreement in the VAR, and they are broadly similar to the IRFs we estimate from the benchmark specification (also plotted in the same figure). This result shows that the different VAR specifications identify the same shocks to agreed and disagreed uncertainty. Moreover, the VAR specifications with the DISAG-L and DISAG-E indicators suggest that the short-run positive responses of industrial production and employment following an innovation to disagreed uncertainty are stronger in comparison to the responses in the same variables estimated in the benchmark specification. Overall, this exercise ensures that the DISAG indicator used in the benchmark VAR is robust to including information from those consumers that are more uncertain about the strength or weakness of current and future economic conditions. **Whose disagreement: Education and age.** In addition to the overall aggregate response to the survey questions, the MSC collects demographic responses from consumers of different education and age status. They collect responses from three education categories, namely: _high-school_, _some college_, and _college degree_. They also collect responses from three age groups: 18-34, 35-54, and 55 and above. In this section we compute disagreement indicators for each of these education and age groups - six in total - using the tails concept of disagreement. We then re-estimate the benchmark VAR using these indicators one at a time. Figure 7: Alternative disagreement indexes. Agreed (\(\sigma_{\varepsilon}\), left) vs. disagreed \(\sigma_{v}\) (right) uncertainty. Figures 8, 9, and 10 display dynamic effects from the VARs that condition on the disagreement indicators based on the different education groups. The dynamic effects of agreed and disagreed uncertainty shocks, when we condition on disagreement of the least educated group (_high school_ education, Figure 8), are very similar quantitatively to those dynamic effects reported for the baseline specification. 
Interestingly, the IRFs from the specifications that condition on disagreement of the _some college_ and _college degree_ groups do not exhibit the sharp differences in the estimated real effects following agreed and disagreed uncertainty shocks. Industrial production is the only variable that appears to respond statistically significantly following an agreed uncertainty shock. Figures 9 and 10 suggest industrial production and employment do not respond statistically significantly following a disagreed uncertainty shock. It appears that disagreement from those consumers with _high school_ educations matters the most for the real activity effects we estimate in the benchmark specification. We report results from the VAR specifications conditioned on disagreement indicators based on the three age groups. When we condition the VAR on disagreement from the age groups, 18-34, and 35-54 age groups (see Figures 11, 12), the responses to industrial production and employment following agreed and disagreed uncertainty shocks are not statistically different from zero (with the exception of industrial production after the 40-month horizon in Figure 11). By contrast, when we condition on disagreement of the 55-and-over age group, the responses to the real activity indicators following agreed and disagreed uncertainty innovations (Figure 13) are strong and statistically significant and very similar to the dynamic effects displayed in 5. Moreover, the dynamic responses of the real activity indicators to agreed and disagreed innovations display the systematic differences we estimate in the benchmark specification. These results suggest that disagreement from the 55-and-over age group appears to be the most relevant driver behind our benchmark results, which are based on the aggregate responses. _Notes_: The figure shows impulse responses to the JLN 12-months-ahead uncertainty indicator (JLN12), the disagreement index for high school education level (DISAG-EDU1), industrial production (IP), and employment (EMPL). We compute IRFs from an eight-variable VAR system as described in the text. The shaded gray areas are the 16% and 84% posterior bands generated from the posterior distribution of VAR parameters. The units of the vertical axes are percentage deviations, and the horizontal axes report time measured in months. Figure 8: Benchmark model–education: High school. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. Figure 9: Benchmark model–education: some college. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. _Notes_: The figure shows impulse responses to the JLN 12-months-ahead uncertainty indicator (JLN12), the disagreement index for college-or-higher education level (DISAG-EDU3), industrial production (IP), and employment (EMPL). We compute IRFs from an eight-variable VAR system as described in the text. The shaded gray areas are the 16% and 84% posterior bands generated from the posterior distribution of VAR parameters. The units of the vertical axes are percentage deviations, and the horizontal axes report time measured in months. Figure 11: Benchmark model–age: 18-34. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. Figure 10: Benchmark model–education: College or higher. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. 
_Notes_: The figure shows impulse responses to the JLN 12-months-ahead uncertainty indicator (JLN12), the disagreement index for age 35-54 (DISAG-AGE2), industrial production (IP), and employment (EMPL). We compute the IRFs from an eight-variable VAR system as described in the text. The shaded gray areas are the 16% and 84% posterior bands generated from the posterior distribution of VAR parameters. The units of the vertical axes are percentage deviations, and the horizontal axes report time measured in months. Figure 12: Benchmark model–age: 35-54. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. Figure 13: Benchmark model–age: 55 and above. Agreed \(\sigma_{\varepsilon}\) (left) versus disagreed \(\sigma_{v}\) (right) uncertainty. ## 6 Conclusion In this paper we establish two new, distinct concepts of uncertainty shocks, namely, _agreed_ and _disagreed_ uncertainty shocks. We show that the dispersion of consumer views about current and future economic conditions, measured by consumer disagreement, is an important conditioning factor for the effect of uncertainty on economic activity. We present a dispersed and noisy information model where agents form expectations by processing idiosyncratic signals about an economic fundamental. We use the model to illustrate the connection between consumer disagreement, which is a manifestation of information dispersion, and uncertainty. The model shows that the change in observed uncertainty, measured by the variance of a forecast error, is a function of both the variance of the fundamental shock and the variance of the idiosyncratic noise. Thus, a larger dispersion of views on economic conditions (i.e., higher consumer disagreement) may increase the variance of the forecast error without involving any change in the volatility of exogenous fundamental forces in the economy. We use the model to formulate simple sign restrictions that disentangle the dynamic effects of innovations to _agreed_ and _disagreed_ uncertainty on U.S. economic indicators in a medium-scale Bayesian VAR model. In our benchmark specification, innovations in agreed uncertainty foreshadow significant and often long-lasting, depressing effects on economic activity, namely, industrial production and employment, corroborating the evidence from numerous studies. By contrast, innovations in disagreed uncertainty (a rise in uncertainty in periods of high consumer disagreement) are benign for economic activity indicators, and they often lead to short-run, positive responses in those indicators. Our analysis suggests that shocks to disagreed uncertainty are non-recessionary. Our results imply it is important to distinguish between the two types of uncertainty shocks to study the link between uncertainty and economic activity. Our study opens up interesting avenues for future research. The analysis implies that the disclosure of information that reduces disagreement may increase the adverse effect of uncertainty. A straightforward extension of our analysis is to study how policy announcements that convey information about the economy may result in lower disagreement and exacerbate the negative effect of uncertainty. It would be interesting to study whether a strategic diffusion of information that maintains a wide range of views could alleviate, or even overturn, the adverse effect of uncertainty. 
Finally, our results show that the heterogeneity of views is critical for the aggregate effect of uncertainty on output, suggesting that models with heterogeneous agents may prove fruitful for the study of expectations and the interplay between uncertainty and economic activity. We plan to pursue some of these ideas in future work.
2308.14511
ATMOSPHERIX: II- Characterising exoplanet atmospheres through transmission spectroscopy with SPIRou
In a companion paper, we introduced a publicly-available pipeline to characterise exoplanet atmospheres through high-resolution spectroscopy. In this paper, we use this pipeline to study the biases and degeneracies that arise in atmospheric characterisation of exoplanets in near-infrared ground-based transmission spectroscopy. We inject synthetic planetary transits into sequences of SPIRou spectra of the well known M dwarf star Gl 15 A, and study the effects of different assumptions on the retrieval. We focus on (i) mass and radius uncertainties, (ii) non isothermal vertical profiles and (iii) identification and retrieval of multiple species. We show that the uncertainties on mass and radius should be accounted for in retrievals and that depth-dependent temperature information can be derived from high-resolution transmission spectroscopy data. Finally, we discuss the impact of selecting wavelength orders in the retrieval and the issues that arise when trying to identify a single species in a multi-species atmospheric model. This analysis allows us to understand better the results obtained through transmission spectroscopy and their limitations in preparation to the analysis of actual SPIRou data.
F. Debras, B. Klein, J. -F. Donati, T. Hood, C. Moutou, A. Carmona, B. Charnay, B. Bézard, P. Fouqué, A. Masson, S. Vinatier, C. Baruteau, I. Boisse, X. Bonfils, A. Chiavassa, X. Delfosse, G. Hebrard, J. Leconte, E. Martioli, M. Ould-elkhim, V. Parmentier, P. Petit, W. Pluriel, F. Selsis, L. Teinturier, P. Tremblin, M. Turbet, O. Venot, A. Wyttenbach
2023-08-28T11:57:38Z
http://arxiv.org/abs/2308.14511v2
# ATMOSPHERIX: II- Characterising exoplanet atmospheres through transmission spectroscopy with SPIRou ###### Abstract In a companion paper, we introduced a publicly-available pipeline to characterise exoplanet atmospheres through high-resolution spectroscopy. In this paper, we use this pipeline to study the biases and degeneracies that arise in atmospheric characterisation of exoplanets in near-infrared ground-based transmission spectroscopy. We inject synthetic planetary transits into sequences of SPIRou spectra of the well known M dwarf star Gl 15 A, and study the effects of different assumptions on the retrieval. We focus on (i) mass and radius uncertainties, (ii) non isothermal vertical profiles and (iii) identification and retrieval of multiple species. We show that the uncertainties on mass and radius should be accounted for in retrievals and that depth-dependent temperature information can be derived from high-resolution transmission spectroscopy data. Finally, we discuss the impact of selecting wavelength orders in the retrieval and the issues that arise when trying to identify a single species in a multi-species atmospheric model. This analysis allows us to understand better the results obtained through transmission spectroscopy and their limitations in preparation to the analysis of actual SPIRou data. keywords: exoplanets - planets and satellites: atmospheres - planets and satellites: gaseous planets - techniques: spectroscopic - methods: data analysis ## 1 Introduction In a companion paper (Klein et al., submitted to MNRAS, hereafter named paper I), we have introduced our publicly-available pipeline for the analysis of high resolution spectroscopy (HRS) data of exoplanet atmospheres. This pipeline was developed in the framework of the ATMOSPHERIX consortium, a gathering of observers and theoreticians created to optimize the study of ground-based HRS for exoplanet atmospheres at the French level. We have shown the validity and robustness of this pipeline for single-component isothermal planetary atmospheres. However, we know this is a crude simplification as more and more molecular species are discovered in exoplanet atmospheres (see review in Guillot et al., 2022) and departures from vertically-isothermal atmospheres are also commonly found thanks to stronger temperature constraints (e.g. Haynes et al., 2015; Gibson et al., 2020). More complex models would therefore be needed to be representative of actual observations but the more complex the model, the more degenerate the problem. It is therefore an important task to understand the sources of degeneracies in atmospheric retrievals in order to provide the most reliable parameter estimates. Such degeneracies have already been studied in low resolution spectroscopy (LRS) for more than 20 years (see e.g., Brown (2001) and the references in the introduction of Welbanks & Madhusudhan (2019)) but are less extensively studied in HRS, particularly in the infrared. Fisher & Heng (2019) have studied the information that can be obtained through the sodium doublet in the visible, concluding that HRS alone is not enough to determine appropriately the pressure that are probed by the sodium lines. The combination with LRS in Pino et al. (2018) might allow to resolve some of these questions. We obtained similar conclusions when including clouds in our companion paper, where the loss of the continuum by HRS exacerbated a degeneracy between cloud top pressure and water content. 
Clouds, in general, are a major focus of efforts to understand plausible degeneracies in the spectra, both at high and low resolution (see e.g., Kitzmann & Heng (2018); Barstow (2020)). Inclusion of multi-dimensional effects further complicates this picture (Line & Parmentier, 2016; Pluriel et al., 2020; Welbanks & Madhusudhan, 2022). In this paper, we therefore focus on a few degeneracies and potential biases that are inherent to HRS with application to synthetic SPIRou transit data. We test three cases: uncertainties in the planet's mass and radius, non-isothermal vertical structures and models with multiple molecular species with comparable mixing ratios. We first recall the process of data generation and reduction in Section 2. In Section 3, we then present our test cases and the results of template matching and Bayesian retrieval on the atmospheric parameters. This leads us to discuss how to optimize the detection and the ways forward in Section 4, before concluding in Section 5. ## 2 Data generation and analysis The generation of the synthetic spectra and their reduction are extensively described in paper I. We briefly recall them here. ### Creation and reduction of synthetic data We simulate the observations of a planetary transit with a near-infrared (nIR) high-resolution spectrograph. This is done by injecting a synthetic planet atmosphere spectrum into a sequence of 192 spectra of the bright M dwarf Gl 15 A collected during 5 hours with SPIRou in October 2020 and divided into two sets (see Table 1 and Figure 1). Gl 15 A is chosen both because we have many spectra of it with SPIRou and because it is a well-studied star in radial velocity. While its system contains a short-period Earth-like planet, it has been shown that no Jovian planet orbits this star with a period of less than 10 years (see for example Figure 2 of Pinamonti et al. 2022). The observations sample the [0.9,2.5] \(\mu\)m wavelength range in 49 diffraction orders with a typical pixel size of 2.28 km.s\({}^{-1}\) and a spectral resolution of \(\sim 70000\). Data are reduced through the APERO pipeline (Cook et al., 2022) that calibrates the data in wavelength and applies state-of-the-art telluric correction. The synthetic planet is based on the classical hot Jupiter HD189733 b (Bouchy et al., 2005) injected on a circular orbit and we decided to conserve four planetary and transit parameters to obtain data with a consistent expected level of detection: (i) the transit depth, (ii) the transit duration, (iii) the ratio between the stellar radius and the atmospheric scale height and (iv) the atmospheric temperature. The injected planet spectra are all shifted by 30 km.s\({}^{-1}\) so that stellar molecular features and planet atmosphere absorption lines are not mixed. Once the planet is injected, we remove the stellar spectra and remaining telluric contaminations by dividing each observed spectrum within each order by a median spectrum. This step is performed successively in the Earth rest and stellar rest frames, and an additional high-pass filter is applied to the residual spectra, in order to correct for low-frequency variations in the continuum. Outliers are flagged and masked using a sigma-clipping procedure, and the residual time-varying telluric flux is corrected with an airmass detrending in the log-flux space. We then apply a principal component analysis (PCA) to get rid of the remaining correlated noise. 
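As a rough illustration of this cleaning step, the minimal sketch below removes the leading principal components from a matrix of residual spectra in a single order. The function name, the choice to work in log-flux, and the number of removed components are illustrative assumptions, not the exact settings of the pipeline of paper I.

```python
import numpy as np

def remove_pca_components(residuals, n_components=2):
    """Remove the leading principal components from a (n_spectra x n_pixels)
    matrix of residual spectra within one order, as a proxy for the
    correlated-noise cleaning step."""
    log_res = np.log(residuals)                       # illustrative choice: work in log-flux
    mean = log_res.mean(axis=0)
    u, s, vt = np.linalg.svd(log_res - mean, full_matrices=False)
    s[:n_components] = 0.0                            # null the leading singular values
    return np.exp(u @ np.diag(s) @ vt + mean)         # back to linear flux

# Illustrative shapes: 192 spectra of 4088 pixels in one SPIRou order
rng = np.random.default_rng(0)
residuals = np.exp(rng.normal(0.0, 1e-3, size=(192, 4088)))
cleaned = remove_pca_components(residuals, n_components=2)
```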
An auto-encoder can be applied instead, although it is not yet mature for parameter retrieval and is limited to detection of molecular species as we cannot reproduce its effect efficiently on the models. Diffraction orders 57 to 54 (i.e. \(\sim\)1 300 to \(\sim\)1 500 nm) and 42 to 40 (i.e., \(\sim\)1 800 to \(\sim\)2 000 nm), located within nIR water absorption bands, are discarded. ### Uncovering the planetary signature Once the reduced data have gone through the PCA or auto-encoder step, the planetary signal is still largely buried under the noise. We either perform a template matching method between theoretical models and the reduced data, or a statistical exploration of the parameter space through nested sampling using the python module pymultinest (Buchner et al., 2014; Feroz & Hobson, 2008; Feroz et al., 2009, 2019). The models are created with petitRADTRANS (Molliere et al., 2019) which provides the planetary radius as a function of wavelength. They are next transformed by calculating the ratio of planetary to stellar radius squared, the so-called transit depth. Figure 1: Variations of photometric flux (top panel), airmass (panel 2), Geocentric-to-stellar rest frame RV correction (panel 3) and peak SNR per velocity bin during the two simulated transits of the HD 189733 b analog. On panels 1 and 4, the two different transits are respectively shown as blue dots and pink crosses. The vertical gray band indicates the primary transit of the simulated planet. The horizontal gray dashed line on the bottom panel indicates the average value of the peak S/N for the observed spectra. The correlation function (CCF) calculated for different planet velocimetric semi-amplitude (\(\rm K_{p}\)) and systemic Doppler shift (\(\rm V_{sys}\)) writes: \[CCF=\sum_{i}\frac{d_{i}m_{i}}{\sigma_{i}^{2}}, \tag{1}\] where \(m_{i}\), \(d_{i}\) and \(\sigma_{i}\) are respectively the flux in the model spectrum, the observed flux and the flux uncertainty at pixel \(i\) (corresponding to time \(t\) and wavelength \(\lambda\): \(d_{i}=d(t,\lambda)\)). This function is calculated and summed for every SPIRou order. More precisely: \[\sigma_{i}^{2}=\sigma^{2}(t,\lambda)=\frac{\sum_{i}\left(d(t,\lambda)-\overline {d(t)}\right)^{2}}{N_{\rm spectra}}\frac{\overline{\rm SNR}}{\overline{\rm SNR} (t)} \tag{2}\] where the bar denotes a time average and \(N_{\rm spectra}\) is the number of spectra. The barred SNR values are calculated for each order. In order to convert correlation value to significance of detection, we proceed as is frequently done in the literature, i.e., divide by the standard deviation of the correlation map away from the planetary signal. The nested sampling relies on the calculation of a likelihood \(\mathcal{L}\), defined following the frameworks of Brogi and Line (2019) and Gibson et al. (2020): \[\mathcal{L}=\prod_{i=0}^{N}\frac{1}{\sqrt{2\pi}\sigma_{i}}\mathrm{exp}\left\{ -\frac{\left[m_{i}-ad_{i}\right]^{2}}{b\sigma_{i}^{2}}\right\}, \tag{3}\] where \(a\) and \(b\) are scaling factors to account for incorrect modelling. We set \(a\) to 1 in this paper, and \(b\) is optimized globally as in Gibson et al. (2020). In order to account for the fact that the observed planet atmosphere spectrum is affected by the data reduction procedure, we degrade the model before comparing it to the data, following the procedure detailed in Gibson et al. (2022). First, we create the projector P on the vector space defined by our subset of PCA eigenvectors obtained in the data analysis. 
At each iteration of the nested sampling process, we then compute a sequence of model spectra, called M, matching the wavelength and time grids of the observations, and Doppler-shifted according to the values of \(\rm K_{p}\) and \(\rm V_{sys}\). We finally subtract the projection by P of M. Our final, degraded sequence of theoretical models \(\rm M^{\prime}\) is: \[M^{\prime}=\mathrm{exp}(\mathrm{log}\,M-P\mathrm{log}\,M). \tag{4}\] As we show in our companion paper, this step is crucial not to bias the retrieved planet parameters. Finally, as explained in our companion paper, we have the possibility to include a proxy for planetary rotation and winds. We simply convolve our 1D atmospheric models by a rotation kernel that considers the latitudinal speed variation due to rotation. This kernel can be modulated to take into account any latitudinal wind shape, such as superrotation. We expressed this kernel as two convolution products so that it is numerically very efficient and allows one to retrieve the planet rotation rate in a parameter space exploration algorithm. ## 3 Application to Simulated Data ### Uncertainties in mass and radius Our first test was to keep a simple, isothermal model containing only water as in paper I but to change the radius \(R\) and gravitational acceleration \(g\) (proxy for mass \(M\) here as \(g\propto M/R^{2}\)) of the planet in the retrieval compared to the injected planet. We tested three different cases: in Model A, gravity and radius are treated as free parameters in the retrieval. Model B imposes a wrong gravity and the correct radius and we look at the effect on the temperature and water composition. Model C imposes a wrong gravity but allows only the radius to change. \begin{table} \begin{tabular}{c c c} \hline \hline **Stellar parameters** & **Gl 15 A** & \\ & Value & Reference \\ \hline Mass (\(M_{\odot}\)) & 0.400 \(\pm\) 0.008 & Ro21 \\ Radius (\(R_{\odot}\)) & 0.375 \(\pm\) 0.007 & Ro21 \\ Effective temperature (K) & 3742 \(\pm\) 30 & Ro21 \\ \(H\) magnitude & 4.476 \(\pm\) 0.2 & Cou03 \\ Systemic velocity [\(\rm km.s^{-1}\)] & 11.73 \(\pm\) 0.0001 & Fo18 \\ Limb Darkening (Quadratic) & 0.0156, 0.313 & C11 \\ \hline \hline **Planet parameters** & & \\ & HD 189733 b & Synthetic planet & Reference \\ \hline Transit depth (\%) & 2.2 \(\pm\) 0.1 & 2.2 & Ad19 \\ Radius (\(R_{J}\)) & 1.142 \(\pm\) 0.04 & 0.55 & – \\ Mass (\(M_{J}\)) & 1.13 \(\pm\) 0.05 & 0.572 & – \\ \(\rm g\,(m\,s^{-2}\)) & 22.45 \(\pm\) 1.5 & 49.18 & – \\ Orbital period (d) & 2.21857 \(\pm\) 0.00001 & 2.21857 & – \\ Mid transit time (BJD TDB) & 2458334.99089 \(\pm\) 0.0007 & 2459130.8962180 & Ad19 \\ Inclination (deg) & 85.7 \(\pm\) 0.1 & 90.0 & – \\ Eccentricity & 0.0 & 0.0 & – \\ Equilibrium temperature (K) & 1209 \(\pm\) 11 & 1209 & - \\ Orbital semi-amplitude (\(\rm km.s^{-1}\)) & 151.2 \(\pm\) 4.5 & 120.0 & Ad19 \\ Transit duration (h) & 1.84 \(\pm\) 0.04 & 1.84 & – \\ \hline \hline \end{tabular} * To gain some space in the table, we use aliases for the references. Ro21, Cou03, Fo18, C11 and Ad19 stand respectively for Rosenthal et al. (2021), Cutri et al. (2003), Fouqué et al. (2018), Claret & Bloemen (2011) and Addison et al. (2019). \end{table} Table 1: Physical parameters for Gl 15 A, HD 189733 b and for the simulated hot Jupiter used in the study. When taken from the literature, the reference of each parameter is indicated in the right-hand column\({}^{\dagger}\). The results and their comparison with the isothermal model of Paper 1 are provided in Table 2. 
Throughout this section, \(R\) and \(g\) are expressed in planetary units. Regarding Model A, gravity and radius are recovered to within a few percent. There is a large degeneracy between both parameters which translates into a smaller error bar on the normalized \(R/g\) ratio: the posterior distribution of \(R/g\), shown in Fig. 2, is well matched by a Gaussian of standard deviation 0.14. The uncertainties on the other physical parameters are comparable with paper I and the retrieval is globally consistent with it. When we simply imposed a lower gravity in the nested sampling algorithm compared to the injected model in Model B, the retrieval of composition and temperature gave lower values for mass mixing ratio and temperature. The water mass mixing ratio (MMR) is more affected, with the retrieved value 3 \(\sigma\) away from the injected value. However, contrary to the results of paper I, the input parameters are outside of the posterior distribution in the temperature-composition joint posterior (figure not shown): there is an actual bias that was not present in our analysis with correct mass and radius. Finally, Model C shows that varying only gravity or radius allows us to obtain results comparable with Model A for water and temperature, with retrieved \(R/g\) close to 1. The results of this section show that, within our high SNR framework, we are sensitive to more than the sole amplitude of molecular lines. Indeed, observationally speaking, we are sensitive to the variations of the transit depth with wavelength, hence the important quantity is: \[O\sim\frac{HR_{\rm p}}{R_{\rm s}^{2}}, \tag{5}\] where \(H\) is the typical scale height of the atmosphere, \(R_{\rm p}\) the planetary radius and \(R_{\rm s}\) the stellar radius. For the simple case of an isothermal atmosphere: \[H=\frac{\mathcal{R}T}{Mg}, \tag{6}\] where \(\mathcal{R}\) is the ideal gas constant, \(T\) the temperature, \(M\) the molecular mass and \(g\) the gravitational acceleration. If the amplitude were the only concern, we would find that the mass and radius of the planet would be correlated with temperature as the code tries to match the \(HR_{\rm p}\) value to that of the injected planet. The fact that temperature and composition uncertainties remain globally insensitive to mass and radius as long as \(R/g\approx 1\) shows that we are not only matching amplitudes, but also the shape of the lines, which is affected by temperature and composition only. This analysis points towards the fact that the uncertainty in mass and radius should be included in the retrieval of atmospheric parameters rather than chosen as constants. This allows us to avoid biases and, at least in our simple case, does not degrade the obtained atmospheric properties. For optimization purposes, only one of these quantities can be included in the retrieval, remembering that we are only sensitive to the ratio of radius and gravity. This will be particularly relevant for low-mass, distant exoplanets or planets around very active/young stars where the complicated radial velocity signature might lead to large uncertainties in the mass. ### Non vertically isothermal models In paper I, we only considered vertically-constant models in temperature and composition. Here, we test whether we are able to retrieve parameters that vary vertically, focusing on non-isothermal profiles. We implement a vertical temperature profile taken from 3D GCM simulations of HD 189733 b (Drummond et al., 2018). 
The temperature structure is averaged at both limbs and used as an input in the 1D petitRADTRANS modelling (see Fig. 3). We created two models with constant water volume mixing ratio (VMR) of \(10^{-3}\) and \(10^{-5}\) respectively. For our retrieval, we have tested four temperature prescriptions: in case "Isotherm" we assumed an isothermal profile. Case "Linear" used a 4-point temperature profile, where we retrieve the temperature at 4 different pressures (1 Pa, 100 Pa, \(10^{4}\) Pa and 1 bar) and linearly interpolate in log pressure between these points. The temperature at pressures lower than 1 Pa and higher than 1 bar is kept constant. Case "Lagrange" also used the same 4-point prescription but the interpolation was made through a Lagrange polynomial, ensuring a smooth temperature profile. Finally in case "Guillot", we have used the widely applied two-temperature model of Guillot (2010) and retrieved 4 parameters: the internal temperature, the equilibrium temperature, the infrared opacity and the infrared-to-visible opacity ratio. As we wanted to focus on the temperature structure, we present results where we have fixed the water composition in the retrieval to the model composition. When we leave this value as a free parameter, we always retrieve the appropriate water composition but the temperature retrieval is worsened due to expected degeneracies (already presented in paper I). The retrieved temperature profiles are shown in Figs. 3 and 4. We first notice that the temperature profile is poorly constrained: the standard deviation with pressure can easily reach hundreds of kelvins, which is much more than the \(\approx 115\) K we obtained in the isothermal cases of paper I. The temperature is on average higher than the injected profile, but this mainly arises from the choice of priors which are not centered around the injected profile. In both cases, as seen in Fig. 4, the deep (\(\geq 1\) bar) and shallow (\(\leq 1\) Pa) atmosphere temperatures are just given by the priors: the distribution is close to the uniform prior distribution we chose. However, what we clearly see in Figs. 3 and 4 is that the retrievals are sensitive to different regions for the two models: the high (low) water concentration model is more sensitive to the higher (deeper) atmosphere. This was expected as the retrieved radius depends mostly on the regions which contribute the most to the water absorption lines, which correspond to pressures where the water column becomes optically thick (optical depth becoming greater than 1). This roughly corresponds to 100 Pa in the dense water model, and \(10^{4}\) Pa in the other. Interestingly, although the mean profile is closer to the injected profile around \(10^{4}\) Pa in the low VMR case, the standard deviation of recovered temperature with pressure is always lower in the high VMR case. This arises from the lower SNR in the low VMR case: even at the pressures which contain most of the water information, the amplitude of the planetary signal is too low to permit a precise fit to the data. This is consistent with the isothermal retrieval, whose posterior temperature distribution is well matched by a Gaussian with mean and standard deviation \(1041\pm 63\) K in the high VMR case, and \(1195\pm 143\) K in the low VMR case. Figure 2: Posterior distribution of \(\frac{R}{g}\) from Model A in planetary units. The injected ratio is 1. The orange line is a Gaussian with mean 0.936 and standard deviation 0.14. 
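For concreteness, the "Linear" prescription described above can be written as the short sketch below; the node pressures follow the values quoted in the text, while the node temperatures and layer grid are purely illustrative and do not correspond to retrieved values.

```python
import numpy as np

def linear_log_p_profile(pressures, node_pressures, node_temperatures):
    """'Linear' prescription: temperatures retrieved at a few pressure nodes and
    interpolated linearly in log-pressure; the profile is held constant outside
    the node range (np.interp clamps to the endpoint values)."""
    return np.interp(np.log10(pressures), np.log10(node_pressures), node_temperatures)

# Illustrative call: 100 layers between 1e-2 Pa and 1e6 Pa, nodes at 1 Pa, 100 Pa, 1e4 Pa, 1 bar
p = np.logspace(-2, 6, 100)
T = linear_log_p_profile(p, [1.0, 1e2, 1e4, 1e5], [700.0, 900.0, 1100.0, 1200.0])
```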
This test therefore shows that, on average, we are poorly sensitive to the temperature structure and that we are primarily probing pressures where the optical depth of molecular lines reaches 1. This means that we could potentially obtain a better understanding of the temperature profile by combining the information from different molecules: depending on their density and opacity, they will probe different pressure levels. A retrieval with a unique temperature profile for different molecules might therefore be less informative than trying to retrieve the temperature for different species and estimating where they provide most of their signal. ### Recovering a multi-species model We now consider the case of multiple species, namely H\({}_{2}\)O, CO, CH\({}_{4}\) and NH\({}_{3}\). Throughout this section, unless specified otherwise, our synthetic atmosphere always contains these four molecules (in addition to H\({}_{2}\) and He, which are largely dominant) with an isothermal profile, and we only vary the VMRs of each individual species. We created three synthetic transit sequences with three injected models labelled 1, 2 and 3. Model 1 used the MMRs reported in Table 4 of Giacobbe et al. (2021), Model 2 kept the same MMR for water but the other molecules were a factor of 10 lower and Model 3 a factor of 100 lower. The characteristics of the three models are summarised in Table 3 and plotted in Fig. 5, where we show the whole models and two zooms: one where water has low-amplitude absorption lines (around 1640 nm) and one where water absorption is dominant (around 1860 nm). In Appendix A, we show the cross-correlation maps for the three synthetic models. When we correlated the synthetic data with the injected models, we recovered \(\approx 5\sigma\) detections in all cases, with Model 3 having the highest SNR and ratio between maximum positive and minimum negative value of correlation. This is not surprising as we see in Fig. 5 that the amplitude of the lines is higher for this model, where the lower level of other species has less impact on the global shape of the spectrum. We then correlated the synthetic data with models containing only one of the species. This is usually done in the literature for atmospheric characterisation to validate a detection of an individual species even if the atmosphere contains other constituents. When using all of the orders, CH\({}_{4}\) was detected (significance larger than \(3\sigma\)) for all 3 models as shown in Fig. 10 for Model 3. NH\({}_{3}\) on the other hand is detected in Model 1, only marginally detected (detection around 2 \(\sigma\)) in Model 2 as shown in Fig. 11 and not detected in Model 3. Water is not detected in Model 1, detected in Model 2 as shown in Fig. 12 and detected over 4 \(\sigma\) in Model 3. Finally, we never detected CO as we explain in the next paragraph. These rather poor results led us to consider selecting orders, as we detail in the next section. Importantly, NH\({}_{3}\) and H\({}_{2}\)O in Model 1 and 2 were not robustly detected although, when we performed injection-recovery tests with only these molecules at the same VMR, they were easily detected (larger than 4 \(\sigma\)). This shows that the non-detection of a given individual species does not systematically mean that it is absent from the atmosphere but can simply reflect a mismatch between a complex observed atmosphere spectrum and an overly simplistic single-species model. 
For CO, we realised that the issue came from the stellar CO which prevents the detection of planetary CO. When we divide by the mean stellar spectra in the data reduction process we affect the planetary lines and hamper the detection. However, we also tested that the presence of CO in the synthetic data only marginally affected the retrieval of other species, due to its limited wavelength range of absorption and well separated absorption lines. We will therefore not consider CO in the rest of this section although it is in the models. \begin{table} \begin{tabular}{l c c c} \hline \hline & **H\({}_{2}\)O** & **NH\({}_{3}\)** & **CH\({}_{4}\)** \\ Model 1 & No & Yes & Yes \\ Model 2 & Yes & Marginal & Yes \\ Model 3 & Yes & No & Yes \\ \hline \hline \end{tabular} \end{table} Table 4: Detection of individual species when correlating single-component models with the synthetic data detailed in Table 3. A detection means an SNR greater than 3, whereas a marginal detection is between 2 and 3. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**R (R\({}_{p}\))**} & \multicolumn{3}{c}{**g (g\({}_{p}\))**} & \multicolumn{3}{c}{_T\({}_{\rm eq}\) (K)_} & \multicolumn{3}{c}{**Water MMR**} \\ & True & Input & Retrieved & True & Input & Retrieved & True & Input & Retrieved & True & Input & Retrieved \\ \cline{2-11} P I & 1.0 & 1.0 & - & 1.0 & 1.0 & - & 900 & [200,2000] & 1013 \(\pm\) 117 & -2.11 & [-8,-1] & -2.49 \(\pm\) 0.41 \\ M A & 1.0 & [0.6,1.4] & \(0.94\pm 0.17\) & 1.0 & [0.6,1.4] & \(1.04\pm 0.18\) & 900 & [200,2000] & 1026 \(\pm\) 124 & -2.11 & [-8,-1] & -2.57 \(\pm\) 0.44 \\ M B & 1.0 & 1.0 & - & 1.0 & 0.75 & - & 900 & [200,2000] & 967 \(\pm\) 119 & -2.11 & [-8,-1] & -3.05 \(\pm\) 0.37 \\ M C & 1.0 & [0.5,1.5] & \(0.725\pm 0.13\) & 1.0 & 0.75 & - & 900 & [200,2000] & 1003 \(\pm\) 130 & -2.11 & [-8,-1] & -2.61 \(\pm\) 0.44 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of the retrieved parameters when varying mass and radius. The radius is in true planetary radius (R\({}_{p}\)), gravity in planetary gravity (g\({}_{p}\)) and water mass mixing ratio (MMR) in log. For each parameter, the first column is the model true value. The second column represents input values when only one number is provided or the uniform prior range when the parameter is included in the retrieval. The third column gives the retrieved values with \(1\sigma\) uncertainty. The description of the models is given in the text. P I is paper one, and M x means Model x. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Model 1** & **Model 2** & **Model 3** \\ \hline Temperature & 900 & 900 & 900 \\ log\({}_{10}\)(H\({}_{2}\)O) & -3.05 & -3.05 & -3.05 \\ log\({}_{10}\)(CO) & -1.8 & -2.8 & -3.8 \\ log\({}_{10}\)(NH\({}_{3}\)) & -3.0 & -4.0 & -5.0 \\ log\({}_{10}\)(CH\({}_{4}\)) & -1.5 & -2.5 & -3.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Physical parameters for the multi-species models used in the nested sampling retrieval. The temperature is in Kelvin and the abundances in mass mixing ratios, to be easily comparable with the posterior figures. We then tried to retrieve the parameters with our nested sampling algorithm. We show the resulting posterior distributions in Appendix B. Several things can be noted: * The H\({}_{2}\)O abundance is poorly constrained in the first two models. This is not surprising as we exclude orders where tellurics are dominant, hence where water has its major impact on the spectrum. 
It was not an issue in the single-species model but becomes problematic when other species are considered with the same VMR as water and the absorption lines of water are reduced in amplitude. In Model 3, we recover results comparable to those of paper I. * The temperature is always over-estimated and leads to large degeneracies with composition. When we fix the temperature, as shown in Fig. 10 compared to Fig. 11, the retrieval of the other parameters is largely improved. * There is some degeneracy between H\({}_{2}\)O, CH\({}_{4}\) and NH\({}_{3}\) although we expect HRS to distinguish between molecular lines of different species. This rather counter-intuitive correlation is easily explained: for a given quantity of, say, H\({}_{2}\)O, increasing the quantity of CH\({}_{4}\) or NH\({}_{3}\) decreases the line depth by increasing the mean radius of the planet, as seen in Fig. 5. The algorithm thus does not properly differentiate between low quantities of H\({}_{2}\)O and CH\({}_{4}\)/NH\({}_{3}\) and high quantities of all of them. * NH\({}_{3}\) is not recovered in Model 3 and only an upper limit on its content is obtained. However, when we serendipitously removed a few orders where water has the highest signal (around the water bands at 1.4 and 1.8 microns), we retrieved ammonia, as shown in Fig. 14. This retrieval shows two peaks: one at low temperature (close to the injected 900 K) where NH\({}_{3}\) is poorly recovered but water and methane are, and one at much larger temperatures (a few thousand kelvin) where the retrieved NH\({}_{3}\) abundance is tightly centered on its injected value, but at the cost of losing the detection of methane and having a degeneracy between water and temperature to ensure a constant line depth for ammonia. We interpret this as follows: the first peak corresponds to the likelihood maximum obtained by fitting the few remaining water lines, whereas the second peak fits ammonia perfectly and provides a secondary maximum. We have verified that, when removing these orders in a pure water model, we do not obtain this second peak and simply recover water at the injected VMR. It is not clear how this result would translate to real planetary observations and whether we could potentially detect ammonia in secondary maxima, but it further confirms the degeneracies between composition and temperature in the amplitude of the lines and that care must be taken when using HRS alone with simple priors. * In all cases, the typical error bar on the log-VMR is 2 dex. This shows that atmospheric retrieval with transmission spectroscopy is powerful for identifying species but does not give a precise value of the composition (and temperature), except for much higher SNRs as in Line et al. (2021) or by coupling with LRS. Figure 3: Temperature as a function of pressure for the input profile (black) and the mean of the retrieved profiles with the three possible models discussed in the text (colors). Left: water volume mixing ratio of \(10^{-3}\). Right: water volume mixing ratio of \(10^{-5}\). Figure 4: Temperature as a function of pressure for the input profile (black) and all of the retrieved profiles from the linearly interpolated 4-point retrieval, with the mean profile in blue. The grey dashed lines represent the uniform prior ranges at the 4 pressures. Left: water volume mixing ratio of \(10^{-3}\). Right: water volume mixing ratio of \(10^{-5}\). 
If we now use the nested sampling process to retrieve individual species from the four-species synthetic data, we confirm the results obtained with the cross-correlation method. If one species is dominating, we are able to recover it with the nested sampling algorithm but if the three have comparable VMRs, they are not always recovered individually. Additionally, even if one species is dominant, we often recover too low an abundance compared to the injected value. This is to be expected: as we see in Fig. 5, the depth of the absorption lines is reduced by the presence of other species. This translates into a lower recovered value of the VMR compared to the injected one, which is not an error of the algorithm but rather comes from too simple an assumption (that the multi-component model is equivalent to a combination of single-component models). Hence, retrieving low VMRs for given species in planetary atmospheres can simply arise from an erroneously assumed chemical composition. Figure 5: Top: transit radius as a function of wavelength over the SPIRou domain for the three models of Table 3. The models have been shifted for visual comparison. Bottom: zoom on two different wavelength ranges. The models have not been shifted in the zooms. Finally, we also performed a test in which we aimed to retrieve species that were not included in the model. We used the retrieval with H\({}_{2}\)O, CH\({}_{4}\) and NH\({}_{3}\) on an atmosphere model containing only H\({}_{2}\)O. The MultiNest algorithm converged towards a low (\(\leq 10^{-5}\)) but non-zero abundance of CH\({}_{4}\) and NH\({}_{3}\), because of the degeneracies we already mentioned. This shows that the best fit might not correspond to a real individual detection and that care must be taken when analysing posterior distributions alone. ### Order selection As we could not always detect molecules individually when considering all orders, we tried to define a merit function that would select or weight the wavelength range for each molecule. Two methods were tested: (i) we created a model with VMR = \(10^{-4}\) for H\({}_{2}\)O, NH\({}_{3}\) and CH\({}_{4}\) and calculated the Pearson correlation coefficient with the single-species model; for each molecule, we then only selected the orders where this correlation was larger than 0.5 (a minimal sketch of this selection is given below). (ii) We calculated the autocorrelation of the spectra of each individual species order by order and used it as weights in the CCF. For CH\({}_{4}\), the second method slightly improved the detection but only marginally. For water and NH\({}_{3}\), there was no reliable improvement from using either of the two methods. The second method works best for water in Model 2 but the first method is best in Model 1, whereas it is the opposite for NH\({}_{3}\), and in all cases the improvement is only marginal. We therefore could not rely on these methods to improve our detection limits in the general case. However, as we mentioned in the previous section, removing a few orders dominated by water helped detect and constrain the NH\({}_{3}\) composition with the nested sampling algorithm in Model 3. This shows that, although for individual molecules we did not find reliable ways to improve the significance of the correlation, order selection can improve the retrieval for multi-species models. This is to be kept in mind when trying to rule out the presence of certain molecules from a Bayesian exploration of the parameter space. 
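For concreteness, the following toy function sketches selection method (i). The input format (one model flux array per spectral order for each model) and the function name are illustrative assumptions, not the consortium's actual code; only the Pearson-correlation criterion and the 0.5 threshold come from the description above.

```
import numpy as np

def select_orders(full_model_orders, single_species_orders, threshold=0.5):
    """Keep the indices of the spectral orders where the Pearson correlation
    between the full multi-species model and the single-species model
    exceeds the threshold (method (i) above)."""
    kept = []
    for k, (full, single) in enumerate(zip(full_model_orders, single_species_orders)):
        r = np.corrcoef(full, single)[0, 1]
        if r > threshold:
            kept.append(k)
    return kept
```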
Finally, it is interesting to note that our preferred orders for water detection (following method (i) or taking the 15 best orders of method (ii)) are very different from those of Giacobbe et al. (2021). With their wavelength domain, we actually do not recover water in Models 1 or 2, and only marginally in Model 3. This points toward either a difference between our two analyses, a much larger signal in their case as they combine 5 transits with a strong atmospheric signature, or HD 209458 b having a much higher water volume mixing ratio than CH\({}_{4}\) and NH\({}_{3}\). ## 4 Discussion ### Combining transits Since addressing all sources of uncertainties and degeneracies is out of reach, we have focused on a few cases but have not mentioned the impact of stacking transits on the retrieval. Obviously, adding up many transits helps in identifying the atmospheric absorption by increasing the SNR, as long as there is no (or low) variability in the planetary signature. The combination of several transits will be discussed in detail in forthcoming papers of the ATMOSPHERIX consortium with real data (Masson et al., in prep.; Hood et al., in prep.). ### Improving the detection Throughout the first two articles of the ATMOSPHERIX consortium, we have focused on optimising the data reduction process. Further improvement of the data analysis framework will be required to characterise the atmospheres of the most challenging targets of the ATMOSPHERIX sample, either because of their low-amplitude atmospheric signals or because the host star is too faint and/or active. The community is devoting substantial efforts to enhancing the significance of molecular detections and extracting as much information as possible from the data. The use of an autoencoder, introduced in paper I, is one such example. Among the possible improvements, we want to mention the works of Meech et al. (2022) and Rasmussen et al. (2022). Both teams use Gaussian processes to perform a spectrum retrieval and improve the data reduction process. Other techniques have been presented in the literature, although they have not yet been applied as systematically as template matching. One notable example is tomography (Watson et al., 2019), which is an interesting prospect for characterising exoplanets. If we were able to retrieve a mean line profile, Doppler imaging techniques inspired by stellar studies (e.g., Vogt et al. (1987)) could also be used to study the multi-dimensional structure of planets. This prospect is particularly interesting in the visible, where the lines have higher SNR, in emission spectroscopy, and in the forthcoming era of 30+ meter telescopes. Globally, the use of HRS to characterize exoplanet atmospheres is less than 15 years old, and there are still many possibilities to improve the techniques. Such improvements might soften the conclusions of this paper as detection levels increase. We still expect the degeneracies exposed here to be present and important in the process, and we note that studies focusing on the inherent degeneracies and limitations of the method have so far been lacking. ## 5 Conclusion In this paper, we have extended the work of paper I, which presented our data analysis pipeline, by studying different sources of uncertainty and degeneracy inherent to our analysis. We have shown that we are able to retrieve the correct model but that numerous degeneracies can drastically increase the error bars. 
We have focused on three issues: inaccuracies in the mass and radius, non-vertically isothermal profiles and the retrieval of multiple species. The conclusions of our tests are as follows: * The mass and radius of the planet should be included in the retrieval if they are uncertain, as this leads to a more reliable atmospheric retrieval. * The vertical temperature distribution of the planet's atmosphere is not easily retrieved as we are mostly sensitive to pressures where the optical depth approaches 1. However, this also means that different molecules will be sensitive to different pressure levels, which might allow one to probe the atmosphere at different depths and to reconstruct a global temperature profile by combining information from different molecules at different pressures. * Models with multiple species introduce several degeneracies which can lead to erroneous conclusions: one can identify molecules that are not present or inaccurately estimate their mixing ratios. * When imposing the temperature, the retrieval is significantly improved. * Although transmission spectroscopy is good at detecting molecules, the 1\(\sigma\) uncertainty on the volume mixing ratio can reach up to 2 orders of magnitude for our typical SNR of 200. Stacking many observations or using independent diagnostics such as LRS is necessary to reduce these uncertainties. * We did not find a reliable way to weight or select the SPIRou orders to improve the molecular detection of single species. We found, however, that selecting orders can improve the retrieval of one species at the cost of a poorer retrieval of another species and of the temperature in multi-species models. The combination of this paper and paper I gives an overview of the capacity of our pipeline to analyse SPIRou data of exoplanet atmospheres through transmission spectroscopy. They will serve as a basis for forthcoming papers of the ATMOSPHERIX consortium on real targets, whose studies are ongoing. ## Acknowledgements Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. FD thanks the CNRS/INSU Programme National de Planetologie (PNP) and Programme National de Physique Stellaire (PNPS) for funding support. This work was supported by the Action Spectifique Numerique of CNRS/INSU. This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2021-P21021. BK acknowledges funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 865624, GPRV). J.F.D, C.M, I.B. X.B, A.C., X.D., G.H., F.K acknowledge funding from Agence Nationale pour la Recherche (ANR, project ANR-18-CE31-0019 SPIaSH). AM, BC, BB, SV acknowledge funding from Programme National de Planetologie (PNP) and the Scientific Council of the Paris Observatory. X.D. and A.C. 
acknowledge funding by the French National Research Agency in the framework of the Investissements d'Avenir program (ANR-15-IDEX-02), through the funding of the "Origin of Life" project of the Universite de Grenoble Alpes. JL acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 679030/WHIPLABH), and from the French state: CNES, Programme National de Planetologie (PNP), the ANR (ANR-20-CE49-0009: SOUND). JFD and BK acknowledge funding from the European Research Council (ERC) under the H2020 research & innovation programme (grant agreement #740651 New-Worlds). O.V. acknowledges funding from the Agence National de la Recherche through the ANR project 'EXACT' (ANR-21-CE49-0008-01), from the Centre National d'Etudes Spatiales (CNES), and from the CNRS/INSU Programme National de Planetologie (PNP). Part of the work has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). WP acknowledges financial support from the SNSF for project 200021_200726. PT acknowledges support from the European Research Council under Grant Agreement ATMO 757858. E.M. acknowledges funding from FAPEMIG under project number APQ-02493-22 and research productivity grant number 309829/2022-4 awarded by the CNPq, Brazil. We wish to thank J. Seidel for her constructive comments and questions. ## Data Availability The data are available upon request to the author and the code to analyze them is publicly available.
2306.02807
On Tail Decay Rate Estimation of Loss Function Distributions
The study of loss function distributions is critical to characterize a model's behaviour on a given machine learning problem. For example, while the quality of a model is commonly determined by the average loss assessed on a testing set, this quantity does not reflect the existence of the true mean of the loss distribution. Indeed, the finiteness of the statistical moments of the loss distribution is related to the thickness of its tails, which are generally unknown. Since typical cross-validation schemes determine a family of testing loss distributions conditioned on the training samples, the total loss distribution must be recovered by marginalizing over the space of training sets. As we show in this work, the finiteness of the sampling procedure negatively affects the reliability and efficiency of classical tail estimation methods from the Extreme Value Theory, such as the Peaks-Over-Threshold approach. In this work we tackle this issue by developing a novel general theory for estimating the tails of marginal distributions, when there exists a large variability between locations of the individual conditional distributions underlying the marginal. To this end, we demonstrate that under some regularity conditions, the shape parameter of the marginal distribution is the maximum tail shape parameter of the family of conditional distributions. We term this estimation approach as Cross Tail Estimation (CTE). We test cross-tail estimation in a series of experiments on simulated and real data, showing the improved robustness and quality of tail estimation as compared to classical approaches, and providing evidence for the relationship between overfitting and loss distribution tail thickness.
Etrit Haxholli, Marco Lorenzi
2023-06-05T11:58:25Z
http://arxiv.org/abs/2306.02807v1
# On Tail Decay Rate Estimation of Loss Function Distributions ###### Abstract The study of loss function distributions is critical to characterize a model's behaviour on a given machine learning problem. For example, while the quality of a model is commonly determined by the average loss assessed on a testing set, this quantity does not reflect the existence of the true mean of the loss distribution. Indeed, the finiteness of the statistical moments of the loss distribution is related to the thickness of its tails, which are generally unknown. Since typical cross-validation schemes determine a family of testing loss distributions conditioned on the training samples, the total loss distribution must be recovered by marginalizing over the space of training sets. As we show in this work, the finiteness of the sampling procedure negatively affects the reliability and efficiency of classical tail estimation methods from the Extreme Value Theory, such as the Peaks-Over-Threshold approach. In this work we tackle this issue by developing a novel general theory for estimating the tails of marginal distributions, when there exists a large variability between locations of the individual conditional distributions underlying the marginal. To this end, we demonstrate that under some regularity conditions, the shape parameter of the marginal distribution is the maximum tail shape parameter of the family of conditional distributions. We term this estimation approach as _cross-tail estimation (CTE)_. We test cross-tail estimation in a series of experiments on simulated and real data1, showing the improved robustness and quality of tail estimation as compared to classical approaches, and providing evidence for the relationship between overfitting and loss distribution tail thickness. Footnote 1: The code is available at [https://github.com/ehaxholli/CTE](https://github.com/ehaxholli/CTE) **Keywords:** Extreme Value Theory, Tail Modelling, Loss Function Distributions, Peaks-Over-Threshold, Cross-Tail-Estimation, Model Ranking ## 1 Introduction Loss function distributions form critical subjects of analysis, serving as barometers for machine learning model performance. In the context of a particular model and associated machine learning task, the authentic distribution of the loss function is typically elusive; we predominantly have access to a finite sample set, borne from diverse choices of training and testing sets. To facilitate performance comparisons across different models based on the underlying loss function distributions, a spectrum of methodologies has been established. Traditional strategies derive from information criteria such as the Akaike Information Criterion (AIC) Akaike (1973, 1974), an asymptotic approximation of the Kullback-Leibler divergence between the true data distribution and the fitting candidate, and its corrected version (AICc) Sugiura (1978); Hurvich and Tsai (1989), in addition to the Bayesian Information Criterion (BIC) Schwarz (1978). The application of these information criteria, especially the AIC, is often constrained by the multiple inherent approximations and assumptions Burnham and Anderson (2007), making them less feasible in certain scenarios. However, it warrants mention that more recent penalized criteria have considerably expanded their suitability for realistic setups Birge and Massart (1995); Arlot and Massart (2009). 
Simultaneously, other methodologies, termed splitting/resampling methods, have been devised, wherein a subset of the data is deployed to assess the performance of the trained model. This group of methodologies is expansive, predicated on a diverse range of partitioning and evaluation tactics devised to address data heterogeneity and imbalance Neyman (1934); Cochran (2007). In the domain of cross-validation strategies, the common metric employed for gauging model performance is the sample mean of the loss function distribution. This practice, though invariably providing a finite numerical value, does not assure the existence of the first statistical moment or those of higher order. Moreover, this metric, in spite of its prevalence, should not be construed as a sole indicator of the model's performance robustness or reliability, as it potentially overlooks the nuances and intricacies inherent to the underlying data distribution and model architecture. While it is true that MSE (or AIC) allow to rank models according to their relative performance on a given dataset, these scores still have limited value in quantifying the overall stability of a model. From a theoretical perspective, there is a strong correlation between the uppermost existing moment of a distribution and the thickness of its tail. This underscores the significance of examining the behavioural traits and decay rate of the tails of loss function distributions. In order to proceed, we first must be able to model the tails of distributions and to quantify their "thickness". Extreme Value Theory (EVT) is an established field concerned with modelling the tails of distributions. One of the fundamental results in EVT is the Pickands-Balkema-De Haan Theorem, which states that the tails of a large class of distributions can be approximated with generalized Pareto ones Pickands (1975); de Haan and Ferreira (2007). In practice, the shape and scale parameter of the generalized Pareto are approximated from a finite sample, while its location parameter is always zero. It is the shape parameter which quantifies tail thickness, with larger values corresponding to heavier tails. The resulting estimation method is called Peaks-Over-Threshold (POT). In the context of distributions of loss functions, for each training set, there is a corresponding conditional loss function distribution over points in the sample space. The actual total loss function distribution, the entity of our interest, is the weighted sum (integral) of all such conditional distributions, that is, it is the distribution created after marginalizing across the space of training datasets. In practice, we have a finite number of conditional distributions, as we have a finite number of training sets. Furthermore, for each of these conditional distributions, we only possess an approximation of them, derived from the samples in the testing set. The empirical approximation of the total loss function distribution therefore consists of the union of the sample sets of conditional distributions. Within this setting, the estimation of the tail shape of the total loss function distribution could be ideally carried out by applying POT on this union of samples. In theory, as we show in this work, the role of the thickest conditional tails in determining the decay rate of the marginal is preserved, since the marginal and conditional distributions are defined everywhere, which allows the assessment of tails at extreme locations. 
Unfortunately, in practice, the finiteness of the sampling affects the estimation of the tail of the marginal distribution, as the tails may be poorly or not even represented across different conditional distributions. To be more specific, during marginalization, samples from the tails of heavy tailed distributions can be overshadowed by the samples from the non-tail part of individual thin tailed ones. This suggests that modelling the tails of a marginal distribution by the usual application of POT can give inaccurate results in practice. In this paper, we develop a general method to mitigate the issue of estimating the tails of marginal distributions, when there exists a large variability between locations of the individual conditional distributions underlying the marginal. The proposed solution enables a reduction in the sample size requirements in the experiments we conducted. To this end, we demonstrate that under some regularity conditions, the shape parameter of the marginal distribution is precisely the maximum tail shape parameter of the family of conditional distributions. We refer to the method constructed from this result as _cross tail estimation_, due to similarities that it shares with Monte Carlo cross validation. Furthermore, we show evidence of polynomial decay of tails of distributions of model predictions, and empirically demonstrate a relationship between the thickness of such tails and overfitting. An additional benefit of using the approach proposed here instead of the standard POT is the reduced computational time in the case that the marginal is estimated from many conditional distributions. The following is a summary of the structure of the paper: In Section 2 we recall some of the main concepts and results from Extreme Value Theory. In Section 3, we state and generalize the main problem, which we tackle in Section 4 by building our theory. We conclude Section 4 by proving three statements which are useful for the experimental part, and by highlighting the relation between the tail of a distribution and its moments. In the final section, we show experimentally that our method can improve estimation in practice, as compared to the standard use of POT. ## 2 Related Work and Background This section initially provides a succinct overview of Monte Carlo cross validation, given its conceptual similarities with the proposed method, 'cross tail estimation'. The subsequent subsection outlines standard results and definitions from extreme value analysis, forming the foundational bedrock for the proofs presented in Section 4. ### Monte Carlo Cross Validation Let \(D=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\) be a set of data samples drawn from the same distribution. During each iteration \(i\) we sample \(k\) samples \(D_{i}=\{(x_{\pi(1)},y_{\pi(1)}),...,(x_{\pi(k)},y_{\pi(k)})\}\) without repetition from the original dataset \(D\), and consider it as the training set for that iteration. The set \(D\setminus D_{i}\) is then used as the testing set. The quantity of interest during iteration \(i\) is the sample mean of the loss of the model trained on \(D_{i}\), namely \(\hat{f}_{D_{i}}\), over the points of the testing set: \[\tilde{M}_{i}^{L}:=\frac{1}{|D\setminus D_{i}|}\sum_{j\in D\setminus D_{i}}L( \hat{f}_{D_{i}}(x_{j}),y_{j}), \tag{1}\] for a given loss function \(L\). 
We evaluate the total performance of the model, based on its average performance over different choices of the training/testing sets, that is, the true evaluation metric is: \[\tilde{M}^{L}:=\frac{1}{m}\sum_{i\in[m]}\tilde{M}_{i}^{L}=\frac{1}{m}\sum_{i \in[m]}\frac{1}{|D\setminus D_{i}|}\sum_{j\in D\setminus D_{i}}L(\hat{f}_{D_{ i}}(x_{j}),y_{j}), \tag{2}\] where \(m\) is the number of iterations. A detailed discussion on cross validation, elucidating its similarities with our proposed method for tail estimation in marginal loss function distributions, namely 'cross tail estimation', is presented in Subsection 3.3. ### Extreme Value Theory Extreme value theory (EVT) or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. Extreme value theory is closely related to failure analysis and dates back to 1923, when Richard von Mises discovered that the Gumbel distribution is the limiting distribution of the maximum of an iid sequence sampled from a Gaussian distribution. In 1928, Ronald A. Fisher and Leonard H. C. Tippett in Fisher and Tippett (1928) characterized the only three possible non-degenerate limiting distributions of the maximum in the general case: Frechet, Gumbel and Weibull. In 1943, Boris V. Gnedenko gave a rigorous proof of this fact in Gnedenko (1943). This result is known as the Fisher-Tippett-Gnedenko theorem, and forms the foundation of EVT. The three aforementioned limiting distributions of the maximum can be written in compact form and they are known as the class of extreme value distributions: **Definition 1**: _The Generalized Extreme Value Distribution is defined as follows:_ \[G_{\xi,a,b}(x)=e^{-(1+\xi(ax+b))^{-\frac{1}{\xi}}},\quad 1+\xi(ax+b)>0, \tag{3}\] _where \(b\in\mathbb{R}\), \(\xi\in\mathbb{R}\setminus\{0\}\) and \(a>0\). For \(\xi=0\), we define the Generalized Extreme Value Distribution as the limit when \(\xi\to 0\), that is_ \[G_{0,a,b}(x)=e^{-e^{-ax-b}}. \tag{4}\] **Theorem 2** (Fisher-Tippett-Gnedenko): _Let \(X\) be a real random variable with distribution \(F_{X}\). Denote by \(\{X_{1},X_{2},...,X_{n}\}\) a set of iid samples from the distribution \(F_{X}\), and define \(M_{n}=\max\{X_{1},...,X_{n}\}\). If there exist two sequences \(\{c_{i}>0\}_{i\in\mathbb{N}}\) and \(\{d_{i}\in\mathbb{R}\}_{i\in\mathbb{N}}\), such that_ \[c_{n}^{-1}(M_{n}-d_{n})\xrightarrow{d}F\text{ as }n\rightarrow\infty, \tag{5}\] _for some non-degenerate distribution \(F\), then we must have \(F(x)=G_{\xi,a,b}(x)\), for some \(b,\xi\in\mathbb{R},a>0\)._ If \(X\) is a random variable as in Theorem 2, such that \(F(x)=G_{\xi,a,b}(x)\), we say that \(F_{X}\) is in the Maximum Domain of Attraction of \(G_{\xi,a,b}(x)\), and we write \(F_{X}\in MDA(\xi)\). Depending on whether \(\xi>0\), \(\xi=0\), \(\xi<0\), we say that \(F_{X}\) is in the MDA of a Frechet, Gumbel, or Weibull distribution respectively. **Definition 3**: _A Generalized Pareto distribution with location parameter zero is defined as below:_ \[G_{\xi,\sigma}(w)=\begin{cases}1-(1+\xi\frac{w}{\sigma})^{-\frac{1}{\xi}}& \text{for }\xi\neq 0\\ 1-e^{-\frac{w}{\sigma}}&\text{for }\xi=0\end{cases}, \tag{6}\] _where \(w>0\) when \(\xi>0\) and \(0<w<-\frac{\sigma}{\xi}\) for \(\xi<0\). 
The shape parameter is denoted by \(\xi\), while the scale parameter by \(\sigma\)._ Pickands (1975), and Balkema and de Haan (1974) proved that the limiting distribution of samples larger than a threshold is a Generalized Pareto distribution, whose location parameter is zero. **Theorem 4** (Pickands-Balkema-De Haan): _Let \(X\) be a random variable with distribution \(F_{X}\) and \(x_{F}\leq\infty\) such that \(\forall x>x_{F},\ \bar{F}_{X}(x)=0\). Then \(F_{X}\in MDA(\xi)\iff\exists g:(0,\infty)\rightarrow(0,\infty)\) such that_ \[\lim_{u\to x_{F}}\sup_{y\in[0,x_{F}-u]}|\bar{F}_{u}^{X}(y)-\bar{G}_{ \xi,g(u)}(y)|=0, \tag{7}\] _where \(\bar{F}_{u}^{X}(y)=\frac{1-F_{X}(y+u)}{1-F_{X}(u)}\)._ This result forms the basis of the well-known Peaks-Over-Threshold (POT) method which is used in practice to model the tails of distributions. The shape parameter can be estimated via different estimators such as the Pickands Estimator or the Dekkers-Einmahl-de Haan Estimator (DEdH), Dekkers et al. (1989). **Definition 5**: _Let \(X_{1}\), \(X_{2}\),...,\(X_{n}\) be iid samples from the distribution \(F_{X}\). If we denote with \(X_{1,n}\), \(X_{2,n}\),..., \(X_{n,n}\) the samples sorted in descending order, then the Pickands estimator is defined as follows:_ \[\hat{\xi}_{k,n}^{(P)}=\frac{1}{\ln 2}\ln\frac{X_{k,n}-X_{2k,n}}{X_{2k,n}-X_{4k, n}}. \tag{8}\] **Definition 6**: _Let \(X_{1}\), \(X_{2}\),...,\(X_{n}\) be iid samples from the distribution \(F_{X}\). If we denote with \(X_{1,n}\), \(X_{2,n}\),..., \(X_{n,n}\) the samples sorted in descending order, then the DEdH estimator is defined as follows:_ \[\hat{\xi}_{k,n}^{(H)}=1+H_{k,n}^{(1)}+\frac{1}{2}\left(\frac{(H_{k,n}^{(1)})^ {2}}{H_{k,n}^{(2)}}-1\right)^{-1}, \tag{9}\] _where_ \[H_{k,n}^{(1)}=\frac{1}{k}\sum_{j=1}^{k}(\ln X_{j,n}-\ln X_{k+1,n}) \tag{10}\] _and_ \[H_{k,n}^{(2)}=\frac{1}{k}\sum_{j=1}^{k}(\ln X_{j,n}-\ln X_{k+1,n})^{2}. \tag{11}\] An important result which we are going to use frequently in our proofs is Theorem 10, which can be found in Embrechts et al. (2013); de Haan and Ferreira (2007), and gives the connection between the maximum domain of attraction and slowly varying functions. **Definition 7**: _A positive measurable function \(L\) is called slowly varying if it is defined in some neighborhood of infinity and if:_ \[\lim_{x\to\infty}\frac{L(ax)}{L(x)}=1,\quad\mbox{for all }a>0. \tag{12}\] **Theorem 8** (Representation Theorem, see Galambos and Seneta (1973)): _A positive measurable function \(L\) on \([x_{0},\infty)\) is slowly varying if and only if it can be written in the form:_ \[L(x)=e^{c(x)}e^{\int_{x_{0}}^{x}\frac{u(t)}{t}dt}, \tag{13}\] _where \(c(x)\) and \(u(t)\) are measurable bounded functions such that \(\lim_{x\to\infty}c(x)=c_{0}\in(0,\infty)\) and \(u(t)\to 0\) as \(t\to\infty\)._ **Proposition 9** (Mikosch et al. (1999)): _If \(L\) is slowly varying then for every \(\epsilon>0\):_ \[\lim_{x\to\infty}x^{-\epsilon}L(x)=0. \tag{14}\] **Proof** We give a proof in the Appendix for the sake of completeness. 
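To make Definitions 5 and 6 concrete, the following is a minimal sketch of both estimators applied to samples from a Pareto distribution with true shape \(\xi=0.5\). It is an independent illustration, not the paper's released code; the sample size and the choice of \(k\) are arbitrary, and no attempt is made to select \(k\) optimally.

```
import numpy as np

def pickands_estimator(samples, k):
    """Pickands estimate of the tail shape (Definition 5); requires 4*k <= n."""
    x = np.sort(samples)[::-1]                  # descending order statistics
    return np.log((x[k - 1] - x[2 * k - 1]) / (x[2 * k - 1] - x[4 * k - 1])) / np.log(2.0)

def dedh_estimator(samples, k):
    """Dekkers-Einmahl-de Haan (moment) estimate of the tail shape (Definition 6);
    uses the k largest samples, which are assumed to be positive."""
    x = np.sort(samples)[::-1]
    logs = np.log(x[:k]) - np.log(x[k])          # ln X_{j,n} - ln X_{k+1,n}
    h1, h2 = logs.mean(), (logs ** 2).mean()
    return 1.0 + h1 + 0.5 / (h1 ** 2 / h2 - 1.0)

rng = np.random.default_rng(0)
data = rng.pareto(2.0, size=100_000) + 1.0       # survival x^{-2}, i.e. true xi = 0.5
print(pickands_estimator(data, k=500))           # both estimates should be close to 0.5
print(dedh_estimator(data, k=500))
```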
**Theorem 10**: _If \(X\in MDA(\xi)\) and \(x_{F}\) is such that \(\forall x>x_{F},\ \bar{F}_{X}(x)=0\) then:_ * \(\xi>0\iff\bar{F}_{X}(x)=x^{-\frac{1}{\xi}}L(x)\)_, where L is slowly varying,_ * \(\xi<0\iff\bar{F}_{X}(x_{F}-\frac{1}{x})=x^{\frac{1}{\xi}}L(x)\)_, where L is slowly varying,_ * \(\xi=0\iff\bar{F}_{X}(x)=c(x)e^{-\int_{w}^{x}\frac{1}{a(t)}dt},\ w<x<x_{F}\leq\infty\)_, where_ \(c\) _is a measurable function satisfying_ \(c(x)\to c>0\) _as_ \(x\uparrow x_{F}\)_, and_ \(a(x)\) _is a positive, absolutely continuous function (with respect to Lebesgue measure) with density_ \(a^{\prime}(x)\) _having_ \(\lim_{x\uparrow x_{F}}a^{\prime}(x)=0\)_. If_ \(x_{F}<\infty\) _then_ \(\lim_{x\uparrow x_{F}}a(x)=0\) _as well._ ## 3 Setup and Problem Statement In the first part of this section, we give an example which illustrates why naively using the POT method can give unsatisfactory results. In the second part, we formalize the problem of tail modelling of total loss distributions and show that such modelling is prone to the weakness described in Subsection 3.1. In the third subsection, we introduce Cross-Tail-Estimation (CTE), which alleviates these issues, and we demonstrate an analogy between CTE and Cross-Validation. In the last subsection, we prepare the setting for Section 4, where we provide theoretical justifications (Theorem 21) for the use of CTE. ### Preamble Estimating the tails of marginal distributions via standard methods, such as applying POT directly, can give unsatisfactory results. In order to get a glimpse of the issue, let us assume that our variable of interest is \(X>0\), which in turn depends on the variable \(Z\). For simplicity we can assume that \(Z\) can be either \(0\) or \(1\), with equal probability, and if \(Z=0\) then \(f(x|Z=0)\) is a thick-tailed distribution whose first moment does not even exist, while if \(Z=1\) then \(f(x|Z=1)\) is a Gaussian distribution with a large mean. It is known that the tail shape parameter of \(f(x)=\sum_{i=1}^{n}p(Z=i)f(x|Z=i)\) is determined by the conditional distribution \(f(x|Z=i)\) with the thickest tail. In our case, \(n\) above is \(2\), and the tail of \(f(x)\) is defined by the fat tail of \(f(x|Z=0)\). Suppose we proceed with the standard POT approach, that is, we integrate out the random variable \(Z\), and subsequently estimate the shape parameter of the tail of \(f(x)\). In practice, when the number of samples is limited, it is possible that none of the samples of \(X\) from the fat-tailed distribution exceeds those of the Gaussian, due to the difference between their locations. Therefore, the sample tail of the marginal (mixture) distribution \(f(x)=\frac{1}{2}(f(x|Z=0)+f(x|Z=1))\) is defined by the sample tail of the Gaussian \(f(x|Z=1)\), while in reality, as mentioned, the tail of \(f(x)\) is defined by the fat tail of \(f(x|Z=0)\). Of course, in the ideal case where the sampling process is not finite, we would recover the true tail shape; however, for practical applications, estimating the tail shape parameters of \(f(x|Z=0)\) and \(f(x|Z=1)\) separately can be necessary. In some settings, the random variable \(Z\) might have a continuous distribution, e.g. \(Range(Z)=\mathbb{R}\), instead of \(Range(Z)=\{1,2,..,n\}\). Such is the case presented in the next subsection, where \(f(x)=\int p(z)f(x|z)dz\). A natural question that arises in this scenario is whether the tail of the marginal \(f(x)\) is still determined by the largest tail of the conditional distributions \(f(x|z)\). 
As we will prove in Section 4, under some regularity conditions, the answer is in the affirmative. ### Problem Statement We assume that each data sample \(\boldsymbol{(X,Y)}\) comes from distribution \(\mathcal{D}\) and that the sampling is independent. We have used the symbol \(\boldsymbol{X}\) to denote the features and the symbol \(\boldsymbol{Y}\) to denote the labels. The training set will be defined as a random vector comprised of iid random vectors \(\boldsymbol{(X,Y)}\) sampled from \(\mathcal{D}\). More precisely, after fixing a natural number \(k\), we define a training set as \(\boldsymbol{V}=[\boldsymbol{(X,Y)_{1},(X,Y)_{2},...,(X,Y)_{k}}]\), where each \(\boldsymbol{(X,Y)}_{i}\) has distribution \(\mathcal{D}\). On the other hand, a test point is naturally defined as a sample from \(\mathcal{D}\), i.e., \(\boldsymbol{U}=\boldsymbol{(X,Y)}\). In practice, the realisation of \(\boldsymbol{U}\) should not be an entry in \(\boldsymbol{V}\). A model which is trained on \(\mathbf{V}\) to predict \(\mathbf{Y}\) from \(\mathbf{X}\) is denoted as \(\mathbf{\hat{h}_{V}(X)}\). The prediction error on the testing datum \(\mathbf{U}\) of a model trained on \(\mathbf{V}\) is denoted as \(W_{\mathbf{V}}(\mathbf{U})\). For the remainder of the paper we assume that \(W_{\mathbf{V}}(\mathbf{U})>0\) and notice that the probability density function of \(W_{\mathbf{V}}(\mathbf{U})\) is \[f_{W}(w)=\int f_{W,\mathbf{V}}(w,\mathbf{v})d\mathbf{v}=\int f_{\mathbf{V}}(\mathbf{v})f(w|\mathbf{V}= \mathbf{v})d\mathbf{v}=\int f_{\mathbf{V}}(\mathbf{v})f_{\mathbf{v}}(w)d\mathbf{v}, \tag{15}\] therefore the distribution function of \(W_{\mathbf{V}}(\mathbf{U})\) is: \[F_{W}(w)=\int f_{\mathbf{V}}(\mathbf{v})F_{\mathbf{v}}(w)d\mathbf{v}. \tag{16}\] \(F_{\mathbf{v}}(w)\) is the distribution of the prediction error (loss) of the model trained on the training set \(\mathbf{v}\), while \(F_{W}(w)\) is the unconditional distribution of the loss. Our goal is to estimate the shape of the tails of \(F_{W}(w)\) by estimating the shape of the tails of the distributions \(F_{\mathbf{v}}(w)\) conditioned on the training sets \(\mathbf{v}\). ### Cross Tail Estimation We will denote with \(\xi_{\mathbf{v}}\) the tail shape parameter of \(F_{\mathbf{v}}(w)\) and with \(\xi\) the tail shape parameter of \(F_{W}(w)\). Our goal in Section 4 is to prove that, under some regularity conditions, if \(\exists\mathbf{v}\) such that \(\xi_{\mathbf{v}}>0\), then \(\xi=\max\{\xi_{\mathbf{v}}|\mathbf{v}\}\), and if \(\xi_{\mathbf{v}}\leq 0\) for all \(\mathbf{v}\), then \(\xi\leq 0\). This motivates Algorithm 1, which we name 'Naive Cross Tail Estimation' (NCTE). ``` Data:\(D=[\mathbf{(x,y)}_{1},\mathbf{(x,y)}_{2},...,\mathbf{(x,y)}_{n}]\) Define:\(A=\{\}\) Fix the number of training sets (rounds):\(m\in\mathbb{N}\) repeat 1. sample \(\mathbf{(x,y)}_{\pi(1)}\),..., \(\mathbf{(x,y)}_{\pi(k)}\) from \(\mathbf{(x,y)}_{1},\mathbf{(x,y)}_{2},...,\mathbf{(x,y)}_{n}\) 2. train model \(\mathbf{\hat{h}_{v}}\) on \(\mathbf{v}=[\mathbf{(x,y)}_{\pi(1)}\),..., \(\mathbf{(x,y)}_{\pi(k)}]\) 3. calculate the prediction errors \(W_{\mathbf{v}}(\mathbf{U})\) of model \(\mathbf{\hat{h}_{v}}\) on the testing set \(D\setminus\mathbf{v}\) 4. group the calculated prediction errors in the set \(E_{\mathbf{v}}(D)\) 5. apply the Pickands or DEdH estimator on \(E_{\mathbf{v}}(D)\) to estimate \(\xi_{\mathbf{v}}\) 6. add \(\hat{\xi}_{\mathbf{v}}\) to \(A\) until \(|A|=m\) return \(\max A\) if \(\max A>0\), else return 'non-positive' ``` **Algorithm 1** Naive Cross Tail Estimation
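As a point of reference, the following is a minimal Python sketch of Algorithm 1 under illustrative assumptions: the model is a linear regressor, the loss is the squared prediction error, and the tail shape of each conditional loss distribution is estimated with the DEdH (moment) estimator of Definition 6. The split sizes and the number of rounds are arbitrary choices, not prescriptions.

```
import numpy as np
from sklearn.linear_model import LinearRegression

def dedh(samples, k):
    # DEdH (moment) estimate of the tail shape from the k largest positive samples.
    x = np.sort(samples)[::-1]
    logs = np.log(x[:k]) - np.log(x[k])
    h1, h2 = logs.mean(), (logs ** 2).mean()
    return 1.0 + h1 + 0.5 / (h1 ** 2 / h2 - 1.0)

def naive_cross_tail_estimation(X, y, m=20, k_train=200, k_tail=100, seed=0):
    """Algorithm 1 (NCTE): for each of m random training sets, estimate the tail
    shape of the conditional loss distribution on the held-out points, and
    return the maximum estimate if it is positive."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(m):
        idx = rng.permutation(len(y))
        train, test = idx[:k_train], idx[k_train:]
        model = LinearRegression().fit(X[train], y[train])   # illustrative model choice
        losses = (y[test] - model.predict(X[test])) ** 2      # positive loss W_v(U)
        estimates.append(dedh(losses, k_tail))
    xi_max = max(estimates)
    return xi_max if xi_max > 0 else "non-positive"
```

Algorithm 2, introduced next, differs from this sketch only in that the held-out losses are further split into \(p\) subsets and the \(p\) per-subset estimates are averaged before the maximum over training sets is taken.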
Since for each \(\mathbf{v}\), the estimated \(\hat{\xi}_{\mathbf{v}}\) is prone to estimation errors, taking the maximum \(\hat{\xi}_{\mathbf{v}}\) over all \(\mathbf{v}\) tends to cause NCTE to overestimate the true \(\xi\), especially when the number of conditional distributions \(F_{\mathbf{v}}(w)\) is large. For this reason we also present Algorithm 2, named 'Cross Tail Estimation' (CTE), where we split the samples from \(F_{\mathbf{v}}(w)\) into \(p\) sets in order to get \(p\) estimates of the tail shape parameter of \(F_{\mathbf{v}}(w)\), that is \(\{\hat{\xi}_{\mathbf{v}}^{1},\hat{\xi}_{\mathbf{v}}^{2},...,\hat{\xi}_{\mathbf{v}}^{p}\}\). Our final estimate of \(\xi_{\mathbf{v}}\) is the average of the \(p\) estimates, i.e., \(\frac{1}{p}\sum_{i=1}^{p}\hat{\xi}_{\mathbf{v}}^{i}\). A more detailed justification for utilizing Algorithm 2 is given in Appendix E. We notice that Algorithm 2 is identical to Algorithm 1 when \(p=1\). _Remark_: Estimating particular statistics of \(F_{W}(w)\) through the statistics of \(F_{\mathbf{v}}(w)\), as in Algorithms 1 and 2, is a key component of Cross Validation. During cross validation, a training set \(\mathbf{v}\) and a testing set \(D\setminus\mathbf{v}\) are selected in each iteration, during which the following conditional expectation is then estimated: \[\mathbb{E}[W_{\mathbf{V}}(\mathbf{U})|\mathbf{V}=\mathbf{v}]=\int wf_{\mathbf{v}}(w)dw. \tag{17}\] The estimates of \(\mathbb{E}[W_{\mathbf{V}}(\mathbf{U})|\mathbf{V}]\) received in each iteration are then averaged to get an estimate of the total expectation: \[\begin{split}\mathbb{E}_{\mathbf{U},\mathbf{V}}(W_{\mathbf{V}}(\mathbf{U}))&=\int wf(w)dw=\int f_{\mathbf{V}}(\mathbf{v})\int wf_{\mathbf{v}}(w)dwd\mathbf{v}\\ &=\int f_{\mathbf{V}}(\mathbf{v})\mathbb{E}[W_{\mathbf{V}}(\mathbf{U})|\mathbf{V}=\mathbf{v}]d\mathbf{v}=\mathbb{E}[\mathbb{E}[W_{\mathbf{V}}(\mathbf{U})|\mathbf{V}=\mathbf{v}]].\end{split} \tag{18}\] In the language of Section 3.2, the mean of the distribution \(F_{W}(w)\) is the average of the means of the conditional distributions \(F_{\mathbf{v}}(w)\). This statement about sums stands parallel to our claim about extremes: the shape parameter of the tail of \(F_{W}(w)\), if positive, is the maximum of the shape parameters of the tails of the conditional distributions \(F_{\mathbf{v}}(w)\). ### The General Problem Generalizing the problem stated in Section 3.2 requires considering a one-dimensional random variable of interest \(X\), dependent on other random variables \(\{Z_{1},Z_{2},...,Z_{n}\}\), such that the probability density function of \(X\) is \[f_{X}(x)=\int f(z_{1},...,z_{n},x)dz_{1}\cdots dz_{n} \tag{19}\] \[=\int f(\mathbf{z})f(x|\mathbf{z})d\mathbf{z}=\int f(\mathbf{z})f_{\mathbf{z}}(x)d\mathbf{z}. \tag{20}\] Integrating with respect to \(x\) we get \[F_{X}(x)=\int f(\mathbf{z})F(x|\mathbf{z})d\mathbf{z}=\int f(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}. \tag{21}\] In this case, with regard to the previous section, we notice that \(\mathbf{Z}=\mathbf{V}\) is the training set on which we condition, while \(X=W\) is the random variable of interest. In Section 4, we give several results which relate the tails of \(F_{X}(x)\) and \(F(x|\mathbf{z})\), culminating with Theorem 21, which justifies the usage of the CTE algorithm by providing limiting behaviour guarantees. ## 4 Theoretical Results In this section, we build our theory of modelling the tails of marginal distributions, which culminates with Theorem 21. 
We conclude this section by proving three statements which are useful in the experimental Section 5, and give the relation between the existence of the moments of a distribution and the thickness of its tails. Unless stated otherwise, the proofs of all the statements are given in Appendix A. ### Tails of marginal distributions For two given distributions, whose tails have positive shape parameters, we expect the one with larger tail parameter to decay slower. Indeed: **Lemma 11**: _If \(F_{1}\in MDA(\xi_{1})\) and \(F_{2}\in MDA(\xi_{2})\), and if \(\xi_{1}>\xi_{2}>0\), then \(\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=0\)._ In a similar fashion, regardless of the signs of the shape parameters, we expect the one with larger tail parameter to decay slower. In fact we have the following: **Lemma 12**: _If \(F_{1}\in MDA(\xi_{1})\) and \(F_{2}\in MDA(\xi_{2})\) then:_ 1. _If_ \(\xi_{1}>0\) _and_ \(\xi_{2}=0\) _then_ \(\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=0\)_._ 2. _If_ \(\xi_{1}=0,x_{F_{1}}=\infty\) _and_ \(\xi_{2}<0\) _then_ \(\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=0\)_._ 3. _If_ \(\xi_{1}>0\) _and_ \(\xi_{2}<0\) _then_ \(\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=0\)_._ Despite the fact that a linear combination of slowly varying functions is not necessarily slowly varying, the following statement holds true: **Lemma 13**: _If for \(i\in\{1,...,n\}\) we let \(L_{i}(x)\) be slowly varying functions, and \(\{a_{1},...,a_{n}\}\) be a set of positive real numbers, then_ \[L(x)=\sum_{i=1}^{n}a_{i}L_{i}(x)\] _is slowly varying._ In the case of a mixture of a finite number of distributions the following known result holds: **Theorem 14**: _Let \(Z:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(|A|<\infty\). At each point \(\mathbf{z}_{1},..,\mathbf{z}_{n}\in A\), we define a distribution \(F_{\mathbf{z}_{i}}(x)\in MDA(\xi_{i})\) and assume that \(\xi_{\max}:=\max(\xi_{1}=\xi_{z_{1}},...,\xi_{n}=\xi_{z_{n}})>0\). If the set \(\{p_{1},...,p_{n}\}\) is a set of convex combination parameters, that is \(\sum\limits_{i}p_{i}=1\) and \(p_{i}>0\) then:_ \[F(x)=\sum\limits_{i}^{n}p_{i}F_{\mathbf{z}_{i}}(x)\in MDA(\xi_{\max}). \tag{22}\] _If \(\xi_{\max}\leq 0\) then if \(\xi_{F}\) exists we have \(\xi_{F}\leq 0\)._ **Proof** While this result is well known, we give an alternative proof in Appendix A, using the Pickands-Balkema-De Haan Theorem. \(\blacksquare\) _From now on, we assume that the functions \(F_{A}(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}\) defined on any element \(A\) of the Borel \(\sigma-algebra\) induced by the usual metric are in the \(MDA\) of some extreme value distribution. Furthermore, we assume that the pdf \(f_{\mathbf{Z}}(\mathbf{z})\) is strictly positive everywhere in its domain._ Proposition 9 states that every slowly varying function is sub-polynomial. That is for any \(\delta>0\) and any slowly varying function \(L(x)\), if we are given any \(\gamma>0\), then we can find \(x(L,\delta,\gamma)>0\), such that for all \(x>x(L,\delta,\gamma)\), the inequality \(x^{-\delta}L(x)<\gamma\) holds. 
However, since \(x(L,\delta,\gamma)\) depends on the function \(L\), assuming that we have a family of \(\{L_{\mathbf{z}}|\mathbf{z}\in A\}\), where \(A\) is a measurable set, the set \(\{x(L_{\mathbf{z}},\delta,\gamma)|\mathbf{z}\in A\}\) can be unbounded, suggesting that the beginning of the tail of \(\bar{F}_{\mathbf{z}}(x)=x^{-\frac{1}{\delta x}}L_{\mathbf{z}}(x)\) can be postponed indefinitely across the family \(\{F_{\mathbf{z}}|\mathbf{z}\in A\}\). These concepts are formalized in the following: **Definition 15**: _For a set \(A\), the family of sub-polynomial functions \(\{L_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) is called \(\gamma\)-uniformly sub-polynomial if for any fixed \(\delta>0\), there exists a \(\gamma(\delta)\) so that the set \(\{x_{0}|\mathbf{z}\in A\}\) is bounded from above, where \(x_{0}=x_{0}(L_{\mathbf{z}},\delta,\gamma)\) is the smallest value for which when \(x>x_{0}\) we have \(x^{-\delta}L_{\mathbf{z}}(x)<\gamma\)._ **Proposition 16**: _Let \(\mathbf{Z}:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(A\) is measurable and define a family of sub-polynomial functions \(\{L_{\mathbf{z}}(x)|\mathbf{z}\in A\}\), which we assume is \(\gamma\)-uniformly sub-polynomial. Then for a probability density function \(f_{\mathbf{Z}}(\mathbf{z})\) on \(A\) induced by \(\mathbf{Z}\), the function \(L(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})L_{\mathbf{z}}(x)d\mathbf{z}\) is sub-polynomial._ In the following theorem, we assume that all conditional distributions have positive tail shape parameters, and we show that the marginal distribution cannot have a tail shape parameter larger (smaller) than the largest (smallest) tail shape parameter across conditional distributions. Furthermore, if the tail shape parameters vary continuously across the space of conditional distributions, then the tail shape parameter of the marginal is precisely the same as the maximal tail shape parameter of the conditional distributions. **Theorem 17**: _Let \(\mathbf{Z}:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(A\) is measurable. At each point \(\mathbf{z}\in A\) define a distribution \(F_{\mathbf{z}}(x)\in MDA(\xi_{\mathbf{z}})\), and suppose there exist \(\xi_{lo},\ \xi_{up}\) such that \(\forall\mathbf{z}\in A,\ 0<\xi_{lo}\leq\xi_{\mathbf{z}}\leq\xi_{up}\). Furthermore, let \(L_{\mathbf{z}}(x)\) be the slowly varying function corresponding to \(F_{\mathbf{z}}(x)\). If the family \(\{L_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) is \(\gamma\)-uniformly sub-polynomial, then for \(F(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}\) we have \(\xi_{lo}\leq\xi_{F}\leq\xi_{up}\). Furthermore, if \(\xi_{\mathbf{z}}\) is continuous in \(\mathbf{z}\), then \(\xi_{F}=\xi_{\max}\), where \(\xi_{\max}:=\sup\{\xi_{\mathbf{z}}|\mathbf{z}\in A\}\)._ Similarly to the case when \(F_{\mathbf{z}}(x)\) are in the \(MDA(\xi_{\mathbf{z}})\) for \(\xi_{\mathbf{z}}>0\), if we wish to extend the results above, regularity conditions are required for the \(\xi_{\mathbf{z}}\leq 0\) case. We notice that if \(F_{z}(x)\in MDA(\xi)\) for \(\xi\leq 0\), then \(\bar{F}_{\mathbf{z}}(x)\) itself is sub-polynomial, whether its support is bounded or not. This observation motivates the following: **Definition 18**: _For a set \(A\), define the family of distribution functions \(\mathcal{F}_{\mathcal{A}}=\{F_{\mathbf{z}}(x)|\mathbf{z}\in A\}\), and define \(A^{+}=\{\mathbf{z}|\xi_{\mathbf{z}}>0\},\ A^{-}=\{\mathbf{z}|\xi_{\mathbf{z}}\leq 0\}\). 
We say family \(\mathcal{F}_{\mathcal{A}}\) has stable cross-tail variability if,_ * \(\{L_{\mathbf{z}}(x)|\mathbf{z}\in A^{+}\}\) _is_ \(\gamma\)_-uniformly sub-polynomial,_ * \(\{\bar{F}_{\mathbf{z}}(x)|\mathbf{z}\in A^{-}\}\) _is_ \(\gamma\)_-uniformly sub-polynomial._ We notice that in the previous theorem, if for all \(\mathbf{z}\) we have \(0<\xi_{\mathbf{z}}\leq\epsilon\), then \(\xi_{F}\leq\epsilon\). If the corresponding family \(\mathcal{F}_{\mathcal{A}}=\{F_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) has stable cross-tail variability, this holds independently from the lower bound of \(\{\xi_{\mathbf{z}}|\mathbf{z}\in A\}\). Indeed: **Lemma 19**: _Let \(\mathbf{Z}:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(A\) is measurable. At each point \(\mathbf{z}\in A\) define a distribution \(F_{\mathbf{z}}(x)\in MDA(\xi_{\mathbf{z}})\), and suppose that \(\forall\mathbf{z}\in A,\ \xi_{\mathbf{z}}\leq\epsilon\). If the family \(\{F_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) has stable cross-tail variability, then for \(F(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}\) we have \(\xi_{F}\leq\epsilon\)._ **Corollary 20**: _Let \(\mathbf{Z}:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(A\) is measurable. At each point \(\mathbf{z}\in A\) define a distribution \(F_{\mathbf{z}}(x)\in MDA(\xi_{\mathbf{z}})\), and suppose that \(\forall\mathbf{z}\in A,\ \xi_{\mathbf{z}}\leq 0\). If the family \(\{F_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) has stable cross-tail variability, then for \(F(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}\) we have \(\xi_{F}\leq 0\)._ **Proof** We notice that for any \(\epsilon>0\), we have \(\xi_{\mathbf{z}}<\epsilon\) for all \(\mathbf{z}\in A\). Hence, from the previous Lemma we conclude that \(\xi_{F}\leq\epsilon,\forall\epsilon>0\). \(\blacksquare\) Finally, we prove the generalization of Theorem 17 in the case that the tail shape parameters \(\xi_{\mathbf{Z}}\) of the conditional distributions are real numbers: **Theorem 21**: _Let \(\mathbf{Z}:\Omega\to A\subset\mathbb{R}^{n}\) be a random vector where \(A\) is measurable. At each point \(\mathbf{z}\in A\) define a distribution \(F_{\mathbf{z}}(x)\in MDA(\xi_{\mathbf{z}})\), where \(\xi_{\mathbf{z}}\) is continuous and \(\xi_{\max}>0\). If the family \(\{F_{\mathbf{z}}(x)|\mathbf{z}\in A\}\) has stable cross-tail variability, then for \(F(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}\) we have \(\xi_{F}=\xi_{\max}\). In the case that \(\xi_{\max}\leq 0\) then \(\xi_{F}\leq 0\)._ Examples when the conditions of Theorem 21 hold, as well as when they are violated, can be found in Appendix C and B, respectively. ### Useful propositions for the experimental part In this subsection, we prove three statements which are useful in the experimental Section 5, and state the well-known relation between the existence of the moments of a distribution and the thickness of its tails. **Proposition 22**: _Let \(F_{X}\) be the distribution of the random variable \(X\). We define \(X_{1}\) to be a random variable whose distribution is the normalized right tail of \(F_{X}\), that is:_ \[F_{X_{1}}(x)=\begin{cases}0&\text{for }x\leq 0\\ \frac{F(x)-F(0)}{1-F(0)}&\text{for }x>0\end{cases}. \tag{23}\] _Similarly we define \(X_{2}\) whose distribution is the normalized left tail of \(F_{X}\),_ \[F_{X_{2}}(x)=\begin{cases}0&\text{for }x<0\\ \frac{F(0)-F(-x)}{F(0)}&\text{for }x\geq 0\end{cases}. 
\tag{24}\] _If \(F_{X_{1}}\in MDA(\xi_{1})\), \(F_{X_{2}}\in MDA(\xi_{2})\), and \(\max\{\xi_{1},\xi_{2}\}>0\), then:_ \[\xi_{|X|}=\max\{\xi_{1},\xi_{2}\}.\] _If \(F_{X_{1}}\in MDA(\xi_{1})\), \(F_{X_{2}}\in MDA(\xi_{2})\), and \(\max\{\xi_{1},\xi_{2}\}\leq 0\), then:_ \[\xi_{|X|}\leq 0.\] **Proof** Since \[\begin{split} F_{|X|}(x)=\mathbb{P}(|X|<x)=\mathbb{P}(X<x|X>0) \mathbb{P}(X>0)+\mathbb{P}(-X<x|X\leq 0)\mathbb{P}(X\leq 0)\\ =p_{1}F_{X_{1}}(x)+p_{2}F_{X_{2}}(x),\end{split} \tag{25}\] Theorem 14 gives the desired conclusion. **Proposition 23**: _Let \(X\) be a random variable such that \(X\in MDA(\xi_{X}>0)\). If we define \(Y\) to be equal to \(X^{\alpha}\), for some \(\alpha\in\mathbb{R}^{+}\), then \(Y\in MDA(\xi_{Y})\) where \(\xi_{Y}=\alpha\xi_{X}\). If \(\xi_{X}\leq 0\) then \(\xi_{Y}\leq 0\)._ It is important to notice that we can estimate the shape of the tail of \(W_{\boldsymbol{V}}(\boldsymbol{U})\) by also conditioning on the test label \(\boldsymbol{y}\): \[f_{W}(w)=\int f_{W,\boldsymbol{Y}}(w,\boldsymbol{y})d\boldsymbol{y}=\int f_{ \boldsymbol{Y}}(\boldsymbol{y})f(w|\boldsymbol{Y}=\boldsymbol{y})d\boldsymbol {y}=\int f_{\boldsymbol{Y}}(\boldsymbol{y})f_{\boldsymbol{y}}(w)d\boldsymbol{y} \tag{26}\] \[F_{W}(w)=\int f_{\boldsymbol{Y}}(\boldsymbol{y})F_{\boldsymbol{y}}(w)d \boldsymbol{y}. \tag{27}\] We use this fact to prove the following: **Proposition 24**: _Let the loss function be defined as \(W_{\mathbf{V}}(\mathbf{U})=|Y-\hat{f}_{\mathbf{V}}(\mathbf{X})|^{p}\) for some \(p\in\mathbb{R}^{+}\), and let \(F_{y}(t)\) be the distribution of \(\hat{f}_{\mathbf{V}}(\mathbf{X})\) given \(Y\). If we assume that the distribution of the labels \(Y\) has bounded support \(S\), that the family \(\{F_{y}(t)|y\in S\}\) has stable cross-tail variability, and that the shape parameters \(\xi_{y}\) of \(F_{y}(t)\) change continuously, then the tail shape parameters of \(W_{\mathbf{V}}(\mathbf{U})\) and \(|\hat{f}_{\mathbf{V}}(\mathbf{X})|^{p}\) share the same sign, and are identical if either of them is positive._ There exists a strong connection between the Maximum Domain of Attraction of a distribution, and the existence of its moments (see Embrechts et al. (2013)): **Proposition 25**: _If \(F_{|X|}\) is the distribution function of a random variable \(|X|\), and \(F_{|X|}\in MDA(\xi)\) then:_ \[\text{i) if }\xi>0,\text{ then }\mathbb{E}[|X|^{r}]=\infty,\forall r\in( \frac{1}{\xi},\infty), \tag{28}\] \[\text{ii) if }\xi\leq 0,\text{ then }\mathbb{E}[|X|^{r}]<\infty,\forall r\in( 0,\infty). \tag{29}\] This means that, for a model with a positive loss function whose distribution has a shape parameter that is bigger than one, even the first moment of that loss function distribution does not exist. Hence, we would expect that our model has an infinite mean, which would suggest that this model should be eliminated during model ranking. However, if all models possess an infinite mean, it is not advisable to disregard models with smaller medians. In Proposition 24, we showed that if we condition on the testing set, under some assumptions, we can estimate the shape of the total loss distribution, that is the distribution of \(W_{\mathbf{V}}(\mathbf{U})\), by simply investigating the models prediction, without the need for target data. This can also be motivated from the moments of \(W_{\mathbf{V}}(\mathbf{U})\) as shown in Appendix D. ## 5 Experiments In this section, we demonstrate the significance of Theorem 21. 
In the first subsection, we show experimental evidence that the estimated shape parameter of the marginal distribution, under the assumption that we have an abundance of sample points, coincides with the maximal shape parameter of individual conditional distributions. In the second subsection, we show that when the sample size is finite, as it is the case in the real world, the method proposed by Theorem 21 (cross tail estimation) can be necessary to reduce the required sample size for proper tail shape parameter estimation of marginal distributions. Furthermore, in the third subsection, we compare the standard POT and cross tail estimation on real data. For the considered regression scenarios, we notice that when these shape parameters are calculated by cross tail estimation, the magnitude of shape parameters of the distribution of model predictions increases significantly when the model overfits. We also notice that such a relationship does not appear in the case that we use directly the POT method to estimate the aforementioned shape parameters. Finally, in the fourth subsection, we discuss the computational advantages of using cross tail estimation. ### Validity of Cross Tail Estimation in Practice The main problem that we tried to tackle in the previous section, was estimating the shape parameters of the tail of distribution \(F(x)\): \[F(x)=\int f(\mathbf{z})F_{\mathbf{z}}(x)d\mathbf{z}, \tag{30}\] via tail shape estimation of the conditional distributions \(F_{\mathbf{z}}(x)\). In what follows, we give two experiments showing that this is feasible in practice. #### 5.1.1 Experimental Setting For simplicity, we set \(\mathbf{z}\) to be one dimensional, and thus denote the conditional distributions \(F_{\mathbf{z}}\) as \(F_{z}\), where \(z\in\mathbb{R}\). In this case equation (30) becomes \[F(x)=\int f(z)F_{z}(x)dz. \tag{31}\] First, we define \(f(z)\) as a mixture of Gaussian distributions. To do so we choose a mean \(\mu_{i}\) from a uniform distribution in \([-5,5]\) and then a standard deviation \(\sigma_{i}\) from a uniform distribution between \([0,4]\), and together they define a Gaussian distribution \(g_{i}(z)\). We repeat this process for 30 Gaussian distributions and define \(f(z)=\sum_{i=1}^{30}\frac{g_{i}(z)}{30}\). Second, we define the function \(\xi_{z}\) as \[\xi_{z}=\frac{\frac{(nz+2m^{2}+kz^{3})e^{-|z|}+a}{b}+c}{d}, \tag{32}\] where \(n=1\), \(m=2\), \(k=2\), \(b=5.76\), \(a=-3b-3.80\), \(d=(\frac{7}{8}\xi_{\max}+\frac{29}{8})^{-1}\) and \(c=d\xi_{\max}+3\). The \(\xi_{\max}\) in the variables \(c,d\) determines the maximum value that the function \(\xi_{z}\) takes as long as \(\xi_{\max}\in[-4,5]\). More details about the function \(\xi_{z}\) are provided in Appendix G. Third, we define \(F_{z}(x)\) as a generalized Generalized Pareto if \(\xi_{z}\leq 0\), otherwise we define it as \(F_{z}(x)=1-x^{-\frac{1}{\xi_{z}}}\). The choice of \(\xi_{\max}\) completely determines each \(\xi_{z}\) and hence each \(F_{z}(x)\), thus it fully defines \(F(x)\) in Equation 31. In our experiments the parameter \(\xi_{\max}\) takes the following 45 values \(\{-4,-4+0.2,-4+0.4,...,5\}\), that is \(\xi_{j}=-4+\frac{2j}{10}\), where \(j\in\{0,...45\}\). Each choice of \(j\) defines a particular \(F_{j}(x)\) on the left side of Equation 31. Also since the maximum \(\xi_{j}\) determines \(\xi_{z}\) then we denote \(\xi_{z}\) as \(\xi_{z,j}\). For each \(j\) we repeat the following process \(p\) times: 1. Define an empty List J and repeat \(M\) times the steps (a), (b), (c). 1. 
Sample a \(z\) from the \(f(z)\) defined above 2. For that \(z\) calculate \(\xi_{z,j}\) (given that \(\xi_{\max}=\xi_{j}\)) 3. For the given \(\xi_{z,j}\) sample a point \(x\) from a Generalized Pareto if \(\xi_{z,j}\leq 0\), otherwise sample from \(F_{z}(x)=1-x^{-\frac{1}{\xi_{z,j}}}\). Add this sample to List J. 2. Use the Pickands or DEdH estimator on these \(M\) samples in List J to estimate the shape parameter of \(F_{j}(x)\). According to Theorem 21 this estimated value \(\hat{\xi}_{j}^{k}\) should be precisely \(\xi_{j}\). As guided by the ideas laid in Appendix E, our final estimation of \(\xi_{j}\) after \(p\) repetitions of the process above is \(\hat{\xi}_{j}=\frac{1}{p}\sum_{k=1}^{p}\hat{\xi}_{j}^{k}\). In the next subsections we show the results of performing this experiment for each \(j\) using the Pickands and the DEdH estimators. #### 5.1.2 Cross tail estimation using the Pickands estimator In this subsection, we show the results of the experiment described in Subsection 5.1.1, when the Pickands Estimator is applied. The results are shown in Figure 1, where the number \(M\) defined in the previous subsection takes the following values \(\{10^{5},10^{6},10^{7},10^{8}\}\) and we set Figure 1: In cases where the maximum tail shape parameter in the mixture of conditional distributions is positive, the estimated shape parameter of the marginal is equal to this maximal value. If this maximum value is negative, the estimated shape parameter is negative. We utilized the Pickands estimator. \(p=10\). We have executed the experiment 10 times, and to account for variability across the different runs, we have computed the mean and standard deviation of the results. #### 5.1.3 Cross tail estimation using the DEdH estimator In this subsection, we present the results of the experiment described in Subsection 5.1.1, when the DEdH Estimator is employed. The results are illustrated in Figure 2, where the number \(M\) defined in the previous subsection takes the values \(\{10^{5},10^{6},10^{7},10^{8}\}\) and set \(p=10\). We have executed the experiment 10 times, and to account for variability across the different runs, we have computed the mean and standard deviation of the results. ### Addressing High Variance in the Location of Conditional Distributions: The Necessity of Cross Tail Estimation (CTE) In subsection 5.1, we presented empirical evidence to substantiate Theorem 21. Notably, for computational expediency, we elected to set all conditional distributions with a location Figure 2: In cases where the maximum tail shape parameter in the mixture of conditional distributions is positive, the estimated shape parameter of the marginal is also positive and equal to this maximal value. However, if this maximum value is negative, the estimated shape parameter is also negative. We utilize the DEdH estimator as our estimator of choice. parameter of zero. This decision was motivated by the fact that, if location parameters were permitted to exhibit significant variability, the direct Peaks Over Threshold (POT) approach would necessitate an unfeasibly large sample size to verify our claims. This issue is addressed in the current subsection, wherein we illustrate that the Conditional Tail Expectation (CTE) approach provides a suitable remedy. Specifically, in subsection 5.2.1, we outline modifications to the experimental setup from subsection 5.1 that allow for variation in the location parameter, and present the experimental results accordingly. 
In subsection 5.2.2, we apply the CTE approach to the same distributions as in subsection 5.2.1, and demonstrate that it allows for correct estimation of shape parameters. Additional experiments, in more simplified settings, highlighting the necessity of CTE are provided in Appendix F. 2.1 Applying POT directly when the location of conditional distributions exhibits substantial variability In order to ensure high variability of the location of conditional distributions \(F_{z}(x)\), we modify step (c) of the sampling process in Subsection 5.1.1 as follows: Figure 3: The direct application of POT fails to retrieve the true shape of the marginal. We utilize the Pickands estimator as our estimator of choice. * For the given \(\xi_{z,j}\) sample a point \(x\) from a Generalized Pareto if \(\xi_{z,j}\leq 0\), otherwise sample from \(F_{z}(x)=1-x^{-\frac{1}{\xi_{z,j}}}\). If \(\xi_{z,j}>0\) translate \(x\) by adding \(\frac{1}{\xi_{z,j}}^{4}\). Add this sample to List J. This adaptation ensures that conditional distributions with lower shape parameters are situated at greater distances from the origin, thereby augmenting the probability that their tails will dominate over those that exhibit heavier tails. The results (Figure 3 and 4) show that the estimators predict that the shape parameter of the tail is constantly 4 as the tail of the marginal is determined by \(\xi_{z,j}{}^{-4}\) instead of \(1-x^{-\frac{1}{\xi_{z,j}}}\) which merely becomes noise around \(\xi_{z,j}^{-4}\). This changes once \(\xi_{\max}=\xi_{j}\) becomes larger than 4, in which case the tails of the conditional distribution are once again determined by \(1-x^{-\frac{1}{\xi_{z,j}}}\). #### 5.2.2 Enhancing parameter estimation accuracy through the CTE approach We demonstrate that the CTE method can effectively recover the true shape of the tail of the marginal, even in cases where the conditional distributions exhibit highly varying Figure 4: The direct application of POT fails to retrieve the true shape of the marginal. We utilize the DEdH estimator as our estimator of choice. locations, as was observed in the previous subsection. To ensure objectivity, we define the functions \(f(z)\), \(\xi_{z}\), and \(F_{z}(x)\) in a consistent manner as before, thereby ensuring that all marginal distributions under consideration are equivalent to those studied in previous cases. As per the definition of the CTE, the sampling procedure is the following: 1. Sample K values \(z\) from \(f(z)\). For each \(z\) repeat \(p\) times the steps (a), (b), (c). 1. Calculate \(\xi_{z,j}\) (given that \(\xi_{\text{max}}=\xi_{j}\)) 2. For the given \(\xi_{z,j}\) sample N points from a Generalized Pareto if \(\xi_{z,j}\leq 0\), otherwise sample from \(F_{z}(x)=1-x^{-\frac{1}{\xi_{z,j}}}\). 3. Use the Pickands or DEdH estimator on these \(N\) samples to get an estimate \(\hat{\xi}^{l}_{z,j}\) of the shape parameter \(\hat{\xi}_{z,j}\) of \(F_{z,j}(x)\). 2. As guided by the ideas laid in Appendix E, our final estimation of \(\xi_{z,j}\) after \(p\) repetitions of the process above is \(\hat{\xi}_{z,j}=\frac{1}{p}\sum_{l=1}^{p}\hat{\xi}^{l}_{z,j}\). Figure 5: Retrieving the true shape of the marginal is possible using CTE. We utilize the Pickands estimator as our estimator of choice. 3. We select the maximal \(\hat{\xi}_{z,j}\) from the \(K\) predicted values (corresponding to the \(K\) sampled \(z\)). According to Theorem 21 this estimated maximal \(\hat{\xi}_{j}\) should be close to \(\xi_{j}\). We set \(p=10\) at all times. 
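A compact sketch of the cross tail estimation loop just described is given below (our own illustration, not the authors' code). Here `sample_z`, `xi_of_z` and `sample_conditional` are placeholders for \(f(z)\), Equation (32) and the Generalized Pareto/Pareto sampler of step (b) (for \(\xi>0\) one may take, e.g., `rng.uniform(size=N) ** (-xi)`); `K=50` and `p=10` follow the text, while the defaults for `N` and `k` are assumptions.

```python
import numpy as np

def pickands(sample, k):
    # Pickands estimator (same form as in the earlier sketch)
    x = np.sort(sample)
    n = len(x)
    return np.log((x[n - k] - x[n - 2 * k]) / (x[n - 2 * k] - x[n - 4 * k])) / np.log(2.0)

def cte_estimate(sample_z, xi_of_z, sample_conditional, rng, K=50, N=200_000, p=10, k=500):
    """Cross tail estimation following steps 1-3 above: estimate the tail shape of every
    sampled conditional separately, average the p repetitions (cf. Appendix E), keep the max."""
    per_conditional = []
    for _ in range(K):
        z = sample_z(rng)                                  # step 1: draw z ~ f(z)
        xi_z = xi_of_z(z)                                  # step (a): xi_{z,j} for the current xi_max
        reps = [pickands(sample_conditional(xi_z, N, rng), k) for _ in range(p)]  # steps (b)-(c)
        per_conditional.append(np.mean(reps))              # step 2: average over the p repetitions
    return max(per_conditional)                            # step 3: maximal estimate approximates xi_j
```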
Furthermore, for the sake of fairness, we sample the same number of points from each marginal distribution as in the previous subsection, that is, we set \(KN=M\). Since we set \(K=50\), in order for \(M\) to take values in \(\{1e5,1e6,1e7,1e8\}\), \(N\) needs to take values in \(\{2e3,2e4,2e5,2e6\}\). We execute the experiment 10 times, and to account for variability across the different runs, we compute the mean and standard deviation of the results. They are shown in Figure 5 and 6. Naturally, the more \(K\) is increased the more likely we are to sample the \(z\) corresponding to the conditional distribution with the maximal shape parameter. Hence, Theorem 21 provides assurance that as the value of \(K\) increases, our estimation progressively converges to the true shape parameter of the marginal distribution. Figure 6: Retrieving the true shape of the marginal is possible using CTE. We utilize the DEdH estimator as our estimator of choice. ### Model performance inference improvements via cross tail estimation, relative to POT In what follows, we show the results of two experiments, where we observe that cross tail estimation can improve the estimation of the shape of the tail in realistic settings. Furthermore, we observe that in these cases, the thickness of the tail is positively correlated with over-fitting, therefore inference regarding the performance of the model is improved when using CTE instead of POT. #### 5.3.1 Gaussian Processes In this experiment, our data is composed of a one-dimensional time series taken from the UCR Time Series Anomaly Archive 2Wu and Keogh (2020), which we reorganize in windows of size 2, and use each window to fit a Gaussian process (GP) model in order to predict the next value in the series. Our complete dataset \(D\) is composed of \(n=1e4\) windows. On Figure 7: Experimental results in the case of testing Gaussian processes. Left: The Pickands estimator is used. Right: The DEdH estimator is used. In both cases we notice that CTE estimates larger shape parameters of the loss function distributions for models which overfit. This is not the case when POT is applied directly. The first black vertical line marks the first model with lower MSE than the model with the smallest length scale parameter (the point where the models stop overfitting). The second black vertical line marks the model in from which MSE starts growing again (the point when models begin underfitting). The MSE is presented in log scale and has been further linearly scaled to fit the plot. each run we randomly select 340 points of \(D\) for training (denote \(D_{i}\)), and then group the predictions of the model on the \(1e4\) points of \(D\) into an array which we denote by \(\hat{Y}_{i}\). Then we split \(\hat{Y}_{i}\) into five equally sized subsets \(\hat{Y}_{i,j}\). We proceed to estimate the shape parameter of the tails of the prediction of the model, for given training set \(D_{i}\). This is done by applying the Pickands/DEdH estimator to \(\hat{Y}_{i,j}\), receiving \(\hat{\xi}_{i,j}\) and then as per Appendix E, we get the estimate \(\hat{\xi}_{i}=\frac{1}{5}\sum_{j=1}^{5}\hat{\xi}_{i,j}\) which corresponds to \(\hat{Y}_{i}\). We repeat this process 1000 times (for 1000 choices of the training set \(D_{i}\)), and select as our estimation of the shape parameter of the tail of the distribution of our loss function, the maximum individual estimated parameter: \(\hat{\xi}_{i}=\max\{\hat{\xi}_{i}|i\in[1000]\}\). 
On the other hand, we also calculate the MSE on the testing set \(D\setminus D_{i}\) after the model has been trained on \(D_{i}\). To check the difference of performance of the direct POT of tail shape estimation and cross tail estimation, we also calculate the shape parameter of the overall distribution of prediction models, through the standard method, by applying Pickands/DEdH estimator on \(Y=\bigcup\limits_{i=1}^{1000}\hat{Y}_{i}\). These experiments are repeated for length scale parameters given in the \(x-\)axis of Figure 7 as well as in Appendix H. We repeat every experiment 200 times to account for variability across different runs, we compute the mean and standard deviation of the results. In Figure 7, we notice that when the CTE approach is used, the shape parameter is significantly larger for models which have a large MSE. In Appendix H, we illustrate that the MSE is large for small scale parameters due to overfitting (Figure 12). Furthermore, the shape parameter only drops to (under) zero, when the model starts underfitting for length scale parameters bigger than \(2.5e7\). In Appendix H (Figure 13), it is shown that for such large values of the length scale parameter, the predictions become roughly constant. On the other hand, if POT is applied directly, then the estimated shape parameters are not significantly larger for models which overfit compared to those that do not. This is because conditioning on the training set, the predicted values on the test set vary significantly with regards to the their location. Hence, the tail that is estimated by the direct application of the POT approach is sometimes simply the one translated the furthest from the origin. Thus, if there is some negative correlation between the magnitude of the location and the size of the estimated shape parameter across different conditional, then we expect POT to underestimate the true shape parameter of the marginal. This is shown in Appendix H, for the model with the highest estimated shape parameter (290). The variability (sorted) of the estimated shape parameters of the 1000 conditionals for each length scale parameter is given in Figure 15 of Appendix H, together with the corresponding \(97th\) percentile (threshold) from each corresponding conditional distribution. We notice that indeed, quite often the difference between locations is large, and that the largest threshold often corresponds to conditional distributions with small, even negative shape parameters. The outcomes presented herein are robust with regards to the application or non-application of the method explicated in Appendix E in conjunction with the direct Peaks Over Thresh old (POT) approach. Furthermore, the findings presented in Figure 7 demonstrate near equivalence in relation to the magnitude of the selected threshold (in this study, we evaluated 99.7 and 99.997 percentiles). #### 5.3.2 Polynomial Kernels This experiment is almost identical to the previous one, with the only differences being that the models we test now are polynomial kernels, and the set of possible candidate models in this case is defined by the degree of the polynomial kernel. We test polynomial kernels of degree from 1 to 9. As before, we repeat this experiment 200 times. The results are shown in Figure 8. 
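The two experiments above share the same comparison between cross tail estimation and a direct POT estimate. A schematic rendering is sketched below (an illustration under our own assumptions, not the authors' code): `fit_and_predict` is a hypothetical callable standing in for the Gaussian process or polynomial-kernel model (train on the chosen indices, return predictions on the whole data set), absolute values of the predictions are used in the spirit of Proposition 24, and the values of `k` and of the pooled threshold are illustrative choices.

```python
import numpy as np

def pickands(sample, k):
    x = np.sort(sample)
    n = len(x)
    return np.log((x[n - k] - x[n - 2 * k]) / (x[n - 2 * k] - x[n - 4 * k])) / np.log(2.0)

def cte_vs_pot(fit_and_predict, n_points, n_train=340, n_runs=1000, n_groups=5, k=50, rng=None):
    """Each random training subset D_i yields one conditional distribution of predictions.
    CTE estimates the tail shape of every conditional (averaging over n_groups splits as in
    Appendix E) and keeps the maximum; direct POT pools all predictions together."""
    if rng is None:
        rng = np.random.default_rng()
    pooled, per_run = [], []
    for _ in range(n_runs):
        train_idx = rng.choice(n_points, size=n_train, replace=False)
        preds = np.abs(fit_and_predict(train_idx))       # |predictions| on the full data set
        pooled.append(preds)
        groups = np.array_split(rng.permutation(preds), n_groups)
        per_run.append(np.mean([pickands(g, k) for g in groups]))
    xi_cte = max(per_run)                                # cross tail estimate
    xi_pot = pickands(np.concatenate(pooled), k * n_runs)  # direct POT on the pooled predictions
    return xi_cte, xi_pot
```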
Figure 8: Experimental results in the case of testing polynomial kernels. Left: The Pickands estimator is used. Right: The DEdH estimator is used. In both cases we notice that CTE estimates larger shape parameters of the loss function distributions for models which overfit. This is not the case when POT is applied directly. The black vertical line marks the inflection point of the MSE. The MSE is presented in log scale and has been further linearly scaled to fit the plot.

### Computational Simplifications

Another benefit of using cross tail estimation is the reduction of computational time: for a given number \(m\) of conditional distributions, with \(n\) samples each, instead of joining all testing samples together in an array of size \(m*n\), we perform the calculations on \(m\) arrays of size \(n\) in parallel. This becomes useful in practice during shape parameter estimation, as the Pickands estimator requires sorted samples, and the best sorting algorithms require \(n\log(n)\) operations for a vector of size \(n\). Hence our method, which requires \(n\log(n)\) operations, is much faster in practice than the standard POT approach, which requires \(mn\log(mn)\), in a setting where \(m\) and \(n\) are of approximately the same order.

## 6 Conclusion

We study the problem of estimating the tail shape of loss function distributions, and explain the complications that arise in performing this task. We notice that such complications arise in general during the estimation of the tail shape of marginal distributions. In order to mitigate such shortcomings, we propose a new method of estimating the shape of the right tails of marginal distributions and give theoretical guarantees that the tail of the marginal distribution coincides with the thickest tail of the set of conditional distributions composing the marginal. We give experimental evidence that our method works in practice, and is necessary in applications with small sample sizes. Using the aforementioned method, we show experimentally that the tails of distribution functions can, in many cases, have non-exponential decay, and that it is possible that not even their first moment exists. Furthermore, in the experiments we conducted, we discover an interesting phenomenon regarding the relationship between the overfitting of a model and the thickness of the tails of its prediction function distribution. Potential additional applications of the method we develop include improving classic tail modelling, as well as threshold selection for model comparison in anomaly detection, Su et al. (2019). Furthermore, cross tail estimation could be used to estimate the existence of the moments of loss function distributions, and can thus be considered as a potential elimination criterion for models whose first moment does not exist.

This work has been supported by the French government, through the 3IA Cote d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. The authors are grateful to the OPAL infrastructure from Universite Cote d'Azur for providing resources and support.

## Appendix A: Proofs

### Proof of Proposition 9

We notice that if \(L(x)\) converges the statement is trivial.
However, if it does not then: \[\begin{split}\lim_{x\to\infty}x^{-\epsilon}L(x)&=\lim _{x\to\infty}\frac{L(x)}{x^{\epsilon}}=\lim_{x\to\infty}\frac{e^{c(x)}e^{\int_{ x_{0}}^{x}\frac{u(y)}{y}dy}}{x^{\epsilon}}=\lim_{x\to\infty}\frac{e^{c(x)}e^{ \int_{x_{0}}^{x}\frac{u(y)}{y}dy}}{e^{\epsilon}\log(x)}=\\ &=\lim_{x\to\infty}e^{c(x)}e^{\int_{x_{0}}^{x}\frac{u(y)}{y}dy- \epsilon\log(x)}=\lim_{x\to\infty}e^{c(x)}e^{\log(x)(\frac{\int_{x_{0}}^{x} \frac{u(y)}{y}dy}{\log(x)}-\epsilon)}.\end{split} \tag{33}\] Using L'Hopital's rule we get: \[\lim_{x\to\infty}\frac{\int_{x_{0}}^{x}\frac{u(y)}{y}}{\log(x)}=\lim_{x\to \infty}\frac{\frac{u(x)}{x}}{\frac{1}{x}}=\lim_{x\to\infty}u(x)=0, \tag{34}\] therefore \[\lim_{x\to\infty}e^{\log(x)(\frac{\int_{x_{0}}^{x}\frac{u(y)}{y}dy}{\log(x)}- \epsilon)}=0. \tag{35}\] ### Proof of Lemma 11 From Theorem 10, we get that \[F_{1}\in MDA(\xi_{1})\iff\bar{F}_{1}(x)=x^{-\frac{1}{\xi_{1}}}L_{1}(x),\] and \[F_{2}\in MDA(\xi_{2})\iff\bar{F}_{2}(x)=x^{-\frac{1}{\xi_{2}}}L_{2}(x),\] where \(L_{1}(x)\) and \(L_{2}(x)\) are slowly varying functions. Therefore \[\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=\lim_{x\to\infty}x^{ \frac{1}{\xi_{1}}-\frac{1}{\xi_{2}}}\frac{L_{2}(x)}{L_{1}(x)}=\lim_{x\to\infty }x^{\alpha}\frac{L_{2}(x)}{L_{1}(x)}, \tag{36}\] since \[\xi_{1}>\xi_{2}\implies-\frac{1}{\xi_{1}}>-\frac{1}{\xi_{2}}\implies\alpha:= \frac{1}{\xi_{1}}-\frac{1}{\xi_{2}}<0.\] On the other hand \(L(x):=\frac{L_{2}(x)}{L_{1}(x)}\) is defined in a neighborhood of infinity as \(L_{1}(x)\neq 0\), and is also a slowly varying function as \[\lim_{x\to\infty}\frac{L(ax)}{L(x)}=\lim_{x\to\infty}\frac{\frac{L_{2}(ax)}{L_ {1}(ax)}}{\frac{L_{2}(x)}{L_{1}(x)}}=\lim_{x\to\infty}\frac{\frac{L_{2}(ax)}{L _{1}(ax)}}{\frac{L_{1}(ax)}{L_{1}(x)}}=1,\] and since the quotient of positive measurable functions, is positive and measurable. Therefore, using Corollary 1, Equation (36) becomes \[\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=\lim_{x\to\infty}x^{ \alpha}\frac{L_{2}(x)}{L_{1}(x)}=\lim_{x\to\infty}x^{\alpha}L(x)=0. \tag{37}\] **Proof of Lemma 12** 1. If \(\xi_{1}>0\) and \(\xi_{2}=0\) then \[\lim_{x\to\infty}\frac{\bar{F}_{2}(x)}{\bar{F}_{1}(x)}=\lim_{x\to\infty}\frac{c(x )e^{-\int_{w}^{x}\frac{g(t)}{a(t)}dt}}{x^{-\frac{1}{t}}L(x)}=\lim_{x\to\infty} \frac{c(x)e^{-\log(x)\left(\frac{f_{w}^{x}\frac{g(t)}{a(t)}dt}{10\log(x)}- \frac{1}{\xi}\right)}}{L(x)},\] (38) using L'Hopital's rule: \[\lim_{x\to\infty}\frac{\int_{w}^{x}\frac{g(t)}{a(t)}dt}{\log(x)}=\lim_{x\to \infty}\frac{\frac{g(x)}{a(x)}}{\frac{1}{x}}=\lim_{x\to\infty}\frac{x}{a(x)},\] (39) we distinguish two cases: if \(\lim_{x\to\infty}a(x)\neq\infty\) then \(\lim_{x\to\infty}\frac{x}{a(x)}=\infty\), while if \(\lim_{x\to\infty}a(x)=\infty\) then using L'Hopital's rule again, we obtain \[\lim_{x\to\infty}\frac{x}{a(x)}=\lim_{x\to\infty}\frac{1}{a^{\prime}(x)}=\infty.\] (40) Thus, in both cases \[=\lim_{x\to\infty}\frac{c(x)e^{-\log(x)\left(\frac{f_{w}^{x}\frac{g(t)}{a(t)} dt}{10\log(x)}-\frac{1}{\xi}\right)}}{L(x)}=\lim_{x\to\infty}\frac{c(x)x^{- \left(\frac{f_{w}^{x}\frac{g(t)}{a(t)}dt}{10\log(x)}-\frac{1}{\xi}\right)}}{L (x)}=0.\] (41) Statements 2. 3. and 4. are trivial. 
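As a purely numerical illustration of the tail domination used in Lemmas 11 and 12 (not part of the proofs), one can check that the ratio of the two survival functions vanishes even in the presence of a slowly varying factor:

```python
import numpy as np

# Fbar_i(x) = x^(-1/xi_i) L_i(x); with xi1 = 1 > xi2 = 0.5 and L2 = log, L1 = 1,
# the ratio Fbar2/Fbar1 = x^(1/xi1 - 1/xi2) * log(x) tends to 0 as x grows.
xi1, xi2 = 1.0, 0.5
for x in np.logspace(2, 10, 5):
    print(f"x = {x:.0e},  Fbar2/Fbar1 ~ {x ** (1/xi1 - 1/xi2) * np.log(x):.3e}")
```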
**Proof of Lemma 13** Since \(L(x)\) is positive and measurable (linear combination of finite measurable functions), the only part left to prove is that \[\lim_{x\to\infty}\frac{L(ax)}{L(x)}=1,\forall a>0.\] First we prove that \[\lim_{x\to\infty}\frac{L_{1}(ax)+L_{2}(ax)}{L_{1}(x)+L_{2}(x)}=1,\forall a>0.\] Indeed, for each \(\epsilon>0\), there exist \(x_{1},x_{2}\) such that for \(x>x_{1}\) we have \(|\frac{L_{1}(ax)}{L_{1}(x)}-1|<\epsilon\) and for \(x>x_{2}\) we have \(|\frac{L_{2}(ax)}{L_{2}(x)}-1|<\epsilon\). Hence for \(x_{0}=\max\{x_{1},x_{2}\}\), \(x>x_{0}\) implies \(|L_{1}(ax)-L_{1}(x)|<L_{1}(x)\epsilon\) and \(|L_{2}(ax)-L_{2}(x)|<L_{2}(x)\epsilon\) therefore \(|L_{1}(ax)+L_{2}(ax)-(L_{1}(x)+L_{2}(x))|=|L_{1}(ax)-L_{1}(x)+L_{2}(ax)-L_{2}(x )|\leq|L_{1}(ax)-L_{1}(x)|+|L_{2}(ax)-L_{2}(x)|<(L_{1}(x)+L_{2}(x))\epsilon\) hence \(|\frac{L_{1}(ax)+L_{2}(ax)}{L_{1}(x)+L_{2}(x)}-1|<\epsilon\). Now, we notice that for every \(a_{i}>0\), we get \(\lim_{x\to\infty}\frac{a_{i}L_{i}(ax)}{a_{i}L_{i}(x)}=1\), and \(a_{i}L_{i}(x)\) is positive as well as measurable. This implies that \(a_{1}L_{1}\) and \(a_{2}L_{2}\) are slowly varying functions, and therefore based of the previous result we get \[\lim_{x\to\infty}\frac{a_{1}L_{1}(ax)+a_{2}L_{2}(ax)}{a_{1}L_{1}(x)+a_{2}L_{2} (x)}=1,\forall a>0.\] Using induction finishes the proof of the Lemma. #### Proof of Theorem 14 Since if \(\xi_{\mathbf{z}_{i}}<0\) then \(\exists x_{0}>0\), such that \(\forall x>x_{0}\) we have \(F_{\mathbf{Z}_{i}}(x)=0\), this means that the tail of the distribution is not affected by \(F_{\mathbf{Z}_{i}}(x)\). In fact if \(\xi_{\max}<0\) then \(F\) will have finite support hence \(\xi_{F}\leq 0\). Furthermore if \(\xi_{\max}=0\) from Lemma 12 we get that \(\xi_{F}\leq 0\). Therefore for the case \(\xi_{\max}>0\) we only consider the setting where \(\xi_{i}\geq 0\). \[\bar{F}_{u}(w)=\frac{1-F(u+w)}{1-F(u)}=\frac{\sum\limits_{i}^{n}p_{i}(1-F_{\mathbf{ z}_{i}}(u+w))}{\sum\limits_{i}^{n}p_{i}(1-F_{\mathbf{z}_{i}}(u))}=\sum\limits_{i}^{n} \frac{\bar{F}_{\mathbf{z}_{i}}(u+w)}{\sum\limits_{j}^{n}\frac{p_{j}}{p_{i}}\bar{F}_ {\mathbf{z}_{j}}(u)} \tag{42}\] \[=\sum\limits_{i}^{n}\frac{\bar{F}_{\mathbf{z}_{i}}(u+w)}{\bar{F}_{\mathbf{z}_{i}}(u)} \frac{\bar{F}_{\mathbf{z}_{i}}(u)}{\sum\limits_{j}^{n}\frac{p_{j}}{p_{i}}\bar{F}_ {\mathbf{z}_{j}}(u)}=\sum\limits_{i}^{n}\frac{\bar{F}_{\mathbf{z}_{i}}(u+w)}{\bar{F}_{ \mathbf{z}_{i}}(u)}\frac{1}{\sum\limits_{j}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{ \mathbf{z}_{j}}(u)}{\bar{F}_{\mathbf{z}_{i}}(u)}}. 
\tag{43}\] We denote with \(i(\max)\) the index corresponding to \(\xi_{\max}\) and finish our proof using Pickand's theorem: \[\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\bar{F}_{u}(y)-\bar{G}_{\xi_{\max},g(u )}|=\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\sum\limits_{i}^{n}\frac{\bar{F}_{ \mathbf{z}_{i}}(u+w)}{\bar{F}_{\mathbf{z}_{i}}(u)}\frac{1}{\sum\limits_{j}^{n}\frac{p_ {j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}}(u)}{\bar{F}_{\mathbf{z}_{i}}(u)}}-\bar{G}_{ \xi_{\max},g(u)}| \tag{44}\] \[=\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\sum\limits_{i}^{n}\frac{\bar{F}_{ \mathbf{z}_{i}}(u+w)}{\bar{F}_{\mathbf{z}_{i}}(u)}\frac{1}{1+\sum\limits_{j\neq i}^{n} \frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}}(u)}{\bar{F}_{\mathbf{z}_{i}}(u)}}- \bar{G}_{\xi_{\max},g(u)}| \tag{45}\] \[\leq\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\frac{\bar{F}_{\mathbf{z}_{i(\max)}}( u+w)}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}\frac{1}{1+\sum\limits_{j\neq i(\max)}^{n} \frac{p_{j}}{p_{i(\max)}}\frac{\bar{F}_{\mathbf{z}_{j}}(u)}{\bar{F}_{\mathbf{z}_{i}}(u )}}-\bar{G}_{\xi_{\max},g(u)}| \tag{46}\] \[\leq\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\frac{\bar{F}_{\mathbf{z}_{i(\max)}}( u+w)}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}-\bar{G}_{\xi_{\max},g(u)}| \tag{47}\] \[+\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\frac{1}{1+\sum\limits_{j \neq i(\max)}^{n}\frac{p_{j}}{p_{i(\max)}}\frac{\bar{F}_{\mathbf{z}_{j}}(u)}{\bar {F}_{\mathbf{z}_{i}(\max)}(u)}}-1||\frac{\bar{F}_{\mathbf{z}_{i(\max)}}(u+w)}{\bar{F}_ {\mathbf{z}_{i(\max)}}(u)}|\] \[+\lim_{u\to\infty}\sup_{w\in[0,\infty]}\sum\limits_{i\neq i(\max)} ^{n}|\frac{\bar{F}_{\mathbf{z}_{i}}(u+w)}{\bar{F}_{\mathbf{z}_{i}}(u)}||\frac{1}{1+ \sum\limits_{j\neq i}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}}(u)}{\bar {F}_{\mathbf{z}_{i}}(u)}}|\] \[\leq\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\frac{\bar{F}_{\mathbf{z}_{i( \max)}}(u+w)}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}-\bar{G}_{\xi_{\max},g(u)}|\] \[+\lim_{u\to\infty}|\frac{1}{1+\sum\limits_{j\neq i(\max)}^{n} \frac{1}{p_{i(\max)}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{\bar{F}_{\mathbf{z}_{i(\max)}}( u)}}-1| \tag{48}\] \[+\lim_{u\to\infty}\sum\limits_{i\neq i(\max)}^{n}|\frac{1}{1+ \sum\limits_{j\neq i}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{ \bar{F}_{\mathbf{z}_{i}(u)}}}|.\] The first expression, \[\lim_{u\to\infty}\sup_{w\in[0,\infty]}|\frac{\bar{F}_{\mathbf{z}_{i( \max)}}(u+w)}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}-\bar{G}_{\xi_{\max},g(u)}| \tag{49}\] goes to zero due to Pickands Theorem while the expression, \[\lim_{u\to\infty}|\frac{1}{1+\sum\limits_{j\neq i(\max)}^{n}\frac{p_{j}}{p_{i (\max)}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}}-1| \tag{50}\] converges to \(0\) as well because from Lemma 11 we have \(\lim_{u\to\infty}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{\bar{F}_{\mathbf{z}_{i(\max)}}(u)}=0\) for every \(j\). Finally the last expression, \[\lim_{u\to\infty}\sum\limits_{i\neq i(\max)}^{n}|\frac{1}{1+\sum \limits_{j\neq i}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{\bar{F} _{\mathbf{z}_{i}(u)}}}| \tag{51}\] equals \(0\) since in each sum \(\sum\limits_{j\neq i}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{ \bar{F}_{\mathbf{z}_{i}(u)}}\), there exists an index \(j\) such that \(\bar{F}_{\mathbf{z}_{j}}(u)=\bar{F}_{\mathbf{z}_{i(\max)}}(u)\), implying that \(\sum\limits_{j\neq i}^{n}\frac{p_{j}}{p_{i}}\frac{\bar{F}_{\mathbf{z}_{j}(u)}}{ \bar{F}_{\mathbf{z}_{i}(u)}}\to\infty\). 
In the derivation above we assumed that the \(F_{\mathbf{z}_{i(\max)}}\) which corresponds to \(\xi_{\max}\) is unique. In the case that this is not true we notice that for \(F_{1}\) and \(F_{2}\) which share the same corresponding parameter \(\xi>0\) we have \[p_{1}F_{1}(x)+p_{2}F_{2}(x)=x^{-\frac{1}{\xi}}(p_{1}L_{1}(x)+p_{2}L_{2}(x))=x^{ -\frac{1}{\xi}}L(x), \tag{52}\] and since \(L(x)>0\), from Lemma 13 we have that \(L(x)\) is slowly varying, therefore \(p_{1}F_{1}(x)+p_{2}F_{2}(x)\in MDA(\xi)\). ### Proof of Proposition 16 First, we fix \(\delta>0\). We can find a \(x(\gamma,\delta)>0\), such that for \(x>x(\gamma,\delta)\), we can bound \(x^{-\delta}L_{z}(x)<\gamma\) for all \(z\in A\) simultaneously. This implies that \(f_{Z}(z)x^{-\delta}L_{z}(x)\) is bounded by \(f_{z}(z)\gamma\). Since \(\int_{z}f_{z}(z)\gamma dz=\gamma<\infty\), by dominated convergence we get \[\lim_{x\to\infty}x^{-\delta}\int_{A}f_{Z}(z)L_{z}(x)dz=\lim_{x\to\infty}\int_{ A}f_{Z}(z)x^{-\delta}L_{z}(x)dz=\int_{A}\lim_{x\to\infty}f_{Z}(z)x^{-\delta}L_{z}(x )dz=0. \tag{53}\] **Proof of Theorem 17** We will first assume that \(\xi_{F}>0\). Since \(\bar{F}(x)=x^{-\frac{1}{\xi_{F}}}L_{F}(x)\), for every \(\epsilon>0\): \[\begin{array}{c}\frac{\bar{F}(x)}{x^{-\frac{1}{\xi_{lo}-\epsilon}}}=\frac{x^{- \frac{1}{\xi_{F}}}L_{F}(x)}{x^{-\frac{1}{\xi_{lo}-\epsilon}}}=\frac{\int_{A}f_ {\boldsymbol{Z}}(\boldsymbol{z})x^{-\frac{1}{\xi_{\boldsymbol{z}}}}L_{ \boldsymbol{z}}(x)d\boldsymbol{z}}{x^{-\frac{1}{\xi_{lo}-\epsilon}}}=\\ \int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{-\frac{1}{\xi_{ \boldsymbol{z}}}+\frac{1}{\xi_{lo}-\epsilon}}L_{\boldsymbol{z}}(x)d \boldsymbol{z}=\int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\alpha( \boldsymbol{z})}L_{\boldsymbol{z}}(x)d\boldsymbol{z}.\end{array} \tag{54}\] We notice that \(\xi_{\boldsymbol{z}}\geq\xi_{lo}>\xi_{lo}-\epsilon\implies-\frac{1}{\xi_{ \boldsymbol{z}}}\geq-\frac{1}{\xi_{lo}}>-\frac{1}{\xi_{lo}-\epsilon}\) hence \(\alpha(\boldsymbol{z})=-\frac{1}{\xi_{\boldsymbol{z}}}+\frac{1}{\xi_{lo}- \epsilon}>0\). Considering that \[\lim_{x\to\infty}\frac{\bar{F}(x)}{x^{-\frac{1}{\xi_{lo}-\epsilon}}}=\lim_{x \to\infty}\int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\alpha(\boldsymbol{z}) }L_{\boldsymbol{z}}(x)dz, \tag{55}\] by using Fatou's lemma: \[\lim_{x\to\infty}\int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\alpha( \boldsymbol{z})}L_{\boldsymbol{z}}(x)d\boldsymbol{z}\geq\int_{A}\lim_{x\to \infty}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\alpha(\boldsymbol{z})}L_{ \boldsymbol{z}}(x)d\boldsymbol{z}=\infty, \tag{56}\] we get \[\lim_{x\to\infty}\frac{x^{-\frac{1}{\xi_{lo}-\epsilon}}}{\bar{F}(x)}=0, \tag{57}\] implying \[\lim_{x\to\infty}\frac{x^{-\frac{1}{\xi_{lo}-\epsilon}}}{x^{-\frac{1}{\xi_{ \bar{\xi}_{lo}}}}L_{F}(x)}=\lim_{x\to\infty}\frac{x^{-\frac{1}{\xi_{lo}- \epsilon}+\frac{1}{\xi_{F}}}}{L_{F}(x)}=0, \tag{58}\] therefore \[\xi_{lo}-\epsilon<\xi_{F},\forall\epsilon>0\text{ thus }\xi_{lo}\leq\xi_{F}. \tag{59}\] Now we turn to prove that \(\xi_{F}\leq\xi_{up}\). 
As before, \[\begin{array}{c}\frac{\bar{F}(x)}{x^{-\frac{1}{\xi_{up}+\epsilon}}}=\frac{x ^{-\frac{1}{\xi_{F}}}L_{F}(x)}{x^{-\frac{1}{\xi_{up}+\epsilon}}}=\frac{\int_{ A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{-\frac{1}{\xi_{\boldsymbol{z}}}}L_{ \boldsymbol{z}}(x)d\boldsymbol{z}}{x^{-\frac{1}{\xi_{up}+\epsilon}}}=\\ \int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{-\frac{1}{\xi_{\boldsymbol{z}}} +\frac{1}{\xi_{up}+\epsilon}}L_{\boldsymbol{z}}(x)d\boldsymbol{z}=\int_{A}f_ {\boldsymbol{Z}}(\boldsymbol{z})x^{\beta(\boldsymbol{z})}L_{\boldsymbol{z}}( x)d\boldsymbol{z}.\end{array} \tag{60}\] We notice that \(\xi_{\boldsymbol{z}}\leq\xi_{up}<\xi_{up}+\epsilon\implies-\frac{1}{\xi_{ \boldsymbol{z}}}\leq-\frac{1}{\xi_{up}}<-\frac{1}{\xi_{up}+\epsilon}\) hence \(\beta(\boldsymbol{z})=-\frac{1}{\xi_{\boldsymbol{z}}}+\frac{1}{\xi_{up}+ \epsilon}<-\delta<0\). This last inequality, combined with the fact that the family \(\{L_{\boldsymbol{z}}(x)|x\in\mathbb{R}\}\) is \(\gamma\)-uniformly sub-polynomial, implies that \[f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\beta(\boldsymbol{z})}L_{\boldsymbol{z}}( x)\leq f_{\boldsymbol{Z}}(\boldsymbol{z})x^{-\delta}L_{\boldsymbol{z}}(x)\leq f_{ \boldsymbol{Z}}(\boldsymbol{z})\gamma, \tag{61}\] for some \(\gamma>0\). Since \(\int_{\boldsymbol{z}}f_{\boldsymbol{Z}}(\boldsymbol{z})\gamma d\boldsymbol{z}= \gamma<\infty\), by dominated convergence \[\lim_{x\to\infty}\frac{\bar{F}(x)}{x^{-\frac{1}{\xi_{up}+\epsilon}}}=\lim_{x \to\infty}\int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z})x^{\beta(\boldsymbol{z})}L _{\boldsymbol{z}}(x)d\boldsymbol{z} \tag{62}\] \[\lim_{x\to\infty}\int_{A}f_{\mathbf{Z}}(\mathbf{z})x^{\beta(\mathbf{z})}L_{\mathbf{z}}(x)d\mathbf{z}= \int_{A}\lim_{x\to\infty}f_{\mathbf{z}}(\mathbf{z})x^{\beta(\mathbf{z})}L_{\mathbf{z}}(x)d\mathbf{z}=0, \tag{63}\] meaning \[\lim_{x\to\infty}\frac{\bar{F}(x)}{x^{-\frac{1}{\xi_{up}+\epsilon}}}=0, \tag{64}\] which implies \[\lim_{x\to\infty}\frac{x^{-\frac{1}{\xi_{F}}}L_{F}(x)}{x^{-\frac{1}{\xi_{up}+ \epsilon}}}=\lim_{x\to\infty}x^{\frac{1}{\xi_{up}+\epsilon}-\frac{1}{\xi_{F}}} L_{F}(x)=0, \tag{65}\] therefore we get \[\xi_{up}+\epsilon>\xi_{F},\forall\epsilon>0\text{ hence }\xi_{F}\leq\xi_{up}. \tag{66}\] Now we prove that indeed \(\xi_{F}>0\). It is simple to show that \(\xi_{F}\) cannot be negative. Indeed, if \(\xi_{F}\) is negative, it means that \(F\) has finite support which is not possible as for each fixed \(x\), we have \(F_{\mathbf{z}}(x)>0,\forall\mathbf{z}\in A\), therefore \(\forall x\in\mathbb{R},F(x)>0\). Proving that \(\xi_{F}\neq 0\) is slightly less trivial. For every distribution \(G_{0}\in MDA(0)\) and for \(\epsilon<\xi_{lo}\) \[\frac{\bar{F}(x)}{\bar{G}_{0}(x)}=\frac{\bar{F}(x)}{x^{-\frac{1}{\epsilon}}} \frac{x^{-\frac{1}{\epsilon}}}{\bar{G}_{0}(x)}=\frac{\int_{A}f_{\mathbf{Z}}(\mathbf{z })x^{-\frac{1}{\xi_{\mathbf{z}}}}L_{\mathbf{z}}(x)d\mathbf{z}}{x^{-\frac{1}{\epsilon}}} \frac{x^{-\frac{1}{\epsilon}}}{\bar{G}_{0}(x)}. \tag{67}\] As before we can prove that the first fraction \(\frac{\bar{F}(x)}{x^{-\frac{1}{\epsilon}}}\to\infty\). The expression \(\frac{x^{-\frac{1}{\epsilon}}}{\bar{G}_{0}(x)}\) goes to \(\infty\) as well due to Lemma 12, thus \[\lim_{x\to\infty}\frac{\bar{F}(x)}{\bar{G}_{0}(x)}=\infty. \tag{68}\] If \(\xi_{F}\) was \(0\), then for some \(G_{0}\in MDA(0)\) we would have \[\lim_{x\to\infty}\frac{\bar{F}(x)}{\bar{G}_{0}(x)}=\lim_{x\to\infty}1=1, \tag{69}\] hence \(\xi_{F}\neq 0\). 
Finally we prove that, if \(\xi_{\mathbf{z}}\) is continuous in \(\mathbf{z}\) and \(\xi_{\max}\) exists, then we have \(\xi_{F}=\xi_{\max}\). We will first separate \(A\) in two sets \(A_{1},A_{2}\), where \(A_{1}=\{\mathbf{z}|\xi_{\max}-\lambda\leq\xi_{\mathbf{z}}\leq\xi_{\max}\}\) and \(A_{2}=\{\mathbf{z}|\xi_{lo}\leq\xi_{\mathbf{z}}<\xi_{\max}-\lambda\}\). Since \(\xi_{\mathbf{z}}\) is continuous, then the pre-image of each of the measurable sets \([\xi_{\max}-\lambda,\xi_{\max}],[\xi_{lo},\xi_{\max}-\lambda)\) will be measurable. In addition, since \([\xi_{\max}-\lambda,\xi_{\max}]\) and \([\xi_{lo},\xi_{\max}-\lambda)\) contain an open set, then so will \(A_{1}\) and \(A_{2}\), implying that \(p_{i}=\mathbb{P}(A_{i})>0\), where \(i\in\{1,2\}\). Thus, \[\begin{split}\bar{F}(x)=\int_{A}f_{\mathbf{Z}}(\mathbf{z})\bar{F}_{z}(x)d \mathbf{z}=p_{1}\int_{A_{1}}\frac{f_{\mathbf{Z}}(\mathbf{z})}{p_{1}}\bar{F}_{\mathbf{z}}(x)d \mathbf{z}+p_{2}\int_{A_{2}}\frac{f_{\mathbf{Z}}(\mathbf{z})}{p_{2}}\bar{F}_{\mathbf{z}}(x)d \mathbf{z}\\ =p_{1}\bar{F}_{1}(x)+p_{2}\bar{F}_{2}(x).\end{split} \tag{70}\] From the first part of the Theorem: \(\xi_{1}\in[\xi_{\max}-\lambda,\xi_{\max}]\), and \(\xi_{2}\in[\xi_{lo},\xi_{\max}-\lambda]\), where \(F_{i}\in MDA(\xi_{i}),\ i=1,2\). On the other hand Theorem 14 implies that \(\xi_{F}=\xi_{1}\), therefore \(\xi_{F}\in[\xi_{\max}-\lambda,\xi_{\max}]\) for all \(\lambda>0\). We conclude that \(\xi_{F}=\xi_{\max}\). **Proof of Lemma 19** We assume that \(\xi_{F}>\epsilon\). Then as in the earlier derivations, due to dominated convergence and Lemmas 11 and 12, for any \(\delta>0\), we get: \[\begin{split}\lim_{x\to\infty}\frac{x^{-\frac{1}{\xi_{F}}}L_{F}(x)} {x^{-\frac{1}{\epsilon+\delta}}}=\lim_{x\to\infty}\frac{\bar{F}(x)}{x^{-\frac{ 1}{\epsilon+\delta}}}=\lim_{x\to\infty}\int_{A}f_{\boldsymbol{Z}}(\boldsymbol {z})\frac{\bar{F}_{\boldsymbol{z}}(x)}{x^{-\frac{1}{\epsilon+\delta}}}d \boldsymbol{z}\\ =\lim_{x\to\infty}\int_{A^{+}}f_{\boldsymbol{Z}}(\boldsymbol{z}) \frac{\bar{F}_{\boldsymbol{z}}(x)}{x^{-\frac{1}{\epsilon+\delta}}}d \boldsymbol{z}+\lim_{x\to\infty}\int_{A^{-}}f_{\boldsymbol{Z}}(\boldsymbol{z} )\frac{\bar{F}_{\boldsymbol{z}}(x)}{x^{-\frac{1}{\epsilon+\delta}}}d \boldsymbol{z}\\ =\int_{A^{+}}\lim_{x\to\infty}f_{\boldsymbol{Z}}(\boldsymbol{z}) \frac{x^{-\frac{1}{\epsilon_{\boldsymbol{z}}}}}{x^{-\frac{1}{\epsilon+\delta}} }L_{\boldsymbol{z}}(x)d\boldsymbol{z}+\lim_{A^{-}}\lim_{x\to\infty}f_{ \boldsymbol{Z}}(\boldsymbol{z})\frac{\bar{F}_{\boldsymbol{z}}(x)}{x^{-\frac{ 1}{\epsilon+\delta}}}d\boldsymbol{z}=0.\end{split} \tag{71}\] therefore \(\xi_{F}<\epsilon+\delta,\forall\delta>0\), contradicting our assumption \(\xi_{F}>\epsilon\). **Proof of Theorem 21** The proof is similar to that of the last statement in Theorem 17. We will first separate \(A\) in two sets \(A_{1},A_{2}\), where \(A_{1}=\{\boldsymbol{z}|\xi_{\max}-\lambda\leq\xi_{\boldsymbol{z}}\leq\xi_{\max}\}\) and \(A_{2}=\{\boldsymbol{z}|\xi_{\boldsymbol{z}}<\xi_{\max}-\lambda\}\). Since \(\xi_{\boldsymbol{z}}\) is continuous, then the pre-image of each of the measurable sets \([\xi_{\max}-\lambda,\xi_{\max}],(-\infty,\xi_{\max}-\lambda)\), will be measurable. In addition, since \([\xi_{\max}-\lambda,\xi_{\max}]\) and \((-\infty,\xi_{\max}-\lambda)\) contain an open set, then so will \(A_{1}\) and \(A_{2}\), implying that \(p_{i}=\mathbb{P}(A_{i})>0\), where \(i\in\{1,2\}\). 
\[\begin{split}\bar{F}(x)=\int_{A}f_{\boldsymbol{Z}}(\boldsymbol{z} )\bar{F}_{z}(x)d\boldsymbol{z}+p_{1}\int_{A_{1}}\frac{f_{\boldsymbol{Z}}( \boldsymbol{z})}{p_{1}}\bar{F}_{\boldsymbol{z}}(x)d\boldsymbol{z}+& p_{2}\int_{A_{2}}\frac{f_{\boldsymbol{Z}}( \boldsymbol{z})}{p_{2}}\bar{F}_{\boldsymbol{z}}(x)d\boldsymbol{z}\\ &=p_{1}\bar{F}_{1}(x)+p_{2}\bar{F}_{2}(x).\end{split} \tag{72}\] Based on Theorem 17 and Lemma 19: \(\xi_{1}=\xi_{\max}\), and \(\xi_{2}\in(-\infty,\xi_{\max}-\lambda]\), where \(F_{i}\in MDA(\xi_{i}),\;i=1,2\). From Theorem 14, we conclude that \(\xi_{F}=\xi_{\max}\). The last statement in the Theorem, that is, if \(\xi_{\max}\leq 0\) then \(\xi_{F}\leq 0\), is simply Corollary 20. **Proof of Proposition 23** In the case that \(\xi_{X}>0\), based on our assumptions there exists \(L(x)\) such that \[\mathbb{P}(X>x)=\bar{F}_{X}(x)=x^{-\frac{1}{\xi_{X}}}L_{1}(x). \tag{73}\] Therefore \[\bar{F}_{Y}(x)=\mathbb{P}(Y>x)=\mathbb{P}(X^{\alpha}>x)=\mathbb{P}(X>x^{\frac {1}{\alpha}})=(x^{\frac{1}{\alpha}})^{-\frac{1}{\xi_{X}}}L_{1}(x^{\frac{1}{ \alpha}})=x^{-\frac{1}{\alpha\xi_{X}}}L_{2}(x). \tag{74}\] We conclude that \(Y\in MDA(\alpha\xi_{X})\). On the other hand if \(\xi_{X}\leq 0\) then \(\xi_{Y}\leq 0\), because if \(\xi_{Y}>0\), then from the first part we would have \(\xi_{X}=\frac{1}{\alpha}\xi_{Y}>0\). **Proof of Proposition 24** We will first prove the case when \(p=1\). If we fix \(y\) and denote with \(\xi_{y}^{h-},\xi_{y}^{h+}\) the shape parameters of the left and right tail of \(p(\hat{f}_{\boldsymbol{V}}(\boldsymbol{X})\big{|}y)\), then assuming that at least one of them is positive, from Proposition 21 we know that the tail shape parameter of \(p(|\hat{f}_{\mathbf{V}}(\mathbf{X})||y)\) is \(\xi^{h}_{y}=\max\{\xi^{h-}_{y},\xi^{h+}_{y}\}\). We notice now that \(\xi^{h-}_{y},\xi^{h+}_{y}\) are the right and left tail shape parameters of \(p(-\hat{f}_{\mathbf{V}}(\mathbf{X})|y)\), therefore they are the right and left tail shape parameters of the distribution \(p(y-\hat{f}_{\mathbf{V}}(\mathbf{X})|y)\). Due to this, if we denote with \(\xi^{g}_{y}\) the tail shape parameter of \(p(|y-\hat{f}_{\mathbf{V}}(\mathbf{X})||y)\), using Proposition 21 once again we have that \(\xi^{g}_{y}=\max\{\xi^{g+}_{y},\xi^{g-}_{y}\}=\max\{\xi^{h-}_{y},\xi^{h+}_{y} \}=\xi^{h}_{y}\), where \(\xi^{g-}_{y},\xi^{g+}_{y}\) are the left and right shape parameters of \(p(y-\hat{f}_{\mathbf{V}}(\mathbf{X})|y)\). If both \(\xi^{h-}_{y},\xi^{h+}_{y}\) are non-positive then from Proposition 21, \(\xi^{h}_{y}\) is non-positive, and furthermore \(\xi^{g}_{y}\) is non-positive, otherwise we could go in the reverse direction and prove that \(\xi^{g}_{y}>0\) implies that either \(\xi^{g-}_{y}=\xi^{h+}_{y}\) is positive, or that \(\xi^{g+}_{y}=\xi^{h-}_{y}\) is positive. Now, we denote by \(G_{y}(s)\) the distribution of \(|y-\hat{f}_{\mathbf{V}}(\mathbf{X})|\) given \(y\), and prove that the family \(\{G_{y}(s)|y\in S\}\) has stable cross-tail variability. For each \(y\) we denote with \(t_{0}(y)\) the smallest value after which the sub-polynomial assumption is satisfied by \(F_{y}(t)\). Similarly we define \(s_{0}(y)\) for \(G_{y}(s)\). Since the family \(\{F_{y}(t)|y\in S\}\) has stable cross-tail variability, then each such \(t_{0}(y)\) exists, and furthermore the set \(\{t_{0}(y)|y\in S\}\) is bounded from above. 
Since each \(s_{0}(y)\) is only displaced by a magnitude of \(|y|\) from \(t_{0}(y)\), and since the set \(S\) is bounded, then we can conclude that \(\{s_{0}(y)|y\in S\}\) is bounded from above. We denote \(\xi^{g},\xi^{h}\) the tail shape parameters of \(|Y-\hat{f}_{\mathbf{V}}(\mathbf{X})|\) and \(|\hat{f}_{\mathbf{V}}(\mathbf{X})|\) respectively. Using Theorem 21 twice we get that if there is at least one \(\xi^{h}_{y}=\xi^{g}_{y}>0\) then \(\xi^{h}=\max\{\xi^{h}_{y}|y\in S\}=\max\{\xi^{g}_{y}|y\in S\}=\xi^{g}>0\), otherwise \(\xi^{h}\leq 0,\xi^{g}\leq 0\). Finally we finish the proof by applying Proposition 22 on \(|Y-\hat{f}_{\mathbf{V}}(\mathbf{X})|\) and \(|\hat{f}_{\mathbf{V}}(\mathbf{X})|\). ## Appendix B: Examples where the regularity conditions do not hold Below we give examples where the regularity conditions do not hold: **Example 1**: Let \(f_{U}(u)\) be a uniform distribution, and \(g_{u}(w)\) an exponential distribution with parameter \(\frac{1}{u}\). Clearly, the expectation of \(g_{u}(w)\) at each \(u\in(0,1)\) exists. However for \[h(w)=\int_{0}^{1}f_{U}(u)g_{u}(w)du=\int_{0}^{1}ue^{-uw}du \tag{75}\] the expectation is \[\int_{0}^{\infty}\int_{0}^{1}wf_{U}(u)g_{u}(w)dudw=\int_{0}^{1}\int_{0}^{ \infty}wue^{-uw}dwdu=\int_{0}^{1}\frac{1}{u}du \tag{76}\] In this example, we can see that even though all the distributions \(g_{u}(w)\) have shape parameter \(0\), the shape parameter of \(h(w)\) is bigger or equal to one. This is because the beginning of the exponential behaviour of the tail is delayed indefinitely across the elements of the family, violating the \(\gamma\)-uniform sub-polynomial assumption. Below we give an example of a family of slowly-varying functions \(\{L_{z}(x)|z\in A\}\), where \(A\) is compact and \(L_{z}(x)\) is continuous in \(x\) and \(z\), but \(\{L_{z}(x)|z\in A\}\) is not \(\gamma\)-uniformly sub-polynomial. In this case, the non slowly-varying behaviour (non sub-polinomiality) of \(L_{z}(x)\), or in other words, the tail of \(F_{z}(x)\), is postponed indefinitely across the family of \(\{F_{z}(x)|z\in A\}\) **Example 2**: Let \(L_{z}(x)\), for \(z\in[0,1]\), be defined as below: \[L_{z}(x)=\begin{cases}1+zx^{4-(z-\frac{1}{x})^{2}}&\text{for }x\in(1,\frac{1}{z}) \\ 1+\frac{1}{z^{3}}&\text{for }x\in(\frac{1}{z},\infty)\end{cases} \tag{77}\] when \(z\neq 0\) and \(L_{0}=1\) for \(x\in(\frac{1}{z},\infty)\). For \(x^{-1}\) we define \(F_{z}(x)=x^{-1}L_{z}(x)\), that is: \[F_{z}(x)=\begin{cases}x^{-1}+zx^{3-(z-\frac{1}{x})^{2}}&\text{for }x\in(1, \frac{1}{z})\\ x^{-1}+\frac{1}{z^{3}}x^{-1}&\text{for }x\in(\frac{1}{z},\infty)\end{cases} \tag{78}\] when \(z\neq 0\) and \(F_{0}=x^{-1}\text{ for }x\in(\frac{1}{z},\infty)\). One can check that \(F_{z}(x)\) and \(L_{z}(x)\) are continuous in \(z\). On the other hand for a given \(z\), \(F_{z}(\frac{1}{z})=z+z^{-2}\), meaning that \(F_{z}(\frac{1}{z})\) tends to infinity, when \(z\) tends to zero. Therefore \(\{L_{z}(x)|z\in A\}\) is not \(\gamma\)-uniformly sub-polynomial. ## Appendix C: Examples where the regularity conditions hold Below we give examples where the regularity conditions do hold: **Example 3**: Let \(\bar{F}_{z}(x)=x^{-z}=x^{-\frac{1}{\frac{1}{2}-\xi_{z}}}\) for \(z\in(1,\infty)\), and let \(\bar{F}(x)=e\int_{1}^{\infty}e^{-z}\bar{F}_{z}(x)dz\). Then \(\bar{F}(x)=x^{-1}\frac{1}{1+\ln x}=x^{-1}L(x)\), where \(L(x)=\frac{1}{\ln x}\) is slowly varying as both \(1\) and \(\ln x\) are slowly varying. 
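A quick numerical check of Example 3 (an illustrative verification, assuming scipy is available): the integral indeed reduces to \(x^{-1}/(1+\ln x)\), and the factor \(1/(1+\ln x)\) behaves like the stated \(1/\ln x\) for large \(x\), so it is slowly varying.

```python
import numpy as np
from scipy.integrate import quad

# Example 3: e * int_1^inf e^{-z} x^{-z} dz should coincide with x^{-1} / (1 + ln x).
for x in [2.0, 10.0, 1e3, 1e6]:
    val, _ = quad(lambda z: np.exp(-z) * x ** (-z), 1.0, np.inf)
    print(x, np.e * val, 1.0 / (x * (1.0 + np.log(x))))
```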
**Example 4**: Let \(\bar{F}_{z}(x)=x^{-z}\ln x^{z}\) for \(z\in(1,2)\), and let \(\bar{F}(x)=\int_{1}^{2}\bar{F}_{z}(x)dz\). Then \(\bar{F}(x)=x^{-1}-2x^{-2}+x^{-1}\frac{1}{\ln x}-x^{-2}\frac{1}{\ln x}=x^{-1}( 1-2x^{-1}+\frac{1}{\ln x}-x^{-1}\frac{1}{\ln x})=x^{-1}L(x)\), where \(L(x)=1-2x^{-1}+\frac{1}{\ln x}-x^{-1}\frac{1}{\ln x}\) is slowly varying. ## Appendix D: Moment based motivation In Proposition 24, we showed that under certain conditions, we could estimate the shape of the tail of the distribution of \(W_{\mathbf{V}}(\mathbf{U})\) without using test labels. This can also be motivated from the moments of \(W_{\mathbf{V}}(\mathbf{U})\). Indeed, conditioning on the test label \(y\) we have \[\mathbb{E}[W_{\mathbf{V}}^{p}(\mathbf{U})|Y=y]=E_{\mathbf{V}}[(y-\hat{f}_{ \mathbf{V}}(\mathbf{x}))^{p}|y] \tag{79}\] \[=\sum_{k=0}^{p}\binom{p}{k}y^{k}(-1)^{p-k}E_{\mathbf{V}}[\hat{f}_{\bm {V}}^{p-k}(\mathbf{x})|y] \tag{80}\] We can see that for test label \(y\), if the moment \(p\) of \(\hat{f}_{\mathbf{V}}(\mathbf{x})\) given \(y\) exists then the moment \(p\) of \(W_{\mathbf{V}}(u)\) given \(y\) exists. If each \(E_{\mathbf{V}}[\hat{f}_{\mathbf{V}}^{j}(\mathbf{x})|y]\), \(j\in\{1,...,p\}\) changes continuously with \(y\) then \(\mathbb{E}[W_{\mathbf{V}}^{p}(\mathbf{U})|y]\) is continuous with respect to \(y\). Further assuming that the support of \(Y\) is compact, then moment \(p\) of \(W_{\mathbf{V}}(\mathbf{U})\), that is, \(\mathbb{E}[W_{\mathbf{V}}^{p}(\mathbf{U})]=\mathbb{E}_{y}\mathbb{E}[W_{\mathbf{V}}^{p}(\bm {U})|Y=y]\) will exist as well. Under these conditions, if \(\hat{f}_{\mathbf{V}}(\mathbf{x})\) is a non-negative function, then the existence of \(\mathbb{E}[\hat{f}_{\mathbf{V}}^{p}(\mathbf{x})]=\mathbb{E}_{y}\mathbb{E}[\hat{f}_{\mathbf{V} }^{p}(\mathbf{x})|y]\) guarantees the existence of \(\mathbb{E}[\hat{f}_{\mathbf{V}}^{p}(\mathbf{x})|y]\) for almost all \(y\), thus it ensures the existence of \(\mathbb{E}[W_{\mathbf{V}}^{p}(\mathbf{U})]\). ## Appendix E Reducing the variability of the estimated shape parameters It is proven in Dekkers and Haan (1989), that under certain conditions on \(k\) (in particular that \(\frac{k(n)}{n}\to 0\) as \(n\to\infty\)) the Pickands Estimator has an asymptotically Gaussian distribution: \(\sqrt{k(n)}(\hat{\xi}_{k,n}^{(P)}-\xi)\xrightarrow{d}\mathcal{N}(0,\sigma^{2} (\xi))\). This implies that for large \(n\), we roughly have \(\hat{\xi}_{k,n}^{(P)}\sim\mathcal{N}(\xi,\frac{\sigma^{2}(\xi)}{k(n)})\). Minding the size of \(n\), we can split the \(n\) samples into \(m\) groups such that \(n=m\frac{n}{m}\), and such that we still have roughly \(\hat{\xi}_{k,\frac{n}{m}}^{(P)}\sim\mathcal{N}(\xi,\frac{\sigma^{2}(\xi)}{k( \frac{n}{m})})\). Since we can estimate \(\hat{\xi}_{k,\frac{n}{m}}^{(P)}\) for each of the \(m\) groups we can define the average estimation as \(\hat{\xi}_{k,\frac{n}{m}}^{(P),avg}=\frac{1}{m}\sum_{i=1}^{m}\hat{\xi}_{k, \frac{n}{m}}^{(P),i}\). Under the assumption that samples from such groups are independent, we get that \(\hat{\xi}_{k,\frac{n}{m}}^{(P),avg}\sim\mathcal{N}(\xi,\frac{\sigma^{2}(\xi)}{ mk(\frac{n}{m})})\). Since \(k(n)=o(n)\), we can choose to reduce the variance 'linearly' by keeping \(\frac{n}{m}\) constant and increasing \(m\), instead of increasing the sub-linear \(k(n)\). This becomes quite apparent if we set \(k(n)=\log n\) or \(k(n)=\sqrt{n}\). 
Indeed, for \(k(n)=\log n\), the ratio between the variances of the direct approach and our approach is \[\frac{m\log\frac{n}{m}}{\log n}=\frac{m\log\frac{n}{m}}{\log m+\log\frac{n}{m }}=\frac{mC}{\log m+C}\to\infty \tag{81}\] as \(m\to\infty\). Similarly for \(k(n)=\sqrt{n}\), \[\frac{m\sqrt{\frac{n}{m}}}{\sqrt{n}}=\sqrt{m}\to\infty \tag{82}\] as \(m\to\infty\). Here we can see that even if we fix \(m\) and then allow each group with size \(\frac{n}{m}\) to grow as \(n\) increases, the variance is still \(\sqrt{m}\) times smaller using our approach. The asymptotically Gaussian distribution property holds in the case of the DEdH estimator if one knows that \(\xi>0\) (Hill estimator, Davis and Resnick (1984)). Furthermore, both estimators \(H_{k,n}^{(1)}\) and \(H_{k,n}^{(2)}\) in Definition 6 jointly possess this property, Dekkers et al. (1989). ## Appendix F The inadequacy of the direct POT usage on mixture distributions In this section, we illustrate two cases where cross tail estimation is necessary for proper tail shape estimation. ### Uniform Case In our experimental procedure, we randomly select samples adhering to two distinct power law distributions. Each of these distributions has a unique characteristic shape parameter - one has a shape parameter of 1, while the other possesses a shape parameter of 0.5. For our random sampling process, we afford equal probability, precisely 50%, to both these distributions. This means there is an identical chance of picking a sample from either of these power law distributions, each with their respective shape parameters. When we examine an experimental set of \(10^{3}\) sampled points from each of these distributions, the resulting pattern becomes apparent as shown in Figure 9 (left). We find that if we amalgamate all the sampled data points from both distributions into a unified array, and subsequently apply Pickands Estimator on this consolidated data set, the process yields a sub-optimal estimation of the distribution tail. The outcome is unsatisfactory as it fails to reveal the accurate shape of the tail, thereby defeating the purpose of the estimation. However, we discover that there is a noticeable enhancement in the quality of the estimation when we bolster the sample size from the initial \(10^{3}\) to a considerably larger size of \(2*10^{4}\). This increase in sample size permits us to retrieve the true shape of the distribution tail. Using CTE however, we find that a sample size of just \(10^{3}\) proves to be adequate in obtaining a satisfactory estimation of the distribution tail. As illustrated in Figure 9 (right), this method leads to an accurate estimation with a substantially smaller sample size. Therefore, our method introduces an efficient pathway towards achieving accurate estimations with fewer resources, thereby demonstrating its potential superiority over the traditional Pickands Estimator. Figure 9: Standard estimation of the shape parameter of the tails by simply applying the Pickands’ Estimator, on average, gives poor results on fewer data (left). Cross tail estimation (CTE) gives the correct estimation on average. (right). ### Non-Uniform Case Similarly, in the second experiment, we sample with 20% probability from a distribution with power law tails with shape parameter 1, and with 80% probability probability from a distribution with power law tails with shape parameter 0.5. 
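A compact sketch of this second experiment is given below (our own illustration; the exact sampling scheme, the random seed and the choice of \(k\) are assumptions). It contrasts the pooled Pickands estimate with the per-component maximum used by CTE.

```python
import numpy as np

def pickands(sample, k):
    x = np.sort(sample)
    n = len(x)
    return np.log((x[n - k] - x[n - 2 * k]) / (x[n - 2 * k] - x[n - 4 * k])) / np.log(2.0)

rng = np.random.default_rng(0)
n = 5_000                                     # points per distribution (as in the text)
heavy = rng.uniform(size=n) ** (-1.0)         # power-law tail with shape parameter 1
light = rng.uniform(size=n) ** (-0.5)         # power-law tail with shape parameter 0.5
mix = np.where(rng.uniform(size=n) < 0.2, heavy, light)   # 20% / 80% mixture

k = 50
print("direct POT on the mixture:", pickands(mix, k))      # typically off at this sample size
print("CTE (max per component):  ", max(pickands(heavy, k), pickands(light, k)))  # should land near 1
```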
When sampling \(5*10^{3}\) points from each distribution, Figure 10, we are not able to properly estimate the tail if we join all the samples together in a common array and then apply the Pickands' Estimator. But, if we increase the sample size from \(5*10^{3}\) to \(5*10^{7}\), we manage to retrieve the the true tail shape of the mixture. However, using our method, \(5*10^{3}\) samples are already sufficient to get a proper estimation. ## Appendix G Additional details with regards to Section 5.1 Below we provide Figure 11 which illustrates how \(\xi_{z}\) evolves depending on the \(\xi_{\max}\) which is given as input. The parameter \(\xi_{\max}\) takes the following 45 values \(\{-4,-4+0.1,-4+0,2,...,5\}\). Figure 10: Standard estimation of the shape parameter of the tails by simply applying the Pickands’ Estimator, on average, gives poor results on fewer data (left). Cross tail estimation (CTE) gives the correct estimation on average. (right). Figure 11: The evolution of \(\xi_{z}\) depending on the value of \(\xi_{\max}\). ## Appendix H Additional details with regards to Section 5.3 Figure 12: The performance of Gaussian process on train and test data depending on the length scale parameter. First half of the cases. Figure 13: The performance of Gaussian process on train and test data depending on the length scale parameter. Second half of the cases. Figure 14: The performance of polynomial kernels on train and test data depending on the degree. Figure 15: For each length scale parameter of the Gaussian Process, we present the variability (sorted) of the estimated shape parameters across 1000 conditional distributions (defined by the choice of training sets). Jointly, we also present the 97th percentile of the conditional distributions corresponding to each estimated shape parameter. Figure 16: We run 32 times the Gaussian Process experiment for length scale parameter value of 290. On each run, we calculate thresholds (sorted) of the 1000 conditional distributions determined by the 1000 choices of the training set, as well as their corresponding shape tail parameters. We see that higher thresholds correspond to lower shape parameters
2303.16074
Evolutionary Design of the Memory Subsystem
The memory hierarchy has a high impact on the performance and power consumption in the system. Moreover, current embedded systems, included in mobile devices, are specifically designed to run multimedia applications, which are memory intensive. This increases the pressure on the memory subsystem and affects the performance and energy consumption. In this regard, thermal problems, performance degradation and high energy consumption can cause irreversible damage to the devices. We address the optimization of the whole memory subsystem with three approaches integrated as a single methodology. Firstly, the thermal impact of the register file is analyzed and optimized. Secondly, the cache memory is addressed by optimizing the cache configuration according to the running applications, improving both performance and power consumption. Finally, we simplify the design and evaluation process of general-purpose and customized dynamic memory managers in the main memory. To this aim, we apply different evolutionary algorithms in combination with memory simulators and profiling tools. This way, we are able to evaluate the quality of each candidate solution and take advantage of the exploration of solutions given by the optimization algorithm. We also provide an experimental evaluation where our proposal is assessed using well-known benchmark applications.
Josefa Díaz Álvarez, José L. Risco-Martín, J. Manuel Colmenar
2023-03-07T10:45:51Z
http://arxiv.org/abs/2303.16074v1
# Evolutionary Design of the Memory Subsystem

###### Abstract

The memory hierarchy has a high impact on the performance and power consumption of the system. Moreover, current embedded systems, such as those in mobile devices, are specifically designed to run multimedia applications, which are memory intensive. This increases the pressure on the memory subsystem and affects performance and energy consumption. In this regard, thermal problems, performance degradation and high energy consumption can cause irreversible damage to the devices. We address the optimization of the whole memory subsystem with three approaches integrated as a single methodology. Firstly, the thermal impact of the register file is analyzed and optimized. Secondly, the cache memory is addressed by optimizing the cache configuration according to the running applications, improving both performance and power consumption. Finally, we simplify the design and evaluation process of general-purpose and customized dynamic memory managers in the main memory. To this aim, we apply different evolutionary algorithms in combination with memory simulators and profiling tools. This way, we are able to evaluate the quality of each candidate solution and take advantage of the exploration of solutions given by the optimization algorithm. We also provide an experimental experience where our proposal is assessed using well-known benchmark applications.

keywords: NSGA-II, Grammatical Evolution, Hardware design optimization, Memory subsystem design

## 1 Introduction

The memory hierarchy has a significant impact on the performance and energy consumption of the system; it is estimated to account for about 50% of the total energy consumption of the chip [1]. This makes the memory subsystem one of the most important targets for improving both performance and energy consumption. Concerns such as thermal issues or high energy consumption can cause significant performance degradation, as well as irreversible damage to the devices, thereby increasing the energy cost. Previous works have shown that saving energy in the memory subsystem can effectively control the transistor aging effect and can significantly extend the lifetime of the internal structures [2]. Technological changes combined with the development of communications have led to the great expansion of mobile devices such as smartphones and tablets. Mobile devices have evolved rapidly to adapt to the new requirements, giving support to multimedia applications. These devices are supplied with embedded systems, which are mainly battery-powered and usually have fewer computing resources than desktop systems. Additionally, multimedia applications are usually memory intensive, so they have high performance requirements, which imply a high energy consumption. These features increase the pressure on the whole memory subsystem. Processor registers, smaller in size, work at the same speed as the processor and consume less energy compared with other levels of the memory subsystem. However, the energy consumption and access time rise when the register file size increases due to a higher number of registers and ports. Regarding the cache memory, it has been identified as a cold area of the chip, although the peripheral circuits and the size of the cache are the factors that most influence temperature increases when specific applications stress the cache with accesses [3]. However, the cache memory affects both performance and energy consumption.
In fact, energy consumption of the on chip cache memory is considered to be responsible of 20% to 30% of the total consumption in the chip [4]. A suitable cache configuration will improve both metrics. In terms of performance, the main memory is the slowest component compared with the cache memory and processor registers. Running programs request the allocation and deallocation of memory blocks, and the Dynamic Memory Manager (DMM) is in charge of this task. Current multimedia applications have highly dynamic memory requirements, so optimizing the memory allocator is a crucial task. Solving a memory allocation request is a complex task and the allocation algorithm must minimize internal and external fragmentation problems. Therefore, efficient tools must be provided to DMM designers for evaluating the cost and the efficiency of DMMs, facilitating the design of customized DMMs. In this paper we present a methodology based on Evolutionary Algorithms (EA), which is divided into three layers tackling different components of the memory hierarchy and performing the optimization process of each layer according to the running applications. Then, the first layer is the registers file, the second is the cache memory and the last one is the DMM, which works on the main memory. Figure 1 shows the three optimization layers surrounded with different dashed lines, and the tools involved within each optimization process, which will be deeply explained in the rest of the paper. In a previous work [5], we presented an approach based on Grammatical Evolution (GE) with a wide design space, where the complete set of parameters defined is considered and a specific cache memory configuration was chosen as a baseline. The GE approach had good results, in the absence of other results to be compared with. The problem is clearly multi-objective and thus the GE approach considered a weighted objective function. Hence, the optimization problem was later addressed through a multi-objective approach with NSGA-II [6]. On the one hand, this approach was customized with a fixed cache size for both the instructions and data cache. On the other hand, a different cache memory configuration was used as the baseline. Thus, GE and NSGA-II approaches use a different set of parameters. As a consequence, results could not be directly compared in order to take a decision. In this paper we provide several new contributions regarding the cache design. Firstly, we perform the experiments using the NSGA-II algorithm in the same conditions of the GE proposal, both the design space and the baseline. This configuration allows a direct comparison among both algorithms. Additionally, two baseline caches, included in general purpose devices, are added to the analysis because the first one belonged to a specific purpose device. Finally, we have added a statistical test to verify the relevance of the results. Therefore, this work completes the set of tests previously made and provide us enough information to decide the algorithm to be applied in the cache design optimization. In addition to the cache design, we propose in this paper to apply evolutionary techniques to the register file configuration and the DMM which, considered in conjunction with the cache, comprise the whole memory subsystem in a computer. For both the register file and the DMM we propose the algorithms, perform the experiments and analyze the results on both objectives of our fitness function: execution time and energy consumption. 
Besides, we have incorporated statistical tests to verify the relevance or our results in both the register file and the DMM optimizations. Up to our knowledge, a complete 3-layer approach as the one we propose has not been reported previously in the literature. We have also focused our experiments on the ARM architecture, which is present in many of the current embedded multimedia systems. Selected applications have been adapted in order to better fit to each one of the memory layers that we optimize. As we will show later in this work, the cache memory policies and the DMMs are most sensitive to improvement. All the algorithms are coded in Java using the JECO library [7]. Besides, the experimentation has been conducted in a computer provided with an Intel i5 660 processor running at 3.3 GHz, with 8GB of RAM and using the Ubuntu Desktop 14.04 operating system. The rest of the paper is organized as follows. Next section summarizes the related work on the topic. Section 3 describes the thermal, performance and energy models applied. Section 4 addresses the thermal impact on the processor registers. Section 5 presents the optimization process aimed to automatically design cache configurations in order to improve performance and reduce energy consumption. Section 6 describes the optimization process to automatically evaluate and design customized DMMs, which will improve performance and reduce the memory fragmentation problem. In Section 7, we present our conclusions and describe the future work. Figure 1: Memory subsystem layers and tools involved in this optimization methodology. First layer is the register file structure, the second one corresponds to the cache memory and the third layer is the dynamic memory which works on the main memory. ## 2 Related work Many works can be found in the literature regarding the memory optimization. Next, we will review the closest literature to our work, separating the papers into the three memory layers we have studied. Concerns about thermal problems, performance degradation and high energy consumption are neither new nor insignificant in the memory subsystem. The register file is identified as a component that consumes high energy, between 15%-36% in embedded processors [8]. Multimedia applications increase the exchange of information between the register file and the next level of the memory hierarchy. As mentioned in [9], the number of concurrent accesses is increased and thereby the chip temperature and the need for power dissipation. So, a lower energy consumption reduces the temperature and the need of power dissipation. Thus, system reliability and performance are improved. This problem has been addressed with different hardware and software techniques. Atienza et al. [10] are focused on DSP (Digital Signal Processing) and ASIP (Application-Specific Instruction-Set Processor) architectures, specially led to multimedia applications in embedded systems. The authors apply the DVS (Dynamic Voltage Scale) technique to change to low-power state the unused registers. The energy consumption improvement they report is over 60%. This work focuses on the energy consumption, but not on the temperature. Zhou et al. [11] assign to the compiler the task of distributing the registers access, within the limits of each registers file, in a multi-bank organization. This process is made after the traditional allocation phase and during the registers allocation phase. 
The proposal, designed for a limited set of VLIW or RISC architectures, reduces the power density and the peak temperature between 4\({}^{\circ}\)C and 7\({}^{\circ}\)C. Recently, Sabry et al. [12] proposed a new mechanism to distribute uniformly the register accesses. This approach, implemented in a commercial compiler, reduces hot spots by 91% and the mean and peak temperature by 11%. In contrast with previously mentioned works, we use an analytic process to measure the thermal impact of register accesses on the processor temperature. After that, we apply an evolutionary optimization algorithm, which generates a re-assignation that exchanges register accesses to the register file. Therefore, highly accessed registers are spaced far apart, and as a consequence, temperature is reduced. Cache memory behavior is determined by its parameters, which form the so-called cache configuration. Therefore, the problem is to find the optimal cache configuration for a set of applications running on a system. This will not only improve the performance and the energy consumption, but will also provide long-term reliability. Cache memory optimization has been widely addressed. New developments in well-known techniques allow optimizing the cache memory. Wang et al. [13] presented Futility Scaling, a new replacement-based partitioning scheme. This scheme controls the size of the partition and it is able to maintain both large partition and high associativity. Adegbija and Gordon [14] designed a phase-based cache tuning algorithm for multimedia applications to determine the best cache configuration for each execution phase. Phase classification breaks applications execution using a fixed tuning interval. The proposed algorithm analyses each configuration for one interval to determine the best configuration or the next one to be explored. Wang et al. [15] proposed dynamic cache reconfiguration for real time embedded systems. They minimize energy consumption performing a static analysis at runtime. Zang et al. [16] applied _way-concatenation_ to reconfigure cache in embedded systems by software and minimize the energy consumption. Recently, new technologies and core-based processor technologies, such as ARM946E-S TM [17], allow changing the cache configuration for each application. Changes affect the main parameters: capacity, block size and associativity. However, every application has a different memory access pattern. Hence, an efficient algorithm is needed to determine the optimal values for each parameter on each application. Previously mentioned approaches optimize a few number of parameters: cache size, block size and associativity, each one with a few possible values. On the other hand, dynamic reconfiguration adds complexity to the memory subsystem design, usually being penalized with extra cost in execution time. Besides, concurrent applications increase this penalty because of the multiple calls to the reconfiguration. In relation to static profiling, Feng et al. [18] applied a new cache replacement policy to perform the replacement decision based on the reuse of information of the cache lines and the requested data developing to reuse information predictors: a profile-based static predictor and a runtime predictor. However, these approaches only improve the replacement algorithm of the cache. Recently, Gordon-Ross et al. [19] studied the interaction between code reordering and cache configuration, obtaining excellent results. 
However, this technique is applied to the instructions cache, and our systematic optimization method is applied to the full configuration of both the instructions and data caches. None of these approaches simultaneously optimized cache performance and energy consumption for a target set of applications, as our methodology does. Our proposal optimizes cache size, line size, associativity, replacement algorithm and search algorithm for both instructions and data cache, and also write policy for data cache. Optimizing the dynamic memory management subsystem is considered a crucial task to efficiently execute multimedia applications on all kind of systems, including embedded systems. In this regard, Del Rosso [20] evaluated the performance of different DMMs on embedded real time systems. Metrics applied are the internal fragmentation and a new metric named _performance speed metric_. However, energy consumption is not analyzed. Atienza et al. in [21] proposed a method to evaluate the memory use and energy consumption by a DMM, but it needs to be implemented and integrated in the target application. Risco et al. [22] presented an optimization algorithm, based on Grammatical Evolution (GE), to design customized DMMs. Each DMM was evaluated by a DMM simulator [23]. Although this approach allows us to improve the average performance, memory use and energy consumption of the memory subsystem, the classification process returns a complex taxonomy of DMMs. Moreover, the applications profiling is made by overloading _malloc()_ and _free()_ functions, which requires applications to be modified and re-compiled for each target application, which is a time consuming and error prone task. We present a methodology, which does not need to be integrated in the target application, to automatically evaluate and design customized DMMs. Our methodology is based on GE and also performs a static profiling of applications. In fact, our optimization process produces customized DMMs that are better or equal than well known general purpose DMMs, such as Kingsley (used in Windows systems) and Lea (used in GNU/Linux systems). The first one is considered the fastest, and the second one, the more efficient with respect to the memory usage. As seen, we propose the optimization of the complete memory subsystem under both the execution time and energy consumption objective functions. After a thorough review of the literature, we have not found any similar approach to compare with. Hence, as we will show in the experimental experience, we have compared our results with baseline configurations coming from the state of the art of the memory design. ## 3 Thermal, energy and performance model The proposed framework is based on the simulation of performance and energy consumption models. These models are used as the input for the optimization algorithms, which find an optimized design. In order to address these works, we have to apply thermal, energy and performance models, which are next described. ### Thermal model Estimating the thermal impact in an Integrated Circuit (IC) needs the simulation of thermal conduction between power sources (transistors and interconnects) and heat sinks to the ambient environment. This is analogous to modeling electrical conduction and it is governed by the known Fourier's law. Taking into account one dimension, the thermal problem can be addressed through differential equations [24]. 
According to Brooks et al. [25], the following equation governs thermal conduction in a chip:

\[\rho c\frac{\partial T(\vec{r},t)}{\partial t}=\nabla\cdot\left[K(\vec{r})\,\nabla T(\vec{r},t)\right]+p(\vec{r},t) \tag{1}\]

\(\rho\) is the material density, \(c\) represents the mass heat capacity, \(T(\vec{r},t)\) and \(K(\vec{r})\) are the temperature and thermal conductivity of the material at position \(\vec{r}\) and time \(t\), and \(p(\vec{r},t)\) is the power density of the heat source. Extended information is available in the original reference. This analysis is carried out applying the method of finite differences. Thus, elements are discretised by dividing the IC area into single elements of equal size. The thermal component of each element can be individually calculated depending on the time, material, power dissipation and temperature of its neighbors. Thus, every element interacts with the rest through heat dissipation, and it has a power dissipation, temperature, capacitance and thermal resistivity to adjacent elements. The thermal impact at an internal point of the chip can then be obtained by solving (2).

\[C\frac{dT(t)}{dt}+AT(t)=Pu(t) \tag{2}\]

where \(C\) is the thermal capacitance matrix, an \(N\times N\) diagonal matrix, and \(A\) represents the thermal conductivity matrix, a sparse matrix of size \(N\times N\). \(T(t)\) and \(P\) are \(N\times 1\) temperature and power vectors, and \(u(t)\) is the time step function. We apply steady-state thermal analysis, which means that the heat flow and power consumption do not vary over time. Hence, the terms that depend on time disappear, and Equation (2) becomes Equation (3), where the IC temperature, represented with a set of cells, is estimated based on the individual register power and its thermal conductivity.

\[AT=P\;\to\;T=A^{-1}P \tag{3}\]

\(T\) represents the temperature, calculated as \(A^{-1}\), the inverse of \(A\), multiplied by the power vector \(P\).

### Energy model

In order to address the register file and cache memory optimization, we need to estimate the energy consumption due to the accesses to the register file and cache memory structures during the execution of a given set of multimedia applications. We have to determine the energy consumed per access to the register file structure in order to estimate the energy consumption of this structure. In this context, we apply the model detailed in [26], which describes how to measure cache energy consumption and performance based on a limited number of cache accesses. The authors describe this model as simple and suitable for measuring energy and performance improvements in reconfigurable non-cache systems. The energy model is given by Equation (4), where the terms directly related to the cache memory can be dropped, given that this structure is not addressed in the register file characterization.

\[E_{total}=E_{read}+E_{write}+\underbrace{E_{leak(std)}+E_{c-m}+E_{mp}+E_{misc}}_{\text{cache-related terms (dropped)}} \tag{4}\]

Thus, \(E_{total}\) is the total energy consumption in Joules (J), and \(E_{read}\) and \(E_{write}\) correspond to the energy consumed by read and write register file accesses, which are computed by Equations (5) and (6). In those equations, \(n_{read}\) is the number of read accesses, \(E_{dyn\_read}\) is the dynamic read energy, \(n_{write}\) represents the number of write accesses and \(E_{dyn\_write}\) is the dynamic write energy. \(n_{read}\) and \(n_{write}\) are computed during the profiling phase of the application.
\[E_{read}=n_{read}\times E_{dyn\_read} \tag{5}\]

\[E_{write}=n_{write}\times E_{dyn\_write} \tag{6}\]

In order to address the cache memory optimization, we apply the energy and performance model described in [27]. For the sake of clarity, the next section briefly explains the performance model. Interested readers can find detailed information in the original reference. The energy model is described by Equation (7), which determines the energy consumption for a cache configuration.

\[\begin{split} Energy={} & execTime\times CPU_{power}+I_{access}\times I_{access\_energy}+D_{access}\times D_{access\_energy}\;+\\ & I_{miss}\times I_{access\_energy}\times I_{line\_size}+D_{miss}\times D_{access\_energy}\times D_{line\_size}\;+\\ & I_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+I_{line\_size}\times\tfrac{1}{DRAM_{bw}}\right)+\\ & D_{miss}\times DRAM_{access\_power}\times\left(DRAM_{access\_time}+D_{line\_size}\times\tfrac{1}{DRAM_{bw}}\right)\end{split} \tag{7}\]

where \(DRAM_{access\_power}\) is the power consumption of each DRAM access, and \(I_{access\_energy}\) and \(D_{access\_energy}\) correspond to the energy consumption of instructions and data cache accesses, respectively. The terms \(I_{access}\times I_{access\_energy}\) and \(D_{access}\times D_{access\_energy}\) calculate the energy consumption due to the instructions and data cache, respectively. \(I_{miss}\times I_{access\_energy}\times I_{line\_size}\) and \(D_{miss}\times D_{access\_energy}\times D_{line\_size}\) are the energy cost of filling data into the instruction and data caches from main memory when a miss occurs. The last two terms calculate the energy cost of the DRAM responding to cache misses. In our approach, we remove the first term of the Energy equation, \(execTime\times CPU_{power}\), for three reasons: (1) the term \(CPU_{power}\) is constant and the term \(execTime\) is already being minimized in the first objective, (2) it represents the amount of energy consumed by the CPU and we are optimizing just the performance and energy consumed by the memory subsystem, and (3) in the case of a multi-objective optimization, all the objectives must be as orthogonal as possible, i.e., the term \(execTime\) is redundant.

### Performance model

The performance model allows us to obtain the execution time for the cache memory. This model is based on the number of hits and misses in the cache memory subsystem and the time needed to solve them. Equation (8) shows how the execution time is computed. Each component is described below, although a wider and more detailed explanation can be found in [27].

\[\begin{split} T={} & Icache_{access}\times Icache_{access\_time}+Icache_{miss}\times DRAM_{access\_time}\;+\\ & Icache_{miss}\times Icache_{line\_size}\times\tfrac{1}{DRAM\_width}\;+\\ & Dcache_{access}\times Dcache_{access\_time}+Dcache_{miss}\times DRAM_{access\_time}\;+\\ & Dcache_{miss}\times Dcache_{line\_size}\times\tfrac{1}{DRAM\_width}\end{split} \tag{8}\]

The terms \(Icache_{access}\) and \(Dcache_{access}\) correspond to the number of accesses to the instructions and data cache memory, respectively. \(Icache_{miss}\) and \(Dcache_{miss}\) are the number of cache misses. \(Icache_{access\_time}\) and \(Dcache_{access\_time}\) represent the time needed to solve each access to the instructions and data cache, respectively. \(DRAM_{access\_time}\) is the main memory latency. \(Icache_{line\_size}\) and \(Dcache_{line\_size}\) are the line or block size for the instructions and data cache, and \(DRAM\_width\) represents the bandwidth of the DRAM.
Thus, \(Icache_{access}\times Icache_{access\_time}\) and \(Dcache_{access}\times Dcache_{access\_time}\) compute the total time needed to solve all accesses to the instructions and data cache, respectively. \(Icache_{miss}\times DRAM_{access\_time}\) and \(Dcache_{miss}\times DRAM_{access\_time}\) are the total time spent solving accesses to the main memory as a consequence of misses in the instructions and data cache. \(Icache_{miss}\times Icache_{line\_size}\times\frac{1}{DRAM\_width}\) and \(Dcache_{miss}\times Dcache_{line\_size}\times\frac{1}{DRAM\_width}\) compute the total time needed to fill an instruction or data cache line when a miss happens. All equations use seconds for time, Watts for power, Joules for energy, bytes for cache line size and bytes/sec for bandwidth.

## 4 Register file optimization

The first layer of our methodology is the register file optimization. We present a methodology that takes into account the temperature increase due to the accesses that happen while a multimedia application is running. It then evaluates the thermal impact of different spatial distributions of the logical registers. It applies a Multi-Objective Evolutionary Algorithm (MOEA) to obtain the optimized solutions, finally proposing the spatial distributions which best reduce the thermal impact. This re-assignment of registers virtually increases the distance\({}^{2}\) between registers with a higher number of accesses, and it results in a decrease of temperature.

Footnote 2: Distancing means assigning logical registers highly accessed during a program execution to registers that are physically separated in the register file.

In order to assess our proposal, we have selected a basic register file configuration inspired by two architectures, VLIW and ARM. We focus on the 32 general purpose registers for VLIW and, in the case of ARM, on the 16 available for all processor modes by replication. Both architectures have different behavior patterns, allowing us to analyze and optimize the thermal impact under completely different scenarios. We have adopted a multigrid design to simulate the physical area of the register file, where the parameters needed by the simulator are the internal structure size, the number of cells and the cell size according to the target architecture. Following the design described in [28], the register file area is divided into single cells, which are adjusted according to the target size. Table 1 details the physical measures of the register file on both architectures. Measures are expressed in microns and cells. The columns named "width" and "height" represent the size of the register file in terms of cells, where a register is 3 cells wide by 3 cells high. The next three columns are the size in microns for an individual register and the width and height of the register file.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 &  & \multicolumn{2}{c}{**Cells** (\(\mathbf{width}\times\mathbf{height}\))} &  & \multicolumn{2}{c}{**Register file** (\(\mu m\))} \\
\hline
Arch. & Num. of Registers & Width & Height & Size (\(\mu m^{2}\)) & Width & Height \\
\hline
VLIW & 32 & 90 & 96 & \(3\mu m\times 3\mu m\) & 270 & 270 \\
ARM & 16 & 90 & 48 & \(3\mu m\times 3\mu m\) & 288 & 144 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Physical parameters of the register file with a register size of 3 cells high and 90 cells wide.
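As a rough illustration of how these pieces fit together, the following sketch discretises a toy 16-register layout on a 2 x 8 grid, solves the steady-state system \(T=A^{-1}P\) of Equation (3) with a made-up 5-point conduction stencil, and evaluates the pairwise power-density objective of Equation (9) introduced below. The conductances, units and layout are placeholder assumptions, not values from the paper's Matlab simulator or CACTI characterization.

```python
import numpy as np

def steady_state_temperature(power, k=1.0):
    """Solve A T = P (Equation (3)) on a rows x cols cell grid.

    A is a 5-point conduction stencil with a uniform, made-up conductance k;
    `power` holds the power dissipated in each cell.
    """
    rows, cols = power.shape
    n = rows * cols
    A = np.zeros((n, n))
    idx = lambda r, c: r * cols + c
    for r in range(rows):
        for c in range(cols):
            i = idx(r, c)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    A[i, i] += k
                    A[i, idx(rr, cc)] -= k
            A[i, i] += k          # leakage to ambient keeps A non-singular
    return np.linalg.solve(A, power.ravel()).reshape(rows, cols)

def pairwise_density_objective(dp, pos):
    """Objective of Equation (9): sum of dp_i * dp_j / d_ij over register pairs."""
    total = 0.0
    for i in range(len(dp)):
        for j in range(len(dp)):
            if i != j:
                total += dp[i] * dp[j] / np.linalg.norm(pos[i] - pos[j])
    return total

# Toy example: a 16-register, ARM-like file laid out as 2 rows x 8 columns.
rng = np.random.default_rng(1)
power = rng.random((2, 8))                     # per-register power (arbitrary units)
print(steady_state_temperature(power).max())   # hottest cell of the toy grid
pos = np.array([(r, c) for r in range(2) for c in range(8)], dtype=float)
print(pairwise_density_objective(power.ravel(), pos))
```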
Figure 2 shows the methodology that we have applied. Firstly, the target application is simulated by Trimaran [29], a tool widely applied to obtain multiple metrics during running applications. Similarly, the energy consumption per access is computed by CACTI [30], which is a well-known cache simulator used for estimating the energy consumption of different processor structures. These off-line processes must be executed only once. Next, the program trace is processed by the thermal simulator, which generates a customized XML file. This XML file contains one row for each register, with several associated elements: label, position \(x\), \(y\) (inside the design area), width, height and power density, according to the thermal and energy model proposed (\(dp=P_{\text{reg}}/A_{\text{reg}}\)). The XML file is provided as input to the optimization algorithm (MOEA in the figure), which simulates the internal structure in concordance with the given architecture. The MOEA produces solutions where the geometric register configuration is determined, as well as the register position in each configuration. In this case, the compiler will perform the register re-assignment.

Figure 2: Optimization methodology. (1) Applications are simulated by Trimaran to obtain program traces. (2) The CACTI tool is used to compute the energy consumption per access. (3) The thermal simulator, designed in Matlab, processes program traces and applies the thermal model defined. (4) The optimization process with NSGA-II as MOEA distributes accesses through the register file and minimizes the thermal impact.

Our optimization process has just two objective functions: (I) minimize the thermal impact due to the register accesses and the influence of neighboring cells, described by Equation (9), where \(c\) is a given configuration, \(dp_{i}\) and \(dp_{j}\) correspond to the power density of registers \(i\) and \(j\), and \(d_{ij}\) is the Euclidean distance between them; and (II) fit the physical viability of the design area, which constrains the \(X\) and \(Y\) positions inside it. A final human decision is needed to select the best solutions, which are analyzed in the next section. Therefore, we have selected the NSGA-II multi-objective algorithm as the required MOEA, following the classical implementation described in [31]. Our code is publicly available in [7]. Parameters for the NSGA-II algorithm are specified in Table 2, and they were adjusted after some preliminary experiments.

\[f(c)=\sum_{i=1}^{N_{reg}}\sum_{j\neq i}\frac{dp_{i}\times dp_{j}}{d_{ij}} \tag{9}\]

\begin{table}
\begin{tabular}{l c}
\hline \hline
Parameter & Value \\
\hline
Generations & 250 \\
Population Size & 100 \\
Chromosome Length & _Num. registers_ \\
Crossover & 0.9 (fixed point) \\
Mutation & 1/_Num. registers_ \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Parameters for the NSGA-II algorithm.

We study three possible topologies, \(C1\), \(C2\) and \(C3\), for both the VLIW and ARM architectures, as shown in Table 3. These topologies have been chosen based on preliminary tests. According to the number of registers, we looked for alternatives to place more or fewer cells in contact with the external part of the register file, which allows us to better study the thermal behavior. Regarding the target applications, we have selected a subset of the multimedia benchmark Mediabench [32] (_epic_, _unepic_, _cjpeg_, _djpeg_, _mpegdec_, _mpegenc_, _gsmdecode_, _gsmencode_, _rawcaudio_, _rawdaudio_), widely used in the scientific community, to increase the register file traffic. Experimental results show that the thermal impact in the VLIW architecture is not significant. The maximum increase in temperature for VLIW is \(0.4319^{\circ}C\)
in \(C1\) and \(C2\) configurations and \(0.4318^{\circ}C\) in \(C3\). In the case of ARM this increase is \(5.3044^{\circ}C\) in \(C1\) and \(C2\), and \(5.3036^{\circ}C\) in \(C3\). So, we focus on the ARM architecture, where the temperature increase presents some interesting values that we consider to be optimized. Figure 3 shows the thermal impact on configuration \(C3\) before the optimization with a 2 rows \(\times\) 8 columns topology, as an example. For the sake of the space, Figures display 8 rows and 2 columns. Table 4 shows the improvement percentage for all applications after the optimization. Figure 4 shows the case of \(C3\) configuration graphically, after the optimization process, which presents the best behavior among the studied topologies. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**VLIW Architecture**} & \multicolumn{4}{c}{**ARM Architecture**} \\ \hline Conf. & \(Rows\times Columns\) & Width & Hight & Conf. & \(Rows\times Columns\) & Width & Hight \\ \hline C1 & \(32\times 1\) & 90 & 96 & C1 & \(16\times 1\) & 90 & 48 \\ C2 & \(16\times 2\) & 180 & 48 & C2 & \(8\times 2\) & 180 & 24 \\ C3 & \(4\times 8\) & 720 & 12 & C3 & \(2\times 8\) & 360 & 6 \\ \hline \hline \end{tabular} \end{table} Table 3: Physical description of register file. Figure 3: Thermal impact because of accesses to general purpose registers in a typical ARM architecture before the optimization. \(C3\) topology has 2 rows and 8 columns, although in better shape, graphs are shown in \(8\times 2\) format. As seen in both Figures, _hot spots_ have been moved in the majority of the cases. However, in some of the benchmarks like _cjpeg_ and _djpeg_, the influence of the most accessed register in their neighborhood is so high that the adjacent registers are highly affected by its temperature increase. However, the average temperature values shows that, despite the decrease in terms of temperature is not relevant, the proposed method is able to reduce the thermal impact of the register file on the processor temperature in all the architectures, configurations and multimedia applications addressed. As shown in Figure 4, registers highly accessed are placed in the outside borders of the register file, in order to facilitate power dissipation. With the aim to verify the statistical relevance of the data obtained, we have performed the statistical _t-Student_ test. We have used the statistical software _R_[33] to perform this test and compute the _p-value_. The _p-value_ is a statistical measure that allows determining if our results are significant. \(T\) represents the statistical value used to make the test. 
Regarding the freedom degree, which \begin{table} \begin{tabular}{c c c c c c c c c c} \hline & \multicolumn{2}{c}{**epic**} & \multicolumn{2}{c}{**unepic**} & \multicolumn{2}{c}{**cjpeg**} & \multicolumn{2}{c}{**djpeg**} & \multicolumn{2}{c}{**gsmdec**} \\ \hline & Avg & Max & Avg & Max & Avg & Max & Avg & Max & Avg & Max \\ \hline C1 & 3.87 & 2.14 & 5.55 & 3.10 & 1.84 & 1.01 & 1.22 & 0.67 & 4.12 & 2.28 \\ C2 & 3.74 & 2.15 & 5.51 & 3.10 & 1.81 & 1.01 & 1.21 & 0.67 & 4.03 & 2.29 \\ C3 & 3.78 & 2.15 & 5.54 & 3.11 & 1.83 & 1.01 & 1.22 & 0.67 & 4.13 & 2.29 \\ \hline & \multicolumn{2}{c}{**gsmenc**} & \multicolumn{2}{c}{**rawc**} & \multicolumn{2}{c}{**rawd**} & \multicolumn{2}{c}{**mpegdec**} & \multicolumn{2}{c}{**mpegenc**} \\ \hline & Avg & Max & Avg & Max & Avg & Max & Avg & Max & Avg & Max \\ \hline C1 & 4.15 & 2.31 & 2.61 & 1.44 & 1.00 & 0.55 & 7.75 & 5.36 & 1.56 & 0.85 \\ C2 & 4.07 & 2.31 & 2.57 & 1.44 & 1.00 & 0.55 & 7.68 & 5.36 & 1.53 & 0.86 \\ C3 & 4.17 & 2.32 & 2.59 & 1.44 & 1.01 & 0.55 & 7.73 & 5.36 & 1.55 & 0.86 \\ \hline \end{tabular} \end{table} Table 4: ARM. Improvement percentage with respect to the average and maximum temperature. is the number of freely chosen values in a sample which allow reaching a value, we consider the number of registers. Hence, for the problem at hand we verify statistically that the decrease of temperature obtained after the optimization process is not relevant in both the ARM and VLIW architectures. Table 5 shows the statistical results for the ARM architecture in the proposed topologies and all multimedia applications. For each topology and optimized solution, the temperature increase of its registers is compared to the average of the maximum temperature in the non-optimized configuration. The _p-value_ is higher than 0.05 in all cases, thus we conclude that the decrease of temperature obtained is not relevant. The same conclusion is applicable to the VLIW architecture, where we have obtained _p-value_ values over 0.05 in all cases, too. We have omitted the VLIW table for the sake of space. ## 5 Cache memory optimization The second layer of our methodology is the cache memory, as previously shown in Figure 1. We propose an optimization approach which is able to Figure 4: Thermal map for the \(C3\) topology in ARM architecture, after the multi-objective optimization process. Registers with lower thermal impact are placed next to others with higher thermal impact, so the temperature in the whole structure decreases for all applications, as Table 4 shows. determine cache configurations for multimedia embedded systems and require less execution time and energy consumption. As seen in Figure 5, this layer is divided into two off-line phases (labeled as 1 and 2) and a third phase devoted to optimization (labeled as 3). Firstly, the off-line phases are executed just once before the optimization. Next, the optimization process uses as input the results of the previous two phases. The cache characterization phase is performed by CACTI, which computes access times and energy consumed by the addressed structures. These values are necessary to calculate the objective functions while the evolutionary algorithm is running. The application profiling phase is carried out using Trimaran, which compiles all cache memory accesses into program traces. 
In the third phase, the NSGA-II optimization algorithm evaluates each candidate solution with the \begin{table} \begin{tabular}{l r r r r r r} \hline \hline Application & \multicolumn{2}{c}{**C1**} & \multicolumn{2}{c}{**C2**} & \multicolumn{2}{c}{**C3**} \\ \hline & T & _p-value_ & T & _p-value_ & T & _p-value_ \\ \hline epic & 0.2161 & 0.5841 & -0.3366 & 0.3705 & 0.3335 & 0.6283 \\ unepic & 0.231 & 0.5898 & -0.0638 & 0.475 & 0.2802 & 0.6084 \\ cjpeg & 0.001 & 0.5004 & -0.0554 & 0.4783 & 0.031 & 0.5122 \\ djpeg & 0.0148 & 0.5058 & -0.0337 & 0.4868 & 0.0261 & 0.5102 \\ gsmdec & 0.2933 & 0.6133 & -0.1101 & 0.4569 & 0.3326 & 0.628 \\ gsmenc & 0.2969 & 0.6147 & -0.1122 & 0.4561 & 0.3358 & 0.6292 \\ rawcaudio & 0.0354 & 0.5139 & -0.1318 & 0.4484 & 0.0526 & 0.5206 \\ rawdaudio & 0.0325 & 0.5127 & -0.0098 & 0.4262 & 0.0158 & 0.5062 \\ mpegdec & 0.1721 & 0.5672 & -0.1808 & 0.4295 & 0.2144 & 0.5835 \\ mpegenc & -0.1289 & 0.4496 & -0.0411 & 0.4839 & 0.0009 & 0.5003 \\ \hline \hline \end{tabular} \end{table} Table 5: t-Student test for each application and configuration with respect to the average of the maximum non-optimized temperature in the ARM architecture. help of Dinero IV [34]. In this case, we have modified our implementation of NSGA-II in order to be able to evaluate solutions using an external program like Dinero IV. Dinero IV is a cache simulator, which given a program trace as input, computes the number of hits and misses of memory accesses. These metrics multiplied by either the access delays and the energy per access previously given by CACTI, provide the optimization algorithm a quality measure for each individual under evaluation. Table 6 shows the parameter values used to configure the NSGA-II algorithm. These values were selected after some preliminary experiments. \begin{table} \begin{tabular}{l c} \hline \hline Parameter & Value \\ \hline Generations & 250 \\ Population Size & 100 \\ Chromosome Length & 11 \\ Crossover & 0.9 (fixed point) \\ Mutation & 1/11 \\ \hline \hline \end{tabular} \end{table} Table 6: Parameters for the NSGA-II algorithm. Figure 5: Optimization process: (1) cache characterization, (2) application profiling and (3) cache optimization. Given the different values that each one of the cache configuration parameters may take, a very high number of combinations can be generated. Figure 6 shows the design space defined by the set of cache parameters (cache size, block size, associativity, replacement algorithm and prefetch algorithm for both the instruction and data caches, and also write policy for data cache) and their possible values. The NSGA-II optimization algorithm deals again with the objectives corresponding to execution time and energy consumption, applying the previously defined equations (8) and (7). As we will describe in the next section, the resulting cache configurations are compared with three baseline cache configurations. However, our methodology might use any other cache configuration as a reference baseline. Experimental results are based on the ARM architecture, particularly ARM-920T [35]. Again, we have selected a set of applications from Mediabench [32]. In this case, we have also considered _pegwitdec_ and _pegwitenc_ in addition to those benchmarks previously mentioned, all of them with their standard inputs. It is well-known that multimedia applications from Mediabench increase the pressure on the cache memory due to the intrinsic nature of data they manage. 
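To make the evaluation loop above more concrete, here is a minimal sketch of how a candidate cache configuration could be encoded as an 11-gene chromosome and scored, Equation (8)-style, from simulated hit/miss counts. The gene layout, parameter value sets, latencies and the stats dictionary are illustrative assumptions; the paper's actual design space (Figure 6), Dinero IV integration and CACTI-derived constants are not reproduced here, and the energy of Equation (7) would be obtained analogously from per-access energies.

```python
import random

# Illustrative value sets loosely following Figure 6; not the paper's exact design space.
SIZES   = [1024, 2048, 4096, 8192, 16384, 32768]   # bytes
LINES   = [8, 16, 32, 64]                           # bytes
ASSOC   = [1, 2, 4, 8]
REPLACE = ["LRU", "FIFO", "Random"]
PREF    = ["On-Demand", "Always"]
WRITE   = ["Copy-Back", "Write-Through"]

def decode(chrom):
    """Map an 11-gene integer chromosome to (instruction cache, data cache) settings."""
    g = iter(chrom)
    icache = dict(size=SIZES[next(g) % len(SIZES)], line=LINES[next(g) % len(LINES)],
                  assoc=ASSOC[next(g) % len(ASSOC)], repl=REPLACE[next(g) % len(REPLACE)],
                  pref=PREF[next(g) % len(PREF)])
    dcache = dict(size=SIZES[next(g) % len(SIZES)], line=LINES[next(g) % len(LINES)],
                  assoc=ASSOC[next(g) % len(ASSOC)], repl=REPLACE[next(g) % len(REPLACE)],
                  pref=PREF[next(g) % len(PREF)], write=WRITE[next(g) % len(WRITE)])
    return icache, dcache

def exec_time(stats, cfg, t_hit=1e-9, t_dram=6e-8, dram_bw=1e9):
    """Equation (8)-style estimate: access/miss counts from a Dinero-like simulator
    weighted by placeholder latencies; dram_bw is the DRAM bandwidth in bytes/s."""
    i, d = cfg
    return (stats["i_acc"] * t_hit
            + stats["i_miss"] * (t_dram + i["line"] / dram_bw)
            + stats["d_acc"] * t_hit
            + stats["d_miss"] * (t_dram + d["line"] / dram_bw))

chrom = [random.randrange(64) for _ in range(11)]   # one NSGA-II individual
cfg = decode(chrom)
stats = {"i_acc": 7_500_000, "i_miss": 30_000, "d_acc": 2_000_000, "d_miss": 25_000}
print(cfg, exec_time(stats, cfg))
```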
Every application has been run for \(7.5\times 10^{7}\) instructions to reach a balance Figure 6: Taxonomy for a cache configuration design space. Both instructions and data caches, labeled as I-Cache and D-Cache, must be customized with their corresponding values. between the total execution time, the size of the generated program traces and a proper number of instructions. The optimization process driven by NSGA-II is able to find cache configurations that presents an average improvement of execution time and energy consumption of 33.9% and 71.8% respectively, taking a baseline as a reference. Table 7 shows the three baseline configurations used in order to test our methodology. The first baseline is similar to the L1 cache of the first core in the GP2X video games console. The second and third one are implemented in Cortex-A9 and Cortex-A15 processor. The results we obtained are shown in Table 8, where the comparison with the three baselines is made. It is important to highlight that the Pareto set3 of some applications regarding the Baseline 1 includes solutions with negative improvement percentage for one of the objectives. These values not only reduce the average improvement in an application but also the total average improvement. We have not removed or penalized these solutions because, on one hand, they are not unfeasible solutions but they are solutions with performance worse than the baseline reference and, on the other hand, they can illustrate the decision maker how far a configuration can perform if one of the objectives is relaxed. As a consequence, the average percentages corresponding to Baseline 1 would be increased to 34.98% and 79.34% in terms of execution time and energy \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{**Instructions**} & **Cache** & \multicolumn{6}{c}{**Data Cache**} \\ \hline \hline **Size** & **BS** & **A** & **RA** & **PA** & **Size** & **LS** & **A** & **RA** & **PA** & **WP** \\ \hline 16 KB & 32 B & 4 & LRU & On-Dem. & 16 KB & 32 B & 4 & LRU & On-Dem. & Copy-Back \\ \hline 32 KB & 64 B & 4 & Random & Always & 32 KB & 64 B & 4 & Random & Always & Copy-Back \\ \hline 32 KB & 64 B & 2 & LRU & Always & 32 KB & 64 B & 2 & LRU & Always & Copy-Back \\ \hline \hline \end{tabular} \end{table} Table 7: Baseline cache configurations. Cache size (Size), block size (BS), associativity (A), replacement algorithm (RA), prefetch algorithm (PA) and write policy (WP) are shown for the three configurations. consumption, respectively. These percentages would be higher, if we define a quality standard for the solutions, as part of the human decision-making phase. In relation to the other two baselines, corresponding to the _SoC Apple AX_ series included in different Apple devices like Cortex-A9 (iPad 2, iPod-touch o iApple-TV) and Cortex-A15 (iPhone 5, iPhone 5S), the results are even better than in the other baseline. The optimization process finds cache configurations that improve on average 49.37%, 93.24% and 44.84%, 93.82% with respect to baselines 2 and 3 in terms of execution time and energy consumption, respectively, as shown in Table 8. Figures 7 and 8 display the cache configurations in the form of Pareto fronts where the axis correspond to the execution time and energy consumption objective functions. As seen in the plots, in all applications the solutions are uniformly distributed in the objective space. 
Some applications such as _epic_, _unepic_, _gsmdec_ and _gsmenc_ present a greater uniformity than the rest, but all of them provide a high number of solutions. Given that the cache configurations provided by the optimization algorithm present different performances, they all should be provided to the cache designer. The expert will decide the best solution to fit the requirements of the target system. In this context, our method simplifies this selection process and provides a set of good solutions to the system designers. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{**Baseline 1**} & \multicolumn{2}{c|}{**Baseline 2**} & \multicolumn{2}{c}{**Baseline 3**} \\ **App.** & **Ex. T. (\%)** & **Energy (\%)** & **Ex. T. (\%)** & **Energy (\%)** & **Ex. T. (\%)** & **Energy (\%)** \\ \hline epic & 44.9 & 76.3 & 55.77 & 91.94 & 51.35 & 92.63 \\ unepic & 33.4 & 26.7 & 42.58 & 93.82 & 37.05 & 94.39 \\ gmdec & 31.4 & 84.3 & 36.25 & 94.24 & 30.39 & 94.74 \\ gmenc & 19.0 & 83.6 & 33.76 & 94.07 & 31.15 & 94.36 \\ cjpeg & 27.2 & 71.1 & 52.63 & 91.04 & 47.82 & 91.78 \\ djpeg & 16.5 & 72.4 & 44.86 & 91.06 & 40.02 & 92.01 \\ pegwitdec & 27.1 & 83.9 & 51.31 & 95.15 & 46.77 & 95.44 \\ pegwitenc & 35.8 & 84.8 & 53.76 & 95.96 & 49.51 & 96.20 \\ rawcaudio & 48.8 & 84.8 & 59.45 & 94.39 & 55.18 & 94.83 \\ rawdaudio & 48.1 & 78.4 & 60.02 & 91.94 & 55.81 & 92.58 \\ mpgdec & 37.9 & 71.2 & 52.40 & 90.68 & 48.48 & 91.77 \\ mpgenc & 37.9 & 43.9 & 49.65 & 94.60 & 44.55 & 95.06 \\ \hline **Average** & **33.9** & **71.8** & **49.37** & **93.24** & **44.84** & **93.82** \\ \hline \hline \end{tabular} \end{table} Table 8: Average improvement percentage of solutions belonging to Pareto set vs. the three baselines under study. With the aim to measure the relevance of results presented, we have computed the statistical _t-Student_ test as in the previous section. The obtained results are shown in Table 9, where _FD_ is the freedom degree. In this case, the freedom degree corresponds to the number of solutions of the Pareto set in all applications. We observe in the table that the _p-value_ is far lower than 0.05 for all applications with respect to the performance and the energy consumption. As a result of the statistical test, we can say the results obtained with the proposed optimization methodology are relevant. ## 6 Dynamic memory management optimization The third layer of our methodology consists on an optimization framework based on GE and static profiling of applications to improve the dynamic memory manager (DMM) for multimedia applications, which have high dependence of dynamic memory. This is a non-intrusive method that allows to automatically evaluate complex implementations of DMMs. Figure 7: Pareto front representation for NSGA-II for epic, cjpeg, mpegdec, gsmdec, pegwitdec and rawcaudio applications. Results show that solutions are uniformly distributed in the design space and cover a wide region. In order to evaluate our proposal, we have selected six memory intensive applications: _hmmer_, _dealII_, _soplex_, _calculix_, _gcc_ and _perl_. In addition, we have taken the Lea DMM (implemented in GNU/Linux systems) and the Kingsley DMM (implemented in Windows systems) as references to normalize the performance of the results. In fact, we analyzed the execution time, memory footprint and temperature of the memory, according to the thermal model proposed in [25], and the energy consumption following the energy model developed in [26] in a set of preliminary experiments. 
As a result, we found that the Lea DMM has a high impact on the performance and in the memory footprint, circa 43.25% and 22.9%, respectively. On the other hand its influence is not significant with respect to the temperature and energy consumption, which is 0.0006% and 0.48% on average. The Kingsley DMM presented a similar behavior, so we decided to use the first two metrics of performance and footprint. Similarly to the other layers, in this case our methodology is divided into Figure 8: Pareto front representation for NSGA-II for unepic, dipeg, mpegenc, gsmenc, peg-witenc and rawdaudio applications. Results show that solutions are uniformly distributed in the design space and cover a wide region. three phases, as shown in Figure 9, a detailed view of 3rd layer in Figure 1. The first phase obtains application traces with the Pin instrumentation tool [36]. The second phase analyzes the target application trace and creates the grammar that best fits with the application patterns. Finally, the optimization algorithm based on GE is run, coupled to a DMM simulator [23], which will collect the metrics needed to obtain the quality for each DMM evaluated. These metrics are the number of memory accesses, memory usage, de/allocations, splittings and coalescings. The execution time devoted to the DMM is calculated as the computational complexity given that the system uses simulation time instead of the execution time. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{**Time**} & \multicolumn{3}{c}{**Energy**} \\ \hline **Application** & **FD** & **T** & _p-value_ & **T** & _p-value_ \\ \hline cjpeg & 24 & -5.47 & 6.42E-006 & -15.87 & 1.59E-014 \\ \hline dipeg & 19 & -3.58 & 1.01E-003 & -14.48 & 5.10E-009 \\ \hline epic & 12 & -50.71 & 1.13E-012 & -11.61 & 3.48E-005 \\ \hline unepic & 26 & -23.79 & 2.20E-016 & -1.23 & 1.15E-001 \\ \hline unepic & 23 & -22.09 & 2.20E-016 & -5.52 & 6.41E-003 \\ \hline gsmdec & 16 & -17.66 & 3.23E-012 & -32.69 & 2.20E-016 \\ \hline gsmenc & 14 & -7.59 & 1.27E-003 & -28.48 & 4.28E-011 \\ \hline mpegdec & 16 & -25.96 & 8.30E-012 & -11.38 & 2.20E-006 \\ \hline mpegenc & 15 & -24.21 & 9.77E-011 & -1.60 & 6.50E-002 \\ \hline mpegenc & 13 & -22.93 & 3.36E-009 & -17.61 & 9.38E-008 \\ \hline pegwitdec & 11 & -4.53 & 4.31E-004 & -22.55 & 7.35E-008 \\ \hline pegwitenc & 12 & -13.09 & 9.15E-006 & -25.81 & 3.49E-009 \\ \hline rawcaudio & 12 & -63.21 & 2.20E-016 & -21.43 & 3.11E-008 \\ \hline rawdaudio & 12 & -54.49 & 4.81E-013 & -13.65 & 5.70E-006 \\ \hline \hline \end{tabular} \end{table} Table 9: t-Student test for the Pareto set of the applications with respect to a baseline cache. of real time. It is important to explain that the grammar of the GE algorithm will be used to compose the custom DMMs. Therefore, we decided to produce a different grammar for each one of the target applications in order to reduce the search space and, therefore, improve the optimization process. For the sake of space we do not describe here the grammar, but the interested reader may obtain detailed information about these kind of grammars in [22]. The parameters of the GE algorithm are detailed in Table 10, and they were adjusted after some preliminary experiments. Besides, in our preliminary experiments, we have also verified that the behavior of the applications is similar among different executions. Thus, each target application must be executed just once to obtain the profiling report, which can be used to evaluate different DMMs. 
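Since the custom DMMs are composed from a per-application grammar, the following compact sketch of the standard GE genotype-to-phenotype mapping (codon modulo rule choice, with wrapping as in Table 10) may help fix ideas. The toy grammar and DMM vocabulary below are invented for illustration; the real grammars of [22] and the Java/JECO implementation are considerably richer.

```python
# Toy BNF-style grammar; the per-application DMM grammars of [22] are much richer.
GRAMMAR = {
    "<dmm>":     [["<heap>"], ["<heap>", "+", "<heap>"]],
    "<heap>":    [["segregated(", "<classes>", ")"], ["buddy(", "<classes>", ")"]],
    "<classes>": [["8"], ["16"], ["32"]],
}

def ge_map(codons, start="<dmm>", max_wraps=3):
    """Standard GE mapping: each codon (modulo the number of productions) picks the
    rule used to expand the left-most non-terminal; the codon list is re-read
    ('wrapped') up to max_wraps times before the individual is declared invalid."""
    symbols, out, used = [start], [], 0
    while symbols:
        sym = symbols.pop(0)
        if sym not in GRAMMAR:                     # terminal symbol
            out.append(sym)
            continue
        if used >= len(codons) * max_wraps:
            return None                            # ran out of codons: invalid individual
        rules = GRAMMAR[sym]
        choice = rules[codons[used % len(codons)] % len(rules)]
        used += 1
        symbols = list(choice) + symbols
    return "".join(out)

print(ge_map([7, 2, 5, 1, 9, 4]))   # prints a small DMM expression, e.g. 'segregated(32)+buddy(8)'
```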
Although GE performs well and does not require high amounts of memory, it tends to fall into a local optimum if it is not correctly set up [37]. To address this challenge, we have been successfully using premature convergence prevention Figure 9: DMM optimization: (1) obtains applications traces, through Pin as the instrumentation tool; (2) designs the customized grammar according to the application trace with and (3) runs optimization algorithm with GE, which generates a customized DMM and the DMM simulator is called to obtain the necessary metrics to evaluate it. techniques [38], that are next explained. Premature convergence of a Genetic Algorithm arises when the chromosomes of some high rated individuals quickly dominate the population, reducing diversity, and constraining it to converge to a local optimum. Premature convergence is one of the major shortcomings when trying to model low variability magnitudes by using GE techniques. To overcome the lack of variety in the population, work by Melikhov et al. [39] proposes the usage of Social Disaster Techniques (SDT). This technique is based on monitoring the population to find local optima, and apply an operator: 1. _Packing_: all individuals having the same fitness value except one are fully randomized. 2. _Judgment day_: only the fittest individual survives while the remaining are fully randomized. Work by Rocha et al. [40] proposes the usage of Random Offspring Generation (ROG) to prevent the crossover of two individuals with equal genotype, as this would result in the offspring being equal to the parents. Individuals are tested before crossover and, if equal, then one offspring (1-RO) or both of them (2-RO) are randomly generated. Both previous solutions have shown important benefits in classical Genetic \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Generations & 250 \\ Population Size & 100 \\ Chromosome Length & 200 \\ Selection mechanism & Tournament (size=2) \\ Crossover & 0.8 (fixed point) \\ Mutation & 0.02 \\ Maximum\(\mathit{wraps}\) & 3 \\ \hline \hline \end{tabular} \end{table} Table 10: Parameters for the GE algorithm. Algorithms problems. In our work, we use these techniques to improve the convergence time of our solutions, with excellent results. Otherwise, without these enhancements, the standard implementation of GE does not perform well and is not able to find good solutions in a reasonable amount of time, as we have already tested. The GE optimization algorithm is implemented as an improvement of a Genetic Algorithm with integer chromosomes where the grammar decodification is included in the evolutionary process. As in the previous cases, we have published the code of this algorithm in our JECO library [7]. Regarding the quality of the solutions, the algorithm uses the objective function described in Equation 10 to select the best possible DMM among the candidate solutions. The execution time (\(T\)) and the memory use (\(M\)) have equal weight and they are normalized to the corresponding Kingsley and Lea DMMs, which are considered the fastest and more efficient (in terms of memory footprint) DMMs, respectively. Thus, \(T\) and \(M\) are the execution time and the memory use for the DMM that is currently being evaluated; \(T_{Kng}\) and \(M_{Lea}\) are the execution time and the memory use for Kingsley and Lea DMMs, respectively. 
\[F=0.5\times\frac{T}{T_{Kng}}+0.5\times\frac{M}{M_{Lea}} \tag{10}\] This methodology has been tested with six memory-intensive applications from the SPEC benchmarks [41] using standard inputs: _hmmer_, _dealII_, _soplex_, _calculix_, _gcc_ and _perl_. We have compared the DMMs obtained by the GE algorithm (GEA) with five different general purpose DMMs: _Kingsley_ (KNG), _Doug Lea_ (LEA), a _buddy system_ based on the Fibonacci algorithm (FIB), a list of _10 segregated free-lists_ (S10) and an _exact segregated free list_ (EXA). Table 11 shows the average improvement percentage of GEA versus the general purpose DMMs we compare with. As seen in the table, this method reduces the objective function of weighted execution time and memory use by 59.27% on average. Besides, all the comparisons are positive for the GEA, therefore obtaining better results than the general purpose DMMs. Hence, our methodology is able to automatically design customized DMMs according to a given application in a non-intrusive way, improving the performance of standard DMMs. We have tested the relevance of the results obtained performing a statistical analysis with the Wilcoxon's matched pair test. For the GEA DMM, the execution time was compared to Kingsley and the memory use was compared to LEA, which are the faster and more efficient respectively using all the applications under study. The results of these tests are shown in Table 12. The tests performed confirm the conclusions previously mentioned. The test between GEA and Kingsley in performance gives a _p-value_ of 0.395, which indicates that we cannot ensure the samples are different. In fact, the results obtained by GEA in terms of performance are close to Kingsley. Moreover, the Wilcoxon's test with respect to the memory use provides us a _p-value_ of 0.142, which indicates that results obtained are very similar. Again, this is obtained because the use of memory of GEA is not very different from the memory consumption of LEA. Additionally, the DMM obtained with our GEA methodology was compared \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & KNG & LEA & FIB & S10 & EXA & Average \\ \hline Obj. value = \(100\times\frac{F_{*}-F_{GEA}}{F_{*}}\) & 9.13\% & 62.52\% & 51.81\% & 86.88\% & 86\% & 59.27\% \\ Performance = \(100\times\frac{T_{*}-T_{GEA}}{T_{*}}\) & 1.17\% & 72.44\% & 62.62\% & 85.74\% & 90.78\% & 62.55\% \\ Memory = \(100\times\frac{M_{*}-M_{GEA}}{M_{*}}\) & 16.03\% & 23.14\% & 15.08\% & 38.88\% & 59.96\% & 30.62\% \\ \hline \hline \end{tabular} \end{table} Table 11: Average improvement percentage. GEA vs. KNG, LEA, FIB, S10 y EXA. \begin{table} \begin{tabular}{l c} \hline \hline DMMs & _p-value_ \\ \hline GEA vs Kingsley (time) & 0.3951 \\ GEA vs LEA (memory) & 0.1415 \\ \hline \hline \end{tabular} \end{table} Table 12: Wilcoxon test on the optimization DMM in relation to GEA vs Kingsley (time) and LEA (memory). to Kingsley and LEA in memory use and execution time, respectively, in order to evaluate the relevance of our results regarding the measure that is not best for the reference DMMs. Table 13 shows results obtained. According to these tests, the memory use of GEA is better than Kingsley, and the Wilcoxon's test demonstrates the relevance of the results obtained with a _p-value_ lower than 0.0018. The comparison between LEA and GEA in terms of execution time also confirms that the results obtained by GEA are significant, with a _p-value_ lower than 0.002), and it improves the performance with respect to LEA. 
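For reference, the weighted fitness of Equation (10) is straightforward to compute once the DMM simulator has reported execution time and memory use; the figures below are invented purely to illustrate the normalization against the Kingsley and Lea references.

```python
def dmm_fitness(t, m, t_kingsley, m_lea):
    """Equation (10): equally weighted execution time and memory use, normalized to
    the Kingsley (fastest) and Lea (smallest footprint) reference DMMs."""
    return 0.5 * t / t_kingsley + 0.5 * m / m_lea

# Hypothetical measurements (seconds, MB) for a candidate DMM and the two references.
print(dmm_fitness(t=1.9, m=118.0, t_kingsley=1.8, m_lea=120.0))   # ~1.02
```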
\begin{table} \begin{tabular}{l l} \hline \hline DMMs & _p-value_ \\ \hline GEA vs Kingsley (memory) & 0.001753 \\ GEA vs LEA (time) & 0.001988 \\ \hline \hline \end{tabular} \end{table} Table 13: Wilcoxon test on the optimized DMM: GEA vs Kingsley (memory) and GEA vs LEA (time).

## 7 Conclusions and future work

We have presented a method to optimize the memory subsystem of a computer addressing three different levels: the register file, the cache memory and dynamic memory management in the main memory. At all these levels we propose an evolutionary algorithm as the optimization engine, which is assisted by other applications, either in a closed loop or in off-line phases. The optimization of the register file is based on a first step where a static profiling of the target applications is performed. Then, a multi-objective evolutionary algorithm is run, returning a set of solutions corresponding to register re-assignments. As a result, highly accessed registers are spaced far apart. Although the thermal impact is not significant, we found some values worth studying and applied the optimization process. Our results show a reduction in the maximum temperature of 7.75% and 10.79% for some applications in the ARM and VLIW architectures, respectively. By reducing the temperature, this approach also facilitates heat dissipation. In the optimization of the single-level cache memory we consider both the instruction and data caches, trying to reduce the execution time and the energy consumption due to cache memory operations. In this case, we propose a framework divided into three phases: two off-line phases responsible for cache characterization and application profiling, and a third phase driven by the evolutionary algorithm. The experiments return a set of cache configurations which, in terms of average execution time and average energy consumption, improve on three baseline configurations by more than 34% and 79%, respectively. Regarding dynamic memory management, our approach is divided into three phases: application profiling, grammar generation and the optimization process based on Grammatical Evolution (GE). On average, we have obtained custom DMMs that improve the weighted function of execution time and memory use by 59.27%, normalized with respect to the best general-purpose DMMs. The execution time of the experiments in the three memory layers was very diverse because it depends on the size of the target application, the configuration of the algorithm and the time devoted to evaluating the different simulators that we call. Hence, our current optimization times are long, in the range of several hours per experiment. Therefore, we will try to improve the execution time by incorporating parallel execution, as well as by fine-tuning the configuration of the algorithms. The proposed methodologies provide a useful framework for system designers facing the task of optimizing the memory subsystem of a device according to the running applications. As future work we propose the integration of the three frameworks into a complete tool able to automatically optimize the three levels of the memory hierarchy without human interaction. In addition, we will study the use of artificial data in order to better understand the behavior of our algorithms, which could provide us with clues to improve the tuning of our methods.
2307.04187
Predictive Coding For Animation-Based Video Compression
We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrate by representing face motions with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e., each frame is reconstructed from a reference frame, which limits the reconstruction quality when the bandwidth is larger. Instead, we propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame. The residuals can in turn be coded in a predictive manner, thus efficiently removing temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos.
Goluck Konuko, Stéphane Lathuilière, Giuseppe Valenzise
2023-07-09T14:40:54Z
http://arxiv.org/abs/2307.04187v1
# Predictive Coding for Animation-Based Video Compression ###### Abstract We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrate by representing face motions with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e., each frame is reconstructed from a reference frame, which limits the reconstruction quality when the bandwidth is larger. Instead, we propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame. The residuals can be in turn coded in a predictive manner, thus removing efficiently temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos. Goluck Konuko\({}^{\dagger}\), Stephane Lathuiliere\({}^{\ddagger}\), Giuseppe Valenzise\({}^{\dagger}\)\({}^{\dagger}\) Universite Paris-Saclay, CentraleSupelec, Laboratoire des signaux et systemes \({}^{\ddagger}\) LTCI, Telecom Paris, Institut Polytechnique de Paris, France Video compression, image animation, generative models, video conferencing, predictive coding ## 1 Introduction Recent work on learning-based video coding for videoconferencing applications has shown that it is possible to compress videos of talking heads with extremely low bitrate, without significant losses in visual quality [1, 2, 3, 4, 5, 6]. The basic tenet of these methods is that face motion can be represented through a compact set of sparse keypoints [7], which can be transmitted and used at the decoder side to animate a reference video frame. However, despite the impressive coding performance of these methods at very low bitrates, existing animation-based codecs for videoconferencing still have several bottlenecks. Firstly, when the available bitrate increases, the reconstruction quality quickly reaches saturation, and conventional coding tools such as HEVC or VVC perform better. Secondly, bitrate variability in current schemes is complex, unlike conventional coding methods where a simple quantization parameter can be used to regulate bitrate. Finally, animation-based codecs operate on a frame-by-frame basis, which is inefficient for eliminating temporal redundancy in the video. This paper addresses these limitations by proposing a _predictive coding_ scheme for videoconferencing applications. Specifically, we interpret the keypoint-based image animation used in previous codecs [1] as a _spatial predictor_ of the current (target) frame, as depicted in Figure 1. The residual between the animated and the target frame is then coded and used at the decoder side to correct the animated target frame. Since animation residuals exhibit _temporal_ correlation, we also encode them in a predictive manner, i.e., we predict the current animation residual based on the previously decoded residual and encode the prediction difference. It is worth noting that this approach is similar in principle to the classic video coding prediction loop, with the important distinction that residual coding and animation are _jointly_ learned in an end-to-end fashion. We name our method **RDAC**, for Residual Deep Animation Codec. 
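To make the coding structure concrete, the following sketch gives one plausible reading of the scheme in Figure 1. The names `animate`, `encode_residual`, `decode_residual` and `reconstruct` are placeholders for the learned components described in Section 3, and frames are assumed to be arrays; this illustrates the prediction loop only, not the authors' exact implementation.

```python
def rdac_code_sequence(frames, animate, encode_residual, decode_residual,
                       reconstruct):
    """Schematic RDAC loop: image animation acts as a spatial predictor and
    the animation residuals are themselves coded predictively in time."""
    x_ref = frames[0]                    # decoded reference (intra) frame
    prev_residual = None                 # previously decoded residual
    decoded, bitstream = [x_ref], []
    for x_t in frames[1:]:
        x_hat = animate(x_ref, x_t)      # keypoint-based animation of x_ref
        r_t = x_t - x_hat                # animation residual R_t
        d_t = r_t if prev_residual is None else r_t - prev_residual
        bits = encode_residual(d_t)      # sent along with the keypoints
        bitstream.append(bits)
        d_dec = decode_residual(bits)
        r_dec = d_dec if prev_residual is None else d_dec + prev_residual
        decoded.append(reconstruct(x_hat, r_dec))
        prev_residual = r_dec            # closed loop: reuse the *decoded* residual
    return decoded, bitstream
```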
Our results demonstrate significant rate-distortion improvements compared to standard codecs such as HEVC and VVC, as measured by several classical and learning-based perceptual quality metrics. Furthermore, the proposed technique has the additional advantage of reducing temporal drift compared to previous frame-by-frame approaches. ## 2 Related Work Image animation models have been applied to compress talking head videos at ultra-low bitrates in conferencing-type applications [1, 2, 3, 4, 5, 6]. Different from other learning-based compression frameworks [8, 9, 10, 11, 12, 13, 14], the animation-based codecs in [3] and [4] propose architectures that use a variable number of motion keypoints to change the reconstruction quality within a small range of low bitrates. The deep animation codec (DAC) in our previous work [1] offers the possibility to vary the bitrate by creating a list of reference frames from which the best reconstruction is computed. Specifically, a new reference frame is added to the decoder buffer if all the available frames give reconstruction below a predefined threshold. However, this approach may introduce temporal jittering when adjacent animated frames are predicted from different reference frames. Using second-order motion coherence [6] introduces spatio-temporal stability in the decoded video, hence reducing the jittering. However, this architecture is still limited in terms of quality vari ability since it relies only on face animation. In our recent work [2], we proposed a hybrid coding architecture (HDAC) that uses a low-quality HEVC bitstream as side information to enhance the final result of the animation codec. While improving on previous methods, the use of this low-quality auxiliary stream limits in practice the possibility to reconstruct high-frequency details. In this work, we propose a residual deep animation codec (RDAC) that learns a compact representation of the residual between a frame and its animation-based prediction, and encodes this residual using temporal prediction. ## 3 Proposed Method A general scheme of the proposed residual deep animation codec is depicted in Fig. 1. The components of the proposed system are detailed as follows: Section 3.1 introduces the frame prediction and residual coding and Section 3.2 presents temporal learning in the residual space. ### Deep Image Animation Prediction and Residual Coding We leverage the principles developed in the First Order Model [7] for image animation and our prior works [1, 2] for animation-based prediction. The image animation process works by estimating a sparse set of motion landmarks using a keypoint detector (KPD) which is a UNet-like architecture from [7]. The keypoints are used by a motion transfer network (MTN) that generates the optical flow between a decoded reference image \(\mathbf{\tilde{X}}_{0}\) and the desired target \(\mathbf{X}_{t}\). Subsequently, the optical-flow map is applied to the feature space representation of the reference frame derived by the encoder of an autoencoder network. The deformed source features are assumed to be a close approximation of the target frame's feature representation and are used by a decoder network to produce the final animation \(\mathbf{\hat{X}}_{t}\). We build on this animation framework by including an encoder network that learns a latent representation of \(\mathbf{R}_{t}=\mathbf{X}_{t}-\mathbf{\hat{X}}_{t}\)_i.e._ the residual after animation as illustrated in Fig. 1. 
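A rough sketch of this prediction path is shown below; `kpd`, `mtn`, `encoder` and `decoder` stand for the networks named in the text (keypoint detector, motion transfer network and the feature autoencoder), and their interfaces here are assumptions for illustration rather than the actual trained modules.

```python
import torch
import torch.nn.functional as F

def animate_frame(x_ref, x_t, kpd, mtn, encoder, decoder):
    """Predict the target frame from a decoded reference frame; only the
    target keypoints need to be transmitted to repeat this at the decoder."""
    kp_ref = kpd(x_ref)                 # sparse keypoints of the reference
    kp_tgt = kpd(x_t)                   # sparse keypoints of the target
    flow = mtn(x_ref, kp_ref, kp_tgt)   # dense sampling grid / optical flow
    feats = encoder(x_ref)              # feature map of the reference frame
    warped = F.grid_sample(feats, flow, align_corners=True)  # deform features
    x_hat = decoder(warped)             # animated approximation of x_t
    residual = x_t - x_hat              # R_t, handed to the residual coder
    return x_hat, residual
```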
We start with the architecture of the variational autoencoder network [9] used for learned image compression frameworks. However, since the residual images have very sparse features we mitigate the potential encoding of a noisy latent representation by increasing the number of downsampling convolutional layers from 3 to 5 and symmetrically increase the number of upsampling layers. ### Using Temporal Correlation in the Residual Layer For a sequence of target frames \(\mathbf{X}_{1}\rightarrow\mathbf{X}_{T}\) animated from a single reference frame, \(\mathbf{X}_{0}\), we observe that the residual differences \(\mathbf{R}_{1}\rightarrow\mathbf{R}_{T}\) have a high temporal correlation. In this paper, we use a simple differential coding scheme to exploit this temporal correlation. Specifically, we compute the temporal difference signal between consecutive frame residuals, \(\mathbf{D}_{t}=\mathbf{R}_{t}-\mathbf{\hat{R}}_{t-1}\), as shown in Fig. 1. Note that, in general, more sophisticated prediction schemes are possible, that could bring additional temporal decorrelation, e.g., any dense or block-based motion compensated scheme. In this work, we demonstrated coding gains even with a suboptimal zero-motion temporal predictor, leaving the study of more advanced prediction schemes to future work. The difference signal \(\mathbf{D}_{t}\) is coded using an additional Figure 1: **Proposed RDAC framework**. From a single reference frame, motion keypoints and a compact residual layer, our framework reconstructs a video with high perceptual and pixel fidelity. Note that when previously decoded frame information is available, it is used to improve the prediction accuracy of the next target residual through temporal correlation learning. autoencoder network, which is trained together with the animation-based predictor and the reconstruction network. The decoding process consists in reconstructing the residual \(\mathbf{\tilde{R}}_{t}=\mathbf{\tilde{D}}_{t}+\mathbf{\tilde{R}}_{t-1}\). The reconstructed residual is then concatenated to the animation-based predictor \(\mathbf{\hat{X}}_{t}\) and passed as input to a reconstruction network that produces the final decoded frame \(\mathbf{\tilde{X}}_{t}\). The reconstruction network consists of 2 convolution layers and 3 ResNet blocks. ### Model Training We initialize the animation module with pre-trained models from [1]. The loss terms for image animation are the same as in [1, 7], while the rate-distortion loss \(\mathcal{L}_{RD}\) is derived as described in [9]: \[\mathcal{L}_{RD}=\lambda\cdot\text{MSE}(\mathbf{R_{t}},\mathbf{\hat{R}_{t}}) +\text{Rate} \tag{1}\] where the bitrate cost in bits-per-pixel (bpp) is computed from the entropy estimate of the residual latent representation. ## 4 Experiments and Results ### Evaluation Protocol We randomly select 30 video sequences from the VoxCeleb test set with minimum lengths of 128 frames. We note that changing the GOP size affects the average reconstruction quality of the video sequences. Therefore, we encode the sequences with GOP sizes 16, 32, 64, and 128 and select the best reconstruction point at each bitrate from a union of the computed metrics _i.e._ the convex hull of all the GOP configurations. The reference frame is encoded with QP 30 using the BPG codec (HEVC intra) and the motion keypoints as well as the compressed residuals are entropy coded using a context-adaptive arithmetic coder with a Prediction by Partial Match (PPM) model [15]. 
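The per-sequence selection over GOP sizes described above amounts to keeping, at each bitrate, the best quality reached by any configuration. A simple stand-in for that convex-hull selection is the Pareto-optimal subset of the pooled rate-quality points:

```python
def best_per_rate(points):
    """Keep a (bitrate, quality) point only if no cheaper point reaches at
    least the same quality; `points` pools the RD measurements of the
    GOP-16/32/64/128 encodings of one sequence."""
    front, best_quality = [], float("-inf")
    for rate, quality in sorted(points):
        if quality > best_quality:
            front.append((rate, quality))
            best_quality = quality
    return front
```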
HEVC and VVC (VTM-11) metrics are computed under low-delay configurations with high QP values to minimize bitrate. We also compare against the LPIPS-VGG metrics reported for BeyondKP [5] and Face-Vid2Vid [3] since they use comparable test conditions. Notice that for these last two methods, we only have a single bitrate point, since they do not support bitrate variability beyond 10 kbps. MSE loss is used at training time for residual learning. However, the other loss terms used in training the network optimize for perceptual quality. Therefore, we restrict our evaluation to use only perceptual metrics and multi-scale pixel fidelity metrics. ### RD Evaluation In Tab. 1, we note over 70% bitrate savings for perceptual-based metrics _i.e._ LPIPS [16], msVGG [17] and DISTS [18] as well as over 40% bitrate savings for pixel-based metrics over HEVC. In Fig. 2 we make a visual comparison of our proposed framework with HEVC and VVC in the low bitrate range. Fig. 3 illustrates the rate-distortion performance using the LPIPS metric. RDAC significantly improves performance of conventional video codecs over a wide range of bitrates, and it outperforms previous animation-based codecs which do not employ predictive coding. ### Ablation study and temporal drift An advantage of using a closed-loop prediction scheme for temporal coding of residuals is that it avoids the temporal drifting affecting previous open-loop schemes such as DAC. This is supported by Fig. 4, where we show the temporal reconstruction quality (measured with MS-SSIM) of our framework and DAC. We also investigate to which extent the temporal prediction contributes to the RD gains, over a frame-by-frame scheme to code the prediction residuals \(\mathbf{R}_{t}\). To this end, we remove the temporal feedback loop in Fig. 1, encoding the residuals as all Intra. Tab. 2 reports the gains of our proposed RDAC (with temporal prediction) over this simpler solution, demonstrating that reducing temporal correlation has a significant impact on coding performance. ### Computational complexity In Tab. 3, we make a complexity evaluation by comparing the coding or decoding time for a single interframe. The animation-based models DAC, HDAC, and our framework are evaluated on a CPU and GPU while the HEVC and VVC codecs are only evaluated on a CPU since they do not have \begin{table} \begin{tabular}{c c c c} \hline \hline _Metrics_ & _HDAC_ & _VVC_ & _HEVC_ \\ \hline **msVGG** & -55.10 & -66.65 & -74.99 \\ **DISTS** & -48.46 & -63.01 & -82.62 \\ **LPIPS-VGG** & -38.43 & -48.73 & -78.96 \\ **LPIPS** & -6.45 & -33.11 & -75.96 \\ **FSIM** & -42.89 & -16.02 & -63.10 \\ **MS-SSIM** & -56.97 & -20.11 & -52.05 \\ \hline \hline \end{tabular} \end{table} Table 1: **Bitrate savings (% BD-BR)** computed over 30 video sequences with 128 frames from VoxCeleb test set. The bitrate savings are measured with HDAC [2], HEVC and VVC codec as anchors \begin{table} \begin{tabular}{c c c c c} \hline \hline _DISTS_ & _LPIPS_ & _msVGG_ & _FSIM_ & _MS-SSIM_ \\ \hline -9.35 & -14.32 & -5.65 & -9.20 & -6.86 \\ \hline \hline \end{tabular} \end{table} Table 2: **Bitrate Savings (% BD-BR)** from our framework with temporal residual learning versus no temporal residual (10 sequences/64 frames) GPU acceleration capability. We note that our proposal adds only a moderate level of complexity relative to HEVC. However since we achieve bitrate savings greater than VVC, we consider this additional complexity as an acceptable tradeoff for the target application. 
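The BD-BR figures in Tables 1 and 2 follow the usual Bjontegaard delta-rate procedure. A common implementation, sketched below as a generic version rather than the authors' exact script, fits cubic polynomials to the (quality, log-rate) curves and averages their gap over the overlapping quality range; it needs at least four RD points per codec.

```python
import numpy as np

def bd_rate(rate_anchor, qual_anchor, rate_test, qual_test):
    """Average bitrate difference (%) of the test codec versus the anchor at
    equal quality; negative values mean bitrate savings."""
    log_ra, log_rt = np.log(rate_anchor), np.log(rate_test)
    p_a = np.polyfit(qual_anchor, log_ra, 3)    # log-rate as a cubic in quality
    p_t = np.polyfit(qual_test, log_rt, 3)
    lo = max(min(qual_anchor), min(qual_test))  # overlapping quality range
    hi = min(max(qual_anchor), max(qual_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```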
## 5 Conclusions Animation-based compression offers the possibility to transmit videos with very low bitrate. However, it is often limited to reconstructing the outputs at a fixed quality level, cannot scale efficiently when higher bandwidth is available, and does not compress efficiently temporal redundancies in the signal. In this paper, we propose a coding scheme that integrates image animation (re-interpreted as a frame predictor) with classical predictive coding principles, where we exploit both spatial and temporal dependencies to achieve a coding gain. Our RDAC codec outperforms previous methods and standard codecs by a large margin on a dataset of talking head videos, despite the very simple temporal prediction approach employed. **Acknowledgement:** This work was funded by Labex DigiCosme - Universite Paris-Saclay. This work was performed using HPC resources from GENCI-IDRIS \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{**CPU (Intel Corei7)**} & \multicolumn{2}{c}{**GPU (RTX 3090)**} \\ & _Enc._ & _Dec._ & _Enc._ & _Dec._ \\ \hline **HEVC** & 0.09 & 0.005 & - & - \\ **VVC** & 13.5 & 0.01 & - & - \\ \hline **DAC** & 0.04 & 0.35 & 0.03 & 0.02 \\ **HDAC** & 0.10 & 0.35 & 0.12 & 0.02 \\ **RDAC (Ours)** & 0.52 & 0.50 & 0.21 & 0.02 \\ \hline \hline \end{tabular} \end{table} Table 3: **Computational Complexity:** Time to encode/decode 1 frame (in seconds). The computation time is estimated for the highest RD point of our framework. Figure 4: **Reconstruction quality as a function of time:** RDAC temporal prediction avoids temporal drift, in contrast with open-loop schemes such as DAC, where quality degrades as the target frames get farther from the reference. Figure 3: **RD Performance:** Our method achieves considerable bitrate gains over a wider range of bitrates relative to both state-of-the-art video compression tools and the previously proposed frameworks. Figure 2: **Visual Comparison of coding results.** A qualitative comparison of our proposed coding framework shows significant quality improvement over HEVC and VVC at low bitrates. We observe fewer smoothing and blocking artifacts as well as better color and style preservation in the reconstructed frames with our framework.
2307.10695
Self2Self+: Single-Image Denoising with Self-Supervised Learning and Image Quality Assessment Loss
Recently, denoising methods based on supervised learning have exhibited promising performance. However, their reliance on external datasets containing noisy-clean image pairs restricts their applicability. To address this limitation, researchers have focused on training denoising networks using solely a set of noisy inputs. To improve the feasibility of denoising procedures, in this study, we proposed a single-image self-supervised learning method in which only the noisy input image is used for network training. Gated convolution was used for feature extraction and no-reference image quality assessment was used for guiding the training process. Moreover, the proposed method sampled instances from the input image dataset using Bernoulli sampling with a certain dropout rate for training. The corresponding result was produced by averaging the generated predictions from various instances of the trained network with dropouts. The experimental results indicated that the proposed method achieved state-of-the-art denoising performance on both synthetic and real-world datasets. This highlights the effectiveness and practicality of our method as a potential solution for various noise removal tasks.
Jaekyun Ko, Sanghwan Lee
2023-07-20T08:38:01Z
http://arxiv.org/abs/2307.10695v1
# Self2Self+: Single-Image Denoising with Self-Supervised Learning and Image Quality Assessment Loss ###### Abstract Recently, denoising methods based on supervised learning have exhibited promising performance. However, their reliance on external datasets containing noisy-clean image pairs restricts their applicability. To address this limitation, researchers have focused on training denoising networks using solely a set of noisy inputs. To improve the feasibility of denoising procedures, in this study, we proposed a single-image self-supervised learning method in which only the noisy input image is used for network training. Gated convolution was used for feature extraction and no-reference image quality assessment was used for guiding the training process. Moreover, the proposed method sampled instances from the input image dataset using Bernoulli sampling with a certain dropout rate for training. The corresponding result was produced by averaging the generated predictions from various instances of the trained network with dropouts. The experimental results indicated that the proposed method achieved state-of-the-art denoising performance on both synthetic and real-world datasets. This highlights the effectiveness and practicality of our method as a potential solution for various noise removal tasks. ## Introduction In image denoising, a clean image \(x\) is recovered from a noisy image \(y\). This operation is frequently modeled as \(y=x+n\), where \(n\) represents the measurement noise. Typically, \(n\) is assumed to be additive white Gaussian noise (AWGN) with a standard deviation \(\sigma\). Neural networks (NNs) have demonstrated excellent performance in low-level vision tasks, including image denoising and super-resolution. The prevalent approach for training an image denoising NN involves using pairs of synthetic noisy observations \(y_{i}\) and clean reference \(x_{i}\). Using the trainable model parameter vector \(\theta\) and a vast paired dataset, a denoising model \(f_{\theta}\) was trained by solving the following equation: \[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{i}\mathcal{L}(f_{\theta}(y_{i }),x_{i}), \tag{1}\] where \(\mathcal{L}(\cdot,\cdot)\) denotes the loss function that computes the distance between the two images. In this method, retaining abundant training samples yields a superior denoising performance; however, this process can occasionally be costly, challenging, or even unattainable. To address this issue, self-supervised methods using only noisy external images have been introduced. In the Noise2Noise (N2N) method [1], multiple pairs of two noisy images acquired from the same scene are used for training. The Noise2Void (N2V) [13] and Noise2Self (N2S) [2] methods adopt the _blind-spot_ strategy to train self-supervised models that prevent models from learning an identity mapping. However, this method is still not feasible for removing real-world noise satisfying the aforementioned prerequisites. First, acquiring numerous noisy images per incident or obtaining an external image that accurately represents the target noise distribution can be challenging and expensive. Second, regarding noise distribution, methods that rely on noise model assumptions often struggle to perform well. This is because the actual noise distribution in such images is typically unknown and can deviate significantly from the assumed noise model. Finally, the _blind-spot_ strategy incurs high computational cost, which poses a significant limitation for real-world applications. 
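As a schematic illustration of the blind-spot idea (in the spirit of N2V, not the exact published implementation): a small random subset of pixels is overwritten with values taken from nearby pixels, the network predicts the original noisy values at those positions, and the loss is evaluated only there.

```python
import torch

def blind_spot_step(model, noisy, mask_frac=0.01):
    """One schematic blind-spot training step on a (B, C, H, W) batch of
    noisy images; no clean targets are required."""
    b, c, h, w = noisy.shape
    mask = (torch.rand(b, 1, h, w, device=noisy.device) < mask_frac).float()
    # Shifted copy of the image as a crude stand-in for "value of a
    # randomly chosen neighbouring pixel".
    neighbour = torch.roll(noisy, shifts=(1, 1), dims=(2, 3))
    corrupted = noisy * (1 - mask) + neighbour * mask
    prediction = model(corrupted)
    loss = (((prediction - noisy) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    return loss
```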
Therefore, methods have been proposed that learn solely from the input image without any prerequisites to overcome these problems. Ulyanov _et al._[12] introduced the so-called _deep image prior_ (DIP), in which a deep learning model was used to extract an image prior. Xu _et al._[14] proposed a noise-as-clean strategy to resolve domain disparity in image priors and noise statistics between the training and test data. In the Self2Self (S2S) method [15], Bernoulli sampling and dropout are used to improve performance and reduce prediction variance. In this study, we developed the Self2Self+ (S2S+), a self-supervised image-denoising framework inspired by Self2Self [15], which is trained only on the input image. Specifically, using Bernoulli sampling and dropout [1] as the base scheme, we introduced a gated convolution (GConv) [16] and image quality assessment (IQA) loss function. Gated convolution learns a soft mask from the data and overcomes the partial convolution (PConv) [17] phenomenon, in which all channels in the same layer share an identical mask. Based on no-reference IQA (NR-IQA), the IQA loss function is adopted to maneuver the training of the deep learning model by minimizing the image quality score dif ference. The contributions of our study are as follows: * A single-image self-supervised learning model was proposed to mitigate the limitation of requiring numerous training images consistent with the target data. * This method exhibited effective performance on both synthetic and real-world datasets, outperforming state-of-the-art single-image self-supervised methods. This indicates its strong denoising ability and generalization capability. ## Related Work ### Non-Learning Image Denoisers Various non-learning-based denoisers utilize predefined priors to model the noise distribution, which helps guide the denoising process. A commonly used prior in image denoising is the self-similarity prior obtained using non-local methods [1, 1, 13]. The _NL-means_[1] and CBM3D [1] methods exhibit excellent performance by presuming the existence of similar patches in a single image and using them in the noise removal process. ### Image Denoisers Trained with Noisy-Clean Image Pairs Image denoising methods based on NNs have rapidly evolved. Generally, a supervised learning method with a set of noisy-clean pairs is used for training. In the DnCNN method [14], a combination of an NN and residual learning strategy is used for image denoising, which considerably exceeds the denoising performance of non-learning methods. Since the introduction of the DnCNN method, a large number of NNs [13, 14, 15, 16, 17] have been developed to enhance the denoising accuracy. ### Image Denoisers Trained with Numerous Noisy Images Gathering noisy-clean pairs for supervised learning is difficult and expensive, which restricts the use of supervised denoisers. Therefore, the N2N method [10] uses pairs of two noisy images of the same incident for training, rather than using noisy-clean pairs. By assuming that the noise characteristics of the noisy-noisy pair are independent, the N2N method exhibits a similar performance to supervised denoising NNs. This is because the expectation of the mean squared error of the noisy-noisy pair is identical to that of the noisy-clean pair. However, assembling numerous noisy-noisy pairs remains challenging. Therefore, based on the _blind-spot_, Krull _et al._[11] proposed the N2V method, which only utilized unorganized noisy images for training. 
This method forces an NN to predict each pixel based on the adjacent pixels. Therefore, the method prevents the NN from learning an identity mapping, which is considered as a severe overfitting problem. Specifically, the loss is computed only on the image pixels that are replaced by the values of their randomly selected neighboring pixels. In the N2S method [1] and probabilistic extension of the N2V method (PN2V) [11], a framework similar to that of the N2V method was used. The external noisy images used for training should possess relevant content and noise statistics that are representative of the noise characteristic of the noisy image being restored. ### Image Denoisers Trained Only with A Single Noisy Image To address the abovementioned problem, denoising approaches with NNs using only noisy input images have been proposed. These networks represent the most adaptable approaches in real-world operations. In the DIP method [15], a generative NN is trained, in which a random input is projected onto a distorted image. This method assumes that priority over random sequences such as noise helps to understand useful image patterns. The S2S method [1] adopts Bernoulli-sampled instances of a noisy input image to produce multiple image pairs and avoid convergence to identity mapping. Moreover, dropout was used in both training and denoising stages to reduce the variance in the predictions. Inspired by the Neighbor2Neighbor method [12], Lequyer _et al._[1] introduced Noise2Fast (N2F) method, in which a novel down-sampling approach called _chequerboard downsampling_ was used to obtain a training set of four discrete images. This method trained the denoising NN by learning the mapping between adjacent pixels. ### Image Quality Assessment IQA plays a crucial role in computer vision tasks, serving as a reference indicator for evaluating image quality. IQA methods can be categorized into two main categories. The first category is full-reference IQA (FR-IQA), in which a degraded image is compared with the original undistorted image for quality assessment. The strength of FR-IQA is that visual sensitivity can be computed using the difference between the degraded and reference images. This process adequately helps to mimic the behavior of the human visual system (HVS). The most common methods are the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). To enhance the correlation with human perception, visual information fidelity (VIF) [18], multiscale SSIM (MS-SSIM) [10], feature similarity (FSIM) [14], and learned perceptual image patch similarity (LPIPS) [14] have been introduced. The second category is the no-reference IQA (NR-IQA) in which the quality of a degraded image is analyzed without using the original undistorted image. Compared with FR-IQA, NR-IQA mostly conducts the HVS process by relying on feature extraction. Thus, for this process, NNs are used to map the relationship between the image features and quality scores. Based on NNs, CNNIQA Kang et al. (2014), Deep-IQA Bosse et al. (2018), DBCNN Zhang et al. (2020), and PaQ-2-PiQ Ying et al. (2020) have been proposed. ## Methodology In this section, we first introduce the architecture of the proposed method and then present a detailed explanation of the single-image self-supervised training and denoising schemes. ### Neural Network Architecture The NN structure of the proposed method, wherein an autoencoder framework is employed, is shown in Figure 1. 
Given a noisy input image of size \(H\times W\times C\), the generated mask is concatenated with the input image to obtain the spatial information of the missing pixels, yielding an input of size \(H\times W\times 2C\). The encoder first expands the number of input channels to \(H\times W\times 48\) using two GConv Yu et al. (2019) layers, wherein LeakyReLU (LReLU) is used as the activation function. The feature map is then down-sampled using a max-pooling layer and input into a subsequent GConv layer. This process continues until the final output of the encoder reaches \(\frac{H}{32}\times\frac{W}{32}\times 48\). The number of channels for all outputs in the encoder is fixed at 48. A combination of nearest-neighbor interpolation with a scaling factor of two, a concatenation operation, and two vanilla convolutional layers with LReLUs are continuously used in the decoder. The concatenation operation accepts the up-sampled feature map and the output of the encoder, which has the same spatial size as the output of the decoder. Except for the last layer, all convolutional layers in the decoder are combined with dropout and have 96 output channels. For the final output, a single vanilla convolutional layer without dropout and LReLU transforms the feature map into \(H\times W\times C\) to match the input image size. ### Training Scheme To prevent the NN from converging on an identity mapping, a group of numerous Bernoulli-sampled instances of a noisy image \(y\) was generated. With instances of \(y\) defined as \(\{\hat{y}_{m}\}_{m=1}^{M}\), the sampled instance is computed as follows: \[\hat{y}:=b\odot y, \tag{2}\] where \(b\) denotes a binary Bernoulli vector instance, in which a Bernoulli distribution with \(p\in(0,1)\) is used to independently sample the pixel values, and \(\odot\) represents element-wise multiplication. Each \(m\) in a group of image pairs \(\{(\hat{y}_{m},\bar{y}_{m})\}_{m=1}^{M}\) is formulated as follows: \[\hat{y}_{m}:=b_{m}\odot y;\qquad\bar{y}_{m}:=(1-b_{m})\odot y. \tag{3}\] Here, a set of image pairs was used to train the NN by minimizing the corresponding self-supervised loss function. \[\mathcal{L}_{self-supervised}=\sum_{m=1}^{M}\lVert f_{\theta}(\hat{y}_{m})-\bar{y }_{m}\rVert_{b_{m}}^{1}, \tag{4}\] where \(\theta\) denotes the network parameter vector and \(f_{\theta}\) is the denoising NN. Unlike the S2S method Quan et al. (2020), we used the sum absolute error (SAE) instead of the sum squared error (SSE) to compute the loss, where \(\lVert\cdot\rVert_{b}^{1}\) is equal to \(\lVert(1-b)\odot\cdot\rVert_{1}^{1}\). The aggregation of the loss function over the entire pair calculates the difference over all image pixel values owing to the random selection of masked pixels using a Bernoulli process. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Probability & Training Step \\ \hline \hline Synthetic & CBSD68 & 0.4 & 4000 \\ \hline Real-world & SIDD & 0.9 & 1000 \\ noise & PolyU & 0.7 & 5000 \\ \hline \hline \end{tabular} \end{table} Table 1: Probability of the dropout layers and Bernoulli sampling, and number of training steps of our method selected for each dataset. Figure 1: Proposed S2S+ architecture for single-image self-supervised learning. Moreover, assuming that the noise components are independent and have zero mean, the expectation of the loss function defined in Eq. 
(4) with respect to noise \(n\) for an arbitrary \(f_{\theta}\) is expressed as follows: \[\mathbb{E}_{n}\left[\sum_{m=1}^{M}\|f_{\theta}(\hat{y}_{m})-\bar{y}_{m}\|_{b_{m} }^{1}\right]=\begin{cases}\sum_{m=1}^{M}\|f_{\theta}(\hat{y}_{m})-x\|_{b_{m}}^{ 1},\\ if\ f_{\theta}(\hat{y}_{m})>y\\ \sum_{m=1}^{M}\|x-f_{\theta}(\hat{y}_{m})\|_{b_{m}}^{1}.\\ otherwise\end{cases} \tag{5}\] Using the pairs of Bernoulli-sampled instances \(\{\hat{y}_{m},\bar{y}_{m}\}\) is similar to using the pairs of sampled instances \(\hat{y}_{m}\) and cleaning reference \(x\) to train the NN with sufficient pairs. Please refer to the supplementary material for further details. In addition to the self-supervised loss function, we introduced an IQA loss function to improve the image restoration process. Because NR-IQA methods only use input images to compute quality scores, they can be used as IQA loss functions for single-image learning tasks. Specifically, we used the PaQ-2-PiQ (Ying et al., 2020) method, pretrained with the KonIQ-10k dataset (Hosu et al., 2020). To use the local and global features of the input image, this method cropped multiple patches of various sizes from the image to calculate the mean opinion score (MOS), which was normalized to a range of 0 to 100. Because images with satisfactory qualities exhibit a higher MOS, the IQA loss function computes the SSE between the predicted score of a denoised image and 100. Thus, using the NN implemented in the PaQ-2-PiQ method, denoted by \(f_{P2P}\), the IQA loss function is expressed as the following equation. \[\mathcal{L}_{IQA}=\sum_{m=1}^{M}\|100-f_{P2P}(f_{\theta}(\hat{y}_{m}))\|_{2}^{2}. \tag{6}\] Our full loss function is a combination of \(\mathcal{L}_{self-supervised}\) and \(\mathcal{L}_{IQA}\), which is formulated as follows: \[\mathcal{L}=\mathcal{L}_{self-supervised}+\lambda_{IQA}\mathcal{L}_{IQA}, \tag{7}\] where the weight of the IQA loss function was empirically chosen as given below. \(\lambda_{IQA}=2\times 10^{-8}\). ### Denoising Scheme Similar to the S2S method, we adopted dropout layers in the denoising NN to reduce prediction variance. Because \begin{table} \begin{tabular}{c|c c c c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Non-learning or single-image learning methods} & \multicolumn{4}{c}{Dataset-based deep learning methods} \\ \cline{2-11} & LPF & CBM3D & DIP & S2S & N2F & Ours & DnCNN & RED30 & N2V & N2N \\ \hline Noise Level & \multicolumn{4}{c|}{PSNR (dB)} \\ \hline \(\sigma=15\) & 24.01 & 33.20 & 27.06 & 28.18 & 28.00 & **29.97** & 33.10 & 33.38 & 27.34 & 33.27 \\ \(\sigma=25\) & 23.38 & 30.16 & 26.69 & 27.24 & 26.68 & **28.71** & 30.68 & 30.87 & 26.61 & 30.42 \\ \(\sigma=50\) & 21.46 & 25.86 & 24.76 & 24.39 & 24.07 & **26.51** & 27.62 & 27.80 & 24.82 & 26.49 \\ \hline Noise Level & \multicolumn{4}{c|}{SSIM} \\ \hline \(\sigma=15\) & 0.7846 & 0.9592 & 0.8638 & 0.9125 & 0.9008 & **0.9307** & 0.9595 & 0.9614 & 0.884 & 0.962 \\ \(\sigma=25\) & 0.7421 & 0.9251 & 0.8552 & 0.8857 & 0.8631 & **0.9076** & 0.9325 & 0.9356 & 0.8514 & 0.9328 \\ \(\sigma=50\) & 0.6205 & 0.8399 & 0.8064 & 0.786 & 0.7773 & **0.8557** & 0.8774 & 0.8831 & 0.7916 & 0.8634 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative results of our method and other baselines on _CBSD68_ corrupted by AWGN with noise levels \(\sigma=\{15,25,50\}\). Our results are marked in **bold**. Figure 2: Qualitative results of our method and other baselines on _CBSD68_ corrupted by AWGN with a noise level \(\sigma=\{15,25,50\}\). 
Our results are marked in **bold**. dropout is still valid at the denoising stage, various NNs can be produced from the trained NN, which provides estimators with a certain measure of independence. In the denoising stage, initiating a dropout on the constructed layers of the trained NN \(f_{\theta_{s}}\), creates different NNs, denoted by \(f_{\theta_{1}},\cdots,f_{\theta_{N}}\). Furthermore, the Bernoulli-sampled instances of \(y\) are provided as the input of the generated NNs to obtain the restored images \(\hat{x}_{1},\cdots,\hat{x}_{N}\). To acquire the final reconstructed image \(x^{*}\), the restored images were averaged by the following computation: \[x^{*}=\frac{1}{N}\sum_{n=1}^{N}\hat{x}_{n}=\frac{1}{N}\sum_{n=1}^{N}f_{\theta_ {n}}(b_{M+n}\odot y). \tag{8}\] In this method, we used 500 Bernoulli-sampled instances to obtain the final recovered image. ## Experiment In this section, the implementation details are presented. Then, the proposed method is extensively evaluated by utilizing state-of-the-art denoising methods with synthetic and real-world noisy images. Finally, an ablation study is conducted to evaluate the effectiveness of GConv and introduced loss functions. ### Implementation Details Throughout the experiments, all GConv and vanilla convolutional layers had a kernel size of \(3\times 3\), strides of 1, and reflection padding of length 2 to reduce artifacts. The hyperparameter of each LReLU was set to 0.2. We used the Adam [10] optimizer with a learning rate initialized to \(4\times 10^{-4}\). The probabilities of dropout layers and Bernoulli sampling, as well as the training steps selected for each dataset, are listed in Table 1. The proposed method was implemented using the PyTorch [10] framework and an NVIDIA GEFORCE RTX 3090 Ti GPU. ### Evaluation on Synthetic Noise Removal For synthetic noise experiments, we used the DIV2K dataset [1], which contained 800 images with 2K resolution, to train dataset-based deep learning methods. To develop a training dataset, we randomly cropped \(256\times 256\) patches and added an AWGN with noise levels \(\sigma=\{15,25,50\}\) to all patches. For testing, we used the CBSD68 dataset used in [11] with 68 images. For the baseline comparisons, we selected non-learning or single-image learning methods, that are, low-pass filtering (LPF), CMB3D [12], DIP [21], S2S [13], and N2F [1], and dataset-based methods, namely, DnCNN [10], RED30 [14], N2V [15], and N2N [12], for performance comparison. As illustrated in Figure 2, non-learning and single-image learning methods leave residual noise on the wall and window chassis. Notably, the DIP method is unable to retain texture because of excessive blurring. However, the proposed method effectively removed noise while preserving structures, particularly compared with the S2S method. As listed in Table 2, our method Figure 3: Qualitative results of our method and other baselines on _SIDD_. surpassed the performance of non-learning or single-image learning methods, including, LPF, DIP, S2S, and N2F, and N2V, a dataset-based deep learning method, by at least 2.24 dB in PSNR and 0.0406 in SSIM. Furthermore, when considering a noise level of \(\sigma=50\), our method outperformed the CBM3D and N2N methods, which demonstrates the denoising robustness of the proposed method against strong noise levels. We speculated that using GConv, SAE in the self-supervised loss function induces effective noise removal, whereas IQA loss function helps to retain the overall texture. 
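Pulling Eqs. (2)-(4) and (8) together, the training-and-averaging scheme can be sketched as follows. This is a simplified PyTorch illustration of the stated equations: `model` is assumed to take the masked image together with its mask and to contain dropout layers (so `model.train()` keeps them active at inference), and the IQA term of Eq. (7) is omitted for brevity.

```python
import torch

def s2s_plus_step(model, y, p=0.4):
    """One self-supervised step: Bernoulli-mask the noisy image y, predict
    it, and take the L1 loss only on the held-out pixels (Eqs. 2-4)."""
    b = (torch.rand_like(y) < p).float()       # Bernoulli mask b_m
    y_in, y_target = b * y, (1 - b) * y        # sampled instance and its complement
    prediction = model(y_in, b)
    return torch.abs((prediction - y_target) * (1 - b)).sum()

@torch.no_grad()
def s2s_plus_denoise(model, y, n_samples=500, p=0.4):
    """Eq. (8): average the outputs of dropout-perturbed networks applied to
    Bernoulli-sampled instances of the noisy input."""
    model.train()                              # keep dropout active at inference
    accumulator = torch.zeros_like(y)
    for _ in range(n_samples):
        b = (torch.rand_like(y) < p).float()
        accumulator += model(b * y, b)
    return accumulator / n_samples
```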
Please refer to the supplementary material for more qualitative results. ### Evaluation on Real-World Noise Removal For the real-world noise experiments, we used two real-world datasets with CMB3D [1], DIP [21], S2S [14], and RED30 [15] selected as the baseline methods for performance comparisons. First, we used the SIDD dataset [1], which was compiled using five smartphone cameras for ten static scenes. For the dataset-based deep learning methods, we used the SIDD-Medium dataset, which encompasses 320 pairs of noisy and corresponding reference images with 4K or 5K resolutions. All images were randomly cropped to generate \(256\times 256\) patches. For testing, 1280 cropped patches of size \(256\times 256\) were collected from the SIDD validation dataset. As shown in Figure 3, all baselines, except for RED30, exhibited residual noise and artifacts. In particular, S2S did not sufficiently eliminate noise, and the resulting blur degraded the quality of letters in the output. In contrast, the proposed method eliminated intense noise while accurately maintaining the colors and structures of the letters. Please refer to the supplementary material for more qualitative results. Next, we utilized the PolyU dataset [16], which comprises 40 different static scenes captured by five cameras. We cropped 100 regions of \(512\times 512\) from these scenes and randomly selected 70 images cropped to \(256\times 256\) patches to train the dataset-based deep learning methods. The remaining 30 images were used for testing. As shown in Figure 4, all baselines, apart from RED30, left residual noise or excessively smoothed the images, with the S2S method producing the least distinct lines on the leaves. In contrast, our method depicted sharp lines with a well-preserved leaf texture. Please refer to the supplementary material for more qualitative results. The quantitative metrics of both datasets are reported in Table 3, which compares our method's generalized denoising performance with that of several baselines. ### Ablation Study To determine the effectiveness of the novel methodologies implemented with S2S+, we performed an ablation study for a comprehensive analysis. In particular, we considered the effect of GConv, IQA loss function, and dimensions of the self-supervised loss function. First, to analyze the effectiveness of GConv, we integrated only GConv into the base network. As presented in Table 4, PSNR and SSIM are en Figure 4: Qualitative results of our method and other baselines on _PolyU_. hanced by 0.23 dB and 0.0108, respectively, as the vanilla convolution is replaced by the GConv. This phenomenon indicates that filling in missing pixel values by adopting GConv can be used to predict appropriate values and reduce artifacts using the prior spatial information from the generated mask. Subsequently, we added \(L_{IQA}\) to the self-supervised loss function to verify its effectiveness. The results with \(L_{IQA}\) achieved superior outcomes for both PSNR and SSIM by 0.04 dB and 0.0011, respectively. Therefore, integrating \(L_{IQA}\) in training reduces the MOS difference distance. Moreover, this phenomenon encourages the NN to obtain more acceptable results for HVS, which eventually increases the values of FR-IQA metrics. Finally, the dimensions of \(\mathcal{L}_{self-supervised}\) are examined. When SSE, \(L_{2-ss}\), is used instead of SAE, \(L_{1-ss}\), PSNR and SSIM both decrease considerably by 1.25 dB and 0.0252, respectively, which corresponded to the lowest outcome. 
Thus, we speculate that using \(L_{1-ss}\) in our method prevents excessive blurring and improves high-frequency restoration. ## Conclusion We proposed a novel single-image self-supervised deep learning method. The method does not require any prerequisites to construct the training dataset because only a noisy input image is used for training. Based on the S2S method [14], we used a dropout-based scheme in both the training and denoising stages to reduce the variance of the prediction and prevent overfitting. Moreover, we used gated convolution at the encoder to replace the removed pixels with appropriate values by learning a soft mask. We also introduced the IQA loss function to generate results that were more adequate for HVS. Experiments on synthetic and real-world noise removal indicate that the proposed method outperforms other non-learning or single-image learning-based methods and produced images with less residual noise and fewer artifacts. Therefore, the proposed method is a promising solution for practical denoising problems. \begin{table} \begin{tabular}{c c c c|c c} \hline GConv & \(L_{IQA}\) & \(L_{1-ss}\) & \(L_{2-ss}\) & PSNR (dB) & SSIM \\ \hline ✗ & ✗ & ✓ & ✗ & 26.70 & 0.8534 \\ ✓ & ✗ & ✓ & ✓ & 26.96 & 0.8642 \\ ✓ & ✓ & ✓ & ✗ & **27.00** & **0.8653** \\ ✓ & ✓ & ✗ & ✓ & 25.75 & 0.8401 \\ \hline \end{tabular} \end{table} Table 4: Ablation study on _CBSD68_ corrupted by AWGN with a noise level \(\sigma=50\). GConv: gated convolution. \(L_{IQA}\): IQA loss function. \(L_{1-ss}\): SAE in the self-supervised loss function. \(L_{2-ss}\): SSE in the self-supervised loss function. The best results are marked in **bold**. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{SIDD Dataset} \\ \cline{2-6} & \multicolumn{2}{c|}{Non-learning or single-image learning methods} & \multicolumn{2}{c}{Dataset-based deep learning methods} \\ \cline{2-6} & CBM3D & DIP & S2S & Ours & RED30 \\ \hline PSNR (dB) & 31.71 & 29.43 & 30.84 & **34.11** & 38.61 \\ \hline SSIM & 0.7825 & 0.7338 & 0.7263 & **0.8903** & 0.9517 \\ \hline \end{tabular} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{PolyU Dataset} \\ \cline{2-6} & CBM3D & DIP & S2S & Ours & RED30 \\ \hline PSNR (dB) & 36.74 & 36.06 & 35.30 & **37.33** & 37.65 \\ \hline SSIM & 0.9755 & 0.9785 & 0.9472 & **0.9815** & 0.9823 \\ \hline \end{tabular} \end{table} Table 3: Quantitative results of our method and other baselines on _SIDD_ and _PolyU_. Our results are marked in **bold**. ## Additions to section 3: Methodology ### Gated Convolution Generally, vanilla convolutions are used for image inpainting tasks [16, 15] to fill in missing regions such that the generated results are visually and semantically acceptable. Given the input feature map with the _C-channel_, each pixel at \((i,j)\) in the output map with the _C'-channel_ of the vanilla convolution is formulated as follows: \[O_{i,j}=\sum_{\Delta i=-k^{\prime}_{h}}^{k^{\prime}_{h}}\sum_{\Delta j=-k^{ \prime}_{w}}^{k^{\prime}_{w}}W_{k^{\prime}_{h}+\Delta i,k^{\prime}_{w}+\Delta j }\cdot I_{i+\Delta i,j+\Delta j}, \tag{9}\] where \(j\), \(i\) denote x-axis, y-axis of the output map, \(k_{h}\), \(k_{w}\) represent the kernel size, \(k^{\prime}_{h}=\frac{k_{h}-1}{2}\), \(k^{\prime}_{w}=\frac{k_{w}-1}{2}\), \(W\) are convolutional filters and \(I_{i+\Delta i,j+\Delta j}\), \(O_{i,j}\) represent the input and output feature map. 
However, visual artifacts, such as blurriness, color inconsistency, and unnatural responses beiseging holes, are induced because vanilla convolutions use the same filters on all valid, invalid, and mixed pixels. To mitigate this problem, partial convolution (PConv) [15] was introduced, in which a masking and re-normalization process were adopted to force the convolution to be dependent only on valid pixels. Therefore, PConv was computed as follows: \[O_{i,j}=\begin{cases}\sum\sum W\cdot(I\odot\frac{M}{sum(M)}),&if\ sum(M)>0\\ 0,&otherwise\end{cases} \tag{10}\] where \(M\) is a binary mask in which 0 and 1 denote the invalid and valid pixels at location \((i,j)\), respectively, and \(\odot\) represents element-wise multiplication. The following rule is applied to \(M\) for refurbishment after each PConv operation: \(m^{\prime}_{i,j}=1\) and \(iif\ sum(M)>0\). However, PConv still has some limitations of invalid pixels in \(M\) are steadily transformed into a value of 1 as the layers deepen, and all channels in the same layer share an identical mask. Yu _et al._[21] introduced gated convolution (GConv) to learn soft masks from data instead of using unlearnable hard-gating mask, as in PConv. GConv is expressed as follows: \[\begin{split} Gating_{i,j}=\sum\sum W_{g}\cdot I,\\ Feature_{i,j}=\sum\sum W_{f}\cdot I,\\ O_{i,j}=\phi(Feature_{i,j})\odot\sigma(Gating_{i,j}),\end{split} \tag{11}\] where \(W_{g}\) and \(W_{f}\) represent two convolutional filters, \(\sigma\) is a sigmoid function used to project output values between zero and one, and \(\phi\) denotes an activation function. Because the pixels of the noisy input image are randomly masked using Bernoulli sampling, the NN must fill in missing values, as in the image inpainting task, to restore the image. Therefore, GConv is suitable for our method, and we used it only at the encoder, because the NN can fill in most of pixel values in the encoding process. Moreover, using vanilla convolutions in the decoder reduces the number of model parameters and prevents the overfitting problem. ### Proof on Self-Supervised Loss Function The self-supervised loss function can be rewritten as follows: \[\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-\bar{y}_{m}\right\|_{b_{m}}^{1}= \sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-y\right\|_{b_{m}}^{1}. \tag{12}\] If \(f_{\theta}(\hat{y}_{m})>y\), Eq. (12) can be expressed as follows: \[\begin{split}&\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-y \right\|_{b_{m}}^{1}=\\ &\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-(x+n)\right\|_{b_{ m}}^{1}=\\ &\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-x\right\|_{b_{m}}^{ 1}-\sum_{m=1}^{M}\left\|n\right\|_{b_{m}}^{1}.\end{split} \tag{13}\] However, if \(f_{\theta}(\hat{y}_{m})<y\), Eq. (12) can be formulated as follows: \[\begin{split}&-\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-y \right\|_{b_{m}}^{1}=\\ &-\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-(x+n)\right\|_{b_{ m}}^{1}=\\ &\sum_{m=1}^{M}\left\|x-f_{\theta}(\hat{y}_{m})\right\|_{b_{m}}^{1 }+\sum_{m=1}^{M}\left\|n\right\|_{b_{m}}^{1}.\end{split} \tag{14}\] By considering the last term in Eqs. (13) and (14), the expectation is: \[\begin{split}&\mathbb{E}_{n}\left[\sum_{m=1}^{M}\left\|n\right\|_{b_{ m}}^{1}\right]=\\ &\mathbb{E}_{n}\left[\sum_{m=1}^{M}\left\|(1-b_{m})\odot n\right\|_{ 1}^{1}\right]=\\ &\sum_{m=1}^{M}\left\|(1-b_{m})\odot\mu\right\|_{1}^{1}=\\ &\sum_{m=1}^{M}\left\|(1-b_{m})\odot 0\right\|_{1}^{1}=0.\end{split} \tag{15}\] Combining Eqs. 
(13) and (15) are formulated as follows: \[\begin{split}&\mathbb{E}_{n}\left[\sum_{m=1}^{M}\left\|f_{\theta}( \hat{y}_{m})-\bar{y}_{m}\right\|_{b_{m}}^{1}\right]=\\ &\mathbb{E}_{n}\left[\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})- x\right\|_{b_{m}}^{1}\right]-\mathbb{E}_{n}\left[\sum_{m=1}^{M}\left\|n\right\|_{b_{ m}}^{1}\right]=\\ &\sum_{m=1}^{M}\left\|f_{\theta}(\hat{y}_{m})-x\right\|_{b_{m}}^{1 }.\end{split} \tag{16}\] Moreover, integrating Eqs. (14) and (15) are computed as follows: \[\begin{split}&\mathbb{E}_{n}\left[-\sum_{m=1}^{M}\|f_{\theta}(\hat{ y}_{m})-\bar{y}_{m}\|_{b_{m}}^{1}\right]=\\ &\mathbb{E}_{n}\left[\sum_{m=1}^{M}\|x-f_{\theta}(\hat{y}_{m})\|_{ b_{m}}^{1}\right]+\mathbb{E}_{n}\left[\sum_{m=1}^{M}\|n\|_{b_{m}}^{1}\right]=\\ &\sum_{m=1}^{M}\|x-f_{\theta}(\hat{y}_{m})\|_{b_{m}}^{1}.\end{split} \tag{17}\] Thus, coupling Eqs. (16) and (17) yields the following: \[\begin{split}&\mathbb{E}_{n}\left[\sum_{m=1}^{M}\|f_{\theta}( \hat{y}_{m})-\bar{y}_{m}\|_{b_{m}}^{1}\right]=\begin{cases}\sum_{m=1}^{M}\|f_{ \theta}(\hat{y}_{m})-x\|_{b_{m}}^{1},\\ if\ f_{\theta}(\hat{y}_{m})>y\\ &\sum_{m=1}^{M}\|x-f_{\theta}(\hat{y}_{m})\|_{b_{m}}^{1}.\\ otherwise\end{cases}\end{split} \tag{18}\] ## 4 Additions to section 4: Experiment ### Synthetic Noise Removal In addition, we present qualitative results on the CBSD68 dataset used in [11] with 68 images. Denoising experiments were performed on the additive white Gaussian noise (AWGN) with levels of \(\sigma=\{15,25,50\}\). Figure 5, 6, 7, 8, 9, and 10 show the qualitative results of our method along with those of selected baselines. Non-learning and single-image learning methods, including LPF, DIP [13], S2S [3], and N2F [10], and N2V [11], a dataset-based deep learning method, left residual noise and excessively smoothed images, with artifacts increasing in frequency along with noise levels. By contrast, our method covered a wide range of noise levels with fewer artifacts and more texture details. Moreover, the proposed method produced images with qualities similar to those of CBM3D [1] and N2N [12] with a noise level of \(\sigma=50\). ### Real-World Noise Removal We also present qualitative results on the real-world noise datasets. Qualitative results for the SIDD [1] are shown in Figure 11. The CBM3D, DIP, and S2S methods did not sufficiently eliminate noise, exhibiting significant degradation of image quality as well as color inconsistencies. In contrast, our method effectively denoised the images while retaining textural details. Qualitative results for the PolyU dataset [11] are shown in Figure 12. The CBM3D, DIP, and S2S methods did not sufficiently remove noise or significantly smoothed the image. In contrast, our method sufficiently eliminated noise while preserving the lines on the surface.
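As a companion to the gated-convolution formulation in Eq. (11), a minimal PyTorch sketch of such a layer could look as follows (an illustration, not the authors' code; the paper additionally uses reflection padding and LeakyReLU with slope 0.2):

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution (Eq. 11): a feature branch modulated by a learned
    soft mask produced by a parallel gating branch."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.gating = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.activation = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.activation(self.feature(x)) * torch.sigmoid(self.gating(x))
```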
2307.01940
An Adaptive Overcurrent Protection for Solar-based DC Microgrids Using IEC 61850
Over-Current (OC) protection is one of the pervasive protections in solar-based DC microgrids. Fast operation is a key advantage of its popularity. On the other hand, utilizing OC in DC microgrids has some challenges that are not in AC grids. Some of these challenges are related to the grounding approach of the DC microgrid, and others are related to the high rise time of the fault current in DC microgrids. Considering these challenges, an adaptive OC scheme with high selectivity and speed is presented in this paper. The proposed scheme is communication-assisted and relies on IEC 61850 protocol. In this scheme, different setting groups for each OC relay are defined, and based on the grid and fault conditions, a setting group is selected. This option is performed considering the data transferred via communication level using IEC 61850 protocol between relays. To evaluate the efficiency of the proposed scheme, simulations using MATLAB software and the experimental tests using OPAL-RT real-time simulator and Zenon software are presented.
Saeed Sanati, Maher Azzouz, Ahmed Awad
2023-07-04T22:07:02Z
http://arxiv.org/abs/2307.01940v1
# An Adaptive Overcurrent Protection for Solar-based DC Microgrids Using IEC 61850 ###### Abstract Over-Current (OC) protection is one of the most widely used protection schemes in solar-based DC microgrids, and its fast operation is a key reason for its popularity. On the other hand, applying OC protection in DC microgrids poses challenges that do not arise in AC grids. Some of these challenges are related to the grounding approach of the DC microgrid, and others are related to the fast rise of the fault current in DC microgrids. Considering these challenges, an adaptive OC scheme with high selectivity and speed is presented in this paper. The proposed scheme is communication-assisted and relies on the IEC 61850 protocol. In this scheme, different setting groups are defined for each OC relay, and a setting group is selected based on the grid and fault conditions. This selection is made using the data exchanged between relays at the communication level via the IEC 61850 protocol. To evaluate the efficiency of the proposed scheme, simulations in MATLAB and experimental tests using an OPAL-RT real-time simulator and Zenon software are presented. adaptive protection, DC microgrid, IEC 61850, solar-based microgrid, over current protection ## I Introduction Microgrids are local networks that contain both energy consumers and producers. They can work in grid-connected or islanded mode; thus, microgrids should be controlled and protected locally. Microgrids are divided into AC and DC categories. DC microgrids are becoming increasingly prevalent due to the rise in demand for DC power from traction systems, office equipment, household appliances, and lighting equipment, as well as the growth of DC power sources such as fuel cells, solar photovoltaic (PV) panels, and DC energy storage devices [1, 2]. DC microgrids do not encounter certain issues that are present in AC grids, such as synchronization, frequency control, reactive power control, and harmonics [3]. On the other hand, DC microgrids exhibit fault characteristics distinct from those of AC grids due to various factors, such as the low impedance of the DC system, the short length of transmission lines and cables, the presence of the DC link capacitor, and the grounding arrangement. Fault currents in DC microgrids have a large amplitude and short rise time; as a result, the fault current reaches its maximum value rapidly, in as little as 2 \(ms\). Additionally, grounding in DC microgrids may result in high fault impedance, leading to a reduction in fault current. Hence, fault detection by Over-Current (OC) protection is challenging [4]. Furthermore, current transformer saturation in DC microgrids exacerbates the fault detection issue [5-7]. During high-current DC faults, instrument transformers, especially current transformers, are prone to mislead the protection relays. High transient fault currents in DC microgrids can seriously damage converters, cables, transmission lines, circuit breakers (CBs), and other network equipment [8, 9]. One of the most common protection schemes for fault isolation in distribution networks is OC. However, due to the need for high operation speed in DC microgrids, protection coordination of OC relays is challenging. OC protection is divided into two schemes, directional and bidirectional. Bidirectional OC protects the areas both in front of and behind the relay, whereas directional OC only protects the area in front of the relay. Directional OC is more common because it provides better selectivity than bidirectional OC.
The OC scheme assumed in this paper is directional. Fig. 1 illustrates the coordination of a downstream and an upstream OC relay. If a fault occurs, as shown in Fig. 1(a), the downstream relay should trigger the trip command, and the upstream relay should not operate, so that a smaller portion of the grid experiences an outage. In other words, the relays should be coordinated, and the protection system should work with high selectivity. In this case, assuming the definite-time scheme, the settings of the OC relays should be adjusted as shown in Fig. 1(b). The time difference between \(t_{1}\) and \(t_{2}\) ensures that the relays operate selectively. In DC microgrids, due to the need for high-speed fault clearance, maintaining a minimal time difference \(\Delta t=t_{2}-t_{1}\) is crucial, even if it exacerbates coordination due to the potential for high-impedance faults arising from the unique grounding configuration. Reducing the time difference \(\Delta t\) may disturb the coordination of the protection system and reduce its selectivity, thus increasing unnecessary outages. The timing of fault clearance due to the operation of the relay \(R_{2}\) is shown in Fig. 2. Considering this timing, the minimum time required for coordination, \(\Delta t_{min}\), should be obtained according to (1) for selective protection. In (1), \(\Delta t_{min}\) is based on the periods demonstrated in Fig. 2. The tripping relay receives the trip command from the OC relay and amplifies the trip signal to energize the CB trip coil. \(t_{TR}\) is defined as the trip relay delay, as displayed in Fig. 2. \(t_{CBop}\) is the time required for the operation of the mechanical parts of the CB. \(t_{Arc}\) is the delay required for any arc between the CB contacts to be extinguished. \(t_{reset}\) is the time required to reset the OC relay, as depicted in Fig. 2. According to [10-13], \(t_{TR}\), \(t_{CBop}\), \(t_{Arc}\) and \(t_{reset}\) are around 4 ms, 15 ms, 5 ms, and 5 ms, respectively, for DC circuit breakers and OC relays with the standard inverse scheme. Therefore, \(\Delta t_{min}\) to ensure selectivity is equal to 29 ms. \[\Delta t_{min}=t_{TR}\,+\,t_{CBop}\,+\,t_{Arc}\,+\,t_{reset} \tag{1}\] In solar-based DC microgrids, the peak fault current can occur within a very short duration of just 2 ms. Consequently, the traditional coordination method in DC microgrids faces a challenge in balancing the need for rapid protection operation with coordination requirements. The protection of DC microgrids using OC relays was discussed in [1]. That method is fast and communication-assisted, but it can only be applied to power grids with a radial topology. In [14] and [15], OC protection was implemented for a DC microgrid with rectifiers that can limit fault currents. However, implementing such a protection method in more complex DC microgrid architectures may result in longer fault clearance times or unnecessary disconnections of larger parts of the grid during faults [10]. In [16], a framework was proposed based on the integration of unit-based protection, which has high sensitivity, speed, and selectivity, but it has low sensitivity to high-impedance faults [10]. In [17], a method was proposed based on adding a parallel LC filter to each pole so that resonance occurs at a specific frequency under fault conditions. A Discrete Wavelet Transform (DWT) was then utilized to extract this frequency for fault identification.
Despite its ability to operate at low fault current magnitudes, that method requires additional elements and is not fast enough to clear fast-rising faults. In [18], a statistics-based adaptive OC scheme for DC microgrids is proposed. That method is efficient under various operating scenarios, including instantaneous switching of sources or loads and cases where the fault impedance varies during the fault; however, it is applicable only to ring-type grids. In this paper, an adaptive OC scheme for solar-based DC microgrids is presented, which is suitable for all grid architectures and does not require adding extra power equipment to the DC microgrid. Also, this method does not require rectifiers capable of limiting the fault current. In the proposed method, several setting groups are defined for each relay by clustering the DC microgrid states and fault conditions. At any moment, only one of the setting groups, which guarantees the highest speed and the best selectivity, is activated for a relay. The setting group is selected based on the information exchanged between the protection relays at the communication level. Network conditions and the status of other relays are shared between relays at this level, and the communication protocol used is IEC 61850. Based on this protocol, the required information is exchanged in the form of Generic Object-Oriented Substation Event (GOOSE) messages.
Figure 1: OC relay placement and coordination, (a) upstream and downstream relay placement, and (b) relay coordination considering definite-time characteristics.
Figure 2: OC relay and CB timing during the fault condition.
Figure 3: Single line diagram of the investigated DC microgrid.
The proposed method is implemented on an interconnected DC microgrid, which consists of consumers, PV-based DGs, and storage devices. The investigated grid is shown in Fig. 3. This grid is a network developed from the 14-bus IEEE benchmark network [17, 19]. An experimental test was conducted to assess the effectiveness of the proposed method, and the results are presented alongside simulation results. ## II Principle of The Protection Coordination In The Proposed Method For protection coordination in the proposed method, several setting groups are defined for OC relays. The number of these setting groups depends on the DC microgrid topology, while the setting parameters for each group depend on the DC microgrid conditions and possible faults. Choosing which setting group is used by a relay is done via the exchange of information at the communication level. In case of a fault, each OC relay that has been picked up checks the pickup information of other relays. Then, it decides to issue a trip command instantaneously or after a delay \(\Delta t\) according to (1). For this purpose, two types of information are used: information about the status of DC microgrid switches and information about the operation of adjacent OC relays. In the following subsections, the effect of the microgrid operation status, the operation of the adjacent OC relays, and the protocol configuration are investigated.
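To make the role of (1) in this decision concrete, the following minimal sketch evaluates the coordination margin \(\Delta t_{min}\) that a relay would apply before issuing a delayed trip; the default delays are the representative values quoted in Section I and stand in for device-specific data.

```python
# Minimal sketch of the coordination margin of Eq. (1). The default delays are
# the representative values quoted in Section I (t_TR = 4 ms, t_CBop = 15 ms,
# t_Arc = 5 ms, t_reset = 5 ms); actual relay/CB data would replace them.
def coordination_margin_ms(t_tr=4.0, t_cb_op=15.0, t_arc=5.0, t_reset=5.0):
    """Return the minimum selective delay (in ms) between adjacent OC relays."""
    return t_tr + t_cb_op + t_arc + t_reset


if __name__ == "__main__":
    print(coordination_margin_ms())  # 29.0 ms, as stated in the text
```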
### _Effect of the grid operation condition_ The condition of grid operation includes the grid topology, the locations and sizes of loads and generators, energy storage devices, the state of CBs and disconnector switches, the state of the system grounding in the DC microgrid, the impedances and lengths of transmission lines and cables, and the condition of the connection point to the upstream network. In the proposed method, different operating conditions are compared in terms of the fault current. For conditions in which the minimum fault current differs by less than 10%, the same setting group is considered. To explain the proposed method, the solar-based DC microgrid shown in Fig. 3 is considered, which represents a modified version of the standard 14-bus IEEE grid [17, 19]. The implementation of the method on the R12 relay is explained as an example. The locations of sources, consumers, energy storage devices, OC relays, and CBs are demonstrated in Fig. 3, and the parameters of the investigated DC microgrid are given in Table I. The investigated DC microgrid voltage is \(\pm\)750 VDC for all areas (except buses \(B7\) and \(B8\)), considering the EU LVD 2006/95/EC guidelines [20]. For \(B7\) and \(B8\), the voltage level is 380 VDC, which is one of the most common DC voltage levels [21]. The grid is connected to the AC systems at two points, i.e., \(B1\) and \(B8\). \(B1\) is a slack bus with a capacity of 1.5 MVA, and a 300 kVA synchronous generator is connected to \(B8\). The voltage source converters at \(B1\) control the DC grid voltage level and operate at unity power factor. Other sources in the investigated grid are solar PV generators with capacities of 500 kW, 400 kW, and 300 kW placed at \(B6\), \(B3\), and \(B2\), respectively. Solar PV plants are connected to the grid via DC/DC boost converters. The solar-based power sources include PV arrays made of twenty series-connected modules per string, and the number of parallel strings is related to the source capacity. The maximum power of each module is 200 W and its voltage at the maximum power point is 29 V. The grounding system for the investigated DC microgrid is TN-S, the same as the grounding scheme recommended by most guidelines [17]. Usually, the OC relay's pickup current is set between twice the nominal line load and half of the minimum fault current in its protection zone [22]. There is a delay between the pickup and the trip command issued by the OC relay, as displayed in Fig. 2. This time is related to the OC protection scheme and its configuration and settings. The inverse definite minimum time (IDMT) characteristic is the most common scheme for OC relays. The pickup, drop, and trip timings in this scheme are given by (2) [23, 24]. \[t=T\times\left[\frac{k}{\left(\frac{I}{I_{g}}\right)^{\alpha}-1}+L\right] \tag{2}\] where \(t\) is the total operation time; \(k\), \(\alpha\), and \(L\) are factors given in Table II; \(I\) is the measured current; \(I_{g}\) is the pickup value, which is commonly set to the minimum fault current value; and \(T\) is the time multiplier setting, varying from 0.025 to 1.5 [23, 24]. The OC scheme is non-unit protection; thus, it does not have a defined operation zone. However, for the best selectivity, the relays should work in such a way that the faulted section is isolated and the fault is cleared with the minimum amount of outage. Therefore, the primary zone of each relay usually has a small overlap with the primary zone of the upstream or downstream relays.
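For illustration, the sketch below evaluates the IDMT characteristic of (2); the curve constants used (\(k=0.14\), \(\alpha=0.02\), \(L=0\), i.e., the common IEC standard-inverse values) are assumptions standing in for Table II.

```python
# Minimal sketch of the IDMT operating time of Eq. (2). The constants default
# to the common IEC standard-inverse values (k = 0.14, alpha = 0.02, L = 0),
# used here only as stand-ins for Table II.
def idmt_operating_time(i_measured, i_pickup, T=0.1, k=0.14, alpha=0.02, L=0.0):
    """Operating time t = T * (k / ((I / I_g)**alpha - 1) + L), in seconds."""
    if i_measured <= i_pickup:
        return float("inf")  # the relay does not pick up below the threshold
    return T * (k / ((i_measured / i_pickup) ** alpha - 1.0) + L)


# Example: a fault current of five times the pickup value with T = 0.1
print(round(idmt_operating_time(5.0, 1.0), 3))  # ~0.43 s
```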
For example, the selective primary zones for relays \(R12\), \(R23\), \(R24\), and \(R25\) are shown in Fig. 4, while the secondary zone of each OC relay is the same as the primary zone of its downstream relay. For example, the \(R12\) secondary zone is the same as the primary zones of \(R23\), \(R24\), and \(R25\). According to the protection zone of \(R12\), the minimum fault current will occur at the end of the primary protection zone, at the longest distance from the \(R12\) relay. To consider all the possible conditions for the state of the switches in the DC microgrid in Fig. 3, the minimum fault current values are obtained by modeling the grid in Simulink/MATLAB and simulating the conditions, as presented in Table III. The inverters in the simulations are modeled by connecting a thyristor D bridge and a thyristor Y bridge. The inverters are equipped with internal DC-fault and low-voltage AC protection. Double contingencies for line outages and source outages are also considered in Table III. In Fig. 5 and according to Table III, all the values of the minimum fault currents are categorized into different groups, called setting groups. The minimum fault current is the fault current peak value. In the traditional scheme for OC relays, a single minimum fault current value is one of the main parameters of the relay settings; that value is the minimum among all the values in Table III. However, in the proposed adaptive method, the required number of setting groups is defined instead of a single value. Each setting group represents minimum fault currents that differ by less than 10% and is obtained empirically. Therefore, according to Table III and Fig. 5, seven setting groups are defined for \(R12\). The highest minimum fault current is 881 \(A\). The difference between two minimum fault currents in consecutive groups should not exceed 10%; thus, the width of each setting group is defined as 85 \(A\). This setting group width leads to the seven setting groups shown in Fig. 5. Considering the grid operation condition, one of the setting groups in the relay is active at any time. For example, the first setting group is active in one of the following conditions:
1. Line L15 is out and all sources are operating;
2. Line L25 is out and S8 is disconnected;
3. Line L25 is out and S1 is disconnected;
4. Line L23 is out and S8 is disconnected;
5. Line L23 is out and S3 is disconnected.
### _Effect of adjacent relay operation_ In the proposed method, for fast and selective operation, instead of coordinating relays with a definite time, each OC relay uses the operation information of other OC relays to make decisions. For relays that are in the same direction as each other, the downstream relays share their operation information with the upstream relays. For relays in opposite directions, each relay shares its operation information with all relays and uses the information of downstream relays. It also uses the operation information of the adjacent relay in the opposite direction and its upstream relays. For example, considering Fig. 3, relay \(R12\) uses the operation information of \(R23\), \(R24\), and \(R25\) as its downstream relays. The operation information is shared continuously before and during the fault occurrence and includes the pickup and drop status of those relays.
Also, \(R12\) will use the operation information of relay \(R21\) as the opposite-direction adjacent relay and the operation information of relays \(R32\), \(R42\), and \(R52\) as upstream relays of relay \(R21\).
Fig. 4: Selective primary zones for R12 and its downstream OC relays considering the investigated DC microgrid.
Fig. 5: Different setting groups defined considering Table III.
Each relay will issue an instantaneous trip if it is picked up and the opposite-direction adjacent relay is picked up, according to Fig. 6. The proposed method is applicable to radial and meshed microgrids. Fig. 6 shows a simplified radial topology to display the faults that may be detected by \(R1\). In general, in the proposed method, each relay that is picked up works according to the following instructions:
1. If the opposite-direction adjacent relay is picked up, it issues an instantaneous trip command. Such a situation is described in Fig. 6(a).
2. If downstream same-direction and downstream opposite-direction relays are picked up, a trip command will be issued after time \(\Delta t\) according to (1). Such conditions are described in Figs. 6(b) and 6(c).
3. Other possible conditions will be considered as protection failure of other relays, and the instantaneous trip command will be issued. As an example, a protection failure is detected if none of the downstream opposite-direction relays are picked up.
The flowchart of the proposed method is shown in Fig. 7. At the beginning, the OC relay sends its status into the communication network via GOOSE messages and receives the status of other OC relays and the operational condition of the DC microgrid. Then, the setting group is selected according to the received information. Afterward, the relay uses the measured current to determine whether it is greater than the setting group threshold. If this criterion is passed and the opposite-direction adjacent relay is picked up, the process moves to the next step. In the next step, an instantaneous trip is issued if the downstream same-direction relay is picked up. However, the trip command is sent after a delay only if the opposite-direction relay is picked up. ### _Relay communications and protocol configuration_ The IEC 61850 protocol provides a base for high-speed peer-to-peer communication. This protocol improves control and protection communication in substations, power lines, power plants, and microgrids without imposing requirements for installing extra equipment [25, 26]. IEC 61850 can be implemented on the vast majority of protection systems that rely on communication infrastructure and are able to interoperate and communicate with other elements [27]. GOOSE and Sampled Value (SV) messages are the two main specific communication service mappings defined in IEC 61850. The SV messages are employed for the acquisition of raw values measured by instrument transformers, sensors, or measuring units. This service transfers digitized instantaneous quantities, such as primary currents and voltages, into multicast Ethernet frames [25]. GOOSE messages are unidirectional, time-critical, and multicast. The multicast messaging property allows all Intelligent Electronic Devices (IEDs) to send GOOSE messages that can carry both binary and analog values, although they are primarily used to indicate changes in the state of parameters, like CB states. Considering VLAN priority, these messages are used for fast protection and control information exchange between IEDs. GOOSE service is suitable for time-critical applications such as protection functions [25].
Fig. 6: Different possible fault conditions sensed by the \(R1\) relay.
Fig. 7: Proposed Algorithm.
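To make the decision flow of Fig. 7 and the instructions above concrete, the following minimal sketch combines the statuses received over GOOSE with the active setting group threshold; the variable names and the example numbers are purely illustrative assumptions and are not part of the IEC 61850 data model.

```python
# Illustrative sketch of the trip logic described above (Fig. 7). The boolean
# inputs stand for pickup statuses received via GOOSE; the threshold stands for
# the active setting group. All names and numbers here are hypothetical.
def trip_decision(i_measured, group_threshold, opposite_adjacent_picked_up,
                  downstream_same_dir_picked_up, downstream_opposite_dir_picked_up,
                  delta_t_ms=29.0):
    """Return (trip, delay_ms) once the relay has evaluated its measured current."""
    if i_measured < group_threshold:
        return False, None                      # relay is not picked up
    if opposite_adjacent_picked_up:
        return True, 0.0                        # rule 1: instantaneous trip
    if downstream_same_dir_picked_up and downstream_opposite_dir_picked_up:
        return True, delta_t_ms                 # rule 2: delayed trip per Eq. (1)
    return True, 0.0                            # rule 3: treated as protection failure


# Example: picked-up relay, downstream relays also picked up -> delayed trip
print(trip_decision(950.0, 850.0, False, True, True))  # (True, 29.0)
```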
In the proposed method for transferring GOOSE messages, the communication configuration is displayed in Fig. 8. The requirement for high-speed operation leads to the use of high-speed communication infrastructure in the proposed method. Considering the short line lengths in DC microgrids, optical fiber is a common choice for the communication network. The minimum proposed speed is 100 _Mbps_, which is readily available with active fiber-optic equipment such as network switches, routers, Synchronous Digital Hierarchy (SDH), and Plesiochronous Digital Hierarchy (PDH) systems. The GOOSE message transfer delay over a 100 _Mbps_ communication network is as short as 0.066 _ms_. However, this delay can reach 1.8 _ms_ depending on the chosen security algorithm [28]. Applying a security algorithm is not mandatory according to the IEC 61850 standard. The communication level in the proposed method is isolated from external communication networks; therefore, to increase the communication speed, security algorithms are not implemented on the GOOSE packets. ## III Real-time Simulations Of the Proposed Method The efficiency of the proposed method is evaluated with real-time simulations, using an OPAL-RT OP1400 real-time simulator and MATLAB. To simulate the transfer of the GOOSE messages, i.e., those related to the adjacent OC relays and grid operation states, Zenon automation software is used. The configuration of the test bed for the experimental results is displayed in Fig. 9, and a photograph of the implemented setup is shown in Fig. 10. The test grid for the real-time simulations is shown in Fig. 3, with details given in Table I. The proposed algorithm in Fig. 7 is implemented for relay \(R12\) using Zenon on a laptop. It should be noted that the proposed algorithm should be applied to all of the OC relays in the DC microgrid; relay \(R12\) serves as a sample case study in our experimental tests. The DC microgrid operation status and the other OC relays' operation statuses are modeled using MATLAB on a desktop computer and simulated using a real-time simulator. The studied scenarios are generated by changing the condition of the DC microgrid. The information about these changes, including the grid operation status and the OC relays' operation statuses, is transferred by GOOSE messages from OPAL-RT to \(R12\), which is simulated on a laptop. Afterward, the \(R12\) operation information, if any, is sent to OPAL-RT by GOOSE messages, and the DC grid status is updated accordingly. In the real-time simulations, according to Fig. 5, seven setting groups for \(R12\) are defined. The studied scenarios include pole-ground and pole-pole short circuits on \(L12\) and every adjacent power line, with each individual power generation source in the DC microgrid either absent or present. Regarding fault occurrence on \(L12\)'s adjacent power lines, two cases are tested: (a) correct protection operation and (b) protection failure. In the first case, \(R12\)'s adjacent OC relays operate before the operation of \(R12\), while in the second case, \(R12\) operates in its secondary zone for backup protection because of the failure of its adjacent OC relays. Considering these two case studies, Tables IV and V show the OC operation times for the proposed method in comparison with the standard inverse scheme for pole-pole faults.
Fig. 8: Communication configuration.
Fig. 9: The configuration of the test bed.
Fig. 10: The photograph of the implemented setup.
For the fault scenarios in these tables, the relay operation times of the proposed method are much lower than those of the standard inverse scheme. As shown in Table IV, \(R12\) correctly waits for other relays to operate in the fault scenarios where the fault location is in the secondary zone of the relay. However, if a protection failure is detected, the proposed method allows \(R12\) to operate faster than the standard inverse scheme to clear the fault, as displayed in Table V. The current waveform seen by \(R12\) and the operation sequences of the proposed method for the first case study, i.e., a pole-pole fault on \(L12\) without any power source outage, are shown in Fig. 11. The standard inverse scheme sequences are also displayed comparatively in Fig. 11. The effect of the inverters and converters and their protection functions is considered by modeling the internal topology of the converters/inverters and their internal protections, including DC-fault and low-voltage AC protections. The times of the CB and OC relay actions are displayed in Fig. 11. The relay pickup point, which occurs after fault inception, is displayed in Fig. 11 for the proposed method versus the standard inverse scheme. It can be seen that the relay pickup is faster for the proposed method. Similarly, the relay trip command is issued faster in the proposed method than in the standard inverse scheme. The whole process of relay operation for the proposed method is 3.01 ms faster than the standard inverse scheme, according to the timing displayed in Fig. 11. As depicted in Fig. 11 and Table IV, the proposed method is faster in fault detection/clearing as compared to the standard inverse scheme. For the case of adjacent protection failure, i.e., a pole-pole fault occurring on \(L23\) without any power source outage, the current waveform and the OC operation sequences are depicted in Fig. 12. As shown in Fig. 12, the fault clearance using the proposed method is 13 _ms_ faster than with the standard inverse scheme, as can be seen from the timing sequence of the relay operation and CB action displayed in Fig. 12. The real-time simulations are repeated considering a pole-ground fault, and the results are illustrated in Figs. 13-14. The condition of the DC microgrid is kept similar to that of the cases depicted in Figs. 11-12, where a fault occurs at \(L12\) while all power lines and power sources are operational. Fig. 13 displays the current waveform and the OC relay sequences in the case of correct adjacent relay operation. Fig. 14 shows those waveforms in the case of adjacent relay failure with the same prerequisites. According to Figs. 13-14, the fault clearance with the proposed method is 3 _ms_ and 12 _ms_ faster than that with the standard inverse scheme for the protection failure and normal protection operation cases, respectively.
Fig. 11: The current waveform and the OC operation sequences in the case of correct adjacent relay operation while a pole-pole fault occurs at \(L12\) without any power source outage.
Fig. 12: The current waveform and the OC operation sequences in the case of adjacent relay failure while a pole-pole fault occurs at \(L23\) without any power source outage.
## IV Conclusion In this paper, an adaptive OC protection scheme is presented. The proposed scheme is well suited to solar-based DC microgrids because of the specific characteristics of these grids, such as the fast rise of the fault current.
The proposed method uses the communication level to access shared information about the microgrid operation condition and the operation of adjacent OC relays. It also clusters the minimum fault current values into setting groups to improve protection coordination and speed. As demonstrated by real-time simulations, the proposed method is faster than the standard inverse scheme commonly used in OC protection.
2303.02766
Dissipative Capture of Planets Into First-Order Mean-Motion Resonances
The emergence of orbital resonances among planets is a natural consequence of the early dynamical evolution of planetary systems. While it is well-established that convergent migration is necessary for mean-motion commensurabilities to emerge, recent numerical experiments have shown that the existing adiabatic theory of resonant capture provides an incomplete description of the relevant physics, leading to an erroneous mass scaling in the regime of strong dissipation. In this work, we develop a new model for resonance capture that self-consistently accounts for migration and circularization of planetary orbits, and derive an analytic criterion based upon stability analysis that describes the conditions necessary for the formation of mean-motion resonances. We subsequently test our results against numerical simulations and find satisfactory agreement. Our results elucidate the critical role played by adiabaticity and resonant stability in shaping the orbital architectures of planetary systems during the nebular epoch, and provide a valuable tool for understanding their primordial dynamical evolution.
Konstantin Batygin, Antoine C. Petit
2023-03-05T20:14:10Z
http://arxiv.org/abs/2303.02766v1
# Dissipative Capture of Planets Into First-Order Mean-Motion Resonances ###### Abstract The emergence of orbital resonances among planets is a natural consequence of the early dynamical evolution of planetary systems. While it is well-established that convergent migration is necessary for mean-motion commensurabilities to emerge, recent numerical experiments have shown that the existing adiabatic theory of resonant capture provides an incomplete description of the relevant physics, leading to an erroneous mass scaling in the regime of strong dissipation. In this work, we develop a new model for resonance capture that self-consistently accounts for migration and circularization of planetary orbits, and derive an analytic criterion based upon stability analysis that describes the conditions necessary for the formation of mean-motion resonances. We subsequently test our results against numerical simulations and find satisfactory agreement. Our results elucidate the critical role played by adiabaticity and resonant stability in shaping the orbital architectures of planetary systems during the nebular epoch, and provide a valuable tool for understanding their primordial dynamical evolution. Orbital dynamics, Perturbation theory ## 1 Introduction Orbital resonances facilitate long-term exchange of energy and angular momentum within planetary systems, thereby playing a critical role in their long-term evolution. The preference for orbital commensurability - first quantified in a statistically rigorous manner by Dermott (1968a,b) - is a well-known attribute of the solar system's architecture, which is particularly pronounced among the satellites of Jupiter, Saturn, and Uranus (see Murray and Dermott, 1999 and the references therein). Beyond the realm of the solar system, resonant configurations can be found in appreciable proportion within the census of giant and sub-Jovian exoplanets alike (e.g., Gozdziewski et al., 2016; Mills et al., 2016; Luger et al., 2017; Petit et al., 2020; Nesvorny et al., 2022; Dai et al., 2023). Intriguingly, the importance of resonant dynamics likely goes well beyond the population of planetary systems that are presently entrained in mean-motion commensurabilities. That is to say, the evolutionary role played by _transient_ mean-motion resonances is almost certainly more significant than a superficial examination of the data may indicate. To this end, numerous lines of evidence suggest that the outer solar system itself originated in a compact multi-resonant configuration before becoming temporarily unstable and eventually settling in its current state by way of dynamical friction (Batygin and Brown, 2010; Nesvorny and Morbidelli, 2012). Such a sequence of events may in fact constitute a relatively typical post-nebular evolutionary path of planetary systems, and recent modeling has shown that both the period ratio distribution, as well as the degree of intra-system uniformity of short-period super-Earths, can be satisfactorily reproduced if the majority of systems originate as resonant chains that subsequently relax towards more widely-spaced orbits through dynamical instabilities (Izidoro et al., 2017, 2021; Goldberg and Batygin, 2022; Batygin and Morbidelli, 2023). Despite their extant and inferred prevalence, mean motion resonances do not arise as an innate byproduct of the planet formation process itself. Instead, they are established as a consequence of orbital convergence facilitated by dissipative effects (Goldreich, 1965; Henrard, 1982). 
Within protoplanetary nebulae, this occurs naturally due to planet-disk interactions (i.e., type-I migration; Goldreich and Tremaine, 1980; Ward, 1997) - particularly in the inner regions of disks, where magnetospheric cavities create bonafide traps for planetary orbits (Masset et al., 2006). In addition to the necessity of convergent migration, resonance capture requires stability of the resonant equilibrium and adiabaticity. Crudely speaking, this means that dissipative torques must not exceed gravitational perturbations in magnitude, and that resonant dynamics must operate "faster" than the timescale associated with extrinsic (that is, disk-driven) forcing of the orbits. In this vein, Batygin (2015) proposed an analytic criterion for adiabatic capture in the unrestricted 3-body problem, by equating the resonant libration (bounded oscillation) period to the migratory resonance-crossing time. While this criterion yields quantitatively adequate results in the regime where orbital migration ensues in absence of other dissipative effects, the recent simulation suite of Kajtazi et al. (2023) has shown that the behavior of resonance capture is qualitatively different if convergent migration is accompanied by strong eccentricity damping. Evidently, disk-driven orbital circularization alters the efficiency of resonance capture in a non-trivial manner. The principle goal of this Letter is to understand the process of resonance capture in presence of direct dissipation, from theoretical grounds. That is, in this work, we employ perturbation theory to quantify conditions under which stable resonant dynamics can be established, derive an analytic criterion for such dissipative capture, and confirm our results with numerical experiments. The remainder of the manuscript is organized as follows. In section 2, we outline a simplified sketch of the resonance stability argument within the context of the circular restricted 3-body problem. We generalize our analytical framework to the unrestricted elliptic problem and compare our results with numerical simulations in section 3. We summarize and discuss our findings in section 4. ## 2 The restricted problem As the simplest starting point for our analysis, we adopt the circular restricted 3-body problem as a paradigm, wherein one of the secondary bodies is taken to have negligible mass, while the other is assumed to reside on a circular orbit. A similar approach has been undertaken in the recent study of Huang & Ormel (2023). To be clear, we make the restricted approximation in this section strictly for comprehensibility: the work of Sessin & Ferraz-Mello (1984); Wisdom (1986), as well as a number of more recent studies (Batygin & Morbidelli, 2013; Petit et al., 2017; Hadden, 2019) have shown how the perturbative treatment of first-order resonances within the restricted problem can be generalized to the full 3-body problem1, and we carry out this generalization in the next section. 
Footnote 1: Within the context of the full 3-body problem, results depend predominantly on the sum of the planetary masses, rather than their ratio (Deck et al., 2013; Deck & Batygin, 2015). ### Perturbation Theory Model Hamiltonian.--Upon averaging over short-periodic terms and expanding the interaction potential (i.e., the disturbing function) to leading order in eccentricity and inclination, the governing Hamiltonian for a \(k:k-1\) mean-motion resonance takes the form: \[\mathcal{H}=-\frac{\mathcal{G}\,M_{\star}}{2\,a^{\prime}}-\frac{\mathcal{G}\,m}{a^{\prime}}\,f\,e^{\prime}\cos(k\,\lambda^{\prime}-(k-1)\,n\,t-\varpi^{\prime}). \tag{1}\] In the above expression, the Keplerian orbital elements have their usual meanings, \(f\) is a constant of order unity (see Footnote 2), and the primed variables refer to the outer body, which we take to be massless. This choice circumvents any consideration of over-stable librations, which can ensue if the massive perturber resides on an exterior orbit (Deck & Batygin, 2015). We note, however, that the statistical analysis of Huang & Ormel (2023) indicates that even under a reversed mass-ordering, over-stable librations are expected to be rare in real protoplanetary disks. In addition, we have written the mean longitude of the inner orbit explicitly as a product of its mean motion, \(n=\sqrt{\mathcal{G}\,M_{\star}/a^{3}}\), and time. In other words, the physical setup of our problem has: \(M_{\star}\gg m\neq 0\); \(m^{\prime}=0\); \(e=0\); \(\lambda=n\,t\). Footnote 2: The coefficient \(f\) depends only on the semi-major axis ratio, and evaluates to \(f\approx 1.2\) for \(k=2\) and \(f\approx 0.8\,k\) for \(k\geqslant 3\). To simplify the functional form of \(\mathcal{H}\), we follow the well-documented procedure of expanding the leading (Keplerian) term of equation (1) in the vicinity of exact commensurability, to second order in \(\delta\mathcal{L}=\mathcal{L}-[\mathcal{L}]\), where \(\mathcal{L}=\sqrt{\mathcal{G}\,M_{\star}\,a^{\prime}}\) and \([\mathcal{L}]=\sqrt{\mathcal{G}\,M_{\star}\,(k/(k-1))^{2/3}\,a}\) represents the maximal specific angular momentum of the test-particle orbit, evaluated at the nominal resonance semi-major axis, \([a^{\prime}]\). Switching to the canonically conjugated action-angle variables (e.g., Peale, 1986) \[\Phi=[\mathcal{L}]\,(e^{\prime})^{2}/2\qquad\quad\phi=k\,\lambda^{\prime}-(k-1)\,n\,t-\varpi^{\prime}\] \[\Psi=\delta\mathcal{L}-k\,\Phi, \tag{2}\] the Hamiltonian can be re-written in terms of \((\Phi,\phi)\) and \(\Psi\) alone. Because the transformed Hamiltonian does not depend on the angle conjugated to \(\Psi\), the evolution of the conjugated action, \(\Psi\), is dictated entirely by extrinsic forces. The case of pure migration.--Before proceeding to consider the full dissipative problem, let us pause and recall some qualitative aspects of the well-studied instance of pure orbital migration, where external (non-Hamiltonian) forces do not affect the \((\Phi,\phi)\) degree of freedom directly. In this case, it is easy to see that convergent migration will cause the action \(\Psi\) to diminish without bound, changing the topological structure of the phase-space portrait of \(\mathcal{H}\) in concert (Henrard and Lemaitre, 1983; Batygin, 2015). If the evolution of \(\Psi\) is slow compared to resonant dynamics, then an adiabatic invariant - which corresponds to the phase-space area encircled by the orbit - emerges as a quasi-integral of motion (Henrard, 1982; Neishtadt, 1984).
In the practically important case where orbits originate with zero eccentricity far away from resonance, the initially occupied phase-space area is null, meaning that as long as the adiabatic condition is satisfied3, the trajectory must remain confined to the \(\phi=\pi\) resonant equilibrium point (since it encapsulates zero phase-space area). Simultaneously, as \(\Psi\) evolves to highly negative values, the eccentricity grows perpetually as \(e^{\prime}\sim\sqrt{-2\,\Psi/(k\,[\mathcal{L}])}\). In this manner, forces that facilitate the convergent migration of orbits translate to sustained eccentricity excitation, once the resonant lock is established. Footnote 3: An important additional caveat is that no encounters with the separatrix take place. The case of concurrent migration and circularization.--The arguably more physically realistic scenario - wherein convergent migration occurs together with efficient orbital damping - is different from the aforementioned case of pure migration in a number of important ways. First and foremost, the evolution of the action \(\Psi\) is no longer unbounded, and instead stabilizes at an equilibrium value, which in turn dictates the equilibrium eccentricity. To quantify this, consider the following generic parameterizations of semi-major axis decay and orbital circularization: \[\frac{1}{a^{\prime}}\frac{da^{\prime}}{dt}=-\frac{1}{\tau_{m}^{\prime}}-\frac{ 2\,e^{\prime 2}}{\tau_{m}^{\prime}/\mathcal{K}}\hskip 28.452756pt\frac{1}{e^{ \prime}}\frac{de^{\prime}}{dt}=-\frac{\mathcal{K}}{\tau_{m}^{\prime}}, \tag{4}\] where \(\tau_{m}^{\prime}\) is the convergent migration timescale and \(\mathcal{K}=\tau_{m}^{\prime}/\tau_{e}^{\prime}\) is the ratio of semi-major axis and eccentricity damping timescales. It is worth noting that Pichierri et al. (2022) have recently shown that the eccentric contribution to semi-major axis damping in equations (4) arises self-consistently within planet-disk interaction formulae that are routinely implemented in \(N-\)body codes to mimic the effects of the gaseous nebula (Papaloizou and Larwood, 2000). Moreover, for type-I migration, \(\mathcal{K}\) has a well-defined dependence on the geometric aspect ratio, \(h/r\) (Tanaka et al., 2002; Tanaka and Ward, 2004): \[\mathcal{K}=\left(\frac{1}{e^{\prime}}\frac{de^{\prime}}{dt}\right)\!\left( \frac{1}{a^{\prime}}\frac{da^{\prime}}{dt}\right)^{-1}\propto\left(\frac{h}{r }\right)^{-2}, \tag{5}\] though the dimensionless pre-factor of this dependence is specified by the disk's particular structure. As an example, for a locally isothermal Mestel (1963) type disk (where the surface density varies inversely with the semi-major axis - i.e., \(\Sigma\propto 1/a^{\prime}\)), this pre-factor is approximately 0.2, such that \(\mathcal{K}\sim\mathcal{O}(10^{2})\) for \(h/r\sim 0.05\) (see also Lee and Peale, 2002). Nevertheless, values for \(\mathcal{K}\) that are substantially higher (and lower) are expected to arise in realistic model nebulae; we will revisit the relevant scalings in the next section. With the relevant damping formulae defined, it is straightforward to derive the equilibrium value for \(e^{\prime}\).
Recalling the definition of \(\Psi\) from equations (2), let us set the time-derivative of \(\Psi\) equal to zero: \[\frac{d\Psi}{dt} =\frac{1}{2}\sqrt{\frac{\mathcal{G}\,M_{\star}}{a^{\prime}}}\frac {da^{\prime}}{dt}-2\,[\mathcal{L}]\,e^{\prime}\,\frac{de^{\prime}}{dt}\] \[=-\frac{\mathcal{L}}{2}\,\frac{1+4\,\mathcal{K}\,\Phi/[\mathcal{ L}]}{\tau_{m}^{\prime}}+2\,\mathcal{K}\,\frac{k\,\Phi}{\tau_{m}^{\prime}}=0. \tag{6}\] It is expected that equilibrium will be reached close to nominal resonance4 such that \(\delta\mathcal{L}\approx 0\). Thus, replacing \(\mathcal{L}\) by \([\mathcal{L}]\) in the above equation and recalling from equations (2) that \(\Phi=[\mathcal{L}]\,e^{\prime 2}/2\), we obtain: Footnote 4: Notice that this is not the case for the case of dissipative divergent migration, where the system can follow equilibrium loci far away from nominal commensurability (Pichierri et al., 2019; Goldberg and Batygin, 2021). \[\left(e^{\prime}\right)_{\rm eq}\to\frac{1}{\sqrt{2\left(k-1\right)\mathcal{K }}}\sim\frac{h}{r}. \tag{7}\] Indeed, resonantly-excited planetary eccentricities within protoplanetary nebulae are expected to be comparable to the disk aspect ratio (which is itself equal to the inverse Mach number of the Keplerian flow), as the above expression suggests (e.g., see also Pichierri et al., 2018). The terminal step of the calculation is to evaluate the stability of the resonant fixed point. Here, a second important distinction with the pure migration case comes into view: in presence of explicit eccentricity damping, the phase-space portrait rotates counter-clockwise, such that the equilibrium value of the critical angle, \(\phi\), shifts to \(\phi_{\rm eq}=\pi+\epsilon\), where \(\epsilon\) is determined by the strength of dissipative effects (Batygin & Morbidelli, 2013). Importantly, criticality is achieved when \(\epsilon\to\pi/2\) and \(\delta\mathcal{L}\to 0\) - a configuration where the resonant torque is maximized. It thus follows that in this state, \[\left(\Psi\right)_{\rm eq}\to-k\left(\Phi\right)_{\rm eq}=\frac{\left[ \mathcal{L}\right]}{4\left(k-1\right)\mathcal{K}}. \tag{8}\] Accounting for dissipative effects, the equilibrium equations take the form: \[\frac{d\phi}{dt} =\frac{\partial\mathcal{H}}{\partial\Phi}=k\,n^{\prime}-\left(k-1 \right)n-\frac{\mathcal{F}\,\cos(\left(\phi\right)_{\rm eq})}{\sqrt{2\left( \Phi\right)_{\rm eq}}}\] \[-\frac{3\,k\,n^{\prime}\left(\left(\Psi\right)_{\rm eq}+k\left( \Phi\right)_{\rm eq}\right)}{\left[\mathcal{L}\right]}=0\] \[\frac{d\Phi}{dt} =-\frac{\partial\mathcal{H}}{\partial\phi}-2\,\mathcal{K}\,\frac {\left(\Phi\right)_{\rm eq}}{\tau_{m}^{\prime}}=-\mathcal{F}\,\sqrt{2(\Phi)_ {\rm eq}}\,\sin(\left(\phi\right)_{\rm eq})\] \[-2\,\mathcal{K}\,\frac{\left(\Phi\right)_{\rm eq}}{\tau_{m}^{ \prime}}=0. \tag{9}\] The first (\(\dot{\phi}=0\)) equation is trivially satisfied. The second (\(\dot{\Phi}=0\)) equation, on the other hand, yields the criterion for the shortest migration timescale that allows for resonant capture: \[\tau_{m}^{\prime} =\frac{M_{*}}{f\,m}\sqrt{\frac{\mathcal{K}}{2\left(k-1\right)} \frac{a^{\prime 3}}{\mathcal{G}\,M_{*}}}\] \[\approx\frac{5}{4}\frac{M_{*}}{m}\sqrt{\frac{\mathcal{K}}{2 \left(k-1\right)^{3}}}\frac{1}{n}, \tag{10}\] where we have used the compact \(k\approx k-1\) approximation to evaluate \(f\approx 4\,k/5\). The expression agrees with the one recently obtained by Huang & Ormel (2023), who arrived at it through a somewhat distinct - but ultimately equivalent - approach. 
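As a numerical illustration of equations (7) and (10), the short sketch below evaluates the equilibrium eccentricity and the critical migration timescale for the parameter combination used in the next subsection (\(m/M_{\star}=10^{-5}\), \(k=3\), \(\mathcal{K}=1000\), inner orbit at \(0.1\,\)AU around a solar-mass star); it is a plain evaluation of the formulae above and involves no assumptions beyond those inputs.

```python
import math

# Plain evaluation of Eqs. (7) and (10) for the illustrative parameters used in
# Section 2.2: m/M_star = 1e-5, k = 3, K = 1000, inner planet at a = 0.1 AU
# around a solar-mass star.
def equilibrium_eccentricity(k, K):
    """(e')_eq = 1 / sqrt(2 (k - 1) K), Eq. (7)."""
    return 1.0 / math.sqrt(2.0 * (k - 1) * K)


def critical_migration_time_yr(mass_ratio, k, K, a_inner_au):
    """Shortest capture-compatible tau'_m of Eq. (10), in years."""
    period_yr = a_inner_au ** 1.5               # Kepler's third law for M_star = 1 M_sun
    n = 2.0 * math.pi / period_yr               # mean motion of the inner planet (1/yr)
    return 1.25 * (1.0 / mass_ratio) * math.sqrt(K / (2.0 * (k - 1) ** 3)) / n


print(equilibrium_eccentricity(3, 1000))                # ~0.016, cf. Figure 1
print(critical_migration_time_yr(1e-5, 3, 1000, 0.1))   # ~5e3 yr, as quoted below
```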
### Numerical Experiments Whether the process of resonant capture is controlled by stability - and thus follows the criterion given by equation (10) - or adiabaticity (as discussed in e.g., Batygin, 2015), depends on the efficiency of orbital circularization. In the limit of weak damping ("small \(\mathcal{K}\)"), we may reasonably expect that adiabaticity will serve as the more stringent constraint, while stability will dominate in the regime of rapid ("large \(\mathcal{K}\)") damping. Moreover, the transitionary value of \(\mathcal{K}\) is likely to significantly exceed unity, since the eccentricity-damping timescale should be contrasted against the resonance-crossing time - a quantity that is proportional to \(\tau_{m}^{\prime}\), but is much smaller in magnitude.
Figure 1: Numerical simulations of a \(k=3\) resonant encounter within the context of the circular restricted 3-body problem. The top and bottom panels depict phase-space evolution and the time-series of the orbital frequency ratio, respectively. Differently colored curves correspond to distinct migration timescales, as labeled in the inset between the two panels. The simulation setup is as follows: an exterior test particle, initialized on a circular, planar \(a^{\prime}=0.15\,\)AU orbit around a \(M_{*}=1M_{\odot}\) star, migrates convergently towards a \(m=1\times 10^{-5}\,M_{\odot}\approx 3\,M_{\oplus}\) planet residing at \(a=0.1\,\)AU, eventually encountering the 3:2 mean-motion commensurability. Eccentricity damping is applied with a characteristic timescale that is a factor of \(\mathcal{K}=1000\) shorter than the migration time, \(\tau_{m}^{\prime}\). Simultaneous migration and circularization of the outer orbit is indicated with shaded arrows on the diagram. As \(\tau_{m}^{\prime}\) is reduced from \(80\,\)kyr toward the analytically-predicted critical value of \(5\,\)kyr (equation 10), the equilibrium value of the resonant angle, \(\phi\), tends from \(\pi\) to \(3\pi/2\). The equilibrium value of the eccentricity, on the other hand, stabilizes at \(e^{\prime}=1/\sqrt{4\,\mathcal{K}}\approx 0.016\) in all cases. The behavior of this heavily damped system is fully consistent with expectations provided by analytic theory.
In the remainder of this section, we will use numerical experiments to verify the validity of equation (10), as well as to explore the capture process in the heavily (\(\mathcal{K}=1000\)) and moderately (\(\mathcal{K}=100\)) damped regimes. Our numerical experiments follow the conventional scheme of implementing dissipative effects into an \(N-\)body framework using the formulae of Papaloizou & Larwood (2000). For definitiveness, we adopted a physical setup that is reminiscent of short-period extrasolar Super-Earth systems such as Kepler-59 and Kepler-128: a \(1\,M_{\odot}\) star encircled by a \(m=10^{-5}\,M_{\odot}\approx 3\,M_{\oplus}\) planet on a circular orbit at \(a=0.1\,\)AU (Hadden & Lithwick, 2016; Saad-Olivera et al., 2020). In addition, we initialized an exterior test-particle into the simulation on an \(e^{\prime}=0\), \(a^{\prime}=0.15\,\)AU orbit, such that the first commensurability encountered by the system is the 3:2 period-ratio. To drive convergent migration, non-gravitational forces were only applied to the test-particle. The system of ODEs was integrated using the Bulirsch-Stoer algorithm with an accuracy parameter of \(\hat{\epsilon}=10^{-10}\).
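For orientation, a minimal sketch of the parameterization in equations (4) - applied here as a simple first-order update of the outer orbit's \(a^{\prime}\) and \(e^{\prime}\) between gravitational steps - is given below; this is a schematic stand-in for, not a reproduction of, the Papaloizou & Larwood (2000) acceleration formulae used in the actual integrations.

```python
# Minimal sketch of the damping parameterization of Eqs. (4), applied as a
# first-order (operator-split) update between gravity steps. This is only a
# schematic stand-in for the Papaloizou & Larwood (2000) forces used in the
# actual N-body runs; dt must be small compared to tau_m / K for accuracy.
def damp_orbit(a, e, dt, tau_m, K):
    """Advance the outer semi-major axis and eccentricity by one damping step."""
    tau_e = tau_m / K                                     # eccentricity damping time
    da = -a * (1.0 / tau_m + 2.0 * e ** 2 / tau_e) * dt   # (1/a') da'/dt of Eq. (4)
    de = -e / tau_e * dt                                  # (1/e') de'/dt of Eq. (4)
    return a + da, e + de


# Example step: tau'_m = 5e3 yr, K = 1000, dt = 0.1 yr
print(damp_orbit(0.15, 0.01, dt=0.1, tau_m=5.0e3, K=1000.0))
```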
In both the \(\mathcal{K}=1000\) and \(\mathcal{K}=100\) simulation suites, we carried out five numerical experiments, first setting \(\tau^{\prime}_{\rm m}\) equal to the value given by the stability criterion (10) and doubling it in every run. Figures (1) and (2) depict the results of these simulations: the bottom panels show the time-series of the orbital frequency ratio (equal to the period ratio), and the top panels show the phase-space evolution of the system. Overall, the results of the numerical experiments with \(\mathcal{K}=1000\) conform to the theoretical expectations outlined above. That is, as the test-particle approaches the 3:2 resonance, \(e^{\prime}\) stabilizes at the equilibrium value given by equation (7) - shown on the phase-space plot with a gray circle - independent of the adopted \(\tau^{\prime}_{m}\). Conversely, the stationary value of the critical angle ratchets up towards \(\phi=3\,\pi/2\) as \(\tau^{\prime}_{m}\) approaches the critical value of \(\sim 5000\) years, as dictated by equation (10). Though all of the shown runs result in stable capture within a 3:2 resonance, we have also confirmed that resonant locking fails for shorter (e.g., 4500 year) migration timescales, in agreement with the predictions of the analytical theory. Unlike their more heavily-damped counterparts, the \(\mathcal{K}=100\) simulations deviate notably from our analytical stability arguments. While the system equilibrates at the predicted state as long as the convergence time is long, the process of resonance capture is accompanied by a growing libration amplitude, as \(\tau^{\prime}_{m}\) approaches the critical value of \(\sim 1500\) years (recall that it scales as \(\propto\sqrt{\mathcal{K}}\)). For this reason, the resonant equilibrium becomes compromised at a migration timescale that is a factor of \(\sim 2\) longer than that predicted by equation (10). Indeed, the run with \(\tau^{\prime}_{m}=1500\) years results in passage through the 3:2 commensurability and subsequent capture into the 5:4 resonance (a configuration reminiscent of the Kepler-307 system; Jontof-Hutter et al., 2016). Evidently, for the given mass-ratio \(m/M_{\star}\), a value of \(\mathcal{K}\) significantly in excess of 100 is required to fully suppress the growth of the phase-space area during orbital convergence, giving way to adiabaticity as the process that largely determines the outcome of resonant encounters. Figure 2: Same as Figure 1 but with a reduced eccentricity damping factor of \(\mathcal{K}=100\). While resonant capture ensues for long migration timescales, the analytically-predicted critical value of \(\tau^{\prime}_{m}=1.5\,\)kyr leads to passage through the 3:2 commensurability and capture into the 5:4 resonance instead. Failure of the stability criterion (10) in this example indicates that the orbital circularization is sufficiently slow that a different mechanism – namely, adiabaticity – regulates resonance capture. ## 3 The Unrestricted Problem ### Analytical Theory With the qualitative picture outlined within the simplified framework of the restricted problem above, generalization of our results to the full resonant three-body problem is relatively undemanding. 
Retaining the nearly coplanar and low-eccentricity approximations but putting no limits on the planetary mass-ratio \(m/m^{\prime}\), the governing Hamiltonian takes the form: \[\mathcal{H} =-\frac{m^{3}}{2}\bigg{(}\frac{\mathcal{G}\,M_{\star}}{\Lambda} \bigg{)}^{2}-\frac{m^{\prime 3}}{2}\bigg{(}\frac{\mathcal{G}\,M_{\star}}{ \Lambda^{\prime}}\bigg{)}^{2}\] \[-\mathcal{A}\sqrt{2\,\Gamma}\,\cos(k\,\lambda^{\prime}-(k-1)\, \lambda+\gamma)\] \[-\mathcal{B}\sqrt{2\,\Gamma^{\prime}}\,\cos(k\,\lambda^{\prime}- (k-1)\,\lambda+\gamma^{\prime}), \tag{11}\] where \(\mathcal{A}=(\mathcal{G}^{2}\,M_{\star}\,m\,m^{\prime 3}/\Lambda^{\prime 2})\,(g/\sqrt{\Lambda})\) and \(\mathcal{B}=(\mathcal{G}^{2}\,M_{\star}\,m\,m^{\prime 3}/\Lambda^{\prime 2})\,(f/\sqrt{\Lambda^{\prime}})\) are pre-factors similar to \(\mathcal{F}\) (Batygin & Morbidelli, 2013; Hadden, 2019). Furthermore, in the above expression, we have used the conventional system of Poincaré action-angle variables, defined in the \(m\ll M_{\star}\) and \(e\ll 1\) limit as: \[\Lambda =m\,\sqrt{\mathcal{G}\,M_{\star}\,a} \lambda =\mathcal{M}+\varpi\] \[\Gamma =\Lambda\,e^{2}/2 \gamma =-\varpi, \tag{12}\] with equivalent definitions for the primed quantities. Notice that unlike equation (1), the Hamiltonian (11) now contains two harmonic terms, and we have maintained the Keplerian contributions in their unexpanded form (this will not affect our analysis). As in the preceding section, we must now consider the stability of the global fixed point of the resonant Hamiltonian, in presence of migration and dissipation. In fact, similar analyses have previously been carried out in the literature (Batygin & Morbidelli, 2013; Terquem & Papaloizou, 2019), but typically in the limit of weak friction. Central to our calculation is the long-term evolution of the semi-major axes, which - unlike the case of the restricted problem - can be directed inward, outward, or can be stationary, depending on the signs and magnitudes of the individual planetary migration timescales, \(\tau_{m}\) and \(\tau^{\prime}_{m}\). Nevertheless, the maintenance of the resonant relationship between the orbits necessitates that \(\dot{\Lambda}/\Lambda=\dot{\Lambda}^{\prime}/\Lambda^{\prime}\). This equality can be computed directly from Hamilton's equation: \[\frac{1}{\Lambda}\frac{d\Lambda}{dt}=-\frac{\partial\,\mathcal{H}}{\partial\, \lambda}-\frac{\Lambda\,\big{(}\tau_{e}+2\,\tau_{m}(2\,\Gamma/\Lambda)\big{)} }{2\,\tau_{m}\,\tau_{e}}, \tag{13}\] where we have expressed the dissipative contribution (given by equation 4) in canonical coordinates. Collecting the Hamiltonian terms on the LHS, we have: \[\frac{2\,\big{(}k\,(\Lambda+\Lambda^{\prime})-\Lambda^{\prime} \big{)}}{\Lambda\,\Lambda^{\prime}}\bigg{(}\mathcal{A}\,\sqrt{2\,\Gamma}\,\sin (\varphi)+\mathcal{B}\,\sqrt{2\,\Gamma^{\prime}}\,\sin(\phi)\bigg{)}\] \[=\frac{1}{\tau_{m}}+\frac{4\,\Gamma}{\Lambda\,\tau_{e}}-\frac{1} {\tau^{\prime}_{m}}-\frac{4\,\Gamma^{\prime}}{\Lambda^{\prime}\,\tau^{\prime} _{e}}. \tag{14}\] Similarly to equation (2), in the above expression, we have used \(\varphi\) and \(\phi\) to denote the resonant harmonics containing \(\varpi\) and \(\varpi^{\prime}\), respectively. The dependence of equation (14) on the resonant angles as well as the actions \(\Gamma\) and \(\Gamma^{\prime}\) can be eliminated by considering the equilibrium of the eccentricities.
This equilibrium is given by \(\dot{\Gamma}=-\partial\mathcal{H}/\partial\gamma-2\,\Gamma/\tau_{e}=0\), with an identical expression for \(\Gamma^{\prime}\). In particular, we obtain: \[\mathcal{A}\,\sqrt{2\,\Gamma}\,\sin(\varphi) =-2\,\Gamma/\tau_{e}\] \[\mathcal{B}\,\sqrt{2\,\Gamma^{\prime}}\,\sin(\phi) =-2\,\Gamma^{\prime}/\tau^{\prime}_{e}. \tag{15}\] From this expression, it is easy to see how the sinusoidal terms in equation (14) can be eliminated. Moreover, in analogy with the preceding section, criticality is attained as \(\varphi\to\pi/2\), \(\phi\to 3\pi/2\), which yields \(\Gamma=\mathcal{A}^{2}\,\tau_{e}^{2}/2;\Gamma^{\prime}=\mathcal{B}^{2}\,\tau_{e}^{\prime 2}/2\). Upon direct substitution into equation (14) and setting \(a=((k-1)/k)^{2/3}a^{\prime}=\alpha\,a^{\prime}\), we obtain the criterion for resonant capture: \[\frac{1}{\tau^{\prime}_{m}}-\frac{1}{\tau_{m}}=\frac{2\,k\, \mathcal{G}\,M_{\star}}{a^{\prime 3}}\bigg{(}\frac{m}{M_{\star}}+\frac{m^{ \prime}}{\sqrt{\alpha}\,M_{\star}}\bigg{)}\] \[\times\bigg{(}\frac{g^{2}\,m^{\prime}}{\sqrt{\alpha}\,M_{\star}} \tau_{e}+\frac{k-1}{k}\frac{f^{2}\,m}{M_{\star}}\,\tau^{\prime}_{e}\bigg{)}\] \[\approx\frac{32\,\mathcal{G}\,k^{3}\,\big{(}m+m^{\prime}\big{)} \,\big{(}m^{\prime}\,\tau_{e}+m\,\tau^{\prime}_{e}\big{)}}{25\,M_{\star}\,a^{ \prime 3}} \tag{16}\] It is trivial to check that this criterion reproduces equation (10) in the limit where \(m^{\prime}\to 0\). Although here we have derived criterion (16) directly from Hamiltonian (11), an equivalent - albeit somewhat more mathematically involved - approach would have been to first reduce \(\mathcal{H}\) to an integrable form that only contains a single resonant harmonic (see e.g., Batygin & Morbidelli, 2013), and then analyze the stability of its equilibrium under dissipation. This approach yields identical results to those delineated above and is reproduced in the Appendix. ### Numerical Experiments As a quantitative test of the generalized criterion derived above, we have repeated the numerical simulations described in the previous section, this time setting \(m^{\prime}=m\), and sampling a broad range of orbital convergence times and planetary masses, in order to map the capture criterion for the 3:2 resonance on the \((m/M_{\star},\bar{\tau}_{m}/P)\) plane, where \(P\) is the orbital period of the inner object. For definitiveness, in these simulations, we assigned a common value of \(\mathcal{K}\) to both planets (such that \(\tau_{e}=\tau_{e}^{\prime}=\bar{\tau}_{m}/\mathcal{K}\)), but applied orbital decay only to the outer body such that \(\bar{\tau}_{m}=\tau_{m}\,\tau_{m}^{\prime}/(\tau_{m}-\tau_{m}^{\prime})=\tau_{m}^{\prime}\). Finally, to prevent the system from spiraling onto the central star without careful modeling of the disk's inner edge (e.g., Izidoro et al., 2017, 2021), we simply rescaled both of the semi-major axes at every time-step5, maintaining the inner planet at \(a=0.1\,\mathrm{AU}\). Footnote 5: We have simulated the assembly of the Galilean moons into the Laplace resonance using an identical method in a previous study (Batygin and Morbidelli, 2020). The results of these numerical experiments are shown in Figure (3). Instances where capture into the 3:2 resonance was successful are shown with filled gray points whereas runs that resulted in passage through the commensurability are shown with empty circles.
In both the \(\mathcal{K}=1000\) (left panel) and \(\mathcal{K}=100\) (right panel) simulation suites, a clear power-law threshold emerges on the diagrams, though the slope of this boundary is subtly distinct. As already shown in the preceding section, for the adopted mass-ratios, the \(\mathcal{K}=1000\) case is regulated by stability, whereas the \(\mathcal{K}=100\) case is controlled by adiabaticity. To confirm this expectation, we have over-plotted the \(k=3\), \(m=m^{\prime}\) stability and adiabaticity criteria (see Batygin, 2015): \[\left(\frac{\bar{\tau}_{m}}{P}\right)_{\mathrm{stab}}=\frac{5}{32\,\pi}\frac{M_{\star}}{m}\sqrt{\frac{\mathcal{K}}{6}}\] \[\left(\frac{\bar{\tau}_{m}}{P}\right)_{\mathrm{ad}}=\frac{5}{384}\bigg{(}\frac{125}{4}\bigg{)}^{1/9}\bigg{(}\frac{M_{\star}}{m}\bigg{)}^{4/3}, \tag{17}\] with dashed and solid lines, respectively. These two criteria adequately explain the numerical results and highlight the distinct regimes of resonance capture that can ensue within moderately and heavily dissipative planet-formation environments. Figure 3: A dimensionless mass vs. migration time map of \(k=3\) resonance capture for an equal-mass (\(m=m^{\prime}\)) planetary system. The filled circles indicate parameter combinations where numerical simulations yield successful resonant capture, whereas empty circles indicate passage through the 3:2 resonance. The left and right panels correspond to the heavily (\(\mathcal{K}=1000\)) and moderately (\(\mathcal{K}=100\)) damped regimes, respectively. Both panels additionally show analytic stability and adiabaticity capture criteria as dashed and solid lines. While the stability criterion regulates capture in the heavily-dissipated case, adiabaticity controls the outcome of resonant encounters in the moderately damped regime. ### Scaling for Type-I Migration As a final theme of this section, let us move away from the parameterized simulations considered above, and compare our results with more realistic simulations of resonance capture driven by type-I disk migration. For a planet of mass \(m\) residing on an orbit with semi-major axis \(a\), the characteristic timescale associated with type-I migration is the spiral density wave propagation time (Tanaka & Ward, 2004): \[\tau_{\rm wave}=\frac{1}{n}\frac{M_{\star}}{m}\frac{M_{\star}}{\Sigma\,a^{2}}\bigg{(}\frac{h}{r}\bigg{)}^{4}, \tag{18}\] where \(\Sigma\) is the surface density of the nebula, evaluated at the planetary semi-major axis. For nearly circular and planar orbits, the eccentricity damping timescale differs from \(\tau_{\rm wave}\) only by a numerical factor of order unity, \(f_{e}=0.78\), such that \(\tau_{e}=\tau_{\rm wave}/f_{e}\). The semi-major axis damping time, on the other hand, exceeds the wave propagation time by a large margin: \(\tau_{m}=\tau_{\rm wave}\,(r/h)^{2}/f_{a}\). Notably, the proportionality constant depends on the index of the disk's surface density profile, \(s\), in a linear manner: \(f_{a}=2.7+1.1\,s\) (Tanaka et al., 2002). Employing these dependencies, Kajtazi et al. (2023) used the surface density of the disk as a physical proxy for migration speed, and formulated the results of their numerical simulation suite in terms of a critical value of \(\Sigma\) required for resonance capture. Carrying out their numerical experiments in a non-evolving, flared, \(s=1\) nebula, Kajtazi et al. (2023) simulated the orbital convergence of a pair of equal-mass planets, suppressing disk-driven migration of the inner object, which was initialized at \(0.1\,\)AU.
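Before turning to the comparison with the Kajtazi et al. (2023) simulations, the competition between the two thresholds in equation (17) can be made concrete with a few lines of Python; the mass ratio used below is an assumed, sub-Jovian value chosen purely for illustration.

```python
import numpy as np

# Sketch of the k = 3, equal-mass capture thresholds of equation (17).
def tau_stab(mu, K):
    """Critical tau_m_bar / P set by stability (dashed lines in Figure 3)."""
    return 5.0 / (32.0 * np.pi) / mu * np.sqrt(K / 6.0)

def tau_ad(mu):
    """Critical tau_m_bar / P set by adiabaticity (solid lines in Figure 3)."""
    return 5.0 / 384.0 * (125.0 / 4.0)**(1.0 / 9.0) * mu**(-4.0 / 3.0)

mu = 1.0e-4   # assumed m / M_star (roughly two Neptune masses around a Sun-like star)
for K in (100.0, 1000.0):
    stab, ad = tau_stab(mu, K), tau_ad(mu)
    # Capture requires tau_m_bar / P to exceed both thresholds,
    # so whichever is larger controls the outcome.
    controls = "stability" if stab > ad else "adiabaticity"
    print(f"K = {K:6.0f}: stab = {stab:8.0f}, ad = {ad:8.0f} -> {controls} controls")
```

For this assumed mass, the \(\mathcal{K}=1000\) case is limited by stability while the \(\mathcal{K}=100\) case is limited by adiabaticity, consistent with the behavior of the dashed and solid curves in Figure (3).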
Intriguingly, they found that the critical value of \(\Sigma\) exhibits a clear dependence on the cube of the disk's aspect ratio but is independent of the planetary mass (i.e., critical \(\Sigma\propto\,m^{0}\,h^{3}\)). Let us examine if these scalings can be derived from our analytical criterion. Plugging the aforementioned type-I expressions for \(\tau_{m}\) and \(\tau_{e}\) into equation (16), we obtain: \[\frac{f_{a}\,m^{\prime}\,n^{\prime}\,\Sigma\,a^{\prime 2}}{M_{\star}}\bigg{(}\frac{h}{r}\bigg{)}^{-2}=\frac{128\,{\cal G}\,k^{3}\,m^{\prime}\,M_{\star}}{25\,f_{e}\,n^{\prime}\,\Sigma\,a^{\prime 5}}\bigg{(}\frac{h}{r}\bigg{)}^{4}, \tag{19}\] where we have set \(\bar{\tau}_{m}=\tau_{m}^{\prime}\); \(m=m^{\prime}\) following Kajtazi et al. (2023), and have assumed the compact approximation valid for \(k\gtrsim 3\) for simplicity. Solving for \(\Sigma\) and relating the value to the reference surface density, \(\Sigma_{0}\), at \(r_{0}=1\,\)AU, we recover the scalings found in simulations: \[\frac{\Sigma\,a^{\prime 2}}{M_{\star}}=\frac{\Sigma_{0}\,r_{0}\,a^{\prime}}{M_{\star}}=\sqrt{\frac{2\,k^{3}}{f_{a}\,f_{e}}}\bigg{(}\frac{h}{r}\bigg{)}^{3}. \tag{20}\] As a concluding step, to assess the degree of quantitative agreement between theory and numerical experiments, in Figure (4), we plot our analytical criterion for a range of resonance indexes together with the numerical results of (Kajtazi et al., 2023, their Fig. 6). Though the agreement is not exact, we find that the analytical result does not deviate from the numerical findings by more than 20%. Figure 4: Comparison between analytic and numeric determination of resonance capture. Using the nebular surface density as a proxy for the critical migration rate, the results of the Kajtazi et al. (2023) simulation suite (\(\mathcal{K}\approx 570\)) are shown with blue points for a range of resonant indexes, \(k\). Analytic stability and adiabaticity criteria are over-plotted with gray and black points, respectively. Although not exact, the critical value of the surface density given by the stability criterion (equation 16) matches the numerical data to within 20%. ## 4 Discussion Capture of planets into mean-motion resonances is an expected outcome of evolution within protoplanetary nebulae, and in this work, we have derived an analytic criterion for resonance locking in the presence of migration and orbital circularization. Our theoretical framework is based upon a perturbative treatment of the unrestricted gravitational three-body problem (Peale, 1986; Murray & Dermott, 1999 and the refs therein), and thus assumes small eccentricities and inclinations, while placing no restrictions on the planet-planet mass ratio. Fundamentally, our derivation rests upon the stability analysis of the resonant equilibrium in the presence of dissipation, and complements the previously obtained criterion that stems from a consideration of adiabaticity (Batygin, 2015). We note that the stability argument considered here concerns the elementary question of the existence of an equilibrium point for the resonant variables and not the long-term post-capture evolution. To this end, it has been shown by Goldreich & Schlichting (2014); Deck & Batygin (2015); Xu et al. (2018) that depending on the ratio of the planetary masses and eccentricity damping timescales, systems where resonant capture is initially successful can eventually escape from resonance by way of over-stability. Although beyond the scope of our study, these dynamics, along with stochastic growth
of the libration amplitude by nebular turbulence (Adams et al., 2008; Batygin and Adams, 2017), synodic modulation of resonant angles (Pichierri and Morbidelli, 2020; Goldberg et al., 2022), etc., give rise to additional constraints on the kind of architectures that can be established within protoplanetary disks. It is interesting to note that although the criteria for resonant capture based upon stability and adiabaticity yield distinct scalings with the planetary mass, for typical parameters pertinent to planetary migration within circumstellar disks, they can give results that are quantitatively similar. Ultimately, for resonant locking to ensue, both criteria - stability and adiabaticity - must be satisfied, such that whichever one yields the longer critical timescale for orbital convergence plays the controlling role. Accordingly, the specific capture regime is dictated by the combination of the ratio of the cumulative planetary mass to the mass of the star as well as the ratio of the migration and circularization timescales. For sub-Jovian planet pairs, the analysis carried out in section (3) indicates that the outcomes of resonant encounters in systems with \(\mathcal{K}=10^{2}\) and \(\mathcal{K}=10^{3}\) are determined by adiabaticity and stability, respectively, such that the transitionary value of the timescale ratio lies in between these two regimes. While the appropriate value of \(\mathcal{K}\) is bound to be case-specific, comparison of our results with the published simulation suite of Kajtazi et al. (2023) indicates that the stability criterion (16) reproduces the scalings seen in the numerical experiments. Still, it remains important to keep in mind that within the context of the flared disk model of Kajtazi et al. (2023), the disk aspect ratio at a stellocentric distance of \(0.1\,\mathrm{AU}\) (where the resonant encounter is set up to take place) is \((h/r)=0.033\times(0.1)^{1/4}\approx 0.019\), corresponding to \(\mathcal{K}\approx 570\). Thus, it is reasonable to expect that if the resonant encounters were to instead take place at \(10\,\mathrm{AU}\), where \((h/r)\gtrsim 0.05\) and \(\mathcal{K}\lesssim 100\), adiabaticity would have played the determining role. In summary, the analytical framework developed here adds to our understanding of the dynamics of resonant encounters, and serves as a basis for further interpretation of the early evolution of planetary systems. ###### Acknowledgements. K. B. is grateful to Caltech, the David and Lucile Packard Foundation, and the National Science Foundation (grant number: AST 2109276) for their generous support. During the preparation of this paper, we have become aware that Huang and Ormel (2023, submitted) arrived at similar arguments simultaneously and independently. ## Appendix A Derivation of the stability criterion from an integrable Hamiltonian In this appendix, we present an alternative derivation of the stability criterion given by equation (16), based upon the reduction of the resonant three-body problem to an integrable Hamiltonian. In particular, we follow the formalism outlined in Deck and Batygin (2015), switching to their notation for consistency. Because this reduction is well documented (_e.g._ Batygin and Morbidelli, 2013; Deck et al., 2013; Petit et al., 2017; Hadden, 2019), we limit ourselves to recalling the relevant variables and refer the interested reader to the cited works.
As in the main text, we consider the general problem of two planets of mass \(m_{1}\) and \(m_{2}\) orbiting a star of mass \(M_{\star}\) in the plane. We assume that the planets experience a convergent migration and are close to the crossing of the \(k\):\(k-1\) resonance. The integrable Hamiltonian has the form: \[\mathcal{H}=-\frac{1}{2}(\Phi^{\prime}-\Gamma^{\prime})^{2}-\sqrt{2\Phi^{ \prime}}\cos(\phi), \tag{1}\] where \(\Phi^{\prime}\) is the renormalized Sessin variable (Sessin and Ferraz-Mello, 1984) \[\Phi^{\prime} =\frac{1}{Q}\frac{k}{k-1}\frac{\alpha_{\mathrm{res}}}{\zeta+ \alpha_{\mathrm{res}}}\frac{\zeta\sqrt{\alpha_{\mathrm{res}}}}{2(R^{2}+\zeta \sqrt{\alpha_{\mathrm{res}}})}\sigma^{2}, \tag{2}\] \[\sigma^{2} =R^{2}e_{1}^{2}+e_{2}^{2}-2Re_{1}e_{2}\cos(\Delta\varpi),\] where \(R=|f_{k,27}(\alpha_{\mathrm{res}})|/f_{k,31}^{\prime}(\alpha_{\mathrm{res}})\), \(\zeta=m_{1}/m_{2}\), \(\alpha_{\mathrm{res}}=(1-1/k)^{2/3}\) and the functions \(f_{27/31}\) are the resonant coefficients (Murray and Dermott, 1999). \(Q\) is a renormalization factor for the actions \[Q=\varepsilon_{\mathrm{p}}^{2/3}\zeta\frac{\alpha_{\mathrm{res}}^{5/6}}{k-1} \left(\frac{f_{k,31}^{\prime 2}}{9k(1+\zeta)^{2}}\frac{R^{2}+\zeta\sqrt{\alpha_{ \mathrm{res}}}}{(\alpha_{\mathrm{res}}+\zeta)^{5}}\right)^{1/3} \tag{3}\] defined by (Deck and Batygin, 2015, their Appendix A) and \(\varepsilon_{\mathrm{p}}=(m_{1}+m_{2})/M_{\star}\). The angle \(\phi\) is conjugated with \(\Phi^{\prime}\) and is the generalized resonant angle. \(\Gamma^{\prime}\), on the other hand, is a parameter that is related to the system's angular momentum. Maintaining the notation of Deck & Batygin (2015), we have the relative migration timescale \[\frac{1}{\tau_{a}}=\frac{1}{\tau_{a,2}}-\frac{1}{\tau_{a,1}}, \tag{4}\] and the eccentricity damping timescale \[\frac{1}{\tau_{e}}=\frac{1}{\tau_{e,1}}+\frac{\zeta}{\tau_{e,2}}. \tag{5}\] Note that without making any approximations in the computation of the dissipation onto the action \(\Phi^{\prime}\), one obtains the following eccentricity damping timescale \[\frac{1}{\tau_{e}}=\frac{1}{\tau_{e,1}}+\frac{\sqrt{\alpha}}{R^{2}}\frac{ \zeta}{\tau_{e,2}} \tag{6}\] This expression slightly differs from that given in Deck & Batygin (2015) by a few percent for \(k\geq 2\) but is more accurate for the 2:1 MMR. Deck & Batygin (2015) also define the timescale \[\frac{1}{\tau_{a,e}}=\frac{1}{\tau_{e,1}}-\frac{\alpha}{R^{2}}\frac{\zeta^{2}} {\tau_{e,2}}. \tag{7}\] The damping on the action \(\Phi^{\prime}\) is expressed as \[\left.\frac{\mathrm{d}\Phi^{\prime}}{\mathrm{d}t^{\prime}}\right|_{\mathrm{dis }}=C_{0}\Phi^{\prime} \tag{8}\] where we assume \(\tau_{a}\gg\tau_{e}\); neglecting higher order terms in eccentricities, we have: \[C_{0}=-\frac{2\gamma_{1}}{\tau_{e}}, \tag{9}\] where \[\gamma_{1}=\frac{R^{2}}{R^{2}+\zeta\sqrt{\alpha}}, \tag{10}\] and for completness we define \(\gamma_{2}=1-\gamma_{1}\). Damping on the Hamiltonian parameter \(\Gamma^{\prime}\) is expressed as \[\left.\frac{\mathrm{d}\Gamma^{\prime}}{\mathrm{d}t^{\prime}}\right|_{\mathrm{ dis}}=A_{0}+A_{1}\Phi^{\prime} \tag{11}\] with \[A_{0}=\frac{\zeta\sqrt{\alpha}}{2Qk\eta_{2}^{2}}\frac{1}{\tau_{a}}\quad\text{ and}\quad A_{1}=\frac{-2p\gamma_{1}}{k\eta_{2}}\frac{1}{\tau_{a,e}}, \tag{12}\] where \[\eta_{2}=\frac{k-1}{k}+\zeta\sqrt{\alpha} \tag{13}\] with \(\eta_{1}=\eta_{2}/(\zeta\sqrt{\alpha})\). The equations of motion6 of the dissipative problems are Footnote 6: Expressed in terms of the renormalized time \(t^{\prime}\). 
\[\frac{\mathrm{d}\Phi^{\prime}}{\mathrm{d}t^{\prime}} =-\sqrt{2\Phi^{\prime}}\sin\phi+\frac{\eta_{2}^{3}C_{0}\Phi^{ \prime}}{n_{2}Q|\mathcal{K}_{2}|} \tag{14}\] \[\frac{\mathrm{d}\phi}{\mathrm{d}t^{\prime}} =-(\Phi^{\prime}-\Gamma^{\prime})-\frac{1}{\sqrt{2\Phi^{\prime}}}\cos\phi\] (15) \[\frac{\mathrm{d}\Gamma^{\prime}}{\mathrm{d}t^{\prime}} =\frac{\eta_{2}^{3}}{n_{2}Q|\mathcal{K}_{2}|}(A_{0}+(A_{1}+C_{0} )\Phi^{\prime}) \tag{16}\] where \(n_{j}\) is the mean motion and \(\mathcal{K}_{2}=-3(k-1)^{2}(\alpha_{\rm res}+\zeta)^{5}/(\zeta\alpha_{\rm res})\) is the second order derivative of the Keplerian Hamiltonian at the exact resonance. Looking for an equilibrium, we have \[\Phi^{\prime}_{\rm eq}=\frac{-A_{0}}{C_{0}+A_{1}} \tag{17}\] and, \[\sin\phi_{\rm eq}=\frac{\eta_{2}^{3}\sqrt{\Phi^{\prime}_{\rm eq}}C_{0}}{\sqrt{ 2}Q|\mathcal{K}_{2}|n_{2}}. \tag{18}\] The equilibrium exists if the absolute value of the right hand side of the above expression is smaller than 1. Accordingly, one can derive a condition on the migration speed \(\tau_{a}\) that is, without further approximation: \[\frac{1}{\tau_{a}n_{2}}<\sqrt{2}\varepsilon_{\rm p}\sqrt{\frac{\tau_{e}}{\tau_ {a}}}G(k,\zeta,p_{a,e}) \tag{19}\] where \(G\) is a function solely of the resonant index \(k\), the planet mass ratio \(\zeta\) and the ratio of the eccentricity damping timescales through the parameter \(p_{a,e}\) \[G(k,\zeta,p_{a,e})=\frac{\zeta\sqrt{\alpha_{\rm res}}+R^{2}}{1+\zeta}\sqrt{ \frac{\frac{k-1}{k}+\zeta\sqrt{\alpha_{\rm res}}}{R^{2}\alpha_{\rm res}}}f^{ \prime}_{k,31}\sqrt{k}\sqrt{1+p_{a,e}}, \tag{20}\] where \[p_{a,e}=\frac{p}{k-1+k\zeta\sqrt{\alpha_{\rm res}}}\frac{\tau_{e}}{\tau_{a,e}}. \tag{21}\] The expression (19) is equivalent to Eq. (16) for \(p=0\). In the case of an outer test particle, \(\zeta\rightarrow+\infty\), using expression Eq. (6) for \(\tau_{e}\), the criterion becomes \[\frac{1}{\tau_{a}n_{2}}<\sqrt{2}\varepsilon_{\rm p}\sqrt{\frac{\tau_{e,2}}{ \tau_{a}}}f^{\prime}_{k,31}\sqrt{k}\sqrt{1-\frac{p}{k}}. \tag{22}\] Taking \(p=0\) yields the expression given by equation (10) derived from the restricted problem. Similarly \(p=1\) corresponds to the result obtained by Huang & Ormel (2023).
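For convenience, the general condition (19) can be evaluated numerically with the short sketch below; the resonant coefficients \(f^{\prime}_{k,31}\) and \(R=|f_{k,27}|/f^{\prime}_{k,31}\) are left as inputs to be taken from standard tables (e.g., Murray & Dermott, 1999), no coefficient values are hard-coded, and the function names are ours.

```python
import numpy as np

# Sketch of the capture condition (19) and of the factor G of equation (20).
def G(k, zeta, R, f31p, p_ae=0.0):
    """Equation (20): dimensionless factor entering the stability criterion."""
    alpha_res = (1.0 - 1.0 / k)**(2.0 / 3.0)
    return ((zeta * np.sqrt(alpha_res) + R**2) / (1.0 + zeta)
            * np.sqrt(((k - 1.0) / k + zeta * np.sqrt(alpha_res)) / (R**2 * alpha_res))
            * f31p * np.sqrt(k) * np.sqrt(1.0 + p_ae))

def capture_possible(tau_a, n2, eps_p, tau_e, k, zeta, R, f31p, p_ae=0.0):
    """Equation (19): True if convergent migration is slow enough for capture."""
    return 1.0 / (tau_a * n2) < (np.sqrt(2.0) * eps_p * np.sqrt(tau_e / tau_a)
                                 * G(k, zeta, R, f31p, p_ae))

# Usage (coefficients R and f31p must be supplied from the literature):
# capture_possible(tau_a, n2, eps_p, tau_e, k=3, zeta=1.0, R=R_32, f31p=f31p_32)
```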
2301.03201
Safehaul: Risk-Averse Learning for Reliable mmWave Self-Backhauling in 6G Networks
Wireless backhauling at millimeter-wave frequencies (mmWave) in static scenarios is a well-established practice in cellular networks. However, highly directional and adaptive beamforming in today's mmWave systems has opened new possibilities for self-backhauling. Tapping into this potential, 3GPP has standardized Integrated Access and Backhaul (IAB), allowing the same base station to serve both access and backhaul traffic. Although much more cost-effective and flexible, resource allocation and path selection in IAB mmWave networks is a formidable task. To date, prior works have addressed this challenge through a plethora of classic optimization and learning methods, generally optimizing a Key Performance Indicator (KPI) such as throughput, latency, and fairness, and little attention has been paid to the reliability of the KPI. We propose Safehaul, a risk-averse learning-based solution for IAB mmWave networks. In addition to optimizing average performance, Safehaul ensures reliability by minimizing the losses in the tail of the performance distribution. We develop a novel simulator and show via extensive simulations that Safehaul not only reduces the latency by up to 43.2% compared to the benchmarks but also exhibits significantly more reliable performance (e.g., 71.4% less variance in achieved latency).
Amir Ashtari Gargari, Andrea Ortiz, Matteo Pagin, Anja Klein, Matthias Hollick, Michele Zorzi, Arash Asadi
2023-01-09T08:35:52Z
http://arxiv.org/abs/2301.03201v2
# Safehaul: Risk-Averse Learning for Reliable mmWave Self-Backhauling in 6G Networks ###### Abstract Wireless backhauling at millimeter-wave frequencies (mmWave) in static scenarios is a well-established practice in cellular networks. However, highly directional and adaptive beamforming in today's mmWave systems has opened new possibilities for self-backhauling. Tapping into this potential, 3GPP has standardized Integrated Access and Backhaul (IAB), allowing the same base station to serve both access and backhaul traffic. Although much more cost-effective and flexible, resource allocation and path selection in IAB mmWave networks is a formidable task. To date, prior works have addressed this challenge through a plethora of classic optimization and learning methods, generally optimizing a Key Performance Indicator (KPI) such as throughput, latency, and fairness, and little attention has been paid to the reliability of the KPI. We propose Safehaul, a risk-averse learning-based solution for IAB mmWave networks. In addition to optimizing average performance, Safehaul ensures reliability by minimizing the losses in the tail of the performance distribution. We develop a novel simulator and show via extensive simulations that Safehaul not only reduces the latency by up to 43.2% compared to the benchmarks, but also exhibits significantly more reliable performance, e.g., 71.4% less variance in achieved latency. Millimeter-Wave Communication, Integrated Access and Backhaul (IAB), Self-backhauling, Wireless Backhaul ## I Introduction The emergence of mmWave cellular systems created a unique opportunity for cellular operators to leverage a scalable and cost-effective approach to deal with network densification. The fact that mmWave base stations can support fiber-like data rates facilitates the use of the same base station for both access and backhaul traffic, a solution which in 3GPP parlance is referred to as Integrated Access and Backhaul (IAB). Consequently, 3GPP has included IAB in the standard [1, 2] covering the details on architecture, higher layer protocols, and the radio. Although Release 17 of the 5G-NR defines the interfaces, architectures, and certain system parameters, the actual configuration and resource allocation are left to operators. Traditional self-backhauled networks featured fixed-wireless links decoupled from access networks with static configurations. In contrast, IAB should account for the dynamic nature of the backhaul links (particularly in dense mmWave deployments) and their integration with the access network. Further, IAB allows the traffic to traverse several hops (i.e., base stations) to reach its destination, adding a new dimension to the problem's complexity. _In addition to the scheduling problem, an IAB network should: \((i)\) solve the problem of path selection and link activation at the backhaul while considering inter-cell interference and \((ii)\) decide on serving access or backhaul traffic depending on the access load and the ingress backhaul traffic from neighboring base stations._ **Prior work.** Methodologically, the majority of the existing works [3, 4, 5, 6, 7, 8, 9, 10, 11] focus on classic optimization techniques to solve the above-mentioned problem. However, given the large number of parameters involved, such formulations often result in (non-)convex problems that are too complex for real-time operations. Nonetheless, they are valuable indicators to mark the upper-bound performances.
Recently, some works focus on more practical solutions which can be deployed in real networks [12, 13, 14]. Specifically, these works leverage Reinforcement Learning (RL) to tackle resource allocation and/or path selection in IAB mmWave networks and demonstrate that RL-based solutions achieve real-time performance. Regardless of the methodology, prior works mostly aim at maximizing the network capacity [3, 4, 5, 6, 7, 8, 9, 10], minimizing latency [15, 16] and improving throughput fairness [4, 17]. Although optimizing capacity and latency is a challenging task by itself, network operators are often more _concerned about the reliability of such approaches_. This is the underlying reason that many commercial products rely on _simplified_ but reliable algorithms for resource allocation, despite their sub-optimal performance. In this article, we propose Safehaul, a reinforcement learning-based solution for reliable scheduling and path selection in IAB mmWave systems under network dynamics. We use the concept of risk aversion, commonly used in economics [18, 19], to measure and enhance the reliability of Safehaul. The following summarizes our contributions: * We model the scheduling and path selection problem in IAB mmWave networks as a multi-agent multi-armed bandit problem (Section III). We consider multiple fiber base stations simultaneously supporting many self-backhauled mmWave base stations. In our model, the self-backhauled base stations independently decide the links to activate. The consensus among the base stations is reached via standard-defined procedures (Section IV-C). * We present the first solution to provide reliable performance in IAB-enabled networks (Section IV). Specifically, we investigate the joint minimization of the average end-to-end latency and its expected tail loss for each base station. To this aim, we propose Safehaul, a learning approach that leverages a coherent risk measure called Conditional Value at Risk (CVaR) [18]. CVaR measures the tail average of the end-to-end latency distribution that exceeds the maximum permitted latency, thus ensuring the reliability of the network. * We provide a new means of simulating multi-hop IAB networks by extending NVIDIA's newly released GPU-accelerated simulator, i.e., Sionna [20] (Section V). Specifically, we add codebook-based analog beamforming capability for both uplink and downlink communications. Further, we extend Sionna by implementing system-level components such as layer-2 schedulers and buffers and Backhaul Adaptation Protocol (BAP)-like routing across the IAB network. We believe our IAB extensions will be instrumental for the open-source evaluation of future research on self-backhauled mmWave networks. * Exploiting the above simulator, we evaluate and benchmark Safehaul against two of the state-of-the-art algorithms [16, 21]. The results confirm that Safehaul is significantly more reliable than the benchmarks as it exhibits much tighter variance both in terms of latency (up to 71.4%) and packet drop rate (at least 39.1%). Further, Safehaul achieves up to 43.2% lower average latency and 11.7% higher average throughput than the reference schemes. ## II System Model We consider a cellular system with \(N\) base stations capable of self-backhauling and \(D\) base stations with a fiber connection to the core network. Following 3GPP terminology, we refer to self-backhauled base stations as IAB-nodes (BS\({}_{node}\)) and the fiber base stations as IAB-donors (BS\({}_{donor}\)).
Each IAB-node connects to the core network via a (multi-hop) wireless link to an IAB-donor. The sets of all BSs\({}_{node}\) and BSs\({}_{donor}\) are denoted by \(\mathcal{N}=\{1,\ldots,N\}\) and \(\mathcal{D}=\{1,\ldots,D\}\), respectively. The system works in a time-slotted fashion starting from time slot \(i=1\) until a finite time horizon \(I\). All the time slots \(i=1,\ldots,I\) have the same duration. The IAB-nodes are equipped with two RF chains. One RF chain is used exclusively for communication with cellular users (access network), while the second RF chain is used for self-backhauling. In line with the 3GPP specification [22], we assume half-duplex self-backhauling, i.e., in each time slot \(i\) an IAB-node can either transmit data, receive data, or remain idle. We model the connections between the base stations as a graph \(\mathcal{G}_{i}=\{\mathcal{V},\mathcal{E}_{i}\}\), see Fig. 1. The set \(\mathcal{V}=\mathcal{N}\cup\mathcal{D}\) of vertices is formed by all the BS\({}_{node}\) and BS\({}_{donor}\) in the system. The set \(\mathcal{E}_{i}\) of edges is composed of the available wireless links \((n,l)\) between a BS\({}_{node}\)\(n\in\mathcal{N}\) and any BS (BS\({}_{donor}\) or BS\({}_{node}\)) \(l\in\mathcal{V}\) in time slot \(i\). Note that the graph \(\mathcal{G}_{i}\) is not static. In a given time slot \(i\), some links can be unavailable due to failure, blockage, or interference. Thus, only feasible wireless links are considered in the set \(\mathcal{E}_{i}\). The path \(X_{n,d}\) from BS\({}_{node}\)\(n\) to any BS\({}_{donor}\)\(d\) is a sequence of intermediate links \((n,l)\). Note that \(X_{n,d}\) changes over time according to the traffic loads of the intermediate BS\({}_{node}\) and the channel conditions. We model the activation of link \((n,l)\) with the binary variable \(x_{n,l,i}\). When \(x_{n,l,i}=1\), the link is activated and BS\({}_{node}\)\(n\) transmits to BS \(l\in\mathcal{V}\) during time slot \(i\). Conversely, \(x_{n,l,i}=0\) indicates the link is deactivated and no transmission can occur. Each BS\({}_{node}\)\(n\) has a finite data buffer with capacity \(B_{n}^{\max}\) to store the backhaul data to be transmitted to any of the BSs\({}_{donor}\). In each time slot \(i\), BS\({}_{node}\)\(n\) is characterized by its load and average queuing time. The load, denoted by \(B_{n,i}\in\mathbb{N}\), indicates the number of data packets stored in the buffer at the beginning of time slot \(i\). The average queuing time \(t_{n,i}^{\text{q}}\in\mathbb{R}^{+}\) is the average number of time slots the current packets in the data buffer have been stored. Additionally, we denote by \(t_{n,l,i}^{\text{tx}}\in\mathbb{R}^{+}\) and \(M_{n,l,i}\in\mathbb{R}^{+}\) the transmission time from BS\({}_{node}\)\(n\) to BS \(l\) in time slot \(i\), and the amount of successfully transmitted data on the link, respectively. We define the maximum tolerable latency \(T_{\max}\) as the maximum time a packet can take from its source BS\({}_{node}\) to any BS\({}_{donor}\). Any packet that is not delivered before \(T_{\max}\) milliseconds will be dropped. The average maximum end-to-end latency \(\bar{T}_{n,d}\) from BS\({}_{node}\)\(n\) to BS\({}_{donor}\)\(d\) is the average, over the complete time horizon \(I\), of the maximum delay a packet originating from BS\({}_{node}\)\(n\) takes to reach any BS\({}_{donor}\)\(d\) in time slot \(i\).
This is calculated as: \[\bar{T}_{n,d}=\frac{1}{I}\sum_{i=1}^{I}T_{n,d,i}, \tag{1}\] where \(T_{n,d,i}\) is the maximum end-to-end latency among all the packets originating in BS\({}_{node}\)\(n\) which reach BS\({}_{donor}\)\(d\) in time slot \(i\). \(T_{n,d,i}\) is a sample of the random variable \(T_{n,d}\) drawn from an unknown stationary probability distribution \(P\) that depends on the activated links \(x_{n,l,i}\), the cellular user's mobility, the location of the BS\({}_{node}\)\(n\), the interference in the system, and the queue dynamics. Considering (1), the average end-to-end latency in the system \(\bar{T}\) is defined as \[\bar{T}=\frac{1}{ND}\sum_{n=1}^{N}\sum_{d=1}^{D}\bar{T}_{n,d}. \tag{2}\] Fig. 1: Example of a graph \(\mathcal{G}_{i}\) ## III Problem Formulation The joint minimization of the average end-to-end latency and the expected value of its tail loss in IAB-enabled networks is formulated in this section. We first introduce \(\mathrm{CVaR}\), the risk metric accounting for minimizing the events in which the end-to-end latency is higher than \(T_{\max}\). Next, we formulate the optimization problem in the complete network. ### _Preliminaries on CVaR_ Traditionally, latency minimization in IAB-enabled networks has focused on optimizing the expected value of a latency function [15, 16]. However, such an approach fails to capture the time variability of the latency distribution, thus leading to unreliable systems in which \(T_{n,d,i}>T_{\max}\), for any \(i=1,...,I\), \(n\in\mathcal{N}\) and \(d\in\mathcal{D}\). _For this purpose, we consider not only the average end-to-end latency \(\bar{T}\) in the system, but also its expected tail loss based on the CVaR_[18, 23]. Having in mind that \(T_{n,d}\) is a random variable, we assume it has a bounded mean on a probability space \((\Omega,\mathcal{F},P)\), with \(\Omega\) and \(\mathcal{F}\) being the sample and event space, respectively. Using a risk level \(\alpha\in(0,1]\), the \(\mathrm{CVaR}_{\alpha}(T_{n,d})\) of \(T_{n,d}\) at risk level \(\alpha\) quantifies the losses that might be encountered in the \(\alpha\)-tail. More specifically, it is the expected value of \(T_{n,d}\) in its \(\alpha\)-tail distribution [23]. Formally, \(\mathrm{CVaR}_{\alpha}(T_{n,d})\) is defined as [18] \[\mathrm{CVaR}_{\alpha}(T_{n,d})=\min_{q\in\mathbb{R}}\left\{q+\frac{1}{\alpha }\mathbb{E}\left[\max\{T_{n,d}-q,0\}\right]\right\}, \tag{3}\] where the expectation in (3) is taken over the probability distribution \(P\). Note that lower \(\mathrm{CVaR}_{\alpha}(T_{n,d})\) results in higher reliability in the system because the expected end-to-end latency in the \(\alpha\)-worst cases is low. Moreover, note that \(\alpha\) is a risk aversion parameter. For \(\alpha=1\), \(\mathrm{CVaR}_{\alpha}(T_{n,d})=\mathbb{E}[T_{n,d}]\) which corresponds to the traditional risk-neutral case. Conversely, for \(\alpha=0\), \(\lim_{\alpha\to 0}\mathrm{CVaR}_{\alpha}(T_{n,d})=\mathrm{sup}\{T_{n,d}\}\). \(\mathrm{CVaR}\) has been shown to be a coherent risk measure, i.e., it fulfills monotonicity, subadditivity, translation invariance, and positive homogeneity properties [24]. ### _Optimization Problem_ We aim to jointly minimize the average end-to-end latency and its expected tail loss for each \(\text{BS}_{node}\). For this purpose, we decide which of the \((n,l)\) links to activate in each time slot \(i\) during the finite time horizon \(I\). 
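Before writing down the network-wide problem, it is worth noting that the two readings of \(\mathrm{CVaR}_{\alpha}\), the \(\alpha\)-tail average and the minimization in (3), can be checked against each other numerically; the snippet below does so on synthetic latency samples (the lognormal distribution and all numbers are purely illustrative).

```python
import numpy as np

# Sketch: empirical CVaR of a latency sample, computed two ways
# (direct alpha-tail average vs. the minimization form of eq. 3).
rng = np.random.default_rng(0)
latency = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)  # synthetic samples [ms]
alpha = 0.1

# Direct tail average: mean of the worst alpha-fraction of samples.
tail = np.sort(latency)[-int(np.ceil(alpha * latency.size)):]
cvar_tail = tail.mean()

# Minimization form: CVaR = min_q { q + E[max(T - q, 0)] / alpha };
# the minimizer q is the (1 - alpha)-quantile of the distribution.
q = np.quantile(latency, 1.0 - alpha)
cvar_min = q + np.maximum(latency - q, 0.0).mean() / alpha

print(f"CVaR_{alpha}: tail average = {cvar_tail:.2f} ms, minimization form = {cvar_min:.2f} ms")
```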
In the following, we formulate the optimization problem from the network perspective and consider the sum over all the \(\text{BS}_{node}\) in the system. The latency minimization problem should consider three different aspects: \((i)\) link activation is constrained by the half-duplex nature of self-backhauling, \((ii)\) only data stored in the data buffers can be transmitted, and \((iii)\) packet drop due to buffer overflow should be avoided. Formally, the problem is written as: \[\underset{\{x_{n,l,i},i\in[1,I]\}}{\text{minimize}} \sum_{n\in\mathcal{N}}\sum_{d\in\mathcal{D}}\bar{T}_{n,d}+\eta\,\mathrm{CVaR}_{\alpha}(T_{n,d}) \tag{4a}\] \[\mathrm{subject\ to}\] \[\sum_{l\in\mathcal{V}}x_{n,l,i}+\sum_{l\in\mathcal{N}}x_{l,n,i}=1,\qquad n\in\mathcal{N},\] (4b) \[\sum_{j=1}^{i}B_{n,j}\geq\sum_{j=1}^{i}M_{n,l,j},\qquad n\in\mathcal{N},l\in\mathcal{V},\] (4c) \[\sum_{j=1}^{i}B_{l,j}+\sum_{j=1}^{i}M_{n,l,j}\leq B_{l}^{\max},\qquad n\in\mathcal{N},l\in\mathcal{V},\] (4d) \[x_{n,l,i}\in\{0,1\},\qquad n\in\mathcal{N},l\in\mathcal{V}. \tag{4e}\] In (4a), \(\eta\in[0,1]\) is a weighting parameter to trade off between minimizing the average end-to-end latency \(\bar{T}_{n,d}\) and the expected loss of its tail. As the considered scenario is not static, solving (4) requires complete non-causal knowledge of the system dynamics during the complete time horizon \(I\). However, in practical scenarios, knowledge about the underlying random processes is not available in advance. For example, the IAB-node's loads \(B_{n,i}\) depend not only on the transmitted and received backhaul data, but also on the randomly arriving data from its users. Similarly, the amounts of transmitted data \(M_{n,l,i}\) depend on the varying channel conditions of both BS \(n\) and \(l\). As a result, the exact values of \(T_{n,d,i}\), \(B_{n,i}\) and \(M_{n,l,i}\) are not known beforehand. For this reason, we present in Sec. IV Safehaul, a multi-agent learning approach to minimize in each \(\text{BS}_{node}\) the average end-to-end latency and its expected tail loss. ## IV Our proposed solution: Safehaul In this section, we describe Safehaul, a multi-agent learning approach for the joint minimization of the average end-to-end latency and its expected tail loss in IAB mmWave networks. In Safehaul, each \(\text{BS}_{node}\) independently decides which links \((n,l)\) to activate in every time slot \(i\) by leveraging a multi-armed bandit formulation. The consensus among the \(\text{BS}_{node}\) is reached by exploiting the centralized resource coordination and topology management role of IAB-donors [1, Sec. 4.7.1]. ### _Multi-Armed Bandit Formulation_ Multi-armed bandits are well suited to problems in which an agent makes sequential decisions in an unknown environment [25]. In our scenario, each \(\text{BS}_{node}\)\(n\) decides, in each time slot \(i\), which of the links \((n,l)\) to activate without requiring prior knowledge about the system dynamics. The multi-armed bandit problem at \(\text{BS}_{node}\)\(n\) can be characterized by a set \(\mathcal{A}_{n}\) of actions and a set \(\mathcal{R}_{n}\) of possible rewards. The rewards \(r_{n,i}\in\mathcal{R}_{n}\) are obtained in each time slot \(i\) as a response to the selected action \(a_{n,i}\in\mathcal{A}_{n}\). Specifically, the actions are the links that \(\text{BS}_{node}\)\(n\) can activate, and the rewards are a function of the observed latency.
We define \(\mathcal{A}_{n}\) as \[\mathcal{A}_{n}=\{(n,l),(m,n)|n,m\in\mathcal{N},\,l\in\mathcal{V}\}, \tag{5}\] where link \((n,n)\) indicates the \(\text{BS}_{node}\)\(n\) remains idle. As blockages, overloads, or failures might render certain links \((n,l)\) temporarily unavailable, we define the set \(\mathcal{A}_{n,i}\subseteq\mathcal{A}_{n}\) of available actions in time slot \(i\) as \[\mathcal{A}_{n,i}=\{(n,l),(l,n)|(n,l),(l,n)\in\mathcal{E}_{i}\}. \tag{6}\] Selecting action \(a_{n,i}=(n,l)\) in time slot \(i\) implies \(x_{n,l,i}=1\). The rewards \(r_{n,i}\) are a function of the end-to-end latencies \(T_{n,d,i}\) and depend on whether at BS\({}_{node}\)\(n\) a link \((n,l)\) or \((l,n)\) is activated. BS\({}_{node}\)\(n\) is connected to the BS\({}_{donor}\) via multi-hop wireless links. Consequently, \(T_{n,d,i}\) cannot be immediately observed when a link \((n,l)\), with \(l\notin\mathcal{D}\), is activated. In fact, the destination BS\({}_{donor}\)\(d\) might not even be known to BS\({}_{node}\)\(n\) at time slot \(i\). To overcome this limitation, we define the rewards \(r_{n,i}\) as a function of the next-hop's estimated end-to-end latency \(\hat{T}_{l,d,i}\) as \[r_{n,i}=\begin{cases}t_{l,i}^{\text{q}}+t_{n,l,i}^{\text{tx}}+\hat{T}_{l,d,i},&\text{for link }(n,l)\\ t_{n,i}^{\text{q}}+\hat{T}_{n,d,i},&\text{for link }(l,n),\end{cases} \tag{7}\] where \(\hat{T}_{l,d,i}\) is calculated as \[\hat{T}_{l,d,i}=\min_{(l,m)\in\mathcal{E}_{i}}\hat{T}_{l,m,i}. \tag{8}\] ### _Latency and CVaR Estimation_ As given in (7) and (8), BS\({}_{node}\)\(n\) learns which links \((n,l)\) to activate by building estimates of the expected latency \(\hat{T}_{n,l}\) associated with each of them. Let \(K_{n,l,i}=\sum_{j=1}^{i}x_{n,l,j}\) be the number of times link \((n,l)\) has been activated up to time slot \(i\). \(\hat{T}_{n,l}\) is updated using the sample mean as \[\hat{T}_{n,l,i+1}=\frac{K_{n,l,i}\hat{T}_{n,l,i}+r_{n,i}}{K_{n,l,i}+1}, \tag{9}\] where the subindex \(i\) is introduced to emphasize that the estimate is built over time. The CVaR definition given in (3) requires \(T_{n,d}\) which, as discussed before, is not known a priori. Hence, we leverage the non-parametric estimator derived in [26] to estimate the CVaR of a link \((n,l)\). To this aim, let \(\tilde{r}_{n}^{1},\ldots,\tilde{r}_{n}^{K_{n,l,i}}\) be all the rewards received up to time \(i\) ordered in a descending fashion, i.e., \(\tilde{r}_{n}^{1}\geq\cdots\geq\tilde{r}_{n}^{K_{n,l,i}}\). The estimated \(\widehat{\text{CVaR}}_{i}(n,l)\) at time slot \(i\) is calculated as [26] \[\widehat{\text{CVaR}}_{i}(n,l)=\frac{1}{\lceil\alpha K_{n,l,i}\rceil}\sum_{k=1}^{\lceil\alpha K_{n,l,i}\rceil}\tilde{r}_{n}^{k}. \tag{10}\] Using the estimates in (9) and (10), BS\({}_{node}\) computes the value \(Q_{n}(a_{n,i}=(n,l))\) associated with the selected action \(a_{n}\in\mathcal{A}_{n}\), and defined as \[Q_{n}(a_{n,i})=\hat{T}_{n,l,i}+\eta\,\widehat{\text{CVaR}}_{i}(n,l). \tag{11}\] Note that (11) is aligned with the objective function in (4a). Actions with an associated low value \(Q_{n}(a_{n,i})\) lead to lower end-to-end latency and a low expected value on its tail. ### _Consensus_ All the BS\({}_{node}\) independently decide which links to activate based on their estimates of the end-to-end latency. As a consequence, conflicting actions may be encountered. A conflict occurs when two or more BS\({}_{node}\)\(n\) and \(m\) aim at activating a link to a common BS \(l\), \(l\in\mathcal{V}\), i.e., \(x_{n,l,i}=x_{m,l,i}=1\).
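Before describing how such conflicts are resolved, the per-link bookkeeping of equations (9)-(11) can be condensed into a few lines; the class below is a sketch with names of our own choosing, not the actual Safehaul implementation.

```python
import math

# Sketch of the per-link estimators of equations (9)-(11) that a BS_node
# maintains for one outgoing link.
class LinkEstimator:
    def __init__(self, alpha: float, eta: float):
        self.alpha, self.eta = alpha, eta
        self.count = 0        # K_{n,l,i}: number of times the link was activated
        self.t_hat = 0.0      # \hat{T}_{n,l,i}: sample-mean latency estimate
        self.rewards = []     # rewards r_{n,i} observed on this link

    def update(self, reward: float) -> None:
        # Eq. (9): incremental sample mean.
        self.t_hat = (self.count * self.t_hat + reward) / (self.count + 1)
        self.count += 1
        self.rewards.append(reward)

    def cvar(self) -> float:
        # Eq. (10): average of the ceil(alpha * K) largest rewards seen so far.
        if not self.rewards:
            return 0.0
        k = math.ceil(self.alpha * self.count)
        worst = sorted(self.rewards, reverse=True)[:k]
        return sum(worst) / len(worst)

    def q_value(self) -> float:
        # Eq. (11): value used to rank links (lower is better).
        return self.t_hat + self.eta * self.cvar()
```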
We reach consensus by first retrieving the buffer and congestion status of the various IAB-nodes, leveraging the related BAP layer functionality [1, Sec. 4.7.3]. With this information at hand, conflicts are resolved by prioritizing the transmission of the BS\({}_{node}\) with the larger queuing times \(t_{n,i}^{\text{q}}\) and loads \(B_{n,i}\). Then, we let the IAB-donor mark as _unavailable_ the time resources of the remaining base stations with conflicting scheduling decisions [1, Sec. 10.9]. Note that as the learning is performed at each BS\({}_{node}\), only the link activation decision and the weighted sum of \(t_{n,i}^{\text{q}}\) and \(B_{n,i}\) are transmitted. Thus, low communication overhead is maintained. ### _Implementation of Safehaul_ Here, we describe how the above-mentioned solution can be implemented in a real system. Specifically, we elaborate on the required inputs and the interactions among the different entities, as well as the pseudo-code of Safehaul, see Alg. 1. Safehaul is executed at each BS\({}_{node}\)\(n\). For its implementation, the network operator provides \(\alpha\), \(\eta\) and \(\mathcal{A}_{n}\) as an input. \(\alpha\) is the risk level parameter that influences the level of reliability achieved in the system. Similarly, \(\eta\) controls the impact the minimization of the latency in the \(\alpha\)-worst cases has on the overall performance. Both parameters, \(\alpha\) and \(\eta\), are set by the network operator depending on its own reliability requirements. The set \(\mathcal{A}_{n}\) depends on the considered network topology which is perfectly known by the network operator. The set \(\mathcal{A}_{n}\) includes all links \((n,l)\) and \((l,n)\) to and from the first-hop neighbors of BS\({}_{node}\)\(n\). ```
Require: \(\alpha\), \(\eta\), \(\mathcal{A}_{n}\)
 1: Initialize \(\hat{T}_{n,l}\), \(\widehat{\text{CVaR}}(n,l)\), and \(Q_{n}\) for all \((n,l)\in\mathcal{A}_{n}\)
 2: Set counters \(K_{n,l}=0\) and initial action \(a_{n,1}=(n,n)\)
 3: for every time slot \(i=1,\ldots,I\) do
 4:     perform action \(a_{n,i}\) and observe reward \(r_{n,i}\)   \(\triangleright\) Eq. (7)
 5:     increase counter \(K_{n,l}\) by one
 6:     update latency estimate \(\hat{T}_{n,l}\)   \(\triangleright\) Eq. (9)
 7:     update CVaR estimate \(\widehat{\text{CVaR}}(n,l)\)   \(\triangleright\) Eq. (10)
 8:     update \(Q_{n}(a_{n,i})\)   \(\triangleright\) Eq. (11)
 9:     select next action \(a_{n,i+1}\) using \(\epsilon\)-greedy
10:     share \(a_{n,i+1}\), \(t_{n,i}^{\text{q}}\) and \(B_{n,i}\) with the other BS\({}_{node}\)
11:     if required, update \(a_{n,i+1}\) to reach consensus
12: end for
``` **Algorithm 1** Safehaul algorithm at each BS\({}_{node}\) The execution of Safehaul begins with the initialization of the latency and CVaR estimates, and the values \(Q\) of the actions in \(\mathcal{A}_{n}\). Additionally, the counters \(K_{n,l}\), that support the calculations of \(\hat{T}_{n,l}\) and \(\widehat{\text{CVaR}}(n,l)\), are initialized for all links in \(\mathcal{A}_{n}\) (lines 1-2). These parameters are updated and learnt throughout the execution of Safehaul. At time slot \(t=0\), no transmission has occurred and \(B_{n,0}=0\). Hence, BS\({}_{node}\)\(n\) remains idle for the first time slot \(i=1\), i.e., \(a_{n,1}=(n,n)\) (line 2). Next, and in all the subsequent time slots \(i\in[1,I]\), the selected action is performed and the corresponding reward is obtained (line 4). If BS\({}_{node}\)\(n\) transmits in time slot \(i\), i.e., \(a_{n,i}=(n,l)\), the reward \(r_{n,i}\) is sent by the receiving BS \(l\) through the control channel.
If \(a_{n,i}=(l,n)\), the reward \(r_{n,i}\) depends, as given in (7), only on the current estimates at BS\({}_{node}\)\(n\) and the status of its buffer \(B_{n,i}\). With the observed reward \(r_{n,i}\), the counter for action \(a_{n,i}\) is increased and the latency and CVaR estimates are updated (lines 5-7). Using the new estimates (lines 6 and 7), the value \(Q(a_{n,i})\) of the performed action \(a_{n,i}\) is updated (line 8). The next action \(a_{n,i+1}\) is then selected according to \(\epsilon\)-greedy (line 9). \(\epsilon\)-greedy is a well-known method to balance the exploitation of links with estimated low latency, and the exploration of unknown but potentially better ones. In \(\epsilon\)-greedy, a random action \(a_{n,i+1}\) from the set \(\mathcal{A}_{n,i}\) is selected with probability \(\epsilon\in[0,1]\). With probability \((1-\epsilon)\), the action that yields the estimated lowest value is chosen, i.e., \[a_{n,i+1}=\begin{cases}\text{randomly selected action from }\mathcal{A}_{n,i},&\text{if }x\leq\epsilon\\ \operatorname*{argmin}_{b_{n}\in\mathcal{A}_{n,i}}Q_{n}(b_{n}),&\text{if }x>\epsilon,\end{cases} \tag{12}\] where \(x\) is a sample taken from a uniform distribution in the interval \([0,1]\). Once the action \(a_{n,i+1}\) is selected, it is shared with other BS\({}_{node}\) in the network along with \(t_{n,i}^{\text{q}}\) and \(B_{n,i}\) (line 10). As described in Section IV-C, this goes through the control channel. If conflicts arise, consensus is reached by prioritizing the transmission of BS\({}_{node}\) with the largest loads and queuing times (line 11). The regret of Safehaul is defined as the expected loss caused by the fact that the optimal action is not always selected [27]. For brevity, we omit the regret analysis of Safehaul. Nevertheless, a regret bound can be derived following an approach similar to the work in [28] but including the \(\epsilon\)-greedy considerations [27]. Moreover, the bound should account for the fact that the probability of not selecting an optimal action also depends on the actions of the other BS\({}_{node}\). ## V Simulation setup Given the lack of access to actual 5G (and beyond) network deployments, prior works mostly rely on _home-grown_ simulators for performance evaluation. Although a valid approach, these simulators often cannot fully capture the real network dynamics, introducing strong assumptions in the physical and/or the upper layers of the protocol stack. Until very recently, the most complete simulator for IAB networks was a system-level simulator [29] developed as an extension of the ns-3 _mmWave_ module [30]. Despite accurate modeling of the IAB protocol stack, it is currently behind the latest IAB specifications1. Moreover, the ns-3 IAB extension is unsuitable for large simulations with hundreds of nodes due to reliance on an older version of the _mmWave_ module. Therefore, in our work we opt for Sionna [20], which is an open-source GPU-accelerated toolkit based on TensorFlow. The tensor-based implementation supports the native integration of neural networks and prototyping complex communication systems. Footnote 1: For instance, due to the assumption of layer-3 (instead of layer-2) relaying at the IAB-nodes which was based on a draft version of the TR 38.874 [31]. However, unlike the aforementioned ns-3 module, Sionna is a physical layer-focused simulator that does not explicitly model 5G networks, thus lacking the characterization of the 5G-NR upper-layer protocol stack.
Hence, we extend Sionna by including system-level functionalities such as MAC-level scheduling and RLC-level buffering. Furthermore, since Sionna exhibits slight differences compared to the 5G-NR physical layer, we extend Sionna's physical layer model [20] with the 5G-NR procedures. All these contributions will be made publicly available upon publication of this article. In the following, we describe the details of our extensions. ### _Extensions to Sionna's physical layer module_ In this section, we describe the physical layer modifications that were necessary to evaluate IAB scenarios using Sionna. #### V-A1 Codebook-based Beamforming Sionna's native beamforming only supports Zero-Forcing (ZF) pre-coding in downlink. Therefore, as a first step, we extend Sionna by implementing an NR-like codebook-based analog beamforming both at the transmitter and at the receiver. Specifically, we assume that the beamforming vectors at the transmitter \(w_{tx}\) and at the receiver \(w_{rx}\) are a pair of codewords selected from a predefined codebook. The codebook is computed by defining a set of beam directions \(\{\omega_{n,m}\}\) which scans a given angular sector with a fixed beamwidth. The steering vector \(a_{n,m}\) corresponding to direction \(\omega_{n,m}\) can be computed as: \[a_{n,m}=\Big{[}1,\,\ldots,\,e^{j\frac{2\pi}{\lambda}d(i_{H}\sin\alpha_{n}\sin\beta_{m}+i_{V}\cos\beta_{m})},\,\ldots,\,e^{j\frac{2\pi}{\lambda}d((N_{H}-1)\sin\alpha_{n}\sin\beta_{m}+(N_{V}-1)\cos\beta_{m})}\Big{]}^{T}, \tag{13}\] where \(N_{H}\) and \(N_{V}\) are the numbers of horizontal and vertical antenna elements, respectively. The horizontal and vertical indices of a radiating element are denoted by \(i_{H}\in[0,\,N_{H}]\) and \(i_{V}\in[0,\,N_{V}]\), respectively. \(\alpha_{n}\) and \(\beta_{m}\) represent the azimuth and elevation angles of \(\omega_{n,m}\). Next, we define the codebook as the set \(\{\left(\sqrt{N_{H}N_{V}}\right)^{-1}\,a_{n,m}\}\). In line with the 5G-NR beam management procedure [32], we assume the lack of complete channel knowledge, i.e., the communication endpoints do not know the corresponding channel matrix. Accordingly, an exhaustive search is conducted to identify the best pair of codewords resulting in the highest Signal to Interference plus Noise Ratio (SINR). Specifically, we leverage a hierarchical search [33], in which the communication pairs first perform a wide-beam search (a.k.a. sector-level sweep) in which the transmitter and receiver approximate the direction of communication, see Fig. 2. Next, the beamforming direction is fine-tuned through a beam refinement procedure going through a codebook with narrow beams. Consequently, we employ two types of codebooks, one with wide beams for sector sweep and another with narrow beams for beam refinement. Fig. 2: Schematic of the hierarchical beam management procedure. First, the general direction is estimated using wide beams (top). Then, the search is refined using the narrow beams codebook. #### V-A2 SINR Computations Since Sionna does not natively calculate the SINR, we add this functionality to the simulator to better model the impact of interference in our simulations. We compute the SINR experienced by Transport Blocks (TBs) by combining the power of the intended signal with that of the interferers and the thermal noise. Specifically, we first compute the power \(P_{i}\) of the intended signal at receiver \(i\) over frequency \(f\) and at time slot \(t\).
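Before detailing how the interference terms enter this computation, the codebook construction of equation (13) can be made concrete; the numpy sketch below builds normalized codewords over a grid of directions (array size, element spacing, and grid values are assumed for illustration, and this is not the code of our Sionna extension).

```python
import numpy as np

def steering_vector(az, el, n_h, n_v, d_over_lambda=0.5):
    """Equation (13): steering vector of an n_h x n_v planar array
    for azimuth `az` and elevation `el` (radians)."""
    i_h, i_v = np.meshgrid(np.arange(n_h), np.arange(n_v), indexing="ij")
    phase = 2.0 * np.pi * d_over_lambda * (i_h * np.sin(az) * np.sin(el)
                                           + i_v * np.cos(el))
    return np.exp(1j * phase).reshape(-1)

def build_codebook(n_h, n_v, az_grid, el_grid):
    """Normalized codewords {a_{n,m} / sqrt(N_H * N_V)} over a direction grid."""
    norm = 1.0 / np.sqrt(n_h * n_v)
    return [norm * steering_vector(az, el, n_h, n_v)
            for az in az_grid for el in el_grid]

# Example: an assumed 8x8 array sweeping a 120-degree sector with a coarse grid.
wide = build_codebook(8, 8,
                      np.deg2rad(np.linspace(-60, 60, 8)),
                      np.deg2rad(np.linspace(60, 120, 4)))
print(len(wide), "codewords of length", wide[0].size)
```

A second, finer grid over a narrower sector would then play the role of the refinement codebook used in the hierarchical search.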
Then, we obtain the overall interference power by leveraging the superposition principle and summing the received power from all other interfering base stations \(P_{k}(t,\,f)\) where \(k\in\mathbb{N}\) and \(k\neq i\). For the purposes of this computation, we assume that each interferer employs the beamforming vector yielding the highest SINR towards its intended destination. Similarly, the transmitter and receiver use the beamforming configuration estimated via the hierarchical search procedure. Finally, the SINR is \(\gamma_{i}(t,\,f)=\frac{P_{i}(t,f)}{\sum\limits_{k\neq i}P_{k}(t,f)+\sigma(t,f)}\) where \(\sigma(t,\,f)\) is the thermal noise at the receiver. ### _System-level extensions to Sionna_ As mentioned, Sionna is mainly a physical layer simulator. However, to get closer to IAB networks as specified in Rel. 17, we have extended Sionna by implementing a selection of system-level features. To this end, we introduced a discrete-event network simulator for modeling IAB networks. This system-level extension operates atop Sionna and provides basic functionality such as a Medium Access Control (MAC)-level scheduler, layer-2 buffers, and data flow and path selection mechanisms. Our simulator, as depicted in Fig. 3, generates a variety of system-level KPIs such as latency, throughput, and packet drop rate. #### V-B1 Data Flow and Buffer 3GPP has opted for a layer 2-relaying architecture for IAB-nodes where hop-by-hop Radio Link Control (RLC) channels are established. This enables retransmissions to take place just over the affected hops, thus preventing the need for traversing the whole route from the IAB-donor whenever a physical layer TB is not decoded. Consequently, this design results in a more efficient recovery from transmission failures and reduces buffering at the communication endpoints [34]. To mimic this architecture, we have implemented RLC-like buffers at each base station. Specifically, each IAB-node features layer-2 buffers for both receiving and transmitting packets. For instance, the data flow for an uplink packet is the following. The User Equipment (UE) generates packets and sends a transmission request to the base station. Consequently, the scheduler allocates OFDM symbols for this transmission, which is eventually received and stored at the RX buffer of its Distributed Unit (DU). Next, the packet is placed into the TX buffer to be forwarded to the suitable next hop IAB-nodes. This procedure is repeated until the packet crosses all the wireless-backhaul hops and reaches the IAB-donor. Note that the packet can be dropped due to a violation of latency constraints or interference. #### V-B2 Backhaul Adaptation Protocol To manage routing within the wireless-backhauled network, the 3GPP introduced the BAP, i.e., an adaptation layer above RLC which is responsible for packet forwarding between the IAB-donor and the access IAB-nodes [35]. Our simulator mimics this by associating each IAB-node with a unique BAP ID. Moreover, we append a BAP routing ID to each packet at its entry point in the Radio Access Network (RAN) (i.e., the IAB-donor and the UEs for DL and UL data, respectively). Then, this identifier is used to discern the (possibly multiple) routes toward the packet's intended destination [35]. The choice of the specific route is managed by Safehaul. #### V-B3 Scheduler Finally, we implement a MAC-level scheduler which operates in a Time Division Multiple Access (TDMA) mode.
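Before turning to the allocation policy itself, note that the per-TB SINR expression above reduces to a one-line computation; the snippet below is a self-contained sketch with assumed power values, not the Sionna-based implementation.

```python
import numpy as np

def sinr(p_rx, serving_idx, noise):
    """Per-receiver SINR: intended power over summed interference plus noise.
    p_rx[k] is the power received from base station k at a given (t, f)."""
    p_sig = p_rx[serving_idx]
    p_int = p_rx.sum() - p_sig      # superposition of all other transmitters
    return p_sig / (p_int + noise)

# Example with assumed received powers (linear scale, arbitrary units).
p_rx = np.array([1.0e-9, 3.0e-11, 8.0e-12])   # serving BS plus two interferers
print(10.0 * np.log10(sinr(p_rx, serving_idx=0, noise=1.0e-12)), "dB")
```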
The scheduler periodically allocates the time resources to backhaul or access transmissions in a Round-Robin fashion. Specifically, each cell first estimates the number of OFDM symbols needed by each data flow by examining the corresponding buffer. Then, the subframe's OFDM symbols are equally allocated to the users. If a user requires fewer symbols to transmit its complete buffer, the excess symbols (the difference between the available slot length and the needed slot length) are dispersed to the other active users. Fig. 3: Overall design of our Sionna extension. The red blocks represent our additions to the baseline simulator, i.e., Sionna [20]. ## VI Performance Evaluation In our simulations, we consider a realistic cellular base station deployment in Manhattan, New York City2. Specifically, we collect the locations of \(N=223\) 5G-NR base stations in an area of 15 km\({}^{2}\) as depicted in Fig. 4. Fig. 4: Locations of the 223 BS\({}_{node}\) and BS\({}_{donor}\) in Manhattan, NYC. The detailed simulation parameters are provided in Table I. We used the channel model outlined by 3GPP in TR 38.901 [36], which provides a statistical channel model for 0.5-100 GHz, and analyzed the "Urban Micro (UMi)-Street Canyon" scenario. **Benchmarks.** To provide better insights into the performance of Safehaul, we replicate two approaches from the state of the art: \((i)\) Scalable and Robust Self-backhauling Solution (SCAROS), a learning-based approach that minimizes the average latency in the network [16], and \((ii)\) Maximum Local Rate (MLR), a greedy approach aiming to maximize throughput by selecting the links with the highest data rate. Our evaluation consists of four scenarios studying the convergence of the algorithms to a steady state, the number of IAB-nodes, the number of IAB-donors, and the impact of risk aversion. When demonstrating the results, we show the average throughput, latency, and packet drop rate per UE. Furthermore, we show the statistical variance of the obtained results using candlesticks which include the max, min, mean, and 10th and 90th percentiles of the achieved performance. ### _Scenario 1: Average Network Performance_ Analyzing the performance of the algorithms as a function of time is crucial to determine the convergence speed of the learning-based techniques, i.e., Safehaul and SCAROS. Hence, in Fig. 5 we show the average network performance over time for three metrics: latency, throughput, and packet drop rate. In Fig. 5(a), we can observe that Safehaul rapidly converges to an average latency of approximately \(8.6\) ms which is \(12.2\)% and \(43.4\)% lower than the latency of SCAROS and MLR, respectively. The high performance of Safehaul stems from the joint minimization of the average latency and the expected value of its tail loss, which results in avoiding risky situations where latency goes beyond \(T_{\max}\). This is not the case for SCAROS where we observe a high peak in the latency before convergence, i.e., between zero and 1000 ms. _It is exactly the avoidance of such transients in Safehaul that leads to higher reliability in the system._ The reliability offered by Safehaul allows operators to deploy self-backhauling in an online fashion and without disrupting the network operation. Moreover, it protects the networks from the transients that may arise from changes in the network topology. The performance of MLR is constant throughout the simulation, as it is not designed as an adaptive algorithm.
Figure 5(b) shows that the risk-aversion capabilities of Safehaul have no negative impact on the average throughput in the network. The performance of Safehaul is comparable to that of SCAROS, approximately \(79.3\) Mbps, and \(11.7\)% larger than the performance of MLR. The performance shown in Figure 5(c) is consistent with the behaviour observed in Figure 5(a). As Safehaul additionally minimizes the \(\alpha\)-worst latency, it achieves the lowest packet drop rate, compared to the reference schemes, namely, 16.6% and \(25.0\)% lower than SCAROS and MLR, respectively. \begin{table} \begin{tabular}{l|l} \hline \hline Parameter & Value \\ \hline Carrier frequency and bandwidth & 28 GHz and 400 MHz \\ IAB RF chains & 2 (1 access + 1 backhaul) \\ Pathloss model & UMi-Street Canyon [36] \\ Number of BS\({}_{node}\)\(N\) & 223 \\ Source rate & [40, 80] Mbps \\ IAB backhaul and access antenna arrays & 8H\(\times\)8V and 4H\(\times\)4V \\ UE antenna array & 4H\(\times\)4V \\ IAB and UE height & 15 m and 1.5 m \\ IAB antenna gain & 33 dB \\ Noise power & 10 dB \\ Risk level \(\alpha\) & 0.1 \\ Reliability weight factor \(\eta\) & 1 \\ \hline \hline \end{tabular} \end{table} TABLE I: Simulation parameters. Fig. 5: Average network performance for \(100\) UEs and \(80\) Mbps per-UE source rate (Scenario 1). ### _Scenario 2: Impact of Network Size_ In Fig. 6 we evaluate the reliability of the three considered approaches for different network sizes. Specifically, we vary the number of BS\({}_{node}\) starting from 25 up to 100. At the same time, we increase the load in the network by increasing the number of UEs. From the figures, we can clearly see that Safehaul consistently achieves a lower variation compared to the reference schemes. This verifies that Safehaul achieves the intended optimization goal, i.e., the joint minimization of the average performance and the worst-case losses. Fig. 6(a) shows that Safehaul is able to maintain an almost constant latency as the number of BS\({}_{node}\) increases. Specifically, the variation of latency with Safehaul is 56.1% and 71.4% less than SCAROS and MLR, respectively. Furthermore, Safehaul achieves 11.1% and 43.2% lower latency compared to SCAROS and MLR. MLR's high variance is due to a lack of adaptation capabilities, hence its latency variance is governed by the network's underlying random processes. As shown in Fig. 6(b), the average throughput of the learning-based approaches Safehaul and SCAROS remains constant for the different values of network size. However, the lowest variation in the throughput is achieved by Safehaul, i.e., only 0.40 compared to 0.51 and 0.79 in the benchmark schemes. Such behaviour corroborates Safehaul's reliability capabilities. The packet drop rate for different numbers of IAB-nodes is shown in Fig. 6(c). Safehaul not only consistently outperforms the reference schemes, but also does so with the minimum variation in the results (at least 39.1% lower than the benchmarks). Considering the largest network size and load, i.e., 200 BS\({}_{node}\) and 400 UEs, Safehaul achieves 11.2% and 24.9% lower packet drop rate compared to SCAROS and MLR, respectively. ### _Scenario 3: Impact of number of IAB-donors_ Although the benchmark schemes do not support multiple IAB-donors, Safehaul is designed to accommodate such scenarios. In Fig. 7, we investigate the impact of the number of IAB-donors on Safehaul. Specifically, the network load is constant in this scenario, i.e., the number of UEs is fixed. We observe in Fig. 7(a)
(a)a that the highest latency is experienced when only one IAB-donor is present in the network. This stems from the tributary effect of self-backhauling where the traffic flows towards a central entity which itself can become a bottleneck. As the number of IAB-donor increases, the traffic flow is more evenly distributed, resulting in lower latency. Specifically, from an average latency of 8.2 ms for \(D=1\), to an average latency of 1.7 ms when \(D=5\). As mentioned, since the load is constant in this scenario, the average throughput remains also constant for all different numbers of IAB-donors, see Fig. (b)b. However we should highlight that Safehaul's learning speed is maintained for the different values of \(D\). This is an important design feature of Safehaul because having more BS\({}_{donor}\) means that the number of paths a BS\({}_{node}\) has to the core network increases exponentially. From a learning perspective, such increment implies a larger action set and a lower learning speed. Safehaul avoids this problem by learning the average latency based on the estimates of its neighbors and not on the complete paths to the BS\({}_{donor}\). Finally, Fig. (c)c shows that increasing the number of BS\({}_{donor}\) significantly reduces the packet drops, which also stems from a better distribution of traffic flows in the network as observed in Fig. (a)a. ### _Scenario 4: Impact of risk parameter \(\alpha\)_ The definition of losses in the tail of the latency distribution is controlled by the risk level parameter \(\alpha\). Its impact on the average latency is shown in Fig. 8, where an increasing behaviour is observed for \(\alpha\leq 0.7\). The lowest latency is Fig. 6: Network performance for {25, 50, 75, 100, 200} IAB-nodes and \(40\) Mbps per-UE source rate (Scenario 2). Fig. 7: Network performance for \(100\) UEs and \(40\) Mbps per-UE source rate, versus the number of IAB-donors (Scenario 3). achieved for \(\alpha=0.1\), which corresponds to the most risk-averse, and therefore the most reliable, case out of all the considered ones. As \(\alpha\) grows, the performance of Safehaul tends to that of the risk-neutral case. ## VII Related work Self-backhauling wireless networks have been studied in different contexts. Ranging from the so-called Heterogeneous Networks (HetNets) and IAB 5G New Radio (NR) systems, to Cloud Radio Access Networks (C-RANs), each has considered a different set of premises and optimization goals. In this section, we review the related work in the context of basic assumptions and their optimization goals. Furthermore, we shed light on some of the common but perhaps unrealistic assumptions which we refrain from in this article. **Ideal backhaul links.** Numerous works assume either an _infinite or fixed capacity backhaul link_. This is often motivated due to the presence of a wired fiber link between the Small Base Stations (SBSs) and the Macro Base Station (MBS) [3, 5, 6, 7]. Indeed, most of these works consider a scenario where a centralized Baseband Unit (BBU) is connected to several Remote Radio Heads (RRHs), i.e., radios which lack signal processing capabilities [3, 5, 6]. In particular, the authors of [6] consider an even more complex scenario referred to as F-RAN, i.e., a C-RAN where RRHs feature caching and signal processing capabilities. However, in an IAB context it is fundamental to consider _limited-rate, time-varying backhaul channels_ and to study the impact of such limitations on the performance of the RAN. 
**Constrained topologies.** It is often assumed that self-backhauled networks have a _specific topology_. This assumption usually simplifies the problem and makes it tractable and/or solvable in polynomial time. For instance, the authors of [8, 9, 12] assume a single-hop network where each SBS is directly connected to the MBS. In [10], a \(k\)-ring deployment is considered, i.e., a topology where a single IAB-donor provides backhaul connectivity to \(k\) rings of IAB-nodes. Even though this topology can be used to model networks with arbitrary depth, it maintains a symmetric load for each node, an assumption which generally does not hold in real networks. In fact, the 3GPP does not impose any limits on the number of IAB-nodes which can be connected to a given IAB-donor, nor does it set an upper bound on the number of wireless hops from the latter to other wireless-backhauled base stations [22]. Accordingly, in our problem formulation we consider IAB networks with an _arbitrary number of nodes and an arbitrary maximum wireless hops_ between MBSs and SBSs. **Simplistic traffic models.** Some works assume either a _full buffer traffic model and/or impose flow conservation constraints_. In particular, the authors of [7, 37] consider systems where the capacity of each link can always be fully exploited thanks to the presence of _infinite data to transmit at each node_. However, in actual IAB deployments the presence of packets at the MBSs and SBSs is conditioned on the _status of their RLC buffers and, in turn, on the previous scheduling decisions_. Moreover, _packets can actually be buffered at the intermediate nodes_, thus preventing the need for transmitting a given packet in consecutive time instants along the whole route from the IAB-donor to the UEs (or vice versa). **Optimization goals.** The works in the literature focus on different optimization goals. Therefore, they prioritize different network metrics. For instance, the authors of [38] aim to optimize the beam alignment between MBSs and SBSs. Instead, the work of [4] aims to compute the optimal user-to-base-station association. However, they neglect backhaul associations and focus on the access only. In [4, 37, 39, 8] the objective function is a function of the users data-rate. In particular, the authors of [37] optimize the max-min user throughput, arguing that such a metric better captures the performance of the bottleneck links. In [15], the average rate of each link is maximized under bounded delay constraint. In our work, we focus on reliability by minimizing not only the average end-to-end delay, but also the expected value of the worst-case performance. The work closest to this article is SCAROS [16], a learning-based latency-aware scheme for resource allocation and path selection in self-backhauled networks. Assuming a single IAB-donor, the authors study arbitrary multi-tier IAB networks considering the impact of interference and network dynamics. In contrast to this work, we aim at enhancing the reliability of the IAB-network by jointly minimizing the average end-to-end delay and its expected tail loss. Moreover, considering realistic deployments, our proposed Safehaul supports networks with an arbitrary number of IAB-donors. ## VIII Conclusion In this work, we proposed the first reliability-focused scheduling and path selection algorithm for IAB mmWave networks. Via extensive simulations, we illustrated that our RL-based solution can cope with the network dynamics including channel, interference, and load. 
Furthermore, we demonstrated that Safehaul not only exhibits highly reliable performance in the presence of the above-mentioned network dynamics but also, it outperforms the benchmark schemes in terms of throughput, latency and packet-drop rate. The reliability of Safehaul stems from the joint minimization of the average latency and the expected value of its tail losses. The latter is achieved by leveraging CVaR as a risk metric. Fig. 8: Average latency for 100 UEs and 20 Mbps per-UE source rate, versus the risk level \(\alpha\) (Scenario 4) Reliability is a highly under-explored topic that definitely deserves more investigation. Some interesting research directions are the maximization of reliability under the assumption of statistical system knowledge, or the evaluation of the network's reliability when the functionality of the BAP layer is compromised. Furthermore, our system-level extension to Sionna can be further developed to support an arbitrary number of RF chains and in-band backhauling, allowing more extensive investigation of IAB protocols and architecture. ## IX Acknowledgement This research was partly funded by the Deutsche Forschungsgemeinschaft within the mm-Cell project and the Collaborative Research Center 1053 MAKI, by the LOEWE initiative (Hesse, Germany) within the emergenCITY center, the Bundesministerium fur Bildung und Forschung through the Open6GHub project and by the European Commission through Grant No. 861222 (H2020 ITN MINPS project).
2310.15322
The equivalence principle for a plane gravitational wave in torsion based and non-metricity based teleparallel equivalents of general relativity
We study the energy-momentum characteristics of the plane ``+''-polarised gravitational wave solution of general relativity in the Teleparallel Equivalent of General Relativity (TEGR) and the Symmetric Teleparallel Equivalent of General Relativity (STEGR) using the previously constructed Noether currents. The current components describe the energy-momentum locally measured by an observer if the displacement vector $\xi$ is equal to the observer's 4-velocity. To determine the non-dynamical connection in these theories we use the unified ``turning off'' gravity principle. For a constructive analysis of the values of Noether currents and superpotentials in TEGR and STEGR, we use the concept of ``gauges''. Changing the gauge can affect the Noether current values. We study under what conditions the Noether current for the freely falling observer is zero. When such conditions are established, the zero result can be interpreted as a correspondence to the equivalence principle, which is a novelty for gravitational waves in TEGR and STEGR. We highlight two important cases with positive and zero energy, which reproduce the results of previous works that use a different approach to determining gravitational energy-momentum in TEGR, and give their interpretation.
E. Emtsova, A. N. Petrov, A. V. Toporensky
2023-10-23T19:42:59Z
http://arxiv.org/abs/2310.15322v3
# Conserved quantities for the plane waves in TEGR and STEGR ###### Abstract We study the energy-momentum characteristics of the plane "+"-polarised gravitational wave solution of general relativity in the Teleparallel Equivalent of General Relativity (TEGR) and the Symmetric Teleparallel Equivalent of General Relativity (STEGR) using the previously constructed Noether currents. These currents can describe locally measured by observer energy-momentum if the displacement vector \(\xi\) is equal to the observer's 4-velocity. To determine the non-dynamical connection in these theories we use the unified "turning off" gravity principle. For a constructive analysis of the values of Noether currents and superpotentials in TEGR and STEGR, we use the concept of "gauges". The gauge changing can affect the Noether current values. We study under what conditions the Noether current for the freely falling observer is zero because this can be interpreted as the equivalence principle. We highlight two important cases with positive and zero energy, which reproduce the results of previous works with a different approach to determine gravitational energy-momentum in TEGR, and give their interpretation. ## 1 Introduction Teleparallel theories of gravity have been actively developed in recent years. A main feature of these theories is the use of connection with zero Riemann curvature. These theories include the Teleparallel Equivalent of General Relativity (TEGR), the Symmetric Teleparallel Equivalent of General Relativity (STEGR) and modifications of these theories [1, 2, 3, 4, 5, 6]. In TEGR and its modifications, a flat metric compatible connection is used. In STEGR and its modifications, a flat connection with zero torsion is used. TEGR and STEGR are fully equivalent to GR (General Relativity) at the level of field equations, thus, the solutions of the field equations in TEGR and STEGR are exactly the same as those in GR. Modifications of TEGR and STEGR have the advantage that their field equations are of the second-order, what gives similarities with gauge field theories and potentially links gravity to other theories of fundamental interactions in nature. The teleparallel connections in TEGR and STEGR are not dynamical quantities and cannot be determined by field equations. This fact does not influence the dynamics of gravitational interacting objects but gives ambiguities to values of main quantities which define gravitational energy-momentum [7, 8]. Despite the fact that now the main attention is paid to how accurate the modified teleparallel theories can describe the observed phenomena, not all the issues have been resolved in TEGR and STEGR themselves. One of such issues is definition of gravitational energy-momentum. There are different approaches to this problem. Various approaches in constructing conserved quantities both in TEGR and in STEGR already have been tested to construct them for the Schwarzschild solution and cosmological models [9, 10, 11, 12, 13, 14, 15, 16, 17]. However, for the best of our knowledge, construction of conserved quantities for the gravitational waves in TEGR did not attract much attention of researches. We could cite only the central papers [18, 19, 20, 21, 22] with a few references therein. In calculations the energy-momentum tensor of gravitational field [3] is applied. By these studies, authors sometimes got unexpected results. 
Thus, for example, in [19] non-positive energy for gravitational waves is obtained; on the other side, in [20] zero energy for gravitational waves is obtained. Such results are not acceptable; indeed, the modern cosmological and astrophysical observable data supports the textbook predictions that gravitational waves bring a positive energy [23, 24, 25] At last, we have not find in a literature any works where gravitational waves have been considered in STEGR. Thus, the main purpose of this paper is to discuss the aforementioned questions and problems, and try to close the remarked gaps. We use the totally covariant formalism developed by us [26, 7, 8]. The paper is organized as follows: In section 2, we give brief introduction into TEGR and STEGR theories, introduce the Noether currents and superpotentials for these theories and describe the unified "turning off" gravity principle to define the teleparallel connection. In section 3, we find energy and momentum of the plane +polarised wave measured by the freely falling observers in TEGR and STEGR in the simplest frames. In section 4, we find the conditions when Noether currents measured by the freely falling observer in TEGR and STEGR are in a correspondence with the equivalence principle. In section 5, we take another coordinates for the plane wave, reproduce the results of [20] and equivalence principle for the freely falling observer. In section 6, we discuss the results All definitions in TEGR correspond to [3]. All definitions in STEGR correspond to [1]. ## 2 Elements of teleparallel gravity In this section, we briefly introduce the elements of teleparallel equivalents of GR (TEGR and STEGR) and, following [27, 7, 8] give totally covariant expressions for conserved quantities, which are necessary for our calculations. ### Elements of TEGR The Lagrangian in TEGR has a form [3] \[\stackrel{{\bullet}}{{\cal L}}=\frac{h}{2\kappa}\stackrel{{ \bullet}}{{T}}\equiv\frac{h}{2\kappa}\left(\frac{1}{4}\stackrel{{ \bullet}}{{T}}^{\rho}{}_{\mu\nu}\stackrel{{\bullet}}{{T}}^{ \rho}{}_{\rho}{}^{\mu\nu}+\frac{1}{2}\stackrel{{\bullet}}{{T}} ^{\rho}{}_{\mu\nu}\stackrel{{\bullet}}{{T}}^{\nu}{}_{\rho}- \stackrel{{\bullet}}{{T}}^{\rho}{}_{\mu\rho}\stackrel{{ \bullet}}{{T}}^{\nu}{}_{\mu}{}_{\nu}\right), \tag{2.1}\] where \(\stackrel{{\bullet}}{{T}}\) is called as the torsion scalar, \(\kappa=8\pi G\) (in \(c=1\) units), and \(h=\det h^{a}{}_{\nu}\). The torsion tensor \(\stackrel{{\bullet}}{{T}}^{a}{}_{\mu\nu}\) is defined as \[\stackrel{{\bullet}}{{T}}^{a}{}_{\mu\nu}=\partial_{\mu}h^{a}{}_{ \nu}-\partial_{\nu}h^{a}{}_{\mu}+\stackrel{{\bullet}}{{A}}^{a}{}_ {c\mu}h^{c}{}_{\nu}-\stackrel{{\bullet}}{{A}}^{a}{}_{c\nu}h^{c}{ }_{\mu}, \tag{2.2}\] where \(h^{a}{}_{\nu}\) are the tetrad components (we denote tetrad indexes of a quantity by Latin letters and spacetime indexes by Greek letters), connected to metric by \[g_{\mu\nu}=\eta_{ab}h^{a}{}_{\mu}h^{b}{}_{\nu}. \tag{2.3}\] The inertial spin connection (ISC) \(\stackrel{{\bullet}}{{A}}^{a}{}_{c\nu}\) is not included into the Lagrangian (2.1) and related field equations, but it covariantizes the expression for the torsion tensor (2.2) and other expressions. It can be suppressed by a related local Lorentz transformation. From now and below, we denote by \(\bullet\) all teleparallel quantities, which are constructed using the teleparallel connection \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\). 
Thus, by definition \[\stackrel{{\bullet}}{{A}}{}^{a}{}_{b\mu}=-h_{b}{}^{\nu}\stackrel{{ \bullet}}{{\nabla}}{}_{\mu}h^{a}{}_{\nu}, \tag{2.4}\] where the covariant derivative \(\stackrel{{\bullet}}{{\nabla}}{}_{\mu}\) is defined by \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\). The connection \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\) is flat, for which the corresponding curvature is equal to zero: \[\stackrel{{\bullet}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}\equiv \partial_{\mu}\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{ \beta\nu}-\partial_{\nu}\stackrel{{\bullet}}{{\Gamma}}{}^{ \alpha}{}_{\beta\mu}+\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_ {\kappa\mu}\stackrel{{\bullet}}{{\Gamma}}{}^{\kappa}{}_{\beta\nu}- \stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\kappa\nu}\stackrel{{ \bullet}}{{\Gamma}}{}^{\kappa}{}_{\beta\mu}=0, \tag{2.5}\] \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\) is also compatible with the physical metric, that is, the corresponding non-metricity is zero: \[\stackrel{{\bullet}}{{Q}}{}_{\mu\alpha\beta}\equiv\stackrel{{ \bullet}}{{\nabla}}{}_{\mu}g_{\alpha\beta}=0. \tag{2.6}\] In our notations, following to [3], quantities denoted by a \(\circ\) are constructed with the use of the Levi-Civita connection \(\stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\). Thus, \(\stackrel{{\circ}}{{A}}{}^{a}{}_{b\rho}\) is the usual Levi-Civita spin connection (L-CSC) defined by \[\stackrel{{\circ}}{{A}}{}^{a}{}_{b\mu}=-h_{b}{}^{\nu}\stackrel{{ \circ}}{{\nabla}}{}_{\mu}h^{a}{}_{\nu}, \tag{2.7}\] where the covariant derivative \(\stackrel{{\circ}}{{\nabla}}{}_{\mu}\) is constructed with \(\stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\kappa\lambda}\). Now it is useful to introduce a contortion tensor defined as: \[\stackrel{{\bullet}}{{K}}{}^{\alpha}{}_{b\rho}=\stackrel{{ \bullet}}{{A}}{}^{a}{}_{b\rho}-\stackrel{{\circ}}{{A}}{}^{a}{}_{b \rho}. \tag{2.8}\] Besides, it is useful to recall that the transformation of tetrad indices into spacetime ones and vice versa is performed by contraction with tetrad vectors, for example, \(\stackrel{{\circ}}{{R}}{}^{a}{}_{b\mu\nu}=h^{a}{}_{\alpha}h^{\beta }{}_{b}\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}\); \(\stackrel{{\bullet}}{{T}}{}^{\alpha}{}_{\mu\nu}=h^{\alpha}{}_{a} \stackrel{{\bullet}}{{T}}{}^{\alpha}{}_{\mu\nu}\), etc. Thus, the contortion tensor can be rewritten in the convenient form: \[\stackrel{{\bullet}}{{K}}{}^{\rho}{}_{\mu\nu}=\frac{1}{2}( \stackrel{{\bullet}}{{T}}{}_{\mu}{}^{\rho}{}_{\nu}+\stackrel{{ \bullet}}{{T}}{}_{\nu}{}^{\rho}{}_{\mu}-\stackrel{{\bullet}}{{T}} {}^{\rho}{}_{\mu\nu}). \tag{2.9}\] The torsion scalar in (2.1) can be rewritten in the form: \[\stackrel{{\bullet}}{{T}}=\frac{1}{2}\stackrel{{ \bullet}}{{S}}_{a}{}^{\rho\sigma}\stackrel{{\bullet}}{{T}}{}^{ \sigma}{}_{\rho\sigma}, \tag{2.10}\] where the teleparallel superpotential \(\stackrel{{\bullet}}{{S}}_{a}{}^{\rho\sigma}\) defined as \[\stackrel{{\bullet}}{{S}}_{a}{}^{\rho\sigma}=\stackrel{{ \bullet}}{{K}}{}^{\rho\sigma}{}_{a}+h_{a}{}^{\sigma}\stackrel{{ \bullet}}{{K}}{}^{\delta\theta}{}_{\rho}-h_{a}{}^{\rho}\stackrel{{ \bullet}}{{K}}{}^{\delta\sigma}{}_{\theta} \tag{2.11}\] is an antisymmetric tensor in the last two indices. All tensors \(\stackrel{{\bullet}}{{T}}{}^{a}{}_{\mu\nu}\), \(\stackrel{{\bullet}}{{K}}{}^{\kappa\rho\sigma}{}_{a}\), and \(\stackrel{{\bullet}}{{S}}_{a}{}^{\rho\sigma}\), are covariant with respect to both coordinate transformations and local Lorentz transformations. 
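Definitions (2.2) and (2.9) translate directly into symbolic code. The sketch below is purely illustrative (sympy is not used in the paper itself); it assumes the Weitzenbock gauge of zero ISC and, as a concrete input, the diagonal plane-wave tetrad \(h^{a}{}_{\mu}=\mathrm{diag}(1,f(t-z),g(t-z),1)\) that is introduced later in section 3.

```python
# Illustrative sympy transcription of the torsion (2.2) and contortion (2.9),
# assuming zero ISC (Weitzenboeck gauge) and the diagonal plane-wave tetrad
# h^a_mu = diag(1, f(t-z), g(t-z), 1) used later in section 3.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
f, g = sp.Function('f')(t - z), sp.Function('g')(t - z)

h = sp.diag(1, f, g, 1)            # tetrad h^a_mu (row: a, column: mu)
hinv = h.inv()                     # inverse tetrad h_a^mu
gmet = sp.diag(-1, f**2, g**2, 1)  # metric (2.3)
ginv = gmet.inv()

# Torsion (2.2) with A^a_{b mu} = 0, converted to spacetime indices:
# T^rho_{mu nu} = h_a^rho (d_mu h^a_nu - d_nu h^a_mu)
T = [[[sp.simplify(sum(hinv[r, a] * (sp.diff(h[a, n], X[m]) - sp.diff(h[a, m], X[n]))
                       for a in range(4)))
       for n in range(4)] for m in range(4)] for r in range(4)]

# fully lowered torsion: T_{rho mu nu} = g_{rho lam} T^lam_{mu nu}
Tl = [[[sum(gmet[r, s] * T[s][m][n] for s in range(4))
        for n in range(4)] for m in range(4)] for r in range(4)]

# Contortion (2.9): K^rho_{mu nu} = (T_mu^rho_nu + T_nu^rho_mu - T^rho_{mu nu}) / 2
K = [[[sp.simplify(sum(ginv[r, s] * (Tl[m][s][n] + Tl[n][s][m] - Tl[s][m][n])
                       for s in range(4)) / 2)
       for n in range(4)] for m in range(4)] for r in range(4)]

for name, tensor in (("T", T), ("K", K)):
    for r in range(4):
        for m in range(4):
            for n in range(4):
                if tensor[r][m][n] != 0:
                    print(f"{name}^{r}_{{{m}{n}}} =", tensor[r][m][n])
```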
(For tensors without Lorentz indexes or scalars we say "invariant with respect to Lorentz transformations".) Note, that working in the covariant formulation of teleparallel gravity, the local Lorentz covariance means that the tensorial quantities are transformed covariantly under the simultaneous transformation of both the tetrad and the ISC: \[h^{a}{}_{\mu}=\Lambda^{a}{}_{b}(x)h^{b}{}_{\mu}\,, \tag{2.12}\] \[\stackrel{{\bullet}}{{A}}{}^{\prime a}{}_{b\mu}=\Lambda^{a}{}_{c} (x)\stackrel{{\bullet}}{{A}}{}^{c}{}_{d\mu}\Lambda_{b}{}{}^{d}(x)+ \Lambda^{a}{}_{c}(x)\partial_{\mu}\Lambda_{b}{}^{c}(x), \tag{2.13}\] where \(\Lambda^{a}{}_{c}(x)\) is the matrix of a local Lorentz rotation, and \(\Lambda_{a}{}^{c}(x)\) is an inverse matrix of the latter. The operation at the right hand side of (2.13) tells us that ISC can be equalized to zero by an appropriate local Lorentz transformation. Then, by another local Lorentz rotation it can be represented in the form: \[\stackrel{{\bullet}}{{A}}{}^{a}{}_{c\nu}=\Lambda^{a}{}_{b}\partial _{\nu}(\Lambda^{-1})^{b}{}_{c}. \tag{2.14}\] Since the main goal of the present paper is to consider gravitational waves we restrict ourselves by vacuum solutions only. Then, varying the action with the Lagrangian (2.1), one obtains the Euler-Lagrange equation \[E_{a}{}^{\rho}=\frac{\partial\stackrel{{\bullet}}{{\mathcal{L}}}}{ \partial h^{a}{}_{\rho}}-\partial_{\sigma}\left(\frac{\partial\stackrel{{ \bullet}}{{\mathcal{L}}}}{\partial h^{a}{}_{\rho,\sigma}}\right)=0. \tag{2.15}\] It can be rewritten as \[\kappa h\stackrel{{\bullet}}{{J}}_{a}{}^{\rho}=\partial_{\sigma} \Big{(}h\!\!\stackrel{{\bullet}}{{S}}_{a}{}^{\rho\sigma}\Big{)}, \tag{2.16}\] where \[\stackrel{{\bullet}}{{J}}_{a}{}^{\rho}=\frac{1}{\kappa}h_{a}{}^{ \mu}\!\!\stackrel{{\bullet}}{{S}}_{c}{}^{\nu\rho}\!\!\stackrel{{ \bullet}}{{T}}\!\!\stackrel{{ c}}{{c}}_{\nu\mu}-\frac{h_{a}{}^{ \rho}}{h}\stackrel{{\bullet}}{{\mathcal{L}}}+\frac{1}{\kappa} \!\!\stackrel{{\bullet}}{{A}}\!\!\stackrel{{ c}}{{c}}_{a\sigma}\!\!\stackrel{{ \bullet}}{{S}}_{c}{}^{\rho\sigma} \tag{2.17}\] is not Lorentz covariant gravitational energy-momentum, in [3] it is called as a gravitational current as well. ### Elements of STEGR The Lagrangian in STEGR has the form [1]: \[{\mathcal{L}}\,=\frac{\sqrt{-g}}{2\kappa}g^{\mu\nu}(L^{\alpha}{}_{\beta\mu}L^ {\beta}{}_{\nu\alpha}-L^{\alpha}{}_{\beta\alpha}L^{\beta}{}_{\mu\nu}), \tag{2.18}\] where \(\kappa=8\pi\), and \(L^{\alpha}{}_{\mu\nu}\) is disformation tensor: \[L^{\alpha}{}_{\mu\nu}=\frac{1}{2}Q^{\alpha}{}_{\mu\nu}-\frac{1}{2}Q_{\mu}{}^{ \alpha}{}_{\nu}-\frac{1}{2}Q_{\nu}{}^{\alpha}{}_{\mu}\,, \tag{2.19}\] and the non-metricity tensor \(Q_{\alpha\mu\nu}\) is defined as follows: \[Q_{\alpha\mu\nu}\equiv\nabla_{\alpha}g_{\mu\nu}. \tag{2.20}\] Here, the covariant derivative \(\nabla_{\alpha}\) is defined with the use of the teleparallel connection \(\Gamma^{\alpha}{}_{\mu\nu}\) which is symmetric in lower indexes, thus, the corresponding torsion is zero: \(T^{\alpha}{}_{\mu\nu}\equiv\Gamma^{\alpha}{}_{\mu\nu}-\Gamma^{\alpha}{}_{\nu \mu}=0\). The curvature tensor of this connection is zero as well: \[R^{\alpha}{}_{\beta\mu\nu}(\Gamma)=\partial_{\mu}\Gamma^{\alpha}{}_{\nu\beta} -\partial_{\nu}\Gamma^{\alpha}{}_{\mu\beta}+\Gamma^{\alpha}{}_{\mu\lambda} \Gamma^{\lambda}{}_{\nu\beta}-\Gamma^{\alpha}{}_{\nu\lambda}\Gamma^{\lambda}{} _{\mu\beta}=0. 
\tag{2.21}\] One can easily verify that the decomposition of a general connection \(\Gamma^{\beta}{}_{\mu\nu}\) into the Levi-Civita connection, contortion and the disformation terms, see [1], reduces to \[L^{\beta}{}_{\mu\nu}\equiv\Gamma^{\beta}{}_{\mu\nu}-\stackrel{{ \circ}}{{\Gamma}}{}^{\beta}{}_{\mu\nu}. \tag{2.22}\] With the use of (2.19) - (2.22) one can rewrite (2.18) as \[{\mathcal{L}}\,\,=\!\!{\mathcal{L}}\,\,_{Hilb}+\frac{\sqrt{-g}g^{\mu\nu}}{2 \kappa}R_{\mu\nu}+{\mathcal{L}}\,^{\prime}, \tag{2.23}\] where the first term is the Hilbert Lagrangian and the third term \({\mathcal{L}}^{\prime}\) is a total divergence: \[{\mathcal{L}}^{\prime}\,=-\frac{\sqrt{-g}}{2\kappa}\stackrel{{ \circ}}{{\nabla}}_{\alpha}(Q^{\alpha}-\hat{Q}^{\alpha})=\partial_{\alpha} \left(-\frac{1}{2\kappa}\sqrt{-g}(Q^{\alpha}-\hat{Q}^{\alpha})\right)=\partial _{\alpha}\,\,{\mathcal{D}}\,\,^{\alpha}, \tag{2.24}\] and \(Q_{\alpha}=g^{\mu\nu}Q_{\alpha\mu\nu}\), \(\hat{Q}_{\alpha}=g^{\mu\nu}Q_{\mu\alpha\nu}\). According to [8] we neglect the second term in (2.23) and, thus, under variation we consider the Lagrangian (instead of (2.18) or (2.23)): \[{\mathcal{L}}\,=-\frac{\sqrt{-g}}{2\kappa}\stackrel{{\circ}}{{R} }+\partial_{\alpha}\,\,{\mathcal{D}}\,\,^{\alpha}, \tag{2.25}\] where \[{\mathcal{D}}^{\alpha}\!\equiv-\frac{\sqrt{-g}}{2\kappa}(Q^{\alpha}-\hat{Q}^{ \alpha})\,. \tag{2.26}\] ### Noether conserved quantities The Noether conserved quantities were derived for TEGR Lagrangian (2.1) in [26, 27] and for STEGR Lagrangian (2.25) in [8]. In both theories Noether current \({\cal I}^{\alpha}(\xi)\) is a vector density of the weight \(+1\), and the Noether superpotential \({\cal I}^{\alpha\beta}(\xi)\) is an antisymmetric tensor density of the weight \(+1\). For such quantities \(\stackrel{{\circ}}{{\nabla}}_{\mu}\equiv\partial_{\mu}\), and, thus, the conservation laws have evidently covariant form: \[\partial_{\alpha}{\cal I}^{\alpha}(\xi)\equiv\stackrel{{\circ}}{{ \nabla}}_{\alpha}{\cal I}^{\alpha}(\xi)=0\,. \tag{2.27}\] \[{\cal I}^{\alpha}(\xi)=\partial_{\alpha}{\cal I}^{\alpha\beta}(\xi)\equiv \stackrel{{\circ}}{{\nabla}}_{\alpha}{\cal I}^{\alpha\beta}(\xi)\,. \tag{2.28}\] It is important to give a physical interpretation of the quantities presented above. We follow the prescription in [27, 8]. First, one has to choose the displacement vector \(\xi^{\alpha}\), it can be Killing vector, proper vector of observer, etc. Second, setting a time coordinate as \(t=x^{0}\) and choosing a space section as \(\Sigma:=t=\mbox{const}\), one can interpret \({\cal I}^{0}(\xi)\) as a density on the section \(\Sigma\) of the quantity related to a chosen \(\xi^{\alpha}\). For example, if \(\xi^{\alpha}\) is a time-like Killing vector it is interpreted as the energy density on \(\Sigma\). On the other hand, if \(\xi^{\alpha}\) is an observer's proper vector, then the components \({\cal I}^{\alpha}(\xi)\) can be interpreted as components of the energy-momentum vector measured by such an observer. 
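Note that the mutual consistency of (2.27) and (2.28) relies only on the antisymmetry of \({\cal I}^{\alpha\beta}(\xi)\): the divergence of a current built as \(\partial_{\beta}{\cal I}^{\alpha\beta}\) vanishes identically. A tiny illustrative sympy check of this fact (not part of the original derivation) is:

```python
# Check that an antisymmetric superpotential J^{ab}(x) automatically yields a
# conserved current I^a = d_b J^{ab}, i.e. d_a I^a = 0 (cf. (2.27)-(2.28)).
import sympy as sp

X = sp.symbols('t x y z')
J = sp.zeros(4, 4)                       # generic antisymmetric density:
for a in range(4):                       # six independent functions of x^mu
    for b in range(a + 1, 4):
        Jab = sp.Function(f'J{a}{b}')(*X)
        J[a, b], J[b, a] = Jab, -Jab

I = [sum(sp.diff(J[a, b], X[b]) for b in range(4)) for a in range(4)]  # current
div = sp.simplify(sum(sp.diff(I[a], X[a]) for a in range(4)))
print(div)   # -> 0 identically, for any antisymmetric J
```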
The Noether current \(\stackrel{{\bullet}}{{\cal J}}^{\alpha}(\xi)\) in TEGR in vacuum case when the field equations (2.15) hold is [26, 27]: \[\stackrel{{\bullet}}{{\cal J}}^{\alpha}(\xi)=h\stackrel{{ \bullet}}{{\theta}}_{\sigma}{}^{\alpha}\xi^{\sigma}+\frac{h}{\kappa} \overline{S}_{\sigma}{}^{\alpha\rho}\stackrel{{\circ}}{{\nabla}} _{\rho}\xi^{\sigma}\,, \tag{2.29}\] where the gravitational Noether energy-momentum tensor \(\stackrel{{\bullet}}{{\theta}}_{\sigma}{}^{\alpha}\) is \[\stackrel{{\bullet}}{{\theta}}_{\sigma}{}^{\alpha}\equiv\frac{1 }{\kappa}\overline{S}_{a}{}^{\alpha\rho}\overline{K}_{\;\;\sigma\rho}^{\;\;a} -\frac{1}{h}\stackrel{{\bullet}}{{\cal L}}\delta_{\sigma}^{ \alpha}\,. \tag{2.30}\] The related Noether superpotential in TEGR is \[\stackrel{{\bullet}}{{\cal J}}^{\alpha\beta}(\xi)=\frac{h}{ \kappa}\overline{S}_{a}{}^{\alpha\beta}h^{a}{}_{\sigma}\xi^{\sigma}\,. \tag{2.31}\] In correspondence with (2.28) one can derive the current from the superpotential: \[\stackrel{{\bullet}}{{\cal J}}^{\alpha}(\xi)=\partial_{\beta} \stackrel{{\bullet}}{{\cal J}}^{\alpha\beta}(\xi). \tag{2.32}\] As one can see, both \(\stackrel{{\bullet}}{{\cal J}}^{\alpha}(\xi)\) and \(\stackrel{{\bullet}}{{\cal J}}^{\alpha\beta}(\xi)\) are explicitly spacetime covariant and Lorentz invariant. Indeed, the Lorentz invariance of the Noether conserved quantities and the Lorentz covariance of the superpotential \(\stackrel{{\bullet}}{{S}}_{a}{}^{\alpha\beta}\) is satisfied only under the simultaneous transformation of the tetrad (2.12) and the ISC (2.13). However, as we see below, one can define different ISCs for one tetrad, what gives us different values of the Noether conserved quantities. For a detailed analysis of this situation, in [7, 28], we generalise the definition of "gauges" in TEGR. A gauge is defined as a set of pairs (tetrad, inertial spin connection) which can be obtained from a given combination of the tetrad and the spin connection by an arbitrary covariant Lorentz transformations where tetrad transforms as (2.12) and inertial spin connection transforms as (2.13) simultaneously, and/or arbitrary coordinate transformations. The case in which zero ISC corresponds to some tetrad is called traditionally as the Wietzenbock gauge [3]. Note, that the word "gauge" is used in different senses: when we say "Wietzenbock gauge" we mean the only one pair - tetrad and zero ISC; and when we say "gauge" in our definition we mean the whole equivalence class of pairs of tetrads and ISCs in which two pairs are equivalent if and only if they are connected as defined above. For the STEGR with the Lagrangian (2.25) the conserved quantities were derived in [8] for each term separately. The superpotential \({\cal J}^{\alpha\beta}_{GR}\) for the Hilbert term \(-\frac{\sqrt{-g}}{2\kappa}\stackrel{{\circ}}{{R}}\) is the Komar superpotential [29, 30] \[{\cal J}^{\alpha\beta}_{GR}=\mathrel{\cal K}{}^{\alpha\beta}=\frac{\sqrt{-g}}{ \kappa}\stackrel{{\circ}}{{\nabla}}{}^{[\alpha}\xi^{\beta]}, \tag{2.33}\] For the divergent term \(\mathrel{\cal L}{}^{\prime}\ =\partial_{\alpha}\mathrel{\cal D}{}^{\alpha}\), see (2.24) and (2.26), the Noether superpotential is: \[{\cal J}^{\alpha\beta}_{div}=\frac{\sqrt{-g}}{\kappa}\delta_{\sigma}^{[ \alpha}(Q^{\beta]}-\hat{Q}^{\beta]})\xi^{\sigma}. \tag{2.34}\] The total Noether superpotential of the Lagrangian (2.25) in STEGR is: \[{\cal J}^{\alpha\beta}={\cal J}^{\alpha\beta}_{\ GR}+{\cal J}^{\alpha\beta}_{\ div}. 
\tag{2.35}\] Taking the divergence of each term of (2.35) in correspondence with (2.28) one gets the total Noether current \[{\cal J}^{\alpha}={\cal J}^{\alpha}_{\ GR}+{\cal J}^{\alpha}_{\ div}. \tag{2.36}\] As one can see \({\cal J}^{\alpha}(\xi)\) and \({\cal J}^{\alpha\beta}(\xi)\) are explicitly spacetime covariant. In [8], the concept of gauges in STEGR has not been introduced. In the present paper we want to have the same terminology in STEGR as in TEGR, so formally, we define a pair of coordinates and connection \(\Gamma^{\alpha}{}_{\mu\nu}\), with the set of pairs which are connected to it by the transformation \[\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial\bar{x}^{\dot{ \alpha}}}\frac{\partial\bar{x}^{\dot{\mu}}}{\partial x^{\mu}}\frac{\partial \bar{x}^{\dot{\nu}}}{\partial x^{\nu}}\bar{\Gamma}^{\dot{\alpha}}{}_{\dot{\mu} \dot{\nu}}+\frac{\partial x^{\alpha}}{\partial\bar{x}^{\dot{\lambda}}}\frac{ \partial}{\partial x^{\mu}}\left(\frac{\partial\bar{x}^{\dot{\lambda}}}{ \partial x^{\nu}}\right) \tag{2.37}\] as a "gauge". Usually, the case of zero teleparallel connection is called as "coincident gauge" [31, 32]. Here, we note again, that in the therm "coincident gauge" we mean the only case of coordinates in which the teleparallel connection in zero; and when we say "gauge" we mean the set of all possible coordinates and the values of teleparallel connections in them, such that relation (2.37) is satisfied under all possible coordinate transformations. ### Defining the connection: "turning off" gravity principle Teleparallel connections in TEGR and STEGR are not dynamical quantities and left undetermined [33, 1] -- the choice of them is not unique. To determine the ISC in TEGR the "turning off" gravity principle was introduced in [26, 27]. This principle is based on the assumption that Noether's current and superpotential are proportional to contortion components \(\stackrel{{\bullet}}{{K}}{}^{a}{}_{c\mu}\) (or, alternatively \(\stackrel{{\bullet}}{{T}}{}^{\alpha}{}_{\mu\nu}\) or \(\stackrel{{\bullet}}{{S}}{}_{a}{}^{\mu\nu}\)) and they have to vanish in absence of gravity. Thus, to determine \(\stackrel{{\bullet}}{{A}}{}^{a}{}_{c\mu}\) in correspondence of this requirement we turn to the formula (2.8). It was suggested: 1) for known GR solution, to choose a convenient tetrad and define \(\stackrel{{\circ}}{{R}}{}^{a}{}_{c\mu}=-h_{\nu}{}^{\nu}\stackrel{{ \circ}}{{\nabla}}_{\mu}h{}^{a}{}_{\nu}\); 2) to construct related curvature of Levi-Civita spin connection \(\stackrel{{\circ}}{{R}}{}^{i}{}_{j\mu\nu}=\partial_{\mu} \stackrel{{\circ}}{{A}}{}^{i}{}_{j\nu}-\partial_{\nu}\stackrel{{ \circ}}{{A}}{}^{i}{}_{j\mu}+\stackrel{{\circ}}{{A}}{}^{i}{}_{k \mu}\stackrel{{\circ}}{{A}}{}^{j}{}_{j\nu}-\stackrel{{ \circ}}{{A}}{}^{i}{}_{k\nu}\stackrel{{\circ}}{{A}}{}^{k}{}_{j\mu}\); 3) to "switch off" gravity solving the absent gravity equation \(\stackrel{{\circ}}{{R}}{}^{a}{}_{b\gamma\delta}=0\) for parameters of the chosen GR solution; 4) when the parameters satisfying \(\stackrel{{\circ}}{{R}}{}^{a}{}_{b\gamma\delta}=0\) are found, we take \(\stackrel{{\circ}}{{A}}{}^{a}{}_{c\mu}=\stackrel{{ \bullet}}{{A}}{}^{a}{}_{c\mu}\) for the found parameter values. To define the undetermined connection in STEGR we use the adapted for STEGR "turning off" gravity principle [8]. 
It is based on the assumption that \(Q_{\alpha\mu\nu}\) and \(L^{\alpha}{}_{\mu\nu}\) vanish in the absence of gravity and \(\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}\) in GR vanishes in the absence of gravity too. To find the connection in STEGR there are the following steps: 1) for known GR solution, to construct related Riemann curvature tensor of the Levi-Civita connection: \[\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}=\partial_{\mu} \stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\beta\nu}-\partial_{ \nu}\stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\beta\mu}+ \stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\kappa\mu}\stackrel{{ \circ}}{{\Gamma}}{}^{\kappa}{}_{\beta\nu}-\stackrel{{ \circ}}{{\Gamma}}{}^{\alpha}{}_{\kappa\nu}\stackrel{{\circ}}{{ \Gamma}}{}^{\kappa}{}_{\beta\mu};\] 2) to "switch off" gravity solving the absent gravity equation \(\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}=0\) for parameters of the chosen GR solution; 3) when the parameters satisfying \(\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\mu\nu}=0\) are found, we take \(\Gamma^{\alpha}{}_{\mu\nu}=\stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{\mu\nu}\) for the found parameter values. Torsion of the found connection should be zero automatically because we take it from the Levi-Civita connection for some parameter values, and Levi-Civita connection is always symmetric. Curvature of the found connection should be zero too because we found it from the equation \(\stackrel{{\circ}}{{R}}{}^{\alpha}{}_{\beta\gamma\delta}=0\). It turns out, that the described above "turning off" gravity method gives ambiguities both in TEGR [7, 28] and in STEGR [8]. As a rule, "turning off" gravity in TEGR, the result depends on the tetrad that we choose. For each tetrad where we "turn off" gravity we obtain different gauge (pairs of tetrad and ISC). In general, these pairs are not connected by (2.12) and (2.13) applied simultaneously. Thus, torsion, contortion and superpotential being expressed in the same tetrad and coordinates are completely different in each case. This gives us different values of conserved quantities. But, sometimes, "turning off" gravity we can obtain the same gauge [34, 35]. The same way, when "turning off" gravity in STEGR, the result depends on the coordinates that we choose. Thus, disformation and non-metricity being expressed in the same coordinates are different and this gives us different values of conserved quantities in each case. One of the main purposes of this work is to find the gauges in TEGR and connections in STEGR in which we would have physically meaningful results for the concrete solutions - gravitational waves in a vacuum. ### Plane gravitational wave under consideration In this paper, we consider only one polarization of the plane gravitational wave. The simplest form of the metric for the exact (strong) plane gravitational wave has the form of [21]: \[g_{\mu\nu}=\left(\begin{array}{cccc}-1&0&0&0\\ 0&f^{2}(t-z)&0&0\\ 0&0&g^{2}(t-z)&0\\ 0&0&0&1\end{array}\right). \tag{2.38}\] Here and below the numeration of the coordinates are \(t=x^{0}\), \(x=x^{1}\), \(y=x^{2}\), and \(z=x^{3}\). Because the wave-solutions in GR are time dependent ones there are no time-like Killing vectors for them, therefore we do not expect to obtain any conserved charges in this case. So, our goal is to calculate energy-momentum density (current components) measured by the freely falling observer in wave solution taking his proper vector (4-velocity) and interpret the result. 
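For reference, the geometric content of (2.38) can be checked symbolically. The sketch below is an illustrative computation in plain sympy (not a tool used by the authors): it evaluates the Christoffel symbols and the Ricci tensor of (2.38). Every non-zero Ricci component is proportional to \(f^{\prime\prime}/f+g^{\prime\prime}/g\), so the vacuum Einstein equations reduce to the condition quoted below as (3.10), while the "turning off" gravity requirement of a vanishing Riemann tensor forces \(f^{\prime\prime}=g^{\prime\prime}=0\).

```python
# Christoffel symbols and Ricci tensor for the plane-wave metric (2.38),
# g = diag(-1, f(t-z)^2, g(t-z)^2, 1). Illustrative sympy sketch; the second
# metric function is called h here only to avoid clashing with the name gmet.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
f = sp.Function('f')(t - z)
h = sp.Function('g')(t - z)
gmet = sp.diag(-1, f**2, h**2, 1)
ginv = gmet.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sum(ginv[a, d] * (sp.diff(gmet[d, c], X[b]) + sp.diff(gmet[d, b], X[c])
                             - sp.diff(gmet[b, c], X[d])) for d in range(4)) / 2

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(4)]
          for b in range(4)] for a in range(4)]

def ricci(b, c):
    # R_{bc} = d_a G^a_{bc} - d_c G^a_{ba} + G^a_{ad} G^d_{bc} - G^a_{cd} G^d_{ba}
    return sp.simplify(
        sum(sp.diff(Gamma[a][b][c], X[a]) - sp.diff(Gamma[a][b][a], X[c])
            + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
                  for d in range(4))
            for a in range(4)))

for b in range(4):
    for c in range(4):
        Rbc = ricci(b, c)
        if Rbc != 0:
            print(f"R_{b}{c} =", Rbc)
```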
## 3 Simplest gauges ### Diagonal polarisation in TEGR The simplest diagonal tetrad that can be introduced for (2.38) is \[h^{A}{}_{\mu}=\left(\begin{array}{cccc}1&0&0&0\\ 0&f(t-z)&0&0\\ 0&0&g(t-z)&0\\ 0&0&0&1\end{array}\right). \tag{3.1}\] To make the formulae shorter, we won't write the argument \((t-z)\) of the functions \(f\) and \(g\) and their derivatives of each order. For this tetrad, we calculate the L-CSC (2.7) which has the non-zero components: \[\begin{array}{c}\stackrel{{\circ}}{{A}}^{0}_{11}=\stackrel{{\circ}}{{A}}^{1}_{01}=-\stackrel{{\circ}}{{A}}^{1}_{31}=\stackrel{{\circ}}{{A}}^{3}_{11}=f^{\prime},\\ \stackrel{{\circ}}{{A}}^{0}_{22}=\stackrel{{\circ}}{{A}}^{2}_{02}=\stackrel{{\circ}}{{A}}^{3}_{22}=-\stackrel{{\circ}}{{A}}^{2}_{32}=g^{\prime}.\end{array} \tag{3.2}\] Following the "turning off" gravity prescription, we construct the curvature of this Levi-Civita spin connection; all of its non-zero components are proportional to \(f^{\prime\prime}\) or \(g^{\prime\prime}\), so the absent gravity equation \(\stackrel{{\circ}}{{R}}{}^{a}{}_{b\gamma\delta}=0\) reduces to \[f^{\prime\prime}=0,\qquad g^{\prime\prime}=0, \tag{3.3}\] with the general solution \[f=c_{1}(t-z)+d_{1},\qquad g=c_{2}(t-z)+d_{2}, \tag{3.4}\] where \(c_{1}\), \(c_{2}\), \(d_{1}\) and \(d_{2}\) are integration constants. Turning off gravity by (3.4) we get the ISC with non-zero components: \[\begin{array}{c}\stackrel{{\bullet}}{{A}}^{0}_{11}=\stackrel{{\bullet}}{{A}}^{1}_{01}=-\stackrel{{\bullet}}{{A}}^{1}_{31}=\stackrel{{\bullet}}{{A}}^{3}_{11}=c_{1},\\ \stackrel{{\bullet}}{{A}}^{0}_{22}=\stackrel{{\bullet}}{{A}}^{2}_{02}=\stackrel{{\bullet}}{{A}}^{3}_{22}=-\stackrel{{\bullet}}{{A}}^{2}_{32}=c_{2}.\end{array} \tag{3.5}\] Then we calculate the superpotential (2.11) with the definitions (2.8); it has the non-zero components: \[\begin{array}{c}\stackrel{{\bullet}}{{S}}_{0}{}^{03}=-\stackrel{{\bullet}}{{S}}_{0}{}^{30}=-\stackrel{{\bullet}}{{S}}_{3}{}^{03}=\frac{c_{1}-f^{\prime}}{f}+\frac{c_{2}-g^{\prime}}{g},\\ \stackrel{{\bullet}}{{S}}_{1}{}^{01}=\stackrel{{\bullet}}{{S}}_{1}{}^{31}=-\stackrel{{\bullet}}{{S}}_{1}{}^{10}=-\stackrel{{\bullet}}{{S}}_{1}{}^{13}=\frac{g^{\prime}-c_{2}}{fg},\\ \stackrel{{\bullet}}{{S}}_{2}{}^{02}=\stackrel{{\bullet}}{{S}}_{2}{}^{32}=-\stackrel{{\bullet}}{{S}}_{2}{}^{20}=-\stackrel{{\bullet}}{{S}}_{2}{}^{23}=\frac{f^{\prime}-c_{1}}{fg}.\end{array} \tag{3.6}\] We assume that the superpotential, torsion, etc. should be zero in the absence of waves, thus \(c_{1}=c_{2}=0\). Then, the ISC (3.5) becomes \[\stackrel{{\bullet}}{{A}}^{a}_{b\mu}=0. \tag{3.7}\] Our next goal is to study the Noether superpotential for this gauge.
First, we consider the simplest case of a freely falling observer which is static in co-moving coordinates of the metric (2.38). Then components of his proper vector are: \[\xi^{\sigma}=\left(-1,0,0,0\right). \tag{3.8}\] Then one gets for the Noether superpotential (2.31) non-zero components: \[\stackrel{{\bullet}}{{\mathcal{J}}}^{03}=-\stackrel{{ \bullet}}{{\mathcal{J}}}^{30}=\frac{gf^{\prime}+fg^{\prime}}{8\pi}. \tag{3.9}\] By (2.32), taking the divergence of superpotential and using the Einstein equations: \[f^{\prime\prime}/f+g^{\prime\prime}/g=0 \tag{3.10}\] we get Noether current \[\stackrel{{\bullet}}{{\mathcal{J}}}^{\mu}=\left\{-\frac{f^{ \prime}g^{\prime}}{4\pi},0,0,-\frac{f^{\prime}g^{\prime}}{4\pi}\right\}. \tag{3.11}\] Let us generalize (3.8). Directly solving the geodesic equation we derive general freely falling observer's 4-velocity \(\xi^{\mu}\): \[\begin{array}{c}\xi^{0}=-\frac{C_{1}^{2}\sigma^{a_{0}}}{2f^{2}}-\frac{C_{2} ^{2}\sigma^{a_{0}}}{2g^{2}}-\cosh\alpha_{0},\\ \xi^{1}=-\frac{C_{1}}{f^{2}},\\ \xi^{2}=-\frac{C_{2}}{g^{2}},\\ \xi^{3}=-\frac{C_{1}^{2}\sigma^{a_{0}}}{2f^{2}}-\frac{C_{2}^{2}\sigma^{a_{0}} }{2g^{2}}-\sinh\alpha_{0},\end{array} \tag{3.12}\] where \(C_{1}\), \(C_{2}\), \(\alpha_{0}\) are constants of integration. One can see that components (3.12) go to the ones in (3.8) when \(C_{1}=C_{2}=\alpha_{0}=0\). Taking this general form of observer's proper vector one gets Noether superpotential non-zero components: \[\begin{array}{c}\stackrel{{\bullet}}{{\mathcal{J}}}^{01}= \stackrel{{\bullet}}{{\mathcal{J}}}^{31}=-\stackrel{{ \bullet}}{{\mathcal{J}}}^{10}=-\stackrel{{\bullet}}{{ \mathcal{J}}}^{13}=-\frac{C_{1}g^{\prime}}{8\pi f},\\ \stackrel{{\bullet}}{{\mathcal{J}}}^{02}=\stackrel{{ \bullet}}{{\mathcal{J}}}^{32}=-\stackrel{{\bullet}}{{ \mathcal{J}}}^{20}=-\stackrel{{\bullet}}{{\mathcal{J}}}^{23}=- \frac{C_{2}f^{\prime}}{8\pi g},\\ \stackrel{{\bullet}}{{\mathcal{J}}}^{03}=-\stackrel{{ \bullet}}{{\mathcal{J}}}^{30}=\frac{e^{-\alpha_{0}}(gf^{\prime}+fg^{\prime })}{8\pi}.\end{array} \tag{3.13}\] Taking the divergence of superpotential and using the the Einstein equations (3.10) we get Noether current \[\stackrel{{\bullet}}{{\mathcal{J}}}^{\mu}=\left\{-\frac{e^{- \alpha_{0}}f^{\prime}g^{\prime}}{4\pi},0,0,-\frac{e^{-\alpha_{0}}f^{\prime}g^{ \prime}}{4\pi}\right\}. \tag{3.14}\] Note, first, that the components of the current (3.14) do not depend on \(C_{1}\) and \(C_{2}\); and second, that for \(\alpha_{0}=0\) the components (3.14) coincide with (3.11). 
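The step from (3.9) to (3.11) amounts to one differentiation plus the field equation (3.10). An illustrative sympy check is given below (it uses \(u=t-z\) as the only variable, so that \(\partial_{z}\) acts as \(-d/du\)), together with the weak-field substitution \(f=1+h/2\), \(g=1-h/2\) that is used further below in the linear approximation; since the STEGR current obtained below coincides with (3.11), the same check covers that case too.

```python
# Verify J^0 = d_z J^{03} -> -f' g'/(4 pi), i.e. eq. (3.11), once the Einstein
# equation f'' g + f g'' = 0 (eq. (3.10)) is imposed. Illustrative sketch:
# all quantities depend only on u = t - z, so d_z acts as -d/du.
import sympy as sp

u = sp.symbols('u')                                     # u = t - z
f, g = sp.Function('f')(u), sp.Function('g')(u)

J03 = (g * f.diff(u) + f * g.diff(u)) / (8 * sp.pi)     # superpotential (3.9)
J0 = -J03.diff(u)                                       # J^0 = d_z J^{03}

einstein = {f.diff(u, 2): -f * g.diff(u, 2) / g}        # f''/f + g''/g = 0
print(sp.simplify(J0.subs(einstein)))                   # -> -f'(u)*g'(u)/(4*pi)

# weak-field limit (one polarisation): f = 1 + h/2, g = 1 - h/2
h = sp.Function('h')(u)
J0_lin = sp.simplify(J0.subs(einstein).subs({f: 1 + h / 2, g: 1 - h / 2}).doit())
print(J0_lin)                                           # -> h'(u)**2/(16*pi)
```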
### Diagonal polarisation in STEGR The non-zero components of Levi-Civita connection for the metric (2.38) are: \[\begin{array}{c}\stackrel{{\circ}}{{\Gamma}}{}^{0}_{11}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{11}=ff^{\prime},\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}_{22}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{22}=gg^{\prime},\\ \stackrel{{\circ}}{{\Gamma}}{}^{1}{}_{01}=\stackrel{{ \circ}}{{\Gamma}}{}^{1}{}_{10}=-\stackrel{{\circ}}{{\Gamma}}{}^{ 1}{}_{13}=-\stackrel{{\circ}}{{\Gamma}}{}^{1}{}_{31}=\frac{f^{ \prime}}{f},\\ \stackrel{{\circ}}{{\Gamma}}{}^{2}{}_{02}=\stackrel{{ \circ}}{{\Gamma}}{}^{2}{}_{20}=-\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_ {23}=-\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_{32}=\frac{g^{\prime}}{ g}.\end{array} \tag{3.15}\] Taking (3.4) in Levi-Civita connection we get the symmetric teleparallel connection (flat and torsionless): \[\begin{array}{c}\Gamma^{0}{}_{11}=\Gamma^{3}{}_{11}=c_{1}(c_{1}(t-z)+d_{1}), \\ \Gamma^{0}{}_{22}=\Gamma^{3}{}_{22}=c_{2}(c_{2}(t-z)+d_{2}),\\ \Gamma^{1}{}_{01}=\Gamma^{1}{}_{10}=-\Gamma^{1}{}_{13}=-\Gamma^{1}{}_{31}= \frac{c_{1}}{c_{1}(t-z)+d_{1}},\\ \Gamma^{2}{}_{02}=-\Gamma^{2}{}_{23}=-\Gamma^{2}{}_{32}=\Gamma^{2}{}_{20}= \frac{c_{2}}{c_{2}(t-z)+d_{2}}.\end{array} \tag{3.16}\] Then we can obtain the non-metricity (2.20) and disformation (2.22). Non-metricity (2.20) in absence of waves (i.e for Minkowski metric) is: \[\begin{array}{c}Q_{011}=-Q_{311}=-\frac{2c_{1}}{c_{1}(t-z)+d_{1}},\\ Q_{022}=-Q_{322}=-\frac{2c_{2}}{c_{2}(t-z)+d_{2}},\\ Q_{101}=Q_{110}=-Q_{113}=-Q_{131}=-\frac{c_{1}}{c_{1}(t-z)+d_{1}}+c_{1}(c_{1}( t-z)+d_{1}),\\ Q_{202}=Q_{220}=-Q_{223}=-\frac{c_{2}}{c_{2}(t-z)+d_{2}}+c_{2}(c_{2}(t-z)+d_{2}). \end{array} \tag{3.17}\] We assume that non-metricity should be zero in absence of waves. Thus, we have \(c_{1}=c_{2}=0\) and (3.16) becomes \[\Gamma^{\alpha}{}_{\mu\nu}=0. \tag{3.18}\] Proceeding further in this gauge to get the Noether superpotential we choose the observer's proper vector as (3.8). Komar superpotential (2.33) becomes zero. After calculation of the divergent part of the superpotential (2.34) we get the total superpotential with non-zero components: \[\mathcal{J}^{03}=-\mathcal{J}^{30}=\mathcal{J}^{03}_{div}=-\mathcal{J}^{30}_ {div}=\frac{f^{\prime}g+g^{\prime}f}{8\pi}. \tag{3.19}\] Taking the divergence of the Noether superpotential and using the Einstein equations (3.10) we get the Noether current \[\mathcal{J}^{\mu}=\left\{-\frac{f^{\prime}g^{\prime}}{4\pi},0,0,-\frac{f^{ \prime}g^{\prime}}{4\pi}\right\}. \tag{3.20}\] Note that it exactly coincides with (3.11). Second, to get another Noether superpotential we choose the observer's proper vector as in (3.12). Komar superpotential (2.33) is zero again. Calculating the divergent part (2.34) we get the total superpotential \[\begin{array}{c}\mathcal{J}^{01}=\mathcal{J}^{31}=-\mathcal{J}^{10}=- \mathcal{J}^{13}=-C_{1}\frac{f^{\prime}g+g^{\prime}f}{8\pi f^{\prime}},\\ \mathcal{J}^{02}=\mathcal{J}^{32}=-\mathcal{J}^{20}=-\mathcal{J}^{23}=-C_{2} \frac{f^{\prime}g+g^{\prime}f}{8ggz},\\ \mathcal{J}^{03}=-\mathcal{J}^{30}=\frac{e^{-\alpha_{0}}(f^{\prime}g+g^{ \prime}f)}{8\pi}.\end{array} \tag{3.21}\] And then taking the divergence and using the Einstein equations (3.10) we get the Noether current \[\mathcal{J}^{\mu}=\left\{-\frac{e^{-\alpha_{0}}f^{\prime}g^{\prime}}{4\pi},0,0,- \frac{e^{-\alpha_{0}}f^{\prime}g^{\prime}}{4\pi}\right\}. \tag{3.22}\] Note that it exactly coincides with (3.14). ### A linear approximation Turn to the simplest case of freely falling observer (3.8). 
The Noether currents both in TEGR (3.11) and in STEGR (3.20) are the same. For the weak wave in Minkowski space we choose the simplest presentation [36]: \[g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}, \tag{3.23}\] where \(\eta_{\mu\nu}={\rm diag}[-1,\ \ +1,\ +1,\ +1]\). Thus, in the case of the metric (2.38) in TT-gauge [36] one has \[f=1+\frac{1}{2}h_{xx}(t-z),\ \ g=1-\frac{1}{2}h_{xx}(t-z). \tag{3.24}\] Then, the current (3.11), the same (3.20), becomes \[{\cal J}^{\mu}_{lin}=\left\{\frac{h^{\prime}{}^{2}_{xx}}{16\pi},\ 0,\ 0,\ \frac{h^{\prime}{}^{2}_{xx}}{16\pi}\right\}. \tag{3.25}\] The component \({\cal J}^{0}_{lin}\) represents the energy density, whereas \({\cal J}^{3}_{lin}\) - the energy density flux of the weak flat gravitational wave along \(z\) axis. It exactly coincides with the result in [36] for the case of the only one polarization. Recall that these quantities are positively defined. It is interesting to consider a more general case of freely falling observers (3.12) in the linear approximation (3.23) and (3.24). Then the current (3.14), the same (3.22), becomes \[{\cal J}^{\mu}_{lin}=\left\{\frac{e^{-\alpha_{0}}h^{\prime}{}^{2}_{xx}}{16\pi},\ 0,\ 0,\ \frac{e^{-\alpha_{0}}h^{\prime}{}^{2}_{xx}}{16\pi}\right\}, \tag{3.26}\] that is, at least, positively defined like (3.25). Because vector (3.8) is a proper vector of a freely falling observer one could expect that due to the equivalence principle the components of the conserved current are to be equal to zero. So, from one side, one meets the evidently undesirable result. From the other side, however, this coincides with the well-known result from the Landau-Lifshitz pseudotensor formula. The same result have been obtained for the gravitational wave (2.38) in [21] where this fact is considered as a criterion that supports correctness of the result. In the following paper [22], this coincidence is supported by a relation to a so-called "ideal frame". From our side, we should interpret the discrepancy in interpretation of the equivalence principle in the framework of our approach. In the formalism [26, 8] applied here, unlike other approaches, observers are connected with their proper vectors. We remind a reader that studying conserved quantities for the Schwarzschild black hole [34, 35] we have met with the evidence that constructing an appropriate global mass requires the one related gauge, whereas a correspondence to the equivalence principle requires another gauge. In such a paradigm we need to claim that here the gauges, the pair of tetrad (3.1) and ISC (3.7) in TEGR and the pair of metric (2.38) and teleparallel connection (3.18) in STEGR, do not correspond to the equivalence principle. In the next sections we present gauges for the solution (2.38) which give a correspondence to the equivalence principle in that old sense. However, we need to explain the quite acceptable result (3.25) in comparison with the textbook [36] result, where components are explained as energetic characteristics of gravitational wave. Thus, we need deeper to consider the Landau and Lifshitz prescription, which is based on their pseudotensor. As all pseudotensors, it is non-covariant - it is not transformed as tensors under arbitrary coordinate transformations which makes the physical interpretation of the conserved quantities more difficult. 
One of the ways to improve the situation is a possibility to covariantize pseudotensors by introducing to the dynamical spacetime manifold with the metric tensor \(g_{\mu\nu}\) a fixed Minkowskian background with the Minkowski metric, see book [30]. Then, the Landau-Lifshitz conserved pseudotensor \(t^{\mu\nu}_{LL}\), which has the mathematical weight \(+2\), permits to construct us a conserved covariant current1 Footnote 1: The analogous method of covariantization can be applied to an arbitrary pseudotensor. \[{\cal J}^{\mu}_{LL}=\left(\sqrt{-\det\eta_{\alpha\beta}}\right)^{-1}t^{\mu\nu }_{LL}\tilde{\xi}_{\nu}. \tag{3.27}\] From the beginning it is assumed that (3.27) is written in the Lorentzian coordinates, of course, \(-\det\eta_{\alpha\beta}=1\). Then, \({\cal J}^{\mu}_{LL}\) is thought as a vector density of the weight \(+1\) and can be represented in arbitrary coordinates in an ordinary way if particular derivatives in \(t^{\mu\nu}_{LL}\) are replaced by covariant ones. Vector \(\bar{\xi}^{\mu}\) is a Killing vector of the Minkowski space. Thus, if \[\bar{\xi}^{\mu}=(-1,\ 0,\ 0,\ 0), \tag{3.28}\] it is a timelike Killing vector of the Minkowski space. Then \({\cal J}^{\mu}_{LL}=(\ t^{00}_{LL},\ t^{10}_{LL},\ t^{20}_{LL},\ t^{30}_{LL})\), that is the interpretation of \({\cal J}^{0}_{LL}\) and \({\cal J}^{3}_{LL}\) as energy density and energy density flux coincides with a related interpretation in \(t^{00}_{LL}\) and \(t^{30}_{LL}\) in the book [36]. Now let us turn to the linear approximation. The decompositions (3.23) and (3.24), where perturbations are considered on a flat background with the Minkowski metric, are quite appropriate to turn to the paradigm of covariantization of (3.27) on the Minkowskian background. Besides, for the result (3.25) we anyway use vector (3.8) that is just coincides with the Killing vector (3.28) of the Minkowski space. Thus, the coincidence of the result (3.25) with the Landau-Lifshitz one becomes clear. Let us consider (3.27) without an approximation. Thus, \(t^{\mu\nu}_{LL}\) for the metric (2.38) is \[{\cal J}^{\mu}_{LL}=\left\{-\frac{4fgf^{\prime}g^{\prime}+g^{2}f^{\prime 2}+f^ {2}g^{\prime 2}}{8\pi},\ 0,\ 0,\ -\frac{4fgf^{\prime}g^{\prime}+g^{2}f^{\prime 2}+f^ {2}g^{\prime 2}}{8\pi}\right\}. \tag{3.29}\] One can see that it differs from (3.11) and (3.20), although its linear approximation gives (3.25). We stress that our results (3.11) and (3.20) are obtained in the framework of the initially covariant method, whereas (3.29) is obtained after covariantization (an additional procedure) of the Landau-Lifshitz pseudotensor. Finally, it is necessary to discuss the results (3.14), (3.22) and (3.26). The vector (3.12) is not a Killing vector of Minkowski space. However, without changing the results (3.14), (3.22) and (3.26) one can set \(C_{1}=C_{2}=0\). By this, we can claim that we use vector \[\xi^{\mu}=\left\{-\cosh\alpha_{0},\ 0,\ 0,\ -\sinh\alpha_{0}\right\}, \tag{3.30}\] which is a timelike Killing vector of Minkowski space. Indeed, boosting (3.28) by global Lorentz transformation in Minkowski space one gets (3.30). 
This transformation has the form \(\xi^{\mu}_{boosted}=\lambda^{\mu}{}_{\nu}\,\xi^{\nu}_{kill}\), where \(\xi^{\nu}_{kill}\) is (3.8), \(\xi^{\mu}_{boosted}\) is (3.12), \[\lambda^{\mu}{}_{\nu}=\left(\begin{array}{cccc}\cosh\alpha_{0}&0&0&\sinh\alpha_{0}\\ 0&1&0&0\\ 0&0&1&0\\ \sinh\alpha_{0}&0&0&\cosh\alpha_{0}\end{array}\right)=\left(\begin{array}{cccc}\frac{1}{\sqrt{1-v^{2}}}&0&0&\frac{v}{\sqrt{1-v^{2}}}\\ 0&1&0&0\\ 0&0&1&0\\ \frac{v}{\sqrt{1-v^{2}}}&0&0&\frac{1}{\sqrt{1-v^{2}}}\end{array}\right), \tag{3.31}\] where \(v\) is the 3-dimensional velocity in Minkowski spacetime. Then the interpretation of (3.14), (3.22) and (3.26) becomes clear: they are obtained by boosting the vector \(\xi^{\mu}\) by (3.31) in (3.11), (3.20) and (3.25). The result of the linear approximation is acceptable not only because it coincides with the Landau-Lifshitz prediction (and other pseudotensor approaches as well), but mainly because it is checked observationally. This possibly means that the equivalence principle itself may not necessarily require a zero current _in this case_, and non-zero values can have a physical meaning. On the other hand, due to the lack of observational evidence for a strong gravitational wave regime, we cannot say a priori whether the general form of the current has the same meaning, given its difference from the pseudotensor result for a strong wave. ## 4 Gauges compatible with the equivalence principle in the old sense In this section, as announced above, we search for gauges which give a zero current for a freely falling observer. ### Gauge changing in TEGR Let us try to obtain a zero Noether current in TEGR by changing the gauge. To do this, we change the ISC as usual by (2.13): \[A^{\prime a}{}_{b\mu}=\Lambda^{a}{}_{c}(x^{\nu})A^{c}{}_{d\mu}\Lambda_{b}{}^{d}(x^{\nu})+\Lambda^{a}{}_{c}(x^{\nu})\partial_{\mu}\Lambda_{b}{}^{c}(x^{\nu})\,. \tag{4.1}\] Because in the previous section we had zero ISC (3.7), our new ISC has the form (2.14) \[A^{\prime a}{}_{b\mu}=\Lambda^{a}{}_{c}(x^{\nu})\partial_{\mu}\Lambda_{b}{}^{c}(x^{\nu})\,, \tag{4.2}\] while the tetrad remains the same (3.1). Or, another way is as follows: the tetrad changes as (2.12) \[h^{\prime a}{}_{\mu}=\Lambda^{a}{}_{b}(x^{\nu})h^{b}{}_{\mu}\,, \tag{4.3}\] while the ISC (3.7) remains zero. In this section, we also assume that our affine connection \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\mu\nu}\) should have the same symmetries as the solution (2.38). This means that displacements in the \(x\), \(y\) and \(t+z\) directions do not change it. Such a proposal was applied to the affine connection in modified teleparallel theories with other symmetries in [37]. Because in modified teleparallel theories the connection is dynamical, both the metric and the connection should have the same symmetries as the solution. In TEGR, the ISC and affine connection are non-dynamical, so, at the level of field equations, the requirement for them to have the same symmetries as the metric might be too strong. Nevertheless, we consider the Noether current (2.29) and superpotential (2.31) and can require for them the same symmetries as the metric has. For this purpose it is sufficient to require the same symmetries for the teleparallel superpotential, contortion or torsion. For example, we can require for the contortion: \[\pounds_{\xi}\stackrel{{\bullet}}{{K}}{}^{\alpha}{}_{\mu\nu}=0, \tag{4.4}\] where \(\pounds_{\xi}\) is the Lie derivative and \(\xi\) is a Killing vector of the solution.
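Any ISC of the pure-gauge form (2.14)/(4.2) automatically satisfies the zero-curvature condition (2.5), so such transformations stay within the class of admissible teleparallel connections. A short illustrative sympy check for a rotation by an arbitrary angle \(\theta(t,z)\) in the (1,3) tetrad plane (the same plane that will be rotated in (4.6) below; the angle profile here is generic, not the specific one of (4.6)):

```python
# Check that an ISC of pure-gauge form A_mu = Lambda d_mu Lambda^{-1} has zero
# curvature (2.5), for a rotation by theta(t, z) in the (1,3) tetrad plane.
# Illustrative sketch with an arbitrary angle profile.
import sympy as sp

t, z = sp.symbols('t z')
th = sp.Function('theta')(t, z)

Lam = sp.Matrix([[1, 0, 0, 0],
                 [0, sp.cos(th), 0, sp.sin(th)],
                 [0, 0, 1, 0],
                 [0, -sp.sin(th), 0, sp.cos(th)]])
LamInv = Lam.subs(th, -th)                    # inverse rotation

A = {v: sp.simplify(Lam * sp.diff(LamInv, v)) for v in (t, z)}   # A_mu matrices

# curvature F_{tz} = d_t A_z - d_z A_t + A_t A_z - A_z A_t  (matrix form of (2.5))
F = sp.simplify(sp.diff(A[z], t) - sp.diff(A[t], z) + A[t] * A[z] - A[z] * A[t])
print(F)    # -> zero 4x4 matrix
```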
For the metric (2.38), the symmetry holds along the direction \(\Delta x\): \(\xi^{\mu}=(0,\ 1,\ 0,\ 0)\), \(\Delta y\): \(\xi^{\mu}=(0,\ 0,\ 1,\ 0)\), and \(\Delta(t+z)\): \(\xi^{\mu}=(1,\ 0,\ 0,\ 1).\) The contortion is \[\not{\bar{K}}^{\alpha}{}_{\mu\nu}=\stackrel{{\bullet}}{{\Gamma}} {}^{\alpha}{}_{\mu\nu}-\stackrel{{\circ}}{{\Gamma}}{}^{\alpha}{}_{ \mu\nu}=0, \tag{4.5}\] where the Levi-Civita connection has the same symmetries as the metric (2.38). Now let's go back to the definitions (2.4) and (2.14). The tetrad (3.1) already has the same symmetries as the metric (2.38). Therefore, \(\stackrel{{\bullet}}{{\Gamma}}{}^{\alpha}{}_{\mu\nu}\) is symmetric when the ISC (2.14) is symmetric. This requirement is fulfilled when \(\Lambda^{a}{}_{b}(x^{\nu})=\Lambda^{a}{}_{b}(t-z)\) in (2.14) is an arbitrary Lorentz rotation which depends on \(t-z\) only. Keeping in mind the matrices of local Lorentz rotations obtained by this prescription and given in Appendix A, we have found that for the related gauges (pairs of the tetrad (3.1) and the ISC (2.14), where \(\Lambda^{a}{}_{b}(t-z)\) is a composition of the matrices given in Appendix A, or of the tetrad (3.1) transformed by \(\Lambda^{a}{}_{b}(t-z)h^{b}{}_{\mu}\) and zero ISC (3.7)) the Noether current does not change! Thus, with such a restrictive condition, one cannot find a gauge for which the current vanishes for a freely moving observer. Now let's assume that \(\Lambda^{a}{}_{b}\) can be non-symmetric, for example, along the direction \(dx\); thus, it can depend on \(x\) and \((t-z)\). The motivation for this proposal is that the connection in TEGR is not dynamical and thus cannot be felt by observers in the way the symmetric metric can; therefore, we do not need \(\Lambda^{a}{}_{b}\) to be symmetric. When we assume the dependence on \(x\), the Noether superpotential can depend on \(x\). However, if the Noether current is made equal to zero, it automatically satisfies the symmetries of the solution. One of the simplest Lorentz rotations \(\Lambda^{a}{}_{b}\) which depends on \(x\) and \((t-z)\) and can change the Noether current is \[\Lambda^{a}{}_{b}=\left(\begin{array}{ccc}1&0&0&0\\ 0&\cos(x\psi(t-z))&0&\sin(x\psi(t-z))\\ 0&0&1&0\\ 0&-\sin(x\psi(t-z))&0&\cos(x\psi(t-z))\end{array}\right). \tag{4.6}\] The ISC (2.14) \(\stackrel{{\bullet}}{{A}}^{a}{}_{c\mu}=\Lambda_{b}{}^{c}\partial_{\mu} \Lambda^{a}{}_{b}\) calculated with (4.6) is \[\begin{array}{c}\stackrel{{\bullet}}{{A}}^{\hat{1}}{}_{\hat{3} 1}=-\stackrel{{\bullet}}{{A}}^{\hat{3}}{}_{\hat{1}1}=\psi(t-z);\\ \stackrel{{\bullet}}{{A}}^{\hat{1}}{}_{\hat{3}0}=\stackrel{{ \bullet}}{{A}}^{\hat{3}}{}_{\hat{1}3}=-\stackrel{{ \bullet}}{{A}}^{\hat{1}}{}_{\hat{3}3}=-\stackrel{{\bullet}}{{A}}^{ \hat{3}}{}_{\hat{1}0}=x\psi^{\prime}(t-z).\end{array} \tag{4.7}\] Then one can calculate the contortion (2.8) with (4.6) and (3.2).
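The components (4.7) can be reproduced by a short computer-algebra check. The sympy sketch below is an editorial illustration, not part of the original derivation; it realizes (2.14) as \(\stackrel{{\bullet}}{{A}}{}^{a}{}_{b\mu}=(\partial_{\mu}\Lambda\,\Lambda^{-1})^{a}{}_{b}\), and this particular reading of the index placement is an assumption, chosen because it reproduces (4.7) exactly:

```python
# Minimal sympy check that the local Lorentz rotation (4.6) generates the
# inertial spin connection components (4.7).  Index convention assumed here:
# A^a_{b mu} = (d_mu Lambda  Lambda^{-1})^a_b.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
psi = sp.Function('psi')
theta = x * psi(t - z)

c, s = sp.cos(theta), sp.sin(theta)
Lam = sp.Matrix([[1, 0, 0, 0],
                 [0, c, 0, s],
                 [0, 0, 1, 0],
                 [0, -s, 0, c]])            # rotation (4.6)

coords = (t, x, y, z)
A = [sp.simplify(Lam.diff(mu) * Lam.inv()) for mu in coords]   # one 4x4 matrix per mu

print(sp.simplify(A[1][1, 3]))   # should match  psi(t-z)      i.e.  A^1_{3 1}
print(sp.simplify(A[0][1, 3]))   # should match  x psi'(t-z)   i.e.  A^1_{3 0}
print(sp.simplify(A[3][1, 3]))   # should match -x psi'(t-z)   i.e.  A^1_{3 3}
print(sp.simplify(A[1][3, 1]))   # should match -psi(t-z)      i.e.  A^3_{1 1}
```

All other components vanish, in agreement with (4.7).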
Then, the teleparallel superpotential (2.11) is \[\begin{array}{c}\stackrel{{\bullet}}{{S}}_{0}{}^{01}=\stackrel{{ \bullet}}{{S}}_{3}{}^{31}=-\stackrel{{\bullet}}{{S}}_{0}{}^{10}=- \stackrel{{\bullet}}{{S}}_{0}{}^{13}=-\frac{x\psi^{\prime}(t-z)}{f };\\ \stackrel{{\bullet}}{{S}}_{0}{}^{03}=-\stackrel{{ \bullet}}{{S}}_{0}{}^{30}=-\frac{f^{\prime}+\psi(t-z)}{f}-\frac{g^{\prime}}{g};\\ \stackrel{{\bullet}}{{S}}_{1}{}^{01}=\stackrel{{ \bullet}}{{S}}_{1}{}^{31}=-\stackrel{{\bullet}}{{S}}_{1}{}^{10}=- \stackrel{{\bullet}}{{S}}_{1}{}^{13}=\frac{g^{\prime}}{fg};\\ \stackrel{{\bullet}}{{S}}_{2}{}^{02}=-\stackrel{{ \bullet}}{{S}}_{2}{}^{20}=\frac{f^{\prime}}{fg};\\ \stackrel{{\bullet}}{{S}}_{2}{}^{12}=-\stackrel{{ \bullet}}{{S}}_{2}{}^{21}=\frac{x\psi^{\prime}(t-z)}{fg};\\ \stackrel{{\bullet}}{{S}}_{2}{}^{23}=-\stackrel{{ \bullet}}{{S}}_{2}{}^{32}=-\frac{f^{\prime}+\psi(t-z)}{fg},\\ \stackrel{{\bullet}}{{S}}_{3}{}^{03}=-\stackrel{{ \bullet}}{{S}}_{3}{}^{30}=\frac{f^{\prime}}{f}+\frac{g^{\prime}}{g}.\end{array} \tag{4.8}\] Taking (3.8) we have the Noether superpotential in TEGR (2.31): \[\begin{array}{c}\stackrel{{\bullet}}{{\mathcal{J}}}^{01}= \stackrel{{\bullet}}{{\mathcal{J}}}^{31}=-\stackrel{{ \bullet}}{{\mathcal{J}}}^{10}=-\stackrel{{\bullet}}{{ \mathcal{J}}}^{13}=\frac{xg\psi^{\prime}(t-z)}{8\pi};\\ \stackrel{{\bullet}}{{\mathcal{J}}}^{03}=-\stackrel{{ \bullet}}{{\mathcal{J}}}^{30}=\frac{g\left(f^{\prime}+\psi(t-z)\right)+fg^{ \prime}}{8\pi};\end{array} \tag{4.9}\] Then, taking the divergence (2.32) of (4.9) we get the Noether current in TEGR: \[\stackrel{{\bullet}}{{\mathcal{J}}}^{\mu}=\left\{-\frac{gf^{ \prime\prime}+2f^{\prime}g^{\prime}+fg^{\prime\prime}+\psi(t-z)g^{\prime}}{8 \pi},\ 0,\ 0,\ -\frac{gf^{\prime\prime}+2f^{\prime}g^{\prime}+fg^{\prime\prime}+\psi(t-z)g^{ \prime}}{8\pi}\right\}. \tag{4.10}\] Applying here the Einstein equation (3.10) we get: \[\stackrel{{\bullet}}{{\mathcal{J}}}^{\mu}=\left\{-\frac{2f^{ \prime}g^{\prime}+\psi(t-z)g^{\prime}}{8\pi},\ 0,\ 0,\ -\frac{2f^{\prime}g^{\prime}+\psi(t-z)g^{\prime}}{8\pi}\right\}. \tag{4.11}\] Then, the condition for a zero Noether current is \[\psi(t-z)=-2f^{\prime}. \tag{4.12}\] Thus a gauge compatible with the equivalence principle in the old sense is constructed. Analogously, one can permit a dependence on \(y\) and \((t-z)\) with the same result. A more complicated gauge, constructed with local Lorentz rotations depending simultaneously on \(x\), \(y\) and \((t-z)\), is considered later on the basis of the work [20].

### Gauge changing in STEGR

In this subsection, we again restrict ourselves from the start to the simplest requirement. We assume that the changed Noether conserved quantities have to depend on \((t-z)\) only. The Komar superpotential obtained for the metric (2.38) and the vector (3.8) remains zero independently of the transformations which change a gauge. Now, consider the additional part (2.34). Because the metric (2.38) depends on \((t-z)\) only, the teleparallel connection \(\Gamma^{\alpha}{}_{\mu\nu}\) included into the additional part of the Noether superpotential (2.34) through the non-metricity (2.20) should depend on \((t-z)\) only too. Then we check if such Noether superpotentials (depending on \((t-z)\) only) can satisfy the equivalence principle. We assume that there exist some new coordinates \((T,X,Y,Z)\) in which the teleparallel connection \(\Gamma^{\alpha}{}_{\mu\nu}=0\). The new coordinates \((T,X,Y,Z)\) depend on the coordinates \((t,x,y,z)\) in a general way as: \[T=t+\Delta T,\ \ X=x+\Delta X,\ \ Y=y+\Delta Y,\ \ Z=z+\Delta Z. \tag{4.13}\]
When (4.13) is substituted, the functions \(\Delta T\), \(\Delta X\), \(\Delta Y\), \(\Delta Z\) enter through the derivatives of \((T,X,Y,Z)\) with respect to \((t,x,y,z)\). These derivatives are included in the formula for the transformed STEGR connection, which (after applying the coordinate transformation from \((T,X,Y,Z)\) to \((t,x,y,z)\)) is calculated as \[\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial X^{\lambda}} \frac{\partial}{\partial x^{\mu}}\left(\frac{\partial X^{\lambda}}{\partial x^{ \nu}}\right), \tag{4.14}\] where \(x^{\mu}\equiv(t,x,y,z)\), \(X^{\mu}\equiv(T,X,Y,Z)\). To make the teleparallel connection (4.14) depend only on \((t-z)\), the derivatives of \((T,X,Y,Z)\) with respect to \((t,x,y,z)\) defining (4.14) and, thus, the functions \(\Delta T\), \(\Delta X\), \(\Delta Y\), \(\Delta Z\) should depend only on \((t-z)\). Thus, assuming that the functions \(\Delta T\), \(\Delta X\), \(\Delta Y\), \(\Delta Z\) depend on \((t-z)\) only we get for the teleparallel connection (4.14) the non-zero components \[\begin{array}{c}\Gamma^{0}{}_{00}=\Gamma^{0}{}_{33}=-\Gamma^{0}{}_{03}=- \Gamma^{0}{}_{30}=\frac{\Delta T^{\prime}\Delta Z^{\prime\prime}-\Delta T^{ \prime\prime}\left(\Delta Z^{\prime}-1\right)}{\Delta T^{\prime}-\Delta Z^{ \prime}+1};\\ \Gamma^{1}{}_{00}=\Gamma^{1}{}_{33}=-\Gamma^{1}{}_{03}=-\Gamma^{1}{}_{30}= \frac{\Delta X^{\prime}\left(\Delta Z^{\prime\prime}-\Delta T^{\prime\prime} \right)+\Delta X^{\prime\prime}\left(\Delta T^{\prime}-\Delta Z^{\prime}+1 \right)}{\Delta T^{\prime}-\Delta Z^{\prime}+1};\\ \Gamma^{2}{}_{00}=\Gamma^{2}{}_{33}=-\Gamma^{2}{}_{03}=-\Gamma^{2}{}_{30}= \frac{\Delta Y^{\prime}\left(\Delta Z^{\prime\prime}-\Delta T^{\prime\prime} \right)+\Delta Y^{\prime\prime}\left(\Delta T^{\prime}-\Delta Z^{\prime}+1 \right)}{\Delta T^{\prime}-\Delta Z^{\prime}+1};\\ \Gamma^{3}{}_{00}=\Gamma^{3}{}_{33}=-\Gamma^{3}{}_{03}=-\Gamma^{3}{}_{30}= \frac{\left(\Delta T^{\prime}+1\right)\Delta Z^{\prime\prime}-\Delta T^{ \prime\prime}\Delta Z^{\prime}}{\Delta T^{\prime}-\Delta Z^{\prime}+1},\end{array} \tag{4.15}\] where the functions \(\Delta T\), \(\Delta X\), \(\Delta Y\), \(\Delta Z\) depend on \((t-z)\) only. We take again the observer's proper vector (3.8). Because the Komar superpotential (2.33) remains zero, the total Noether superpotential (2.35) is determined only by the additional part (2.34), which has the non-zero components: \[\mathcal{J}^{03}=-\mathcal{J}^{30}=\frac{g\left(2f^{\prime}\left(\Delta T^{ \prime}-\Delta Z^{\prime}+1\right)+f\left(\Delta Z^{\prime\prime}-\Delta T^{ \prime\prime}\right)\right)+2fg^{\prime}\left(\Delta T^{\prime}-\Delta Z^{ \prime}+1\right)}{16\pi\left(\Delta T^{\prime}-\Delta Z^{\prime}+1\right)}. \tag{4.16}\] To make the Noether current zero, the Noether superpotential (following (2.28)) should be constant, \(\mathcal{J}^{03}=-\mathcal{J}^{30}=A_{0}\). Then we easily find that \[\frac{\left(\Delta T-\Delta Z\right)^{\prime\prime}}{\left(\Delta T-\Delta Z \right)^{\prime}}=2\left(-\frac{8\pi A_{0}}{fg}+\frac{f^{\prime}}{f}+\frac{g^{ \prime}}{g}\right). \tag{4.17}\] Then \[\left(\Delta T-\Delta Z\right)^{\prime}=A_{1}f^{2}g^{2}\exp\left(-\int\left( \frac{16\pi A_{0}}{fg}\right)\,dt\right), \tag{4.18}\] where \(A_{1}\) is a constant of integration.
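Before specializing to \(A_{0}=0\), the transformation rule (4.14) itself can be checked symbolically. A minimal sympy sketch (an editorial illustration; the profile functions are left arbitrary) is:

```python
# Sympy sketch of (4.14): with Delta T, ..., Delta Z functions of (t - z) only,
# the transformed teleparallel connection has only the components of the type
# listed in (4.15).  This checks the construction, not any particular solution.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
u = t - z
dT, dX, dY, dZ = [sp.Function(n)(u) for n in ('DT', 'DX', 'DY', 'DZ')]

old = sp.Matrix([t, x, y, z])                        # x^mu
new = sp.Matrix([t + dT, x + dX, y + dY, z + dZ])    # X^lambda(x)

J = new.jacobian(old)                                # dX^lambda / dx^nu
Jinv = J.inv()                                       # dx^alpha / dX^lambda

def Gamma(alpha, mu, nu):
    """Gamma^alpha_{mu nu} = (dx^alpha/dX^lam) d_mu (dX^lam/dx^nu), eq. (4.14)."""
    return sp.simplify(sum(Jinv[alpha, lam] * sp.diff(J[lam, nu], old[mu])
                           for lam in range(4)))

print(Gamma(0, 0, 0))                                 # compare with (4.15)
print(sp.simplify(Gamma(0, 0, 0) + Gamma(0, 0, 3)))   # should be 0
print(Gamma(1, 0, 0))                                 # compare with (4.15)
```

The printed components follow the pattern of (4.15): only \(\Gamma^{\alpha}{}_{00}=\Gamma^{\alpha}{}_{33}=-\Gamma^{\alpha}{}_{03}=-\Gamma^{\alpha}{}_{30}\) are non-zero.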
If we assume \(A_{0}=0\), then the Noether superpotential is zero and \[\Delta T=\Delta Z+A_{1}\int f^{2}g^{2}dt+A_{2}, \tag{4.19}\] where \(A_{1}\) and \(A_{2}\) are constants of integration. \(\Delta Z\), \(\Delta X\), \(\Delta Y\) can be arbitrary functions, and \(\Delta T\) is connected with \(\Delta Z\) by (4.18) or by (4.19) in the case of zero Noether current. In STEGR, in contrast to TEGR, even with a very strong restriction on the connection and Noether superpotential, requiring dependence only on \(t-z\), we have reached the goal, that is, we have found a gauge with zero current for a freely moving observer.

## 5 Obukhov-Pereira-Rubilar gauge

In [20] and [19] the authors study the problem of the energy that can be carried by a flat-fronted gravitational wave. They use a different set of coordinates \((T,X,Y,Z)\), for which, in the \((-,+,+,+)\) signature, the metric of the gravitational wave solution has the form \[g_{\mu\nu}=\left(\begin{array}{cccc}-H(T-Z,X,Y)-1&0&0&H(T-Z,X,Y)\\ 0&1&0&0\\ 0&0&1&0\\ H(T-Z,X,Y)&0&0&-H(T-Z,X,Y)+1\end{array}\right). \tag{5.1}\] The Einstein equations in vacuum acquire the simple form: \[\frac{\partial^{2}H(T-Z,X,Y)}{\partial X^{2}}+\frac{\partial^{2}H(T-Z,X,Y)}{ \partial Y^{2}}=0. \tag{5.2}\] We discuss the results of [20] in Appendix B in detail. Here, we consider the frame suggested in [20] in the framework of our formalism. The reason is that the result of [20] with zero energetic characteristics could be interpreted as having a relation to the equivalence principle. Thus, Obukhov et al. [20] use a tetrad for the metric (5.1): \[h^{a}{}_{\mu}=\left(\begin{array}{cccc}\frac{1}{2}H(T-Z,X,Y)+1&0&0&-\frac{1}{ 2}H(T-Z,X,Y)\\ 0&1&0&0\\ 0&0&1&0\\ -\frac{1}{2}H(T-Z,X,Y)&0&0&\frac{1}{2}H(T-Z,X,Y)-1\end{array}\right). \tag{5.3}\] Here, we need to use the coordinates of (2.38). To transform the metric (5.1) to the metric (2.38) under consideration we use Formiga's [22] Eq. (2.32): \[\begin{array}{c}T=t+\frac{1}{2}\left(x^{2}f(U)\frac{df}{dU}+y^{2}g(U)\frac{ dg}{dU}\right),\\ Z=z+\frac{1}{2}\left(x^{2}f(U)\frac{df}{dU}+y^{2}g(U)\frac{dg}{dU}\right),\\ X=f(U)x,\;\;Y=g(U)y\\ U=T-Z=t-z,\\ H(U,X,Y)=-\frac{1}{f}\frac{d^{2}f}{dU^{2}}(X^{2}-Y^{2}).\end{array} \tag{5.4}\]

### Zero current in TEGR

After the coordinate transformation (5.4) \((T,X,Y,Z)\rightarrow(t,x,y,z)\) the metric (5.1) transforms to the diagonal form (2.38). The tetrad (5.3) transforms to the form \[h^{a}{}_{\mu}=\left(\begin{array}{cccc}\frac{1}{2}\left(x^{2}f^{\prime 2}+y^{2} g^{\prime 2}+2\right)&xff^{\prime}&ygg^{\prime}&\frac{1}{2}\left(-x^{2}f^{\prime 2}-y^{2 }g^{\prime 2}\right)\\ xf^{\prime}&f&0&-xf^{\prime}\\ yg^{\prime}&0&g&-yg^{\prime}\\ \frac{1}{2}\left(-x^{2}f^{\prime 2}-y^{2}g^{\prime 2}\right)&-xff^{\prime}&-ygg^{ \prime}&\frac{1}{2}\left(x^{2}f^{\prime 2}+y^{2}g^{\prime 2}-2\right)\end{array} \right). \tag{5.5}\] As was considered in [20], the ISC corresponding to the tetrad (5.3) was zero. Because the ISC transforms as a spacetime vector under spacetime transformations, the ISC corresponding to (5.5) remains zero.
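As a consistency check of (5.4), one can verify symbolically that it maps (5.1) to a diagonal metric. In the sketch below (an editorial illustration) two assumptions are made explicit: the target form of (2.38) is taken to be \(\mathrm{diag}(-1,f^{2},g^{2},1)\) with \(f,g\) functions of \(t-z\), and concrete test profiles \(f=\cos a(t-z)\), \(g=\cosh a(t-z)\) are used, which satisfy the vacuum condition \(gf^{\prime\prime}+fg^{\prime\prime}=0\) (our reading of (3.10)):

```python
# Sympy check that the change of variables (5.4) maps the metric (5.1)
# to a diagonal (Rosen-type) form.  The target form diag(-1, f^2, g^2, 1)
# and the explicit profiles f, g below are assumptions for this sketch.
import sympy as sp

t, x, y, z, a = sp.symbols('t x y z a', real=True)
u = t - z
f = sp.cos(a * u)           # test profile; f'' = -a^2 f
g = sp.cosh(a * u)          # test profile; g'' = +a^2 g  ->  g f'' + f g'' = 0

fp, gp = f.diff(t), g.diff(t)                       # derivatives w.r.t. u
T = t + (x**2 * f * fp + y**2 * g * gp) / 2
Z = z + (x**2 * f * fp + y**2 * g * gp) / 2
X, Y = f * x, g * y
H = -(f.diff(t, 2) / f) * (X**2 - Y**2)             # last line of (5.4)

gB = sp.Matrix([[-H - 1, 0, 0, H],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [H, 0, 0, 1 - H]])                  # metric (5.1)

J = sp.Matrix([T, X, Y, Z]).jacobian(sp.Matrix([t, x, y, z]))
print(sp.simplify(sp.expand(J.T * gB * J)))         # expect diag(-1, f^2, g^2, 1)
```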
Besides, the tetrad (5.5) is connected to the diagonal tetrad (3.1) by \(h^{a}{}_{\mu}=\Lambda^{a}{}_{b}h^{b}_{(diag)\mu}\), where the Lorentz rotation \(\Lambda^{a}{}_{b}\) is \[\Lambda^{a}{}_{b}=\left(\begin{array}{cccc}\frac{1}{2}\left(x^{2}f^{\prime 2 }+y^{2}g^{\prime 2}+2\right)&xf^{\prime}&yg^{\prime}&\frac{1}{2}\left(-x^{2}f^{ \prime 2}-y^{2}g^{\prime 2}\right)\\ xf^{\prime}&1&0&-xf^{\prime}\\ yg^{\prime}&0&1&-yg^{\prime}\\ \frac{1}{2}\left(-x^{2}f^{\prime 2}-y^{2}g^{\prime 2}\right)&-xf^{\prime}&-yg^{ \prime}&\frac{1}{2}\left(x^{2}f^{\prime 2}+y^{2}g^{\prime 2}-2\right)\end{array}\right). \tag{5.6}\] Thus, if one preserves zero ISC (3.7) with the tetrad (5.5) one obtains the pair presenting a new gauge which differs from the gauge of the tetrad (3.1) and zero ISC (3.7). We call it the Obukhov-Pereira-Rubilar gauge. For the new gauge, by (2.8) and (2.11), we get the teleparallel superpotential which, with all indices converted to coordinate ones, has the non-zero components: \[\begin{array}{c}\stackrel{{\bullet}}{{S}}_{0}{}^{01}=\stackrel{{ \bullet}}{{S}}_{0}{}^{31}=\stackrel{{\bullet}}{{S}}_{3}{}^{10}= \stackrel{{\bullet}}{{S}}_{3}{}^{13}=-\stackrel{{ \bullet}}{{S}}_{0}{}^{10}=-\stackrel{{\bullet}}{{S}}_{0}{}^{13}= -\stackrel{{\bullet}}{{S}}_{3}{}^{01}=-\stackrel{{ \bullet}}{{S}}_{3}{}^{31}=\frac{xf^{\prime\prime}}{f};\\ \stackrel{{\bullet}}{{S}}_{0}{}^{02}=\stackrel{{\bullet}}{{S}}_{ 0}{}^{32}=\stackrel{{\bullet}}{{S}}_{3}{}^{20}=\stackrel{{ \bullet}}{{S}}_{3}{}^{23}=-\stackrel{{\bullet}}{{S}}_{0}{}^{20}=- \stackrel{{\bullet}}{{S}}_{0}{}^{23}=-\stackrel{{ \bullet}}{{S}}_{3}{}^{02}=-\stackrel{{\bullet}}{{S}}_{3}{}^{32}= \frac{yg^{\prime\prime}}{g}.\end{array} \tag{5.7}\] Taking the freely falling observer's proper vector (3.8) in \((t,x,y,z)\) coordinates we get the Noether superpotential (2.31): \[\stackrel{{\bullet}}{{\mathcal{J}}}^{01}=\stackrel{{ \bullet}}{{\mathcal{J}}}^{31}=-\stackrel{{\bullet}}{{\mathcal{J}}} ^{10}=-\stackrel{{\bullet}}{{\mathcal{J}}}^{13}=\frac{xgf^{ \prime\prime}}{8\pi}; \tag{5.8}\] \[\stackrel{{\bullet}}{{\mathcal{J}}}^{02}=\stackrel{{ \bullet}}{{\mathcal{J}}}^{32}=-\stackrel{{\bullet}}{{\mathcal{J}}} ^{20}=-\stackrel{{\bullet}}{{\mathcal{J}}}^{23}=\frac{yfg^{ \prime\prime}}{8\pi}.\] Taking the divergence of the Noether superpotential we get the Noether current (2.32): \[\stackrel{{\bullet}}{{\mathcal{J}}}^{\mu}=\left\{\frac{gf^{\prime \prime}+fg^{\prime\prime}}{8\pi},0,0,\frac{gf^{\prime\prime}+fg^{\prime \prime}}{8\pi}\right\}. \tag{5.9}\] It is zero due to the Einstein equation (3.10).

### Zero current in STEGR

Switching off gravity for the coordinates in (2.38) in STEGR gives a gauge with the metric (2.38) and zero teleparallel connection (3.18). This gives the non-zero current (3.20), or (3.22). In the linear approximation it corresponds to the accepted energy and energy flux in (3.25), or (3.26). Here, we switch off gravity for the metric (5.1) in the coordinates \((T,X,Y,Z)\). In particular, this gives \(H(U,X,Y)=0\) and, correspondingly, a zero teleparallel connection. The metric (5.1) transformed by (2.37) goes to (2.38), and the zero teleparallel connection is transformed as well and should be calculated as \[\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial X^{\lambda}} \frac{\partial}{\partial x^{\mu}}\left(\frac{\partial X^{\lambda}}{\partial x ^{\nu}}\right), \tag{5.10}\] where \(x^{\mu}\equiv(t,x,y,z)\), \(X^{\mu}\equiv(T,X,Y,Z)\).
The non-zero components of the transformed connection are \[\begin{array}{c}\Gamma^{0}{}_{00}=\Gamma^{0}{}_{33}=\Gamma^{3}{}_{00}=\Gamma ^{3}{}_{33}=-\Gamma^{0}{}_{03}=-\Gamma^{0}{}_{30}=-\Gamma^{3}{}_{03}=-\Gamma^{3 }{}_{30}=\frac{1}{2}\left(x^{2}ff^{\prime\prime\prime}+x^{2}f^{\prime}f^{ \prime\prime}+y^{2}gg^{\prime\prime\prime}+y^{2}g^{\prime}g^{\prime\prime} \right);\\ \Gamma^{0}{}_{01}=\Gamma^{0}{}_{10}=\Gamma^{3}{}_{01}=\Gamma^{3}{}_{10}=- \Gamma^{0}{}_{13}=-\Gamma^{0}{}_{31}=-\Gamma^{3}{}_{13}=-\Gamma^{3}{}_{31}=xff ^{\prime\prime};\\ \Gamma^{0}{}_{02}=\Gamma^{0}{}_{20}=\Gamma^{3}{}_{02}=\Gamma^{3}{}_{20}=- \Gamma^{0}{}_{23}=-\Gamma^{0}{}_{32}=-\Gamma^{3}{}_{23}=-\Gamma^{3}{}_{32}=ygg ^{\prime\prime};\\ \Gamma^{0}{}_{11}=\Gamma^{3}{}_{11}=ff^{\prime};\\ \Gamma^{0}{}_{22}=\Gamma^{3}{}_{22}=gg^{\prime};\\ \Gamma^{1}{}_{00}=\Gamma^{1}{}_{33}=-\Gamma^{1}{}_{03}=-\Gamma^{1}{}_{30}= \frac{xf^{\prime\prime}}{f};\\ \Gamma^{1}{}_{01}=\Gamma^{1}{}_{10}=-\Gamma^{1}{}_{13}=-\Gamma^{1}{}_{31}= \frac{f^{\prime}}{f};\\ \Gamma^{2}{}_{00}=\Gamma^{2}{}_{33}=-\Gamma^{2}{}_{03}=-\Gamma^{2}{}_{30}= \frac{yg^{\prime\prime}}{g};\\ \Gamma^{2}{}_{02}=\Gamma^{2}{}_{20}=-\Gamma^{2}{}_{23}=-\Gamma^{2}{}_{32}= \frac{g^{\prime}}{g}.\end{array} \tag{5.11}\] Thus, the new STEGR gauge is presented by the pair of the coordinates in (2.38) with the connection (5.11). Using (5.11) we get the non-metricity (2.20): \[\begin{split} Q_{000}=Q_{033}=Q_{303}=Q_{330}=-Q_{003}=-Q_{030}=- Q_{300}=-Q_{333}=x^{2}ff^{\prime\prime\prime}+x^{2}f^{\prime}f^{\prime\prime}+y^{2}gg^ {\prime\prime\prime}+y^{2}g^{\prime}g^{\prime\prime};\\ Q_{100}=Q_{133}=-Q_{103}=-Q_{130}=2xff^{\prime\prime};\\ Q_{200}=Q_{233}=-Q_{203}=-Q_{230}=2ygg^{\prime\prime}.\end{split} \tag{5.12}\] Taking the freely falling observer's proper vector (3.8) in \((t,x,y,z)\) coordinates we get a zero Komar superpotential (2.33), which was found in previous sections; by (2.24) and (2.34) one has a zero additional part of the Noether superpotential. As a result, one has a zero total Noether superpotential in STEGR, which gives a zero Noether current.

## 6 Concluding remarks

In this article, using our formalism developed earlier [7, 8, 26, 27, 28] in TEGR and in STEGR, we have studied the problem of constructing conserved quantities (energy density and energy density flux) for a flat-fronted exact (strong) gravitational wave with only one polarization "+". Concerning TEGR, the applied formalism is a tensorial one and gives the possibility to construct both local and integral conserved quantities covariant under both coordinate transformations and local Lorentz rotations. To the best of our knowledge, STEGR has not yet been used for the construction of conserved quantities for gravitational waves. The crucial property of the formalism both in TEGR and in STEGR is that a displacement vector \(\xi^{\alpha}\) is included. Namely, the interpretation of conserved quantities is defined by \(\xi^{\alpha}\), which can be chosen as a Killing vector of spacetime, a proper vector of an observer, etc. Another important notion in the formalism [7, 8, 26, 27, 28] is a gauge, that is, equivalence classes of pairs (tetrad, ISC) in TEGR, or (coordinates, flat connection) in STEGR. To construct a physically sensible conserved quantity one has to find out a corresponding gauge [7, 8, 28]. In the present paper, we have found gauges for which the energetic characteristics of a flat gravitational wave measured by a freely falling observer are zero, which is in correspondence with the equivalence principle.
However, because the wave solution is not stationary, there are no timelike Killing vectors; formally, this means that it is impossible to construct an energy density. What is interesting is that the simplest gauge used gives non-zero results, which in the limit of a weak wave coincide with the related pseudotensor formulae. Analogous results obtained in the non-covariant formalism are discussed in [21, 22]. We should stress here that although freely falling masses can be used for detecting the non-zero energy of gravitational waves, at least two masses separated by a non-zero distance are needed. Even for the simpler case of the Friedmann–Robertson–Walker cosmological metric, consideration of distant masses makes energy issues more subtle, as has been shown, for example, in the Harrison paper with the remarkable title "Mining energy in an expanding Universe" [38]. However, in our study here we consider only one point moving geodesically in a gravitational wave metric; nevertheless, the obtained non-zero result has a physical sense at least for a weak wave. This means that the connection between the equivalence principle and the energy-momentum characteristics of a gravitational field studied here needs a deeper investigation.

## Appendix A Arbitrary Lorentz rotations

Here, we formally derive the arbitrary Lorentz rotation dependent on \((t-z)\) only, which can be applied only to the tetrad as in (2.12), preserving the ISC, or only to the ISC as in (2.13), preserving the tetrad. First, we present the compositions of simple Lorentz rotations dependent on \((t-z)\). It was noted in subsection 4.1 that no compositions of such Lorentz rotations applied to the tetrad only, preserving the ISC (or to the ISC only, preserving the tetrad), can change the Noether current. Then, we connect the compositions of simple Lorentz rotations dependent on \((t-z)\) only with an arbitrary Lorentz rotation dependent on \((t-z)\) only and conclude that, if no such compositions can change the Noether current, no such arbitrary Lorentz rotation can change the Noether current. An arbitrary Lorentz rotation matrix \(\Lambda^{a}{}_{b}(t-z)\) of the \(SO(1,3)\) group can be expressed through the \(so(1,3)\) algebra as \[\Lambda^{a}{}_{b}(t-z)=\exp(J_{1}\alpha_{1}(t-z)+J_{2}\alpha_{2}(t-z)+J_{3} \alpha_{3}(t-z)+K_{1}\beta_{1}(t-z)+K_{2}\beta_{2}(t-z)+K_{3}\beta_{3}(t-z)),\] (A.13) where \(\alpha_{i}(t-z),\beta_{i}(t-z)\)\((i=1,2,3)\) are arbitrary functions and the generators of the algebra \(so(1,3)\) are \[\begin{split} J_{1}=i\left(\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{array}\right),&J_{2}=i\left(\begin{array}{cccc}0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&-1&0&0\end{array}\right),&J_{3}=i\left(\begin{array}{cccc}0&0&0&0\\ 0&0&-1&0\\ 0&1&0&0\\ 0&0&0&0\end{array}\right),\\ K_{1}=i\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),&K_{2}=i\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0\end{array}\right),&K_{3}=i\left(\begin{array}{cccc}0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\end{array}\right).\end{split}\] (A.14) It is practically difficult to calculate the matrix exponential (A.13) directly.
Therefore, we assume that there exist functions \(\bar{\alpha}_{1}(t-z)\), \(\bar{\alpha}_{2}(t-z)\), \(\bar{\alpha}_{3}(t-z)\), \(\bar{\beta}_{1}(t-z)\), \(\bar{\beta}_{2}(t-z)\), \(\bar{\beta}_{3}(t-z)\) such that \[\exp(J_{1}\alpha_{1}+J_{2}\alpha_{2}+J_{3}\alpha_{3}+K_{1}\beta_{1}+K_{2}\beta _{2}+K_{3}\beta_{3})=\exp(J_{1}\bar{\alpha}_{1})\exp(J_{2}\bar{\alpha}_{2}) \exp(J_{3}\bar{\alpha}_{3})\exp(K_{1}\bar{\beta}_{1})\exp(K_{2}\bar{\beta}_{2}) \exp(K_{3}\bar{\beta}_{3}),\] (A.15) where \(\bar{\alpha}_{i}\equiv\bar{\alpha}_{i}(t-z)\), \(\bar{\beta}_{i}\equiv\bar{\beta}_{i}(t-z)\), \(\alpha_{i}\equiv\alpha_{i}(t-z)\), \(\beta_{i}\equiv\beta_{i}(t-z)\), \(i=1,2,3\). \(\bar{\alpha}_{i}\), \(\bar{\beta}_{i}\) can be connected with \(\alpha_{i},\ \beta_{i}\) using the Baker-Campbell-Hausdorff formula [39]. This formula gives a direct expression for the matrix \(Z\) in \(\exp(X)\exp(Y)=\exp(Z)\), as a function \(Z=Z(X,Y)=\log(\exp X\exp Y)\). Applying this formula to each term under the exponent in (A.15) sequentially, we can get the exact expressions of the functions \(\bar{\alpha}_{i},\bar{\beta}_{i}\) in terms of \(\alpha_{i},\beta_{i}\). We do not need to make these calculations explicitly here. It is enough for us to know that \(\bar{\alpha}_{i},\bar{\beta}_{i}\) can be expressed explicitly through \(\alpha_{i},\beta_{i}\), and then to take the Lorentz rotation \(\Lambda^{a}{}_{b}(t-z)\) as a composition of simple Lorentz rotations: \[\begin{split}\Lambda^{a}{}_{b}(t-z)=\exp(J_{1}\bar{\alpha}_{1}) \exp(J_{2}\bar{\alpha}_{2})\exp(J_{3}\bar{\alpha}_{3})\exp(K_{1}\bar{\beta}_{1}) \exp(K_{2}\bar{\beta}_{2})\exp(K_{3}\bar{\beta}_{3})=\\ \Lambda_{(\bar{\alpha}_{1})}(t-z)\Lambda_{(\bar{\alpha}_{2})}(t-z) \Lambda_{(\bar{\alpha}_{3})}(t-z)\Lambda_{(\bar{\beta}_{1})}(t-z)\Lambda_{( \bar{\beta}_{2})}(t-z)\Lambda_{(\bar{\beta}_{3})}(t-z)\end{split}\] (A.16) where \(\bar{\alpha}_{i}\), \(\bar{\beta}_{i}\), \(i=1,2,3\) are arbitrary functions of \((t-z)\) and \[\Lambda_{(\bar{\alpha}_{1})}(t-z)=\exp(J_{1}\bar{\alpha}_{1}(t-z))= \left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos[\bar{\alpha}_{1}(t-z)]&-\sin[\bar{\alpha}_{1}(t-z)]\\ 0&0&\sin[\bar{\alpha}_{1}(t-z)]&\cos[\bar{\alpha}_{1}(t-z)]\\ \end{array}\right),\] \[\Lambda_{(\bar{\alpha}_{2})}(t-z)=\exp(J_{2}\bar{\alpha}_{2}(t-z))= \left(\begin{array}{cccc}1&0&0&0\\ 0&\cos[\bar{\alpha}_{2}(t-z)]&0&\sin[\bar{\alpha}_{2}(t-z)]\\ 0&0&1&0\\ 0&-\sin[\bar{\alpha}_{2}(t-z)]&0&\cos[\bar{\alpha}_{2}(t-z)]\\ \end{array}\right),\] \[\Lambda_{(\bar{\alpha}_{3})}(t-z)=\exp(J_{3}\bar{\alpha}_{3}(t-z))= \left(\begin{array}{cccc}1&0&0&0\\ 0&\cos[\bar{\alpha}_{3}(t-z)]&-\sin[\bar{\alpha}_{3}(t-z)]&0\\ 0&\sin[\bar{\alpha}_{3}(t-z)]&\cos[\bar{\alpha}_{3}(t-z)]&0\\ 0&0&0&1\\ \end{array}\right),\] \[\Lambda_{(\bar{\beta}_{1})}(t-z)=\exp(K_{1}\bar{\beta}_{1}(t-z))= \left(\begin{array}{cccc}\cosh[\bar{\beta}_{1}(t-z)]&\sinh[ \bar{\beta}_{1}(t-z)]&0&0\\ \sinh[\bar{\beta}_{1}(t-z)]&\cosh[\bar{\beta}_{1}(t-z)]&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{array}\right),\] \[\Lambda_{(\bar{\beta}_{2})}(t-z)=\exp(K_{2}\bar{\beta}_{2}(t-z))= \left(\begin{array}{cccc}\cosh[\bar{\beta}_{2}(t-z)]&0&\sinh[ \bar{\beta}_{2}(t-z)]&0\\ 0&1&0&0\\ \sinh[\bar{\beta}_{2}(t-z)]&0&\cosh[\bar{\beta}_{2}(t-z)]&0\\ 0&0&0&1\\ \end{array}\right),\] \[\Lambda_{(\bar{\beta}_{3})}(t-z)=\exp(K_{3}\bar{\beta}_{3}(t-z))= \left(\begin{array}{cccc}\cosh[\bar{\beta}_{3}(t-z)]&0&0&\sinh[ \bar{\beta}_{3}(t-z)]\\ 0&1&0&0\\ 0&0&1&0\\ \sinh[\bar{\beta}_{3}(t-z)]&0&0&\cosh[\bar{\beta}_{3}(t-z)]\\ \end{array}\right).\] Arbitrary compositions of the Lorentz rotations \(\Lambda_{(\bar{\alpha}_{i})}(t-z)\) and \(\Lambda_{(\bar{\beta}_{j})}(t-z)\) (\(i,j=1,2,3\)) are used in the calculations.
## Appendix B Not freely falling tetrad

Here, we show that the tetrad (5.3) taken in [20] is not a freely falling tetrad and, thus, the observer that is at rest in the frame (5.3) is not freely falling. The inverse of the tetrad (5.3) is \[{h_{a}}^{\mu}=\left(\begin{array}{cccc}-1+\frac{1}{2}H(T-Z,X,Y)&0&0&\frac{1} {2}H(T-Z,X,Y)\\ 0&1&0&0\\ 0&0&1&0\\ \frac{1}{2}H(T-Z,X,Y)&0&0&\frac{1}{2}H(T-Z,X,Y)+1\\ \end{array}\right).\] (B.1) The time-like tetrad vector (which is taken as the observer's 4-velocity) is \[{h_{0}}^{\mu}=\left\{-1+\frac{1}{2}H(T-Z,X,Y),0,0,\frac{1}{2}H(T-Z,X,Y)\right\},\] (B.2) and, in [20], it was assumed that (B.2) is equal to the observer's 4-velocity, i.e. the observer is at rest in the frame (5.3). The Levi-Civita connection for (5.1) is \[\begin{array}{c}\stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{00}= \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{33}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{00}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {33}=\frac{1}{2}H^{(1,0,0)}(T-Z,X,Y);\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{03}=\stackrel{{ \circ}}{{\Gamma}}{}^{0}{}_{30}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {03}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_{30}=-\frac{1}{2}H^{(1,0,0)}(T-Z,X,Y);\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{01}=\stackrel{{ \circ}}{{\Gamma}}{}^{0}{}_{10}=\stackrel{{\circ}}{{\Gamma}}{}^{1}{}_ {00}=\stackrel{{\circ}}{{\Gamma}}{}^{1}{}_{33}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{01}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {10}=\frac{1}{2}H^{(0,1,0)}(T-Z,X,Y);\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{13}=\stackrel{{ \circ}}{{\Gamma}}{}^{0}{}_{31}=\stackrel{{\circ}}{{\Gamma}}{}^{1}{}_ {03}=\stackrel{{\circ}}{{\Gamma}}{}^{1}{}_{30}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{13}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {31}=-\frac{1}{2}H^{(0,1,0)}(T-Z,X,Y);\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{02}=\stackrel{{ \circ}}{{\Gamma}}{}^{0}{}_{20}=\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_ {00}=\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_{33}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{02}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {20}=\frac{1}{2}H^{(0,0,1)}(T-Z,X,Y);\\ \stackrel{{\circ}}{{\Gamma}}{}^{0}{}_{23}=\stackrel{{ \circ}}{{\Gamma}}{}^{0}{}_{32}=\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_ {03}=\stackrel{{\circ}}{{\Gamma}}{}^{2}{}_{30}=\stackrel{{ \circ}}{{\Gamma}}{}^{3}{}_{23}=\stackrel{{\circ}}{{\Gamma}}{}^{3}{}_ {32}=-\frac{1}{2}H^{(0,0,1)}(T-Z,X,Y),\end{array}\] (B.3) where \(U\equiv T-Z\), \(H^{(1,0,0)}(U,X,Y)=\frac{\partial H(U,X,Y)}{\partial U}\), \(H^{(0,1,0)}(U,X,Y)=\frac{\partial H(U,X,Y)}{\partial X}\), \(H^{(0,0,1)}(U,X,Y)=\frac{\partial H(U,X,Y)}{\partial Y}\). The geodesic equation has the form: \[l^{\mu}\equiv\frac{d^{2}x^{\mu}}{d\tau^{2}}+\stackrel{{\circ}}{{ \Gamma}}{}^{\mu}{}_{\alpha\beta}\frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d \tau}\equiv\frac{du^{\mu}}{d\tau}+\stackrel{{\circ}}{{\Gamma}}{}^ {\mu}{}_{\alpha\beta}u^{\alpha}u^{\beta}\equiv u^{\kappa}\frac{\partial u^{\mu }}{\partial x^{\kappa}}+\stackrel{{\circ}}{{\Gamma}}{}^{\mu}{}_{ \alpha\beta}u^{\alpha}u^{\beta}=0,\] (B.4) where \(u^{\alpha}\) is the observer's 4-velocity and the parameter \(\tau\) can be taken as the proper time, because the observer moves along time-like curves. Let's substitute (B.2) identified with \(u^{\alpha}\) and (B.3) into the left-hand side of the equation (B.4).
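This substitution is straightforward to carry out with computer algebra; a minimal sympy sketch (an editorial illustration, not the authors' code) computing the Christoffel symbols of (5.1) and the left-hand side of (B.4) for the 4-velocity (B.2) is:

```python
# Compute the Christoffel symbols (B.3) of the metric (5.1) and evaluate
# l^mu of (B.4) for u^mu = h_0^mu given in (B.2).
import sympy as sp

T, X, Y, Z = sp.symbols('T X Y Z')
H = sp.Function('H')(T - Z, X, Y)

gmat = sp.Matrix([[-H - 1, 0, 0, H],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [H, 0, 0, 1 - H]])
ginv = gmat.inv()
xs = [T, X, Y, Z]

def christoffel(a, m, n):
    return sp.simplify(sum(ginv[a, b] * (sp.diff(gmat[b, n], xs[m])
                                         + sp.diff(gmat[b, m], xs[n])
                                         - sp.diff(gmat[m, n], xs[b])) / 2
                           for b in range(4)))

uvec = [-1 + H / 2, 0, 0, H / 2]          # the 4-velocity (B.2)

def l(mu):
    # u^k d_k u^mu + Gamma^mu_{ab} u^a u^b: the left-hand side of (B.4)
    term1 = sum(uvec[k] * sp.diff(uvec[mu], xs[k]) for k in range(4))
    term2 = sum(christoffel(mu, a, b) * uvec[a] * uvec[b]
                for a in range(4) for b in range(4))
    return sp.simplify(term1 + term2)

print([l(mu) for mu in range(4)])   # compare with (B.5)
```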
If we obtained zero, the observer would be freely falling; however, for a general \(H(U,X,Y)\) we obtain a non-zero result: \[l^{\mu}=\left\{0,\frac{1}{2}H^{(0,1,0)}(U,X,Y),\frac{1}{2}H^{(0,0,1)}(U,X,Y),0 \right\}.\] (B.5) So, the observer in [20] is not freely falling. Therefore, the zero energetic characteristics of the wave obtained in [20] with the use of (2.16) (or (2.17)) cannot be interpreted as a correspondence with the equivalence principle. **Acknowledgments** AP has been supported by the Interdisciplinary Scientific and Educational School of Moscow University "Fundamental and Applied Space Research"; EE and AT are supported by RSF grant 21-12-00130. AT also thanks the Russian Government Program of Competitive Growth of Kazan Federal University. The authors are grateful to Alexei Starobinsky for the idea to consider gravitational wave solution in teleparallel gravity under the application of the Noether formalism.
2302.05320
Bayesian modeling with spatial curvature processes
Spatial process models are widely used for modeling point-referenced variables arising from diverse scientific domains. Analyzing the resulting random surface provides deeper insights into the nature of latent dependence within the studied response. We develop Bayesian modeling and inference for rapid changes on the response surface to assess directional curvature along a given trajectory. Such trajectories or curves of rapid change, often referred to as \emph{wombling} boundaries, occur in geographic space in the form of rivers in a flood plain, roads, mountains or plateaus or other topographic features leading to high gradients on the response surface. We demonstrate fully model based Bayesian inference on directional curvature processes to analyze differential behavior in responses along wombling boundaries. We illustrate our methodology with a number of simulated experiments followed by multiple applications featuring the Boston Housing data; Meuse river data; and temperature data from the Northeastern United States.
Aritra Halder, Sudipto Banerjee, Dipak K. Dey
2023-02-10T15:29:36Z
http://arxiv.org/abs/2302.05320v2
# Bayesian Modeling with Spatial Curvature Processes ###### Abstract Spatial process models are widely used for modeling point-referenced variables arising from diverse scientific domains. Analyzing the resulting random surface provides deeper insights into the nature of latent dependence within the studied response. We develop Bayesian modeling and inference for rapid changes on the response surface to assess directional curvature along a given trajectory. Such trajectories or curves of rapid change, often referred to as _wombling_ boundaries, occur in geographic space in the form of rivers in a flood plain, roads, mountains or plateaus or other topographic features leading to high gradients on the response surface. We demonstrate fully model based Bayesian inference on directional curvature processes to analyze differential behavior in responses along wombling boundaries. We illustrate our methodology with a number of simulated experiments followed by multiple applications featuring the Boston Housing data; Meuse river data; and temperature data from the Northeastern United States. _Keywords--_ Bayesian modeling, Directional Curvature, Gaussian Processes, Wombling. ## 1 Introduction Spatial data science manifests in a variety of domains including environmental and geographical information systems (GIS) (Webster & Oliver 2007, Burrough et al. 2015, Schabenberger & Gotway 2017, Plant 2018), digital cartography and terrain modeling (Law et al. 2000, Santner et al. 2003, Jones 2014, Vaughan 2018), imaging (Winkler 2003, Chiu et al. 2013, Dryden & Mardia 2016), spatial econometrics and land use (LeSage & Pace 2009), public health and epidemiology (Elliot et al. 2000, Waller & Gotway 2004, Lawson 2013) and public policy (Haining 1993, Wise & Craglia 2007). Spatial data analysis seeks to estimate an underlying spatial surface representing the process generating the data. Specific inferential interest resides with local features of the surface including rates of change of the process at points and along "spatial boundaries" to understand the behavior of the underlying process and identify lurking explanatory variables or risk factors. This exercise is often referred to as "wombling", named after a seminal paper by Womble (1951); (also see Gleyze et al. 2001). For regionally aggregated data, it identifies boundaries delineating neighboring regions and has been used to study health disparities (Lu & Carlin 2005, Li et al. 2015, Gao et al. 2022) and ecological boundaries (Fitzpatrick et al. 2010). For point-referenced data, where variables are mapped at locations within an Euclidean coordinate frame with a sufficiently smooth spatial surface, it refers to estimating spatial gradients and identifying boundaries representing large gradients (Banerjee et al. 2003, Banerjee & Gelfand 2006, Qu et al. 2021). Our current contribution develops Bayesian inference for spatial curvature along curves on Euclidean domains. Modeling curvature will require smoothness considerations of the process (Adler 1981, Kent 1989, Stein 1999, Banerjee & Gelfand 2003). Observations over a finite set of locations from these processes cannot visually inform about smoothness. Therefore, smoothness of the process is specified from mechanistic considerations which can be introduced through prior specifications as needed. While Bayesian inference for first order derivatives and directional gradients have received considerable attention (see, e.g., Morris et al. 1993, Banerjee et al. 2003, Majumdar et al. 2006, Liang et al. 
2009, Heaton 2014, Terres & Gelfand 2015, Wang & Berger 2016, Terres & Gelfand 2016, Wang et al. 2018, Qu et al. 2021, for inferential developments involving spatial gradients from diverse modeling and application perspectives) such processes inform about directional change, but do not enable inference on curvature (departure from flatness) of the spatial surface. Analyzing surface roughness from sampling considerations can be traced at least as far back as Greenwood (1984). We offer full inference with uncertainty quantification about spatial curvature at a point and average curvature along a curve from observed data after accounting for explanatory variables. Considering second-order finite differences we establish a valid spatial curvature process as a limit of such finite difference processes. When formulating directional curvature, we favor the normal direction corresponding to a chosen curve and devise a "wombling" measure to track curvature of the surface along the curve. We derive and exploit analytical expressions of higher order processes to avoid numerical finite differences. The Bayesian inferential framework delivers exact posterior inference for the above constructs on the response as well as latent (or residual) processes. Section 2 develops the directional curvature processes through a differential operator. Section 3 develops the vector analytic framework for curvilinear wombling using curvature processes. Section 4 builds a hierarchical model to exploit the preceding distribution theory and conduct curvature analysis on the response and the latent process. Section 5 presents detailed simulation experiments for assessing directional gradients and curvatures. Section 6 considers applications to three different data sets: Boston housing data, Meuse river data, and Northeastern US Temperatures (the third data is presented in the Supplement). ## 2 Spatial Curvature Processes Let \(\{Y(\mathbf{s}):\mathbf{s}\in\mathcal{S}\subset\mathbb{R}^{d}\}\) be a univariate weakly stationary random field with zero mean, finite second moment and a positive definite covariance function \(K(\mathbf{s},\mathbf{s}^{\prime})=\text{Cov}\left(Y(\mathbf{s}),Y(\mathbf{s}^{ \prime})\right)\) for locations \(\mathbf{s},\mathbf{s}^{\prime}\in\mathbb{R}^{d}\). In particular, under _isotropy_ we assume \(K(\mathbf{s},\mathbf{s}^{\prime})=\widetilde{K}\left(||\mathbf{s}-\mathbf{s}^{ \prime}||\right)\), where \(||\mathbf{s}-\mathbf{s}^{\prime}||\) is the Euclidean distance between the locations \(\mathbf{s},\mathbf{s}^{\prime}\)(Matern, 2013). Building upon notions of mean square smoothness (see, e.g., Stein, 1999) at an arbitrary location \(\mathbf{s}_{0}\) in \(\mathbb{R}^{d}\), we focus upon second order differentiability, \(Y(\mathbf{s}_{0}+h\mathbf{u})=Y(\mathbf{s}_{0})+h\mathbf{u}^{\top}\nabla Y( \mathbf{s}_{0})+h^{2}\mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s}_{0})\mathbf{u}/2+ r_{2}(\mathbf{s}_{0},h^{2}||\mathbf{u}||)\), where \(r_{2}(\mathbf{s}_{0},h^{2}||\mathbf{u}||)/h^{2}\to 0\) as \(h\to 0\) in the \(L_{2}\) sense and \(\nabla\) and \(\nabla^{2}\) are the gradient and Hessian operators, respectively. For the scalar \(h\) and unit vectors \(\mathbf{u}\), \(\mathbf{v}\), we define \(Y^{(2)}_{\mathbf{u},\mathbf{v},h}(\mathbf{s}_{0})=(Y(\mathbf{s}_{0}+h(\mathbf{u }+\mathbf{v}))-Y(\mathbf{s}_{0}+h\mathbf{u})-Y(\mathbf{s}_{0}+h\mathbf{v})+Y( \mathbf{s}_{0}))/h^{2}\) to be the second order finite difference processes in the directions \(\mathbf{u}\), \(\mathbf{v}\) at scale \(h\). 
Being a linear function of stationary processes it is well-defined. Passing to limits, \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s}_{0})=\lim_{h\to 0}Y^{(2)}_{ \mathbf{u},\mathbf{v},h}(\mathbf{s}_{0})\). Provided the limit exists, \(D^{(2)}_{\mathbf{u},\mathbf{u}}Y(\mathbf{s}_{0})\) is defined as the directional curvature process. If \(Y(\mathbf{s})\) is a mean square second order differentiable process in \(\mathbb{R}^{d}\) for every \(\mathbf{s}\in\mathbb{R}^{d}\) then \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})=\mathbf{u}^{\top}\nabla^{2}Y( \mathbf{s})\mathbf{v}\) is well-defined with \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})=\lim_{h\to 0}\left(h^{2} \mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{v}+\widetilde{r_{2}}\right)/ h^{2}=\mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{v}\), where \(\widetilde{r_{2}}=r_{2}(\mathbf{s},h^{2}||\mathbf{u}+\mathbf{v}||)-r_{2}( \mathbf{s},h^{2}||\mathbf{u}||)-r_{2}(\mathbf{s},h^{2}||\mathbf{v}||)\). In practice, we need only work with computing these derivatives for an orthonormal basis of \(\mathbb{R}^{d}\), say the Euclidean canonical unit vectors along each axis \(\{\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\}\). If \(\mathbf{u}=\sum_{i=1}^{d}u_{i}\mathbf{e}_{i}\), and \(\mathbf{v}=\sum_{i=1}^{d}v_{i}\mathbf{e}_{i}\) are arbitrary unit vectors, we can compute \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})=\sum_{i=1}^{d}\sum_{j=1}^{d}u_{i} D^{(2)}_{\mathbf{e}_{i},\mathbf{e}_{j}}Y(\mathbf{s})v_{j}\). The directional curvature process is linear in the sense that \(D^{(2)}_{-\mathbf{u},-\mathbf{v}}Y(\mathbf{s})=D^{(2)}_{\mathbf{u},\mathbf{v} }Y(\mathbf{s})\), \(D^{(2)}_{\mathbf{u},-\mathbf{v}}Y(\mathbf{s})=D^{(2)}_{-\mathbf{u},\mathbf{v} }Y(\mathbf{s})=-D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})\). Since \(D^{(2)}_{\mathbf{w},\mathbf{w}}Y(\mathbf{s})=||\mathbf{w}||^{2}D^{(2)}_{ \mathbf{u},\mathbf{u}}Y(\mathbf{s})\), where \(\mathbf{w}=||\mathbf{w}||\mathbf{u}\) and \(\mathbf{u}\) is a unit direction, we henceforth only consider unit directions. First order directional gradient processes, \(D^{(1)}_{\mathbf{u}}Y(\mathbf{s})\), are reviewed in Banerjee & Gelfand (2006) and in Section S1.1 of the Supplement. Choosing a direction is emphasized with respect to interpreting the directional curvature processes. Directional curvature is the change in the normal to the surface \(Y(\mathbf{s})\) at \(\mathbf{s}_{0}\) when moving along a slice of the surface in the direction \(\mathbf{w}\). The associated algebraic sign locally classifies the nature of curvature at \(\mathbf{s}_{0}\)--for instance, convex or concave ellipsoids (see Stevens 1981). A detailed discussion, with illustration, is available in Section S2 of the Supplement. Since \(\nabla^{2}Y(\mathbf{s})\) is a symmetric matrix, to avoid singularities arising from duplication we modify \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})\) as follows. If \(vech\) is the usual half-vectorization operator for symmetric matrices and \(\mathcal{D}_{d}\) is the duplication matrix (Magnus & Neudecker 1980) of order \(d^{2}\times d(d+1)/2\) then, \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})=\mathbf{c}^{\top}_{\mathbf{u}, \mathbf{v}}vech\left(\nabla^{2}Y(\mathbf{s})\right)\) where \(\mathbf{c}^{\top}_{\mathbf{u},\mathbf{v}}=(\mathbf{u}\otimes\mathbf{v})^{ \top}\mathcal{D}_{d}\) and \(\otimes\) is the Kronecker product for matrices. 
If \(\mathbf{u}=(u_{1},u_{2})^{\top},\mathbf{v}=(v_{1},v_{2})^{\top}\in\mathbb{R}^{2}\), then \(\mathbf{c_{u,v}}=\left(\mathbf{u}\otimes\mathbf{v}\right)^{\top}\mathcal{D}_{2}= \left(v_{1}u_{1},v_{1}u_{2}+v_{2}u_{1},v_{2}u_{2}\right)^{\top}\). The process \(vech\left(\nabla^{2}Y(\mathbf{s})\right)\) in \(\mathbb{R}^{d(d+1)/2}\) consists of the pure and mixed second order derivatives in \(\nabla^{2}Y(\mathbf{s})\). The distributions needed for inference on directional curvature processes depend on \(vech\left(\nabla^{2}Y(\mathbf{s})\right)\) rather than \(\nabla^{2}Y(\mathbf{s})\). We refer to \((\nabla Y(\mathbf{s})^{\top},vech(\nabla^{2}Y(\mathbf{s}))^{\top})^{\top}\) as the differential process and \(\{\mathbf{u}^{\top}\nabla Y(\mathbf{s})\), \(\mathbf{c_{u,u}^{\top}}vech(\nabla^{2}Y(\mathbf{s}))\}\) as the directional differential processes induced by \(Y(\mathbf{s})\) along \(\mathbf{u}\). Inference for differential processes requires \(\left(Y(\mathbf{s}),\nabla Y(\mathbf{s})^{\top},vech(\nabla^{2}Y(\mathbf{s}) )^{\top}\right)\) to be a valid multivariate process. Its existence is derived from the limit of corresponding finite difference approximations, which yields the cross-covariance matrix depending on fourth (and lower) order derivatives of \(K\). We investigate the parent and differential processes using a differential operator \(\mathcal{L}:\mathbb{R}^{1}\rightarrow\mathbb{R}^{m}\), \(m=1+d+d(d+1)/2\), where \(\mathcal{L}Y=\left(Y,\nabla Y^{\top},vech(\nabla^{2}Y)^{\top}\right)^{\top}\). The resulting process \(\mathcal{L}Y(\mathbf{s})\) is also stationary with a zero mean and a cross-covariance matrix \[V_{\mathcal{L}Y}(\Delta)=\begin{pmatrix}&K(\Delta)&-(\nabla K(\Delta))^{\top }&vech(\nabla^{2}K(\Delta))^{\top}\\ &\nabla K(\Delta)&-\nabla^{2}K(\Delta)&\nabla^{3}K(\Delta)^{\top}\\ &vech(\nabla^{2}K(\Delta))&-\nabla^{3}K(\Delta)&\nabla^{4}K(\Delta)\end{pmatrix}\, \tag{1}\] where \(\Delta=\mathbf{s}-\mathbf{s}^{\prime}\), \(\nabla K(\Delta)\) is the \(d\times 1\) gradient, \(\nabla^{2}K(\Delta)\) is the \(d\times d\) Hessian, \(\nabla^{3}K(\Delta)\) is the \(d(d+1)/2\times d\) matrix of third derivatives and \(\nabla^{4}K(\Delta)\) is the \(d(d+1)/2\times d(d+1)/2\) matrix of fourth order derivatives associated with \(K(\Delta)\). Under isotropy, \(\nabla K(\Delta)=\frac{\nabla\tilde{K}(\|\Delta\|)}{\|\Delta\|}\Delta\), if \(A_{0}=\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(|| \Delta||)}{||\Delta||}\right)\) then, \(\nabla^{2}K(\Delta)=\frac{\nabla\widetilde{K}(||\Delta||)}{\|\Delta\|}I_{d}+ A_{0}\frac{\Delta\Delta^{\top}}{\|\Delta\|^{2}}\), \(\nabla^{3}K(\Delta)=A_{0}\bigg{\{}\frac{vech(A_{d})^{\top}\otimes\Delta}{\| \Delta\|^{2}}-3\frac{vech(\Delta\Delta^{\top})^{\top}\otimes\Delta}{\|\Delta\| ^{4}}+\frac{1}{\|\Delta\|^{2}}\left(\frac{\partial vech(\Delta\Delta^{\top}) ^{\top}}{\partial\Delta}\right)\bigg{\}}+\nabla^{3}\widetilde{K}(||\Delta||) \cdot\frac{vech(\Delta\Delta^{\top})^{\top}\otimes\Delta}{\|\Delta\|^{3}}\), where \(\widetilde{K}(\Delta)\) and its derivatives are analytically computed for our covariance functions of interest in Section S3 of the Supplement. 
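The identity \(D^{(2)}_{\mathbf{u},\mathbf{v}}Y(\mathbf{s})=\mathbf{c}^{\top}_{\mathbf{u},\mathbf{v}}vech\left(\nabla^{2}Y(\mathbf{s})\right)\) and the displayed two-dimensional expression for \(\mathbf{c_{u,v}}\) are easily verified numerically; a small sketch (our own illustration, not code from the paper) is:

```python
# Numerical check of  u^T (Hess Y) v = c_{u,v}^T vech(Hess Y)  with
# c_{u,v} = (u x v)^T D_2, and of the explicit 2-d expression for c_{u,v}.
import numpy as np

# duplication matrix D_2: vec(S) = D_2 vech(S) for a symmetric 2 x 2 matrix S
D2 = np.array([[1, 0, 0],
               [0, 1, 0],
               [0, 1, 0],
               [0, 0, 1]], dtype=float)

rng = np.random.default_rng(1)
u = rng.normal(size=2); u /= np.linalg.norm(u)
v = rng.normal(size=2); v /= np.linalg.norm(v)
Hsym = rng.normal(size=(2, 2)); Hsym = (Hsym + Hsym.T) / 2   # stand-in Hessian

vechH = np.array([Hsym[0, 0], Hsym[1, 0], Hsym[1, 1]])
c_uv = np.kron(u, v) @ D2

print(np.allclose(u @ Hsym @ v, c_uv @ vechH))                            # True
print(np.allclose(c_uv, [v[0]*u[0], v[0]*u[1] + v[1]*u[0], v[1]*u[1]]))   # True
```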
Let \(A_{1}=\frac{\partial\Delta\otimes vech(I_{d})^{\top}}{\partial\Delta}\), \(A_{2}=\frac{\partial\Delta\otimes vech(\Delta\Delta^{\top})^{\top}}{\partial \Delta}\), \(A_{3}=\frac{\partial}{\partial\Delta}\left(\frac{\partial vech(\Delta\Delta^{ \top})^{\top}}{\partial\Delta}\right)\) be reordered tensors (matrices) of order \(d(d+1)/2\times d(d+1)/2\) conforming to the order of corresponding elements in \(vech\). Let \(A_{4}\) be the element-wise product of \(\Delta\) with \(\left(\frac{\partial vech(\Delta\Delta^{\top})}{\partial\Delta}\right)\) in the same order, \(B_{1}=vech(\Delta\Delta^{\top})vech(I_{d})^{\top}\) and \(B_{2}=vech(\Delta\Delta^{\top})vech(\Delta\Delta^{\top})^{\top}\). Then, \(\nabla^{4}K(\Delta)\) is, \[\begin{split} A_{0}\left\{\frac{A_{1}}{||\Delta||^{2}}& -3\frac{A_{2}}{||\Delta||^{4}}+\frac{A_{3}}{||\Delta||^{2}}-(1+A_{4}) \left(\frac{2B_{1}}{||\Delta||^{4}}+\frac{B_{1}}{||\Delta||^{3}}\right)+3 \left(\frac{4B_{2}}{||\Delta||^{6}}+\frac{B_{2}}{||\Delta||^{5}}\right)\right\} \\ &\qquad\qquad+\nabla^{3}\widetilde{K}(||\Delta||)\Bigg{(}\frac{B _{1}}{||\Delta||^{3}}+\frac{A_{2}}{||\Delta||^{3}}+\frac{A_{4}}{||\Delta||^{3} }-6\frac{B_{2}}{||\Delta||^{5}}\Bigg{)}+\nabla^{4}\widetilde{K}(||\Delta||) \frac{B_{2}}{||\Delta||^{4}}\;.\end{split} \tag{2}\] The resulting multivariate differential process, \(\mathcal{L}Y\), is stationary but not isotropic. Evidently, for the differential operator to be well-defined under isotropy, \(\nabla^{4}K(\mathbf{0})\) must exist since \(var(D^{(2)}_{\mathbf{u},\mathbf{u}}Y(\mathbf{s}))=\nabla^{4}\widetilde{K}( \mathbf{0})\) (analogous to results in Banerjee et al. 2003, Section 3). The directional differential operator is defined analogously as \(\mathcal{L}_{\mathbf{u}}Y(\mathbf{s})\) such that \(\mathcal{L}_{\mathbf{u}}:\mathbb{R}\rightarrow\mathbb{R}^{3}\). If \(a_{0}=\Big{(}1-\frac{(\mathbf{u}^{\top}\Delta)^{2}}{||\Delta||^{2}}\Big{)}\), then analogous to (2) the covariance function of the directional curvature process, \(\text{Cov}\left(\mathbf{c}^{\top}_{\mathbf{u},\mathbf{u}}vech(\nabla^{2}Y( \mathbf{s})),\mathbf{c}^{\top}_{\mathbf{u},\mathbf{u}}vech(\nabla^{2}Y( \mathbf{s}^{\prime}))\right)=\frac{3}{||\Delta||^{2}}(5a_{0}-4)a_{0}A_{0}+ \frac{6}{||\Delta||}(1-a_{0})a_{0}\nabla^{3}\widetilde{K}(||\Delta||)+(1-a_{0 })^{2}\nabla^{4}\widetilde{K}(||\Delta||)\). To characterize covariance functions that admit such processes, we turn to spectral theory. Recall that for a positive definite function \(K\) defined in \(\mathbb{R}\), Bochner's theorem (see e.g., Williams & Rasmussen 2006) establishes the existence of a finite positive spectral measure \(\mathcal{F}\) on \(\mathbb{R}\). \(K\) can be expressed as the inverse Fourier transform of \(\mathcal{F}\), \(K(t)=\int_{\mathbb{R}}e^{-i\lambda t}\mathcal{F}(d\lambda)\). In cases where \(\mathcal{F}\) admits a spectral density, \(K(t)=\int e^{-i\lambda t}f(\lambda)\,d\lambda\). For \(\nabla^{4}K\) to exist, a trivial extension of the result in Wang et al. (2018) requires that \(f\) possess a finite fourth moment. 
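As a quick illustration of this moment condition (anticipating example (a) below), a short sympy computation verifies it for the squared exponential kernel with \(\sigma^{2}=\phi=1\):

```python
# Fourth spectral moment check for K(t) = exp(-t^2): the spectral density
# recovers K, has a finite fourth moment, and that moment equals d^4 K/dt^4 at 0.
import sympy as sp

t, lam = sp.symbols('t lambda', real=True)
f = sp.exp(-lam**2 / 4) / (2 * sp.sqrt(sp.pi))             # spectral density

K = sp.integrate(sp.cos(lam * t) * f, (lam, -sp.oo, sp.oo))
print(sp.simplify(K))                                      # exp(-t**2)

print(sp.integrate(lam**4 * f, (lam, -sp.oo, sp.oo)))      # 12
print(sp.diff(sp.exp(-t**2), t, 4).subs(t, 0))             # 12
```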
Examples of covariance kernels that satisfy this condition are (a) the squared exponential covariance kernel with \(K(t)=\exp(-t^{2})\) (\(\sigma^{2}=\phi=1\)), and \(f(\lambda)=1/2\sqrt{\pi}\exp(-\lambda^{2}/4)\) then, \(\frac{1}{2\sqrt{\pi}}\int_{\mathbb{R}}\lambda^{4}\exp(-\lambda^{2}/4)\,d \lambda=3(\sqrt{2})^{4}=12\); and (b) the Matern class with fractal parameter, \(\nu\); \(f(\lambda)\) is known to belong to the \(t\)-family (see e.g., Stein 1999) with \(f(\lambda)=C(\phi,\nu)/(c(\phi,\nu)+\lambda^{2})^{\nu+1/2}\) then, \(\int_{\mathbb{R}}\lambda^{4}C(\phi,\nu)/(c(\phi,\nu)+\lambda^{2})^{\nu+1/2}\,d \lambda<\infty\), for all \(\nu>2\) (since the fourth central moment for the \(t\)-distribution exists if \(\nu>2\)). Here, we consider formulating the directional differential processes using these two classes of covariance functions (a) the squared exponential, \(\widetilde{K}(||\Delta||)=\sigma^{2}\exp(-\phi||\Delta||^{\nu})\), \(\nu=2\); and (b) members of the Matern class, \(\widetilde{K}(||\Delta||)=\sigma^{2}(\phi||\Delta||)^{\nu}K_{\nu}(\phi||\Delta ||)\), where \(K_{\nu}\) is the modified Bessel function of order \(\nu\) (see e.g., Abramowitz et al. 1988), and \(\nu\) controls the smoothness of process realizations. We are particularly interested in \(\nu=5/2\). The multivariate process, \(\mathcal{L}Y(\mathbf{s})\), is valid under the above assumptions without any further specific parametric assumptions over what has been outlined above. To facilitate inference for \(\mathcal{L}Y(\mathbf{s})\), a probability distribution is specified for the parent process. We assume that \(Y(\mathbf{s})\sim GP(\mu(\mathbf{s},\boldsymbol{\beta}),K(\cdot;\sigma^{2}, \phi))\) is a stationary process specified on \(\mathbb{R}^{d}\). In what follows we also assume that \(K=K(\cdot;\sigma^{2},\phi)\) admits four derivatives. There are some immediate implications of a Gaussian assumption on the parent process. If \(Y_{1}(\cdot)\) and \(Y_{2}(\cdot)\) are zero mean, independent stationary Gaussian processes on \(\mathbb{R}^{d}\), then (i) the differential processes \(\mathcal{L}Y_{1}\) and \(\mathcal{L}Y_{2}\) are independent of each other; (ii) if \(c_{1},c_{2}\in\mathbb{R}\) are scalars, then \(\mathcal{L}(c_{1}Y_{1}+c_{2}Y_{2})=c_{1}\mathcal{L}Y_{1}+c_{2}\mathcal{L}Y_{2}\) is stationary and (iii) any sub-vector of \(\mathcal{L}Y\), for example \(Y\) or \((Y,\nabla Y^{\top})^{\top}\), is a stationary Gaussian processes. If \(K\) is \(k\)-times mean square differentiable (i.e. \(\nabla^{2k}K\) exists), the proposed differential operator can be extended to include higher order derivatives of \(\nabla^{k}Y(\mathbf{s})\)(Mardia et al., 1996). Differential operators characterizing change in the response (and gradient) surface also follow valid stationary Gaussian processes. 
For instance, at an arbitrary location \(\mathbf{s}_{0}\) the divergence operator, \(\mbox{div}(Y(\mathbf{s}_{0}))=\sum_{i=1}^{d}\frac{\partial}{\partial \mathbf{e}_{i}}Y(\mathbf{s}_{0})=\mathbf{c}_{1}^{\top}\mathcal{L}Y(\mathbf{s} _{0})\), where \(\mathbf{c}_{1}\) is an \(m\times 1\) vector with 0's in all places except for first order derivatives where it takes a value of 1, and the Laplacian, defined as the divergence operator for gradients, \(\Delta(Y(\mathbf{s}_{0}))=\sum_{i=1}^{d}(\nabla^{2}Y(\mathbf{s}_{0}))_{ii}= \sum_{i=1}^{d}\frac{\partial^{2}}{\partial\mathbf{e}_{i}^{2}}Y(\mathbf{s}_{0 })=\mathbf{c}_{2}^{\top}\mathcal{L}Y(\mathbf{s}_{0})\), where \(\mathbf{c}_{2}\) is a \(m\times 1\) vector with 0's in all places except for pure second order derivatives where it takes a value of 1. Furthermore, they follow valid Gaussian processes with \(\mbox{var}(\mbox{div}(Y(\mathbf{s}_{0})))=\mathbf{c}_{1}^{\top}V_{\mathcal{L} }\mathbf{c}_{1}\) and \(\mbox{var}(\Delta(Y(\mathbf{s}_{0})))=\mathbf{c}_{2}^{\top}V_{\mathcal{L}} \mathbf{c}_{2}\). Let \(Y(\mathbf{s})\) be a Gaussian parent process with a twice-differentiable mean function \(\mu(\mathbf{s},\boldsymbol{\beta})\), i.e. \(\nabla\mu(\mathbf{s},\boldsymbol{\beta})\) and \(\nabla^{2}\mu(\mathbf{s},\boldsymbol{\beta})\) exist, and let \(K(\cdot)\) be a covariance function with variance \(\sigma^{2}\) and range \(\phi\). Let \(\mathbf{Y}=(Y(\mathbf{s}_{1}),\ldots,Y(\mathbf{s}_{L}))^{\top}\) be the observed realization over \(\mathcal{S}\) with mean \(\boldsymbol{\mu}=(\mu(\mathbf{s}_{1},\boldsymbol{\beta}),\ldots,\mu(\mathbf{s} _{L},\boldsymbol{\beta}))^{\top}\) and \(\Sigma_{\mathbf{Y}}\) be the associated \(L\times L\) covariance matrix with elements \(K(\mathbf{s}_{i},\mathbf{s}_{j})\), and \(\mathbf{s}_{0}\) be an arbitrary location. Let \(\nabla\mathbf{K}_{1}=\left(\nabla K(\delta_{1})^{\top},\ldots,\nabla K(\delta_ {L})^{\top}\right)^{\top}\) and \(\nabla\mathbf{K}_{2}=\left(vech(\nabla^{2}K(\delta_{1}))^{\top},\dots,vech( \nabla^{2}K(\delta_{L}))^{\top}\right)^{\top}\) be \(L\times d\) and \(L\times d(d+1)/2\) matrices, respectively, and \(\delta_{i}=\mathbf{s}_{i}-\mathbf{s}_{0}\), \(i=1,\dots,L\). The distribution \(P(\mathbf{Y},\nabla Y(\mathbf{s}_{0}),vech(\nabla^{2}Y(\mathbf{s}_{0}))\,|\, \boldsymbol{\theta})\), where \(\boldsymbol{\theta}=\{\boldsymbol{\beta},\sigma^{2},\phi\}\), is the \(m_{0}=L+d+d(d+1)/2\)-dimensional Gaussian, \[\mathcal{N}_{m_{0}}\left(\begin{pmatrix}\boldsymbol{\mu}\\ \nabla\mu(\mathbf{s}_{0})\\ vech(\nabla^{2}\mu(\mathbf{s}_{0}))\end{pmatrix},\begin{pmatrix}\Sigma_{ \mathbf{Y}}&-\nabla\mathbf{K}_{1}&\nabla\mathbf{K}_{2}\\ \nabla\mathbf{K}_{1}^{\top}&-\nabla^{2}K(\mathbf{0})&\nabla^{3}K(\mathbf{0}) \\ \nabla\mathbf{K}_{2}^{\top}&-\nabla^{3}K(\mathbf{0})^{\top}&\nabla^{4}K( \mathbf{0})\end{pmatrix}\right)\;, \tag{3}\] which is well-defined as long as the fourth order derivative of \(K\) exists. The posterior predictive distribution for the differential process at \(\mathbf{s}_{0}\) is \[P(\nabla Y(\mathbf{s}_{0}),vech(\nabla^{2}Y(\mathbf{s}_{0}))\,|\,\,\mathbf{Y} )=\int P(\nabla Y(\mathbf{s}_{0}),vech(\nabla^{2}Y(\mathbf{s}_{0}))\,|\,\, \mathbf{Y},\boldsymbol{\theta})P(\boldsymbol{\theta}\,|\,\,\mathbf{Y})\,d \boldsymbol{\theta}\;. 
\tag{4}\] Posterior inference for curvature proceeds by sampling from \(P(vech(\nabla^{2}Y(\mathbf{s}_{0}))\,|\,\,\mathbf{Y})=\int P(vech(\nabla^{2} Y(\mathbf{s}_{0}))\,|\,\,\nabla Y(\mathbf{s}_{0}),\mathbf{Y},\boldsymbol{ \theta})P(\nabla Y(\mathbf{s}_{0})\,\,|\,\,\mathbf{Y},\boldsymbol{\theta})P( \boldsymbol{\theta}\,|\,\,\mathbf{Y})\,d\boldsymbol{\theta}\,d\nabla Y\). We sample from (4) by drawing one instance of \((\nabla Y(\mathbf{s}_{0}),vech(\nabla^{2}Y(\mathbf{s}_{0}))\) for each sample of \(\boldsymbol{\theta}\) obtained from \(P(\boldsymbol{\theta}\,|\,\,\mathbf{Y})\). The conditional predictive distribution of the differential process is given by \(\nabla Y(\mathbf{s}_{0}),vech(\nabla^{2}Y(\mathbf{s}_{0}))\,|\,\,\mathbf{Y}, \boldsymbol{\theta}\sim\mathcal{N}_{m_{1}}\left(\boldsymbol{\mu}_{1},\Sigma_{1}\right)\) where \(m_{1}=d+d(d+1)/2\), and \[\boldsymbol{\mu}_{1} =\begin{pmatrix}\nabla\mu(\mathbf{s}_{0})\\ vech(\nabla^{2}\mu(\mathbf{s}_{0}))\end{pmatrix}-\begin{pmatrix}\nabla \mathbf{K}_{1}\\ \nabla\mathbf{K}_{2}\end{pmatrix}^{\top}\Sigma_{\mathbf{Y}}^{-1}(\mathbf{Y}- \boldsymbol{\mu})\;, \tag{5}\] \[\Sigma_{1} =\begin{pmatrix}-\nabla^{2}K(\mathbf{0})&\nabla^{3}K(\mathbf{0})^ {\top}\\ -\nabla^{3}K(\mathbf{0})&\nabla^{4}K(\mathbf{0})\end{pmatrix}-\begin{pmatrix} \nabla\mathbf{K}_{1}\\ \nabla\mathbf{K}_{2}\end{pmatrix}^{\top}\Sigma_{\mathbf{Y}}^{-1}\begin{pmatrix} -\nabla\mathbf{K}_{1}\\ \nabla\mathbf{K}_{2}\end{pmatrix}\;. \tag{6}\] Analogous results follow for posterior predictive inference on the curvature process. If \(\mu(\mathbf{s},\boldsymbol{\beta})=\mu\) is a constant, as in simple "kriging", then \(\nabla\mu(\mathbf{s})=\nabla^{2}\mu(\mathbf{s})=0\). More generally, if \(\mu(\mathbf{s},\boldsymbol{\beta})=\mathbf{x}(\mathbf{s})^{\top}\boldsymbol{\beta}\), where \(\mathbf{x}(\mathbf{s})\) is a vector of spatially indexed covariates and \(\mathbf{x}(\mathbf{s})^{\top}\boldsymbol{\beta}\) produces a twice differentiable trend surface then explicit calculation of \(\nabla\mu(\mathbf{s}_{0})\) and \(\nabla^{2}\mu(\mathbf{s}_{0})\) are possible. In case \(Y(\mathbf{s})=\mu(\mathbf{s},\boldsymbol{\beta})+Z(\mathbf{s})+\epsilon( \mathbf{s})\), where \(Z(\mathbf{s})\sim GP(\mathbf{0},K(\cdot;\sigma^{2},\phi))\) and \(N(0,\tau^{2})\) is a white noise process, inference on gradients for the residual spatial process, \(Z(\mathbf{s})\), can be performed from the posterior predictive distribution, \(P(\nabla Z(\mathbf{s}_{0}),vech(\nabla^{2}Z(\mathbf{s}_{0}))\,|\;\mathbf{Y})\). We address this in Section 4 in the context of curvature wombling. ## 3 Wombling with Curvature Processes Bayesian wombling deals with inference for line integrals \[\Gamma(C)=\int_{C}g\left(\mathcal{L}Y\right)\,\mathrm{d}\ell\quad\text{or,} \ \ \overline{\Gamma}(C)=\frac{1}{\ell(C)}\int_{C}g\left(\mathcal{L}Y\right)\, \mathrm{d}\ell\;, \tag{7}\] where \(C\) is a geometric structure of interest, such as lines or planar curves, residing within the spatial domain of reference, \(\ell\) is an appropriate measure, often taken to be the arc-length measure, \(g\) is a linear function (or functional) of the differential operator \(\mathcal{L}Y\). \(\Gamma\) and \(\overline{\Gamma}\) are referred to as the total and average _wombling measures_ respectively. The structure \(C\) is defined to be a _wombling boundary_ if it yields a large total (or average) wombling measure. Depending on the spatial domain, geometric structures of interest constructed within them may vary. 
For example, if we are dealing with surfaces in \(\mathbb{R}^{3}\), choices of \(C\) are curves and lines within the surface, with the local co-ordinate being \(\mathbb{R}^{2}\). In higher dimensions they would be planes (curves) or hyperplanes (hypercurves). Specifically, Bayesian curvilinear wombling involves estimating integrals in (7) over curves, which tracks rapid change over the spatial domain by determining boundaries (curves) with large gradients normal to the curve (see for e.g., Banerjee & Gelfand 2006). The integrand in (7) inherently involves a direction, in particular change measured is always in a direction normal to \(C\). Hence, \(g(\mathcal{L}Y)\) can equivalently be expressed as a linear function (functional) of \(\mathcal{L}_{\mathbf{n}}Y(\mathbf{s})\), where \(\mathbf{n}=\mathbf{n}(\mathbf{s})\) denotes the unit normal vector to \(C\) at \(\mathbf{s}\). The next few paragraphs provide more detail. With wombling measures for directional gradients discussed the Supplement, Section S1.2, we construct wombling measures for curvature. Given \(C\), depending on the smoothness of the surface, the rate at which gradients change along the curve may present sufficient heterogeneity while traversing the curve. If \(C\) forms a wombling boundary with respect to the gradient, then wombling boundaries for curvature are subsets of \(C\) that feature segments with large positive (negative) directional curvature along a normal direction to the curve. Leveraging only gradients, we develop wombling measures for curvature that further characterize such boundaries located for gradients. The wombling measure for curvature in \(Y(\mathbf{s})\) along \(C\) ascertains whether \(C\) also forms a wombling boundary with respect to curvature. We associate a directional curvature to each \(\mathbf{s}\in C\), \(g(\mathcal{L}Y(\mathbf{s}))=D^{(2)}_{\mathbf{n},\mathbf{n}}Y(\mathbf{s})= \mathbf{c}^{\top}_{\mathbf{n},\mathbf{n}}vech(\nabla^{2}Y(\mathbf{s}))\) (a linear function of \(\mathcal{L}_{\mathbf{n}}Y(\mathbf{s})\)) along the direction of a unit normal \(\mathbf{n}=\mathbf{n}(\mathbf{s})\) to \(C\) at \(\mathbf{s}\). Using (7) we define _wombling measures_ for total and average curvature as, \[\Gamma^{(2)}(C)=\int_{C}D^{(2)}_{\mathbf{n},\mathbf{n}}Y(\mathbf{s})d\ell=\int _{C}\mathbf{n}(\mathbf{s})^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{n}(\mathbf{s} )d\ell\;,\qquad\overline{\Gamma}^{(2)}(C)=\Gamma^{(2)}(C)/\ell(C)\;, \tag{8}\] respectively, where \(\ell(C)\) denotes the arc-length of \(C\). Parameterized curves, \(C=\{\mathbf{s}(t)=(s_{1}(t),s_{2}(t)):t\in\mathcal{T}\subset\mathbb{R}\}\), offer further insights. As \(t\) varies over its domain, \(\mathbf{s}(t)\) outlines the curve \(C\). Implicitly assuming that \(C\) is regular, i.e., \(||\mathbf{s}^{\prime}(t)||\neq 0\), allows the tangent and normal to exist at all points on the curve. The unit tangent and normal at each point of the curve are \(\mathbf{s}^{\prime}(t)/||\mathbf{s}^{\prime}(t)||\) and \(\mathbf{n}=\mathbf{n}(\mathbf{s}(t))=(s^{\prime}_{2}(t),-s^{\prime}_{1}(t))^{ \top}/||\mathbf{s}^{\prime}(t)||\), respectively, while \(\mathbf{c}_{\mathbf{n},\mathbf{n}}=\mathbf{c}_{\mathbf{n}(\mathbf{s}(t)), \mathbf{n}(\mathbf{s}(t))}=\left(\mathbf{n}(\mathbf{s}(t))\otimes\mathbf{n}( \mathbf{s}(t))\right)^{\top}\mathcal{D}_{d}\) from Section 2. The arc-length of \(C\) is \(\ell(C)=\int_{\mathcal{T}}||\mathbf{s}^{\prime}(t)||\,dt\) or \(\mathrm{d}\ell=||\mathbf{s}^{\prime}(t)||\,dt\). 
If \(\mathcal{T}=[t_{0},t_{1}]\), then \(\ell(C)=\int_{t_{0}}^{t_{1}}||\mathbf{s}^{\prime}(t)||\,dt\) and \(\Gamma^{(2)}(C)=\int_{t_{0}}^{t_{1}}\mathbf{n}(\mathbf{s}(t))^{\top}\nabla^{2} Y(\mathbf{s}(t))\mathbf{n}(\mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt\). If \(C\) is an open curve, then \(\ell(C)^{-1}\int_{C}\mathbf{n}(\mathbf{s})^{\top}\nabla^{2}Y(\mathbf{s}) \mathbf{n}(\mathbf{s})d\mathbf{s}=\ell(C)^{-1}\int_{C}\mathbf{n}(\mathbf{s}(t ))^{\top}\nabla^{2}Y(\mathbf{s}(t))\mathbf{n}(\mathbf{s}(t))||\mathbf{s}^{ \prime}(t)||\,dt\) is the average directional curvature. For example, \(C=\{\mathbf{s}(t)=(r\cos t,r\sin t),t\in[0,\pi/4]\}\) is the arc of a parameterized circle of radius \(r\). It follows that \(||\mathbf{s}^{\prime}(t)||=r\), \(\mathbf{n}(\mathbf{s}(t))=(\cos t,\sin t)^{\top}\) and \(\ell(C)^{-1}\int_{0}^{\pi/4}\mathbf{n}(\mathbf{s}(t))^{\top}\nabla^{2}Y( \mathbf{s}(t))\mathbf{n}(\mathbf{s}(t))r\,dt=\frac{4}{\pi}\int_{0}^{\pi/4} \mathbf{n}(\mathbf{s}(t))^{\top}\nabla^{2}Y(\mathbf{s}(t))\mathbf{n}(\mathbf{ s}(t))\,dt\). The average curvature in the tangential direction of \(C\) is \(\frac{1}{\ell(C)}\int_{C}\mathbf{u}(\mathbf{s}(t))^{\top}\nabla^{2}Y(\mathbf{s}(t)) \mathbf{u}(\mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt=\ell(C)^{-1}\int_{t_ {0}}^{t_{1}}\frac{\mathbf{s}^{\prime}(t)}{||\mathbf{s}^{\prime}(t)||}\, \nabla^{2}Y(\mathbf{s}(t))\frac{\mathbf{s}^{\prime}(t)}{||\mathbf{s}^{ \prime}(t)||}||\mathbf{s}^{\prime}(t)||\,dt=\mathbf{u}(\mathbf{s}(t_{1}))^{ \top}\nabla Y(\mathbf{s}(t_{1}))-\mathbf{u}(\mathbf{s}(t_{0}))^{\top}\nabla Y (\mathbf{s}(t_{0})).\) Hence, the average directional curvature remains path independent and is the difference of directional gradient at the end points of \(C\). For a closed curve \(C\), \(\oint_{C}\mathbf{n}(\mathbf{s})^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{n}(\mathbf{s })d\mathbf{s}=\oint_{C}\mathbf{n}(\mathbf{s}(t))^{\top}\nabla^{2}Y(\mathbf{s}(t ))\mathbf{n}(\mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt\). If the surface admits up to three derivatives, i.e. \(\nabla^{3}Y(\mathbf{s})\) exists, the average curvature of the region, \(\mathcal{D}\), enclosed by \(C\), is free of \(t\). If \(\mathbf{F}(\mathbf{s})=\nabla^{2}Y(\mathbf{s})=(F_{ij}(\mathbf{s}))_{i,j=1,2}\), with \(F_{12}(\mathbf{s})=F_{21}(\mathbf{s})\) and \(F_{ij}=F_{ij}(\mathbf{s})=\frac{\partial^{2}}{\partial s_{i}\partial s_{j}}Y( \mathbf{s})\) then, \(\oint_{C}\mathbf{n}(\mathbf{s})^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{n}( \mathbf{s})d\mathbf{s}=\oint_{C}\mathbf{n}(\mathbf{s})^{\top}\mathbf{F}( \mathbf{s})\mathbf{n}(\mathbf{s})d\mathbf{s}=\oint_{C}\mathbf{n}(\mathbf{s}(t ))^{\top}\mathbf{F}(\mathbf{s})\mathbf{n}(\mathbf{s}(t))||\mathbf{s}^{\prime }(t)||\,dt=\oint_{C}||\mathbf{s}^{\prime}(t)||^{-1}\left(F_{11}s_{2}^{\prime}( t)^{2}-2F_{12}s_{1}^{\prime}(t)s_{2}^{\prime}(t)+F_{22}s_{1}^{\prime}(t)^{2}\right)\,dt= \iint\limits_{\mathcal{D}}\left\{\frac{\partial F_{11}n_{2}}{\partial s_{1}}+ \left(\frac{\partial F_{12}n_{2}}{\partial s_{2}}+\frac{\partial F_{21}n_{1}} {\partial s_{1}}\right)+\frac{\partial F_{22}n_{1}}{\partial s_{2}}\right\}ds _{1}ds_{2}\). The last equality is obtained using Green's theorem (see for e.g., Rudin 1976). This can be interpreted as "flux" in the gradient within \(\mathcal{D}\). Since, \(F_{ij}(\mathbf{s})=\nabla_{ij}^{2}Y(\mathbf{s})\), the integrand in the last equality require the existence of \(\nabla_{ijk}^{3}Y(\mathbf{s})\), \(i,j,k=1,2\). 
Denoting, \(\widetilde{\nabla}^{3}Y(\mathbf{s})=(\nabla_{ijk}^{3}Y(\mathbf{s}))_{i,j,k=1,2}^ {\top}\), vector of unique third derivatives, and \(\mathbf{n}_{0}(\mathbf{s})=(n_{2}(\mathbf{s}),n_{2}(\mathbf{s}),n_{1}( \mathbf{s}),n_{1}(\mathbf{s}))^{\top}\) then, \[\frac{1}{\ell(C)}\oint_{C}\mathbf{c}_{\mathbf{n},\mathbf{n}}^{\top}vech( \nabla^{2}Y(\mathbf{s}))\,d\mathbf{s}=\frac{1}{\ell(C)}\iint\limits_{\mathcal{ D}}\mathbf{n}_{0}(\mathbf{s})^{\top}\widetilde{\nabla}^{3}Y(\mathbf{s})\,d \mathbf{s}. \tag{9}\] This extends the development in Section 3.2 of Banerjee & Gelfand (2006) to study the behavior of spatial curvature over closed curves on surfaces in \(\mathbb{R}^{3}\). Sampling along \(C\) is generally harder than sampling inside \(\mathcal{D}\). Hence, the computational implications of (9) are more appealing. When studying the same behavior along a tangential direction to \(C\) with \(\mathbf{s}(t_{0})=\mathbf{s}(t_{1})=\mathbf{s}_{0}\), \(\oint_{C}\mathbf{u}(\mathbf{s})^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{u}( \mathbf{s})d\mathbf{s}=\oint_{\mathbf{S}(t_{0})}^{\mathbf{S}(t_{1})}F_{11}( \mathbf{s})n_{1}ds_{1}+F_{12}(\mathbf{s})n_{1}ds_{2}+F_{21}(\mathbf{s})n_{2}ds_ {1}+F_{22}(\mathbf{s})n_{2}ds_{2}=\mathbf{u}(\mathbf{s}(t_{1}))^{\top}\nabla Y (\mathbf{s}(t_{1}))-\mathbf{u}(\mathbf{s}(t_{0}))^{\top}\nabla Y(\mathbf{s}(t_ {0}))=0\), again a consequence of path independence. This validates the choice of a normal direction to \(C\) when measuring change in the gradient. Using the rectilinear approximation to curvature wombling, as discussed later, provides a more computationally tractable and simpler approach, where double integrals manifest when computing variances of the wombling measures. Curvature wombling requires predictive inference performed using gradient measures on the interval \(\mathcal{T}\), to include \(\Gamma^{(2)}(C)\) (or \(\overline{\Gamma}^{(2)}(C)\)) in (8). Leveraging inference for differential processes in Section 2, we obtain joint inference on the wombling measures. Suppose \(\{\mathbf{s}(t):t\in[0,T]\}\) is generated over \(\mathcal{T}=[0,T]\). For any \(t^{*}\in[0,T]\), let \(C_{t^{*}}\) denote the curve restricted to \([0,t^{*}]\) and \(\ell(C_{t^{*}})\) its arc-length. Line integrals for curvilinear gradient and curvature wombling measures are \(\Gamma^{(1)}(C_{t^{*}})=\int_{0}^{t^{*}}D_{\mathbf{n}}^{(1)}Y(\mathbf{s}(t))|| \mathbf{s}^{\prime}(t)||\,dt\), \(\overline{\Gamma}^{(1)}(C_{t^{*}})=\frac{1}{\ell(C_{t^{*}})}\Gamma^{(1)}(C_{t ^{*}})\), \(\Gamma^{(2)}(C_{t^{*}})=\int_{0}^{t^{*}}D_{\mathbf{n},\mathbf{n}}^{(2)}Y( \mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt\) and \(\overline{\Gamma}^{(2)}(C_{t^{*}})=\frac{1}{\ell(C_{t^{*}})}\Gamma^{(2)}(C_{t ^{*}})\). Since \(D_{\mathbf{n}}^{(1)}Y(\mathbf{s}(t))\) and \(D_{\mathbf{n},\mathbf{n}}^{(2)}Y(\mathbf{s}(t))\) are Gaussian processes on \(\mathcal{T}\), \(\Gamma^{(1)}(C_{t^{*}})\) and \(\Gamma^{(2)}(C_{t^{*}})\) are valid _dependent_ Gaussian processes on \(\mathcal{T}\). 
Therefore, \(\mathbf{\Gamma}(C_{t^{*}})=(\Gamma^{(1)}(C_{t^{*}}),\Gamma^{(2)}(C_{t^{*}}))^ {\top}\sim\mathcal{N}_{2}\big{(}\boldsymbol{\mu}_{\mathbf{\Gamma}}(t^{*}), \mathbf{K}_{\mathbf{\Gamma}}(t^{*},t^{*})\big{)}\), where \(\boldsymbol{\mu}_{\mathbf{\Gamma}}(t^{*})=\left(\int_{0}^{t^{*}}D_{\mathbf{n} }^{(1)}\mu(\mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt\;,\int_{0}^{t^{*}}D_{ \mathbf{n},\mathbf{n}}^{(2)}\mu(\mathbf{s}(t))||\mathbf{s}^{\prime}(t)||\,dt \right)^{\top}=(m_{1}(t^{*}),m_{2}(t^{*}))^{\top}\) and \(\mathbf{K}_{\mathbf{\Gamma}}(t^{*},t^{*})=\{k_{ij}(t^{*},t^{*})\}_{i,j=1,2}\) whose elements are evaluated as \[k_{ij}(t^{*},t^{*})=(-1)^{j}\int_{0}^{t^{*}}\int_{0}^{t^{*}}\mathbf{a}_{i}^{ \top}(t_{1})\nabla^{i+j}K(\Delta(t_{1},t_{2}))\mathbf{a}_{j}(t_{2})||\mathbf{ s}^{\prime}(t_{1})||||\mathbf{s}^{\prime}(t_{2})||\,dt_{1}\,dt_{2}\;, \tag{10}\] where \(\mathbf{a}_{1}(t)=\mathbf{n}(\mathbf{s}(t))\) and \(\mathbf{a}_{2}(t)=\mathbf{c}_{\mathbf{n}(\mathbf{s}(t)),\mathbf{n}(\mathbf{s} (t))}\). Simplifications arise in \(d=2\). For example, \(\mathbf{c}_{\mathbf{n},\mathbf{n}}(t)=(s_{2}^{\prime}(t)^{2},-2s_{2}^{\prime} (t)s_{1}^{\prime}(t),s_{1}^{\prime}(t)^{2})^{\top}\), while \(\nabla^{k}K\), for \(k=2,3,4\), are matrices of orders \(2\times 2\), \(2\times 3\) and \(3\times 3\), respectively, of partial and mixed second, third and fourth derivatives of \(K\) and \(\Delta(t_{1},t_{2})=\mathbf{s}(t_{2})-\mathbf{s}(t_{1})\). For any two points \(t_{1}^{*},t_{2}^{*}\in\mathcal{T}\), the dependence is specified through \(\begin{pmatrix}\mathbf{\Gamma}(C_{t_{1}^{*}})\\ \mathbf{\Gamma}(C_{t_{2}^{*}})\end{pmatrix}\sim\mathcal{N}_{4}\left(\begin{pmatrix} \mathbf{m}_{1}\\ \mathbf{m}_{2}\end{pmatrix},\begin{pmatrix}\mathbf{k}_{11}&\mathbf{k}_{12}\\ \mathbf{k}_{21}&\mathbf{k}_{22}\end{pmatrix}\right)\), where \(\mathbf{m}_{i}=(m_{i}(t_{1}^{*}),m_{i}(t_{2}^{*}))^{\top}\), \(\mathbf{k}_{ij}=\begin{pmatrix}k_{ij}(t_{1}^{*},t_{1}^{*})&k_{ij}(t_{1}^{*},t_{2 }^{*})\\ k_{ij}(t_{2}^{*},t_{1}^{*})&k_{ij}(t_{2}^{*},t_{2}^{*})\end{pmatrix}\), \(i,j=1,2\). Generally, for \(n_{P}\) points partitioning \(\mathcal{T}\) the above can be analogously extended. Clearly, \(\mathbf{\Gamma}(C_{t^{*}})\) is a mean squared continuous process. However, stationarity of \(Y(\mathbf{s})\) does not imply stationarity of \(\mathbf{\Gamma}(C_{t^{*}})\). For any \(\mathbf{s}_{j}\in\mathcal{S}\) with \(\text{Cov}(Y(\mathbf{s}_{j}),\mathbf{\Gamma}(C_{t^{*}}))=\boldsymbol{\gamma}_ {j}(t^{*})\) and \(\Delta_{j}(t)=\mathbf{s}(t)-\mathbf{s}_{j}\) we have, \[\boldsymbol{\gamma}_{j}(t^{*})=\left(\int_{0}^{t^{*}}D_{\mathbf{n}}^{(1)}K( \Delta_{j}(t))||\mathbf{s}^{\prime}(t)||\,dt,\int_{0}^{t^{*}}D_{\mathbf{n}, \mathbf{n}}^{(2)}K(\Delta_{j}(t))||\mathbf{s}^{\prime}(t)||\,dt\right)^{\top}\;. \tag{11}\] A valid _joint distribution_ can be specified over \(\mathcal{T}\) by, \[\begin{pmatrix}\mathbf{Y}\\ \mathbf{\Gamma}(C_{t^{*}})\end{pmatrix}\sim\mathcal{N}_{L+2}\left(\begin{pmatrix} \boldsymbol{\mu}\\ \mu_{\mathbf{\Gamma}}(t^{*})\end{pmatrix},\begin{pmatrix}\Sigma_{\mathbf{Y}}& \boldsymbol{\gamma}_{\mathbf{\Gamma}}(t^{*})\\ \boldsymbol{\gamma}_{\mathbf{\Gamma}}^{\top}(t^{*})&\mathbf{K}_{\mathbf{\Gamma}}(t^ {*},t^{*})\end{pmatrix}\right)\;, \tag{12}\] where \(\boldsymbol{\gamma}_{\boldsymbol{\Gamma}}^{\top}(t^{*})=[\boldsymbol{\gamma}_{1}(t^ {*})\;\boldsymbol{\gamma}_{2}(t^{*})\;\cdots\;\boldsymbol{\gamma}_{L}(t^{*})]\) is the \(2\times L\) cross-covariance matrix. 
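Both the predictive distribution in (5)-(6) and draws of \(\mathbf{\Gamma}(C_{t^{*}})\) given \(\mathbf{Y}\) from (12) are instances of conditioning a joint Gaussian vector on the observed block. The sketch below, written in R (the language of the accompanying package environment), implements only that generic conditioning step; the joint mean and covariance are assumed to have been assembled elsewhere for the chosen kernel, so any sign conventions from (3) or (12) are absorbed into those inputs.

```r
# Minimal sketch: condition the trailing block of a joint Gaussian (e.g., the differential
# process at s0, or the wombling measures Gamma(C_t*)) on the observed vector Y, then draw once.
# `mu.joint` and `Sig.joint` are hypothetical inputs following the block layout of (3) or (12).
condition_and_draw <- function(Y, mu.joint, Sig.joint, L) {
  idx.Y <- seq_len(L)
  idx.D <- setdiff(seq_along(mu.joint), idx.Y)
  S11 <- Sig.joint[idx.Y, idx.Y, drop = FALSE]
  S21 <- Sig.joint[idx.D, idx.Y, drop = FALSE]
  S22 <- Sig.joint[idx.D, idx.D, drop = FALSE]
  A   <- S21 %*% solve(S11)
  mu1 <- mu.joint[idx.D] + as.numeric(A %*% (Y - mu.joint[idx.Y]))  # cf. (5)
  S1  <- S22 - A %*% t(S21)                                         # cf. (6)
  R   <- chol((S1 + t(S1)) / 2)              # symmetrize before the Cholesky factorization
  list(mean = mu1, cov = S1,
       draw = mu1 + as.numeric(t(R) %*% rnorm(length(idx.D))))
}
```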
In practical applications curvilinear wombling is performed by approximating the curve \(C\) using linear segments. These measures at the segment level are then aggregated to produce a wombling measure for the curve. The curve is segmented using a partition. Consequently, the accuracy of estimated wombling measures for the curve depend on the choice of partition. Figures S2 and S3 in the online Supplement illustrate this concept. Explicitly, let \(C\) be a regular rectifiable curve and \([a,b]\subset\mathcal{T}\) be a compact interval. Let \(g\) be a uniformly continuous function. For any partition, \(P\) of \([a,b]\), \(a=t_{0}^{\prime}<t_{1}^{\prime}<\ldots<t_{n_{P}}^{\prime}=b\), with its norm defined as \(|P|=\max\limits_{i=1,\ldots,n_{P}}(t_{i}^{\prime}-t_{i-1}^{\prime})\). A polygonal (piecewise-linear) approximation to the curve is, \(\widetilde{C}_{P}=\bigcup\limits_{i=1}^{n_{P}}C_{t_{i}}\), where \(C_{t_{i}}=\{\mathbf{s}(t_{i-1}^{\prime})+t\mathbf{u}_{i},t\in[0,t_{i}]\}\), \(t_{i}=||\mathbf{s}(t_{i}^{\prime})-\mathbf{s}(t_{i-1}^{\prime})||\) and \(\mathbf{u}_{i}=||\mathbf{s}(t_{i}^{\prime})-\mathbf{s}(t_{i-1}^{\prime})||^{- 1}(\mathbf{s}(t_{i}^{\prime})-\mathbf{s}(t_{i-1}^{\prime}))^{\top}\). Note that \(\mathbf{s}(t)=\mathbf{s}(t_{i-1}^{\prime})+t\mathbf{u}_{i}\) for \(t\in[0,t_{i}]\) and, hence, \(||\mathbf{s}^{\prime}(t)||=||\mathbf{u}_{i}||=1\). Wombling measure for \(\widetilde{C}_{P}\) is, \(\Gamma(\widetilde{C}_{P})=\sum\limits_{i=1}^{n_{P}}\int_{C_{t_{i}}}g\left( \mathcal{L}Y(\mathbf{s}(t))\right)||\mathbf{s}^{\prime}(t)||\,dt\). As \(|P|\to 0\) we have, \(\Gamma(\widetilde{C}_{P})\xrightarrow{a.s.}\Gamma(C)=\int_{a}^{b}g\left( \mathcal{L}Y(\mathbf{s}(t))\right)||\mathbf{s}^{\prime}(t)||\,dt\). This provides us with an estimate, \(\Gamma(\widetilde{C}_{P})\) for curvilinear wombling measures associated with any general curve \(C\). Further details are provided in the Supplement, at the end of Section S5. The choices of \(g\) for our wombling measures result in, \(\mathbf{u}^{\top}\nabla Y\) and \(\mathbf{c}_{\mathbf{u},\mathbf{u}}^{\top}vech(\nabla^{2}Y)\), which are linear and therefore uniformly continuous over any compact interval. Since predictive inference is performed iteratively on individual line segments, it is sufficient to show the inferential procedure for an arbitrary curve segment \(C_{t_{i}}\). The normal to \(C_{t_{i}}\) is free of \(t\) and denoted as, \(\mathbf{u}_{i}^{\perp}\), which is the normal to \(\mathbf{u}_{i}\). The associated wombling measures with \(C_{t_{i}}\) are \(\boldsymbol{\Gamma}(t_{i})=\left(\int_{0}^{t_{i}}D_{\mathbf{u}_{i}^{\perp}}^{ (1)}Y(\mathbf{s}(t))\,dt,\int_{0}^{t_{i}}D_{\mathbf{u}_{i}^{\perp},\mathbf{u} _{i}^{\perp}}^{(2)}Y(\mathbf{s}(t))\,dt\right)^{\top}\). For a point \(\mathbf{s}_{j}\) define \(\Delta_{i-1,j}=\mathbf{s}_{i-1}-\mathbf{s}_{j}\), \(j=1,2,\ldots,L\). Their joint distribution is specified by (12), where \(\boldsymbol{\gamma}_{j}(t_{i})\) is obtained from (11) by replacing \(\Delta_{j}(t)\) with \(\Delta_{i-1,j}+t\mathbf{u}_{i}\) and \(\mathbf{K}_{\boldsymbol{\Gamma}}(t_{i},t_{i})\) is obtained from (10) replacing \(\Delta(t_{1},t_{2})=(t_{2}-t_{1})\mathbf{u}_{i}\) in the integrand. The analytic tractability of the line integrals in \(\boldsymbol{\gamma}_{j}(t_{i})\) is not a concern. Given choices of \(\mu(\cdot)\) and \(K(\cdot)\), they are all one or two dimensional integrals which are efficiently computed using simple quadrature. 
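Before turning to a kernel-specific example, the segment-level quantities above are simple to construct. The following R sketch builds \(t_{i}\), \(\mathbf{u}_{i}\) and \(\mathbf{u}_{i}^{\perp}\) from an ordered matrix of curve vertices; the vertex matrix is a hypothetical input, and the normal follows the \((s_{2}^{\prime},-s_{1}^{\prime})\) convention used in Section 3.

```r
# Minimal sketch of the polygonal (rectilinear) approximation of a curve C from its
# ordered vertices s(t'_0), ..., s(t'_{n_P}), as described above.
polygonal_approx <- function(verts) {
  d     <- diff(verts)                 # s(t'_i) - s(t'_{i-1}), one row per segment
  ti    <- sqrt(rowSums(d^2))          # segment lengths t_i
  U     <- d / ti                      # unit directions u_i (rows)
  Uperp <- cbind(U[, 2], -U[, 1])      # unit normals u_i-perp, following the (s2', -s1') convention
  list(t = ti, U = U, U.perp = Uperp, arc.length = sum(ti))
}

# Example: vertices sampled along a quarter circle of radius 1; the approximate
# arc length approaches pi/4 as the partition is refined.
tt    <- seq(0, pi / 4, length.out = 50)
verts <- cbind(cos(tt), sin(tt))
polygonal_approx(verts)$arc.length
```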
For example, let \(Y(\mathbf{s})\) be the isotropic Gaussian process with mean \(\mu(\mathbf{s})=\mu\) and \(K(||\Delta||;\sigma^{2},\phi)=\sigma^{2}\exp(-\phi||\Delta||^{2})\), where \(\Delta=(\delta_{1},\delta_{2})^{\top}\). The derivatives \(\nabla^{k}K(\Delta)\), \(k=2,3,4\), are obtained from (2) and related results. Then \(\mathbf{\gamma}_{j}(t_{i})=\mathbf{\gamma}_{j}(t_{i};\sigma^{2},\phi)=\left\{\Phi\left(\sqrt{2\phi}\left(t_{i}+\mathbf{u}_{i}^{\top}\Delta_{i-1,j}\right)\right)-\Phi\left(\sqrt{2\phi}\mathbf{u}_{i}^{\top}\Delta_{i-1,j}\right)\right\}(c_{1},c_{2})^{\top}\), where \(c_{1}=c_{1}(\sigma^{2},\phi,\mathbf{u}_{i}^{\perp},\Delta_{i-1,j})=-2\sigma^{2}\sqrt{\pi\phi}\,\mathbf{u}_{i}^{\perp\top}\Delta_{i-1,j}e^{-\phi\left(\mathbf{u}_{i}^{\perp\top}\Delta_{i-1,j}\right)^{2}}\), \(c_{2}=c_{2}(\sigma^{2},\phi,\mathbf{u}_{i}^{\perp},\Delta_{i-1,j})=c_{1}(1-2\phi\mathbf{u}_{i}^{\perp\top}\Delta_{i-1,j}\Delta_{i-1,j}^{\top}\mathbf{u}_{i}^{\perp})\), and \(\Phi(\cdot)\) denotes the standard Gaussian cumulative distribution function. These are simple computations, with quadrature required only for computing \(K_{\mathbf{\Gamma}}(t_{i},t_{i})\).

## 4 Bayesian Hierarchical Model

We operate under a Bayesian hierarchical model, which is specified as \[Y(\mathbf{s})=\mu(\mathbf{s},\mathbf{\beta})+Z(\mathbf{s})+\epsilon(\mathbf{s})\;, \tag{13}\] where \(Z(\mathbf{s})\sim GP(0,K(\cdot;\sigma^{2},\phi))\) is a Gaussian process, and \(\epsilon(\mathbf{s})\sim N(0,\tau^{2})\) is a white noise process, termed the nugget (see Banerjee et al. 2014, and references therein). The process parameters are \(\mathbf{\theta}=\{\mathbf{\beta},\sigma^{2},\phi,\tau^{2}\}\). More generally, we can consider a latent specification for responses arising from exponential families, \(\alpha(\mathbf{\eta}(\mathbf{s}))=\mathbf{x}^{\top}(\mathbf{s})\mathbf{\beta}+Z(\mathbf{s})+\epsilon(\mathbf{s})\), \(Z(\mathbf{s})\sim GP(0,K(\cdot;\sigma^{2},\phi))\) and \(Y(\mathbf{s})\sim\pi\left(\mathbf{\eta}(\mathbf{s}),\cdot\right)\), where \(\alpha\) is a monotonic link function, \(\pi\) is a member of the exponential family and \(\mathbf{\eta}\) is the natural parameter. Predictive inference on differential processes and curvature wombling proceeds on the latent surface through \(P(\mathcal{L}Z\,|\,\,\mathbf{Y})\). The joint posterior for differential processes is obtained through \(P(\nabla Z^{\top},vech(\nabla^{2}Z)^{\top}\,|\,\,\mathbf{Y})=\int P(\nabla Z^{\top},vech(\nabla^{2}Z)^{\top}\,|\,\,\mathbf{Z},\mathbf{\theta})P(\mathbf{Z}\,|\,\,\mathbf{Y},\mathbf{\theta})P(\mathbf{\theta}\,|\,\,\mathbf{Y})\,d\mathbf{\theta}\,d\mathbf{Z}\), while wombling measures \(\mathbf{\Gamma}_{Z}(t^{*})\) for a curve \(C_{t^{*}}\) within the estimated posterior surface for \(\mathbf{Z}\) are sampled from the posterior \(P(\mathbf{\Gamma}_{Z}(t^{*})\,|\,\,\mathbf{Y})=\int P(\mathbf{\Gamma}_{Z}(t^{*})\,|\,\,\mathbf{Z},\mathbf{\theta})P(\mathbf{Z}\,|\,\,\mathbf{Y},\mathbf{\theta})P(\mathbf{\theta}\,|\,\,\mathbf{Y})\,d\mathbf{\theta}\,d\mathbf{Z}\).
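The two integrals above are evaluated by composition sampling: each retained posterior draw of \(\{\mathbf{Z},\boldsymbol{\theta}\}\) is pushed through the corresponding conditional distribution, and the resulting draws are pooled. A minimal R sketch of that loop is given below; `draw_conditional` is a hypothetical stand-in for the relevant conditional (e.g., the analogue of (5)-(6) for \(Z\), or \(\boldsymbol{\Gamma}_{Z}(t^{*})\,|\,\mathbf{Z},\boldsymbol{\theta}\)), not a function from the accompanying package.

```r
# Minimal sketch of the composition-sampling step: one draw of the target quantity per
# retained posterior sample of {Z, theta}; pooling the draws approximates the marginal
# posterior predictive distribution written above.
composition_sample <- function(post.Z, post.theta, draw_conditional) {
  n.iter <- nrow(post.Z)                      # retained MCMC iterations
  draws  <- vector("list", n.iter)
  for (g in seq_len(n.iter))
    draws[[g]] <- draw_conditional(Z = post.Z[g, ], theta = post.theta[g, ])
  do.call(rbind, draws)                       # rows are posterior predictive draws
}
```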
Customary prior specifications for \(\mathbf{\theta}\) yield \[\begin{split} P(\mathbf{\theta},\mathbf{Z}\,|\,\,\mathbf{Y})\propto U(\phi\,|\,a_{\phi},b_{\phi})\times IG(\sigma^{2}\,|\,a_{\sigma},b_{\sigma})\times IG(\tau^{2}\,|\,a_{\tau},b_{\tau})\times\mathcal{N}_{L}(\mathbf{Z}\,|\,\,\mathbf{0},\sigma^{2}\mathbf{R}_{Z})\\ \times\mathcal{N}_{p}(\mathbf{\beta}\,|\,\mu_{\beta},\Sigma_{\beta})\times\prod_{l=1}^{L}\mathcal{N}_{1}\big{(}Y(\mathbf{s}_{l})\,|\,\,\mathbf{x}(\mathbf{s}_{l})^{\top}\mathbf{\beta}+Z(\mathbf{s}_{l}),\tau^{2}\big{)}\;,\end{split} \tag{14}\] where \(IG\) denotes the inverse-gamma distribution with a shape-rate parameterization, \(U\) is a uniform distribution and \(\mathbf{R}_{Z}\) is the correlation matrix corresponding to \(K(\cdot;\sigma^{2},\phi)\). The resulting full conditionals are \(\boldsymbol{\beta}\,|\ \tau^{2},\mathbf{Z},\mathbf{Y}\sim\mathcal{N}_{p}(M_{\beta}m_{\beta},M_{\beta})\), \(\sigma^{2}\,|\ \phi,\mathbf{Z}\sim IG(a_{\sigma}+\frac{L}{2},b_{\sigma}+\frac{1}{2}\mathbf{Z}^{\top}\mathbf{R}_{Z}^{-1}(\cdot;\phi)\mathbf{Z})\), \(\tau^{2}\,|\ \boldsymbol{\beta},\mathbf{Z},\mathbf{Y}\sim IG\left(a_{\tau}+\frac{L}{2},b_{\tau}+\frac{1}{2}||\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}-\mathbf{Z}||_{2}^{2}\right)\), and \(\mathbf{Z}\,|\ \mathbf{Y},\boldsymbol{\theta}\sim\mathcal{N}_{L}(M_{Z}m_{Z},\tau^{2}M_{Z})\), where \(\mathbf{X}\) is the \(L\times p\) matrix with \(\mathbf{x}(\mathbf{s}_{i})^{\top}\) as rows, \(M_{\beta}^{-1}=\Sigma_{\beta}^{-1}+\tau^{-2}\mathbf{X}^{\top}\mathbf{X}\), \(m_{\beta}=\Sigma_{\beta}^{-1}\mu_{\beta}+\tau^{-2}\mathbf{X}^{\top}(\mathbf{Y}-\mathbf{Z})\), \(M_{Z}^{-1}=\tau^{2}\big{(}\tau^{-2}I_{L}+\sigma^{-2}\mathbf{R}_{Z}^{-1}(\cdot;\phi)\big{)}\), and \(m_{Z}=\mathbf{Y}-\mathbf{X}\boldsymbol{\beta}\). \(\phi\) is updated using Metropolis steps with a normal proposal and an adaptive variance. Under this setup, posterior samples for the differential processes and wombling measures result from (5) and (6). For each posterior sample of \(\{\mathbf{Z},\boldsymbol{\theta}\}\), we draw \(\boldsymbol{\Gamma}_{Z}(t^{*})\,|\ \mathbf{Z},\boldsymbol{\theta}\sim\mathcal{N}_{2}\big{(}\mu_{\boldsymbol{\Gamma}_{Z}}(t^{*})-\boldsymbol{\gamma}_{\boldsymbol{\Gamma}_{Z}}^{\top}(t^{*})\Sigma_{\mathbf{Z}}^{-1}\mathbf{Z},K_{\boldsymbol{\Gamma}_{Z}}(t^{*},t^{*})-\boldsymbol{\gamma}_{\boldsymbol{\Gamma}_{Z}}^{\top}(t^{*})\Sigma_{\mathbf{Z}}^{-1}\boldsymbol{\gamma}_{\boldsymbol{\Gamma}_{Z}}(t^{*})\big{)}\), where \(\mu_{\boldsymbol{\Gamma}_{Z}}(t^{*})\), \(\boldsymbol{\gamma}_{\boldsymbol{\Gamma}_{Z}}(t^{*})\), and \(K_{\boldsymbol{\Gamma}_{Z}}(t^{*},t^{*})\) are computed from (10) and (11). Algorithms 1 and 2 in the Supplement, Section S4, present further details for posterior sampling. Next, we turn to numerical experiments and data analyses. Code for reproducing and emulating the analyses presented in the manuscript is written for the R statistical programming environment and is available for download in the public domain at [https://github.com/arh926/spWombling](https://github.com/arh926/spWombling).

## 5 Simulation Experiments

### Data generation

The proposed differential processes are not observed in reality, but are induced by an observed spatially indexed parent process. To evaluate statistical learning of the curvature process we perform simulation experiments within a setup where true values of the differential process and wombling measures are available.
We consider locations \(\mathbf{s}=(s_{1},s_{2})^{\top}\in\mathbb{R}^{2}\) over the unit square \([0,1]\times[0,1]\subset\mathbb{R}^{2}\). We generate synthetic data from two distributions: (a) Pattern 1: \(y_{1}(\mathbf{s})\sim N(10[\sin(3\pi s_{1})+\cos(3\pi s_{2})],\tau^{2})\); (b) Pattern 2: \(y_{2}(\mathbf{s})\sim N(10[\sin(3\pi s_{1})\cdot\cos(3\pi s_{2})],\tau^{2})\), where \(\tau^{2}=1\). Figure 1 presents spatial plots of the generated synthetic response from these patterns.

Figure 1: Spatial plots for synthetic patterns, from Pattern 1 (left) and Pattern 2 (right). Scales are shown in the legend alongside.

The rationale behind selecting these distributions is: (i) the synthetic data do not arise from the model in (13), which makes the assessment more realistic, and (ii) the true gradient and curvature can be computed at every location \(\mathbf{s}\). The synthetic patterns chosen feature two different scenarios that may arise. In the first pattern, the differentials along the principal directions \(\mathbf{e}_{1}=(1,0)^{\top}\) and \(\mathbf{e}_{2}=(0,1)^{\top}\) are functions of either \(s_{1}\) or \(s_{2}\) alone: \(\nabla\mu_{1}(\mathbf{s})=30\pi(\cos(3\pi s_{1}),-\sin(3\pi s_{2}))^{\top}\) and \(\nabla^{2}\mu_{1}(\mathbf{s})=-90\pi^{2}\text{diag}\{\sin(3\pi s_{1}),\cos(3\pi s_{2})\}\). The curvature along \(s_{1}\) does not influence the curvature along \(s_{2}\), since \(\left(\nabla^{2}\mu_{1}(\mathbf{s})\right)_{12}=0\) for all \(\mathbf{s}\). In the second pattern, \(\nabla\mu_{2}(\mathbf{s})=30\pi(\cos(3\pi s_{1})\cos(3\pi s_{2}),-\sin(3\pi s_{1})\sin(3\pi s_{2}))^{\top}\) and \(\nabla^{2}\mu_{2}(\mathbf{s})=-90\pi^{2}M(\mathbf{s})\), where \(M(\mathbf{s})\) is a \(2\times 2\) matrix with \(m_{11}=\sin(3\pi s_{1})\cos(3\pi s_{2})\), \(m_{12}=m_{21}=\cos(3\pi s_{1})\sin(3\pi s_{2})\) and \(m_{22}=\sin(3\pi s_{1})\cos(3\pi s_{2})\); here the differentials are functions of both \(s_{1}\) and \(s_{2}\), and \(\left(\nabla^{2}\mu_{2}(\mathbf{s})\right)_{12}\neq 0\) for some \(\mathbf{s}\). While setting up the experiments we vary \(L\in\{100,500,1000\}\) with 10 replicated instances under each setting.

### Bayesian model fitting

We fit the model in (14) with only an intercept, allowing the spatial process to learn the functional patterns in the synthetic response. We use the following hyper-parameter values in (14): \(a_{\phi}=3/\max||\Delta||\), \(b_{\phi}=30\), \(a_{\sigma}=2\), \(b_{\sigma}=1\), \(a_{\tau}=2\), \(b_{\tau}=0.1\), \(\mu_{\beta}=0\) and \(\Sigma_{\beta}=10^{6}I_{p}\). These choices comprise reasonable weakly informative priors. While a \(\text{Uniform}(2,3)\) prior on \(\nu\) can be specified (and was implemented as part of this experiment) to ensure the existence of the curvature process, here our choice of scales in the data generating patterns ensured that \(\nu=5/2\) provided the best model fit when compared with values of \(\nu\in\{1/2,3/2,5/2\}\). Hence, we present the results with \(\nu=5/2\). The parameter estimates for \(\mathbf{\theta}\) are computed using posterior medians and their highest posterior density (HPD) intervals (Chen & Shao, 1999, Plummer et al., 2015). For each replicate, we assess our ability to estimate the local geometry of the resulting posterior surface. For this we overlay a grid spanning the unit square. We perform posterior predictive inference for the differential processes at each grid location following Section 2.
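To make the assessment concrete, the following R sketch generates Pattern 1 data and evaluates the true differentials over an assessment grid; the sample size and grid resolution are illustrative choices rather than the exact configuration used in the experiments.

```r
# Minimal sketch: Pattern 1 data on the unit square and the true gradient/Hessian of
# its mean surface on an assessment grid, for comparison with posterior predictive draws.
set.seed(1)
L <- 100
S <- cbind(runif(L), runif(L))                                  # observed locations
y <- rnorm(L, mean = 10 * (sin(3 * pi * S[, 1]) + cos(3 * pi * S[, 2])), sd = 1)  # tau^2 = 1
grid <- as.matrix(expand.grid(seq(0, 1, length.out = 25), seq(0, 1, length.out = 25)))
true.grad <- cbind(30 * pi * cos(3 * pi * grid[, 1]),
                   -30 * pi * sin(3 * pi * grid[, 2]))
true.hess.vech <- cbind(-90 * pi^2 * sin(3 * pi * grid[, 1]),   # (1,1) entry
                        0,                                       # (1,2) = (2,1) entry
                        -90 * pi^2 * cos(3 * pi * grid[, 2]))    # (2,2) entry
```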
Posterior predictive medians (accompanied by 95% HPD intervals) summarize inference for the differential processes over the grid locations (Section 5.4 offers supplementary analysis).

### Bayesian wombling with curvature processes

For wombling with curvature processes, or _curvature wombling_, we focus on locating curves that track rapid change within the simulated random surfaces. For example, consider the surface produced by the first pattern. If a curve is provided to us, we can evaluate the posterior distribution of the average or total curvature wombling measures to assess their statistical significance. On the other hand, without a given curve, we consider three different approaches for constructing them from a boundary analysis or wombling perspective: (a) level curves, \(C_{y_{0}}=\{\mathbf{s}:Y(\mathbf{s})=y_{0}\}\): the Bayesian wombling literature finds that curves parallel to contours often form wombling boundaries (see, e.g., Banerjee & Gelfand, 2006), and level curves on a surface are parallel to local contours by definition; (b) smooth curves: produce a smooth curve using Bezier splines (see, e.g., Gallier & Gallier, 2000) from a set of _annotated_ points of interest within the surface; and (c) rectilinear curves: produce a rectilinear curve joining adjacent _annotated_ points of interest within the surface using straight lines, and perform curvature wombling using a Riemann sum approximation (see (S1) in the Supplement). Curves of types (b) and (c) allow the investigator to specify a region of interest that houses possible wombling boundaries. For the surface realization produced by Pattern 1, we consider four different types of curves on the response surface: (A) a closed curve enclosing a trough, corresponding to a level curve \(C_{y_{0}=-18}\); (B) a closed curve enclosing a peak, corresponding to a level curve \(C_{y_{0}=+18}\); (C) a closed curve that outlines a contour, corresponding to a level curve \(C_{y_{0}=+15}\); and (D) an open curve along a contour constructed using a Bezier spline. These curves are marked in Figure 2c. Curvature wombling is performed using the methods outlined in Section 3. Referring to the discussion on rectilinear approximation, for each curve, given a partition, we compute \(t_{i}\) and \(\mathbf{u}_{i}\). Combining the segments produces a vector \(\mathbf{t}\) and a matrix of directions, \(\mathbf{U}\), that represents the curve. Algorithm 2 in the Supplement, Section S4, devises efficient computation using \(\mathbf{t}\) and \(\mathbf{U}\). The total (and average) wombling measures \(\overline{\mathbf{\Gamma}}(C)\) are sampled from their posteriors using (12). For curves A, B, C and D, we use partitions with sufficiently small norms (\(|P|\)) to achieve accuracy (\(3.99\times 10^{-3}\), \(3.97\times 10^{-3}\), \(4.42\times 10^{-3}\) and \(2.66\times 10^{-2}\) respectively). One and two dimensional line integrals (refer to (10) and (11)) are computed via quadrature using grids of size 10 on \([0,t_{i}]\), and size 100 on \([0,t_{i}]\times[0,t_{i}]\), respectively, for \(i=1,2,\ldots,n_{P}\). The median of the sampled \(\overline{\mathbf{\Gamma}}(\widetilde{C}_{P})\) is our estimated wombling measure for the curve. Significance at the curve-segment level is assessed based on the inclusion of 0 within the HPD intervals.

Figure 2: (left) color coded directional gradients for segments; (center) color coded directional curvature for segments in the direction normal to the curve; (right) curves selected for performing curvature wombling. Green indicates positive significance, cyan indicates negative significance and white indicates no significance.

Our design allows us to compute true values of the average wombling measures for each rectilinear segment in the curve. They are computed using \(\mu_{\mathbf{\Gamma}}^{true}(\widetilde{C}_{P})=\left(\sum_{i=1}^{n_{P}}t_{i}\right)^{-1}\left(\sum_{i=1}^{n_{P}}\int_{0}^{t_{i}}\mathbf{u}_{i}^{\perp\top}\nabla\mu_{1}(\mathbf{s}(t))\,dt,\sum_{i=1}^{n_{P}}\int_{0}^{t_{i}}\mathbf{u}_{i}^{\perp\top}\nabla^{2}\mu_{1}(\mathbf{s}(t))\mathbf{u}_{i}^{\perp}\,dt\right)^{\top}\). We compute HPD intervals for the wombling measures at the segment level. Coverage probabilities (CPs) are then constructed by aggregating coverage of the true values by the HPD intervals over segments. Curve A encloses a trough and a local minimum of the surface, while B and C enclose peaks and local maxima (referring to corresponding locations in Figures S8c and S9c). Along all segments of A we expect negative gradients owing to the decreasing nature of the response in that region, while for B and C we expect positive gradients. Each of them would be expected to yield significant wombling measures for gradients. Referring to the Laplacian surface (see Supplement, Figures S8e and S9e), A, B and C are located in regions manifesting rapid change in the gradient surface, implying they should yield large positive (curve A) or negative (curves B and C) curvature, forming curvature wombling boundaries. These expectations are all aligned with our findings presented in Table 1, which presents measures of quality assessment for wombling. The magnitude and sign of the wombling measures also allow us to differentiate between the types of curvature for the different wombling boundaries. For instance, B is located in a region of higher convexity compared to C, while the nature of convexity for the regions enclosed by them differs from that of A. Plots in Figure 2 (left and center) show line segment level inference for the average wombling measures. Arrows indicate segments which were not significant with respect to gradient or curvature, while regions of significance are color coded.

\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Curves (\(C\))} & \multicolumn{2}{c}{Average Gradient (\(\Gamma^{(1)}(C)\))} & \multicolumn{2}{c}{Average Curvature (\(\Gamma^{(2)}(C)\))} \\ \cline{2-4} & True & Estimated & True & Estimated \\ \hline Curve A & -61.54 & -64.97 & 731.94 & (599.30, 913.70) \\ & & 49.19 & & -850.84 \\ Curve B & 40.85 & (20.45, 73.12) & -808.04 & (-1066.98, -630.09) \\ Curve C & 84.03 & 85.65 & -504.98 & -504.98 \\ & (59.81, 109.97) & -558.58 & (-767.55, -241.61) \\ Curve D & -110.84 & -113.27 & **-94.64** \\ & & (-153.23, -77.01) & 11.32 & **(-386.94, 233.78)** \\ \hline \hline \end{tabular} \end{table} Table 1: Results from curvature wombling performed on curves A, B, C and D as shown in Figure 2. The estimated average directional gradient and curvature are accompanied by their respective HPD intervals in brackets. HPD intervals _containing 0_ are marked in bold.

D is located in a "relatively flat" region of the surface (see Figures S8e and S9e) and is expected to have gradients but no curvature, which aligns with the results shown in Table 1. We conclude by noting that the true values, \(\mu_{\mathbf{\Gamma}}^{\text{true}}(C)\), of the wombling measures for the curves considered are all covered by the estimated HPD intervals for the respective curves.
Additionally, at the line segment level we achieved a CP of 1.0 across all curves. ### Supplementary analysis We present additional results in the online supplement. Tables S1 and S2 present parameter estimates, measures of goodness of fit for the fitted process, and assessment of derivative process characteristics for each pattern considered. We compute root mean square errors (RMSE) across observed locations averaged over 10 replicates for each sample size setting for the fitted process \(\widehat{Y}(\mathbf{s})=\widehat{\beta}_{0}+\widehat{Z}(\mathbf{s})\), and \(\widehat{\nabla Y(\mathbf{s})}\), \(vec\widehat{(\nabla^{2}Y(\mathbf{s})})\). We report standard deviations across replicates. With increasing number of observed locations we are able to effectively learn the underlying process and induced differential processes. Figures S4, S5, S6 and S7 present spatial plots of posterior medians of gradient and curvature processes, for \(L=100\) locations. These plots demonstrate the effectiveness of our methods in learning about the differential processes from the underlying patterns. Similarly plots shown in Figures S8, S9, S10 and S11 demonstrate the same for derived quantities and operators of \(\mathcal{L}Y(\mathbf{s})\)--principal curvature (eigenvalues), Gaussian curvature (determinant) (see, e.g., Spivak 1999, Do Carmo 2016), divergence and Laplacian, which pertain to geometric analysis of curvature for the random surface resulting from the underlying patterns. Statistical significance is assessed at every grid point by checking the inclusion of 0 in their HPD intervals. Significantly positive (negative) points are color coded. We compute average CPs at every grid location to measure the accuracy of our assessment. These CPs are then averaged over replicates. We observed high CPs across the grid for parent and differential processes. Figures S12 and S13 compare observed against estimated differential processes coupled with their HPD regions. ## 6 Applications Frameworks developed for differential assessment and boundary analysis in spatially indexed response are applied to multiple data sets with the aim of locating curvature wombling boundaries that track rapid change in response. The chosen data arise from varied areas of scientific interest, we briefly describe the origin and significance of each with respect to our methods before performing our analysis. Response is modeled using the hierarchical model in (13). Prior specifications used in (14) are, \(\phi\sim\text{Unif}\left(3/\max_{\mathbf{s}\in\mathcal{S}}||\Delta||,300\right)\), \(\sigma^{2}\sim IG(2,1)\), \(\tau^{2}\sim IG(2,1)\) (mean 1, infinite variance), \(\mathbf{\beta}\sim N(0,10^{6}I_{p})\), \(p\) being the number of covariates and \(\nu=5/2\) for the Matern kernel ensuring existence of the differential processes. _Boston Housing:_ The Boston housing data (see, e.g., Harrison Jr & Rubinfeld 1978) was collected by the United States Census Service featuring median house prices for tracts and towns in Boston, Massachusetts area. The purpose was to study heterogeneity in the market caused by the need for residents to have clean air. To study such heterogeneity, modern equitable housing policies are incorporating statistical modeling to quantify such behavior. Often they are a result of unobserved effects of rapidly shifting socioeconomic conditions (see, e.g., Hu et al. 2019). Within a spatial map this manifests as neighboring regions of disparity. 
Figure 3 shows two such regions: high-priced areas including Downtown Boston, Cambridge, Newton, Wellesley, Brookline, etc., and low-priced areas including the South and East End. For effective policy implementation, identifying such regions becomes crucial.

Figure 3: Plots showing (left) the probability density of median house prices (in USD 1000) and (right) a spatial plot of median owner occupied house prices in Boston.

Spatial variation in the median house prices is evidenced in Figure 3 (right). Curvature wombling effected on the house price surface would locate regions that feature such change. The data contain median house price values for 506 census-tracts along with demographic data. Latitude-longitude centers of the census-tracts are used for spatial referencing. To allow \(Z(\mathbf{s})\) to capture all the spatial variation, we include only an intercept in the model. Table 2 shows posterior estimates and HPD intervals for the process parameters.

\begin{table} \begin{tabular}{l|c c} \hline \hline Parameters (\(\boldsymbol{\theta}\)) & Posterior Estimates (\(\widehat{\boldsymbol{\theta}}\)) & HPD \\ \hline \(\phi\) & 0.96 & (0.83, 1.11) \\ \(\sigma^{2}\) & 55.18 & (43.91, 68.06) \\ \(\tau^{2}\) & 14.89 & (11.77, 18.71) \\ \(\beta_{0}\) & 25.58 & (24.29, 27.34) \\ \hline \hline \end{tabular} \end{table} Table 2: Posterior estimates from the hierarchical linear model in (13) fitted to the Boston housing data.

We observe that \(\frac{\sigma^{2}}{\sigma^{2}+\tau^{2}}\approx 78.75\%\), so a large portion of the total variance is explained by spatial variation. The modeled spatial variation in the response is shown in Figure 4 (left). Significance for the estimate, \(\widehat{Z}(\mathbf{s})\), is assessed using the inclusion of \(0\) in its posterior HPD. Using posterior samples we estimate the derivative processes for \(Z(\mathbf{s})\). A grid, \(\mathcal{G}=\{\mathbf{s}_{g}:\mathbf{s}_{g}\in\mathtt{convex}-\mathtt{hull}(\mathcal{S})\}\), containing \(1229\) equally spaced locations, is overlaid over the region for this purpose. To effect posterior surface analysis on the estimated surface we use the posterior predictive distributions of \(\text{div}(Z)\) and \(\Delta(Z)\), revealing zones that manifest rapid change in the response and its gradients, respectively. These are shown in Figure 4 (center and right).

Figure 4: Plots (left to right) showing the fitted process, divergence and Laplacian for the median house price surface.

Next, we focus on performing curvature wombling on the estimated surface. Strategic posterior surface analysis is used to locate level-sets of interest within the surface that could possibly contain wombling boundaries. We start with contours shown in Figure 5 (left column). Boundary 1 (2) bounds a region where the fitted process has positive (negative) significant estimates. Evidently, the chosen curves should house significant gradients along most segments, but significant curvature should only be detected for segments located at the center (lat-long: \((42.18,42.23)\times(-71.05,-70.05)\)) of the surface in Figure 4 (center and right). Estimated average wombling measures for these curves are shown in Table 3. Figure 5 (center and right) corresponds to segment level posterior inference for the curves; line segments with significant directional differentials are indicated in bold. Summarizing, we observe that the gradient, curvature and posterior surface analysis allow us to highlight towns (with census-tracts) within Boston that exhibit heterogeneity in prices.
Curvature wombling performed on the surface allows us to delineate zones that house such heterogeneity. For instance, towns located within boundaries 3 (South and East End) and 6 (Newton and Brookline) show significant change in price gradients, compared to towns within boundaries 4 (Lincoln and Weston) and 5 (Wellesley and Dover). These findings can be verified by referring back to price dynamics for real estate in Boston during 1978 (see, e.g., Schnare and Struyk 1976). The same regions are scrutinized for studying segmentation--towns within curves 1 and 3 are accessible to lower income groups willing to sacrifice air quality.

Figure 5: Curvature wombling on the Boston Housing Data.

\begin{table} \begin{tabular}{l|c c} \hline \hline Curve (\(C\)) & Average Gradient (\(\overline{\Gamma}^{(1)}(C)\)) & Average Curvature (\(\overline{\Gamma}^{(2)}(C)\)) \\ \hline Boundary 1 & -8.91 & 10.14 \\ & (-11.31, -6.65) & (2.84, 18.34) \\ Boundary 2 & 6.18 & **-0.09** \\ & (4.75, 7.49) & **(-3.45, 3.35)** \\ \hline Boundary 3 & -6.47 & 12.69 \\ & (-9.74, -3.27) & (2.65, 22.48) \\ Boundary 4 & 6.92 & **1.26** \\ & (4.63, 9.19) & **(-5.04, 7.14)** \\ Boundary 5 & 5.47 & **1.36** \\ & (2.95, 7.86) & **(-4.33, 7.42)** \\ Boundary 6 & 11.82 & -16.27 \\ & (7.28, 16.14) & (-26.68,-6.57) \\ \hline \hline \end{tabular} \end{table} Table 3: Curvature wombling measures for boundaries in Boston housing accompanied by corresponding HPD intervals in brackets below. Estimates corresponding to HPD intervals containing 0 are marked in bold.

_Meuse River Data:_ The Meuse river data are described in Pebesma et al. (2012). The data provide locations of topsoil heavy metal concentrations, along with soil and landscape variables at the observed locations, collected in a flood plain of the river Meuse, near the village of Stein, Netherlands. The heavy metal concentrations recorded include Cadmium (Cd), Copper (Cu), Lead (Pb) and Zinc (Zn). A distinguishing feature is the naturally occurring boundary--the Meuse. From a boundary analysis standpoint we are interested in examining differentials in heavy metal concentrations along the flood plain of the river to understand the heterogeneous effect of the river on the topsoil. The soils of the floodplain are commonly used for agriculture. Crops grown on the floodplain of the Meuse may be consumed by humans and/or livestock. The spatial variation in heavy metal concentration can be seen in Figure 6. The path of the Meuse river is shown in each of the spatial plots. Evidently, the heavy metal concentrations decrease with increasing distance from the river. We model the concentrations as independent Gaussian processes. Covariates used are relative elevation above the local river bed (elev, measured in meters), organic matter (om, measured in kg/(100 kg) of soil), distance to the Meuse (dist), frequency of flooding (ffreq), soil type (soil), and lime content in the soil (lime), giving \(p=9\) coefficients including the intercept. Table 4 shows the posterior estimates of process parameters and model coefficients \(\mathbf{\beta}\) for each of the heavy metals in question. We observe that \(\sigma^{2}/(\sigma^{2}+\tau^{2})\approx 62.45\%\), 99.79%, 52.09%, 62.29% for Cd, Cu, Pb and Zn respectively, indicating that larger portions of the total variation are explained by spatial heterogeneity, except for Pb. Variation in Cd and Zn concentration is significantly affected by elevation, organic matter and flooding frequency, while variation in Cu and Pb concentration is significantly affected by elevation, organic matter, flooding frequency and lime content.
The estimated residual surface is shown in Figure 7 (left) for Cd concentrations. We observe significant positive gradients, with curvature that varies across segments of the river bed, for all heavy metals. We perform curvature wombling on the Meuse using the residual surface, \(\mathbf{Z}\). The results of curvature wombling for cadmium are shown in Figure 7. Results and plots for the other metals can be found in the Supplement, Section S7, Figure S14. The accompanying wombling measures are shown in Table 5. We observe sufficient heterogeneity in the signs of the wombling measures, yielding contiguous positive (negative) segments. For example, for Cd concentration, boundaries located for average gradients in the northern and southern regions are positive, as opposed to boundaries located in the northwestern region. Therefore, while displaying the wombling measures in Table 5, we separate them by their sign.

Figure 6: Plots showing heavy metal concentrations in the topsoil of a flood plain at 155 locations for (from left to right) Cadmium (Cd), Copper (Cu), Lead (Pb) and Zinc (Zn) (in mg/kg of soil).

We conclude that the effects of the river Meuse on regions of the flood plain exhibit significant heterogeneity when considered across heavy metals. Compared to the other metals, Pb concentrations are limited to the northern regions of the flood plain. Concentrations of Cd and Zn along the river are similar. Compared to the northern region, in the northwestern region Zn concentrations decrease significantly as we move inland. Studies corroborating such evidence can be found in Leenaers et al. (1988) and Albering et al. (1999).

## 7 Discussion and Future Work

We developed a fully model-based Bayesian inferential framework for differential process assessment and curvature-based boundary analysis for spatial processes.
Introducing the directional curvature process and its associated inferential framework supplements the directional gradients with inference for their rates of change, while its induction into the folds of Bayesian curvilinear wombling allows for further characterization of difference boundaries. Adopting a Bayesian hierarchical model allows for Gaussian calibration when characterizing points, regions and boundaries within a surface. This framework is widely applicable; our applications arise from selected disciplines indicating the utilities of mapping curvature process boundaries to understand spatial data generating patterns. Substantive case studies will be reported separately. Several avenues hold scope for future developments.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline Parameters (\(\boldsymbol{\theta}\)) & Cadmium (Cd) & Copper (Cu) & Lead (Pb) & Zinc (Zn) \\ \hline \(\phi\) & 0.0379 & 0.1138 & 0.0399 & 0.0472 \\ & (0.0207, 0.0618) & (0.0871, 0.1471) & (0.0131, 0.1900) & (0.0230, 0.0744) \\ \(\sigma^{2}\) & 2.9566 & 3.2044 & 0.9303 & 38.3538 \\ & (1.2803, 5.2227) & (2.3955, 4.0892) & (0.2763, 1.7641) & (16.7815, 65.1450) \\ \(\tau^{2}\) & 1.7771 & 0.0067 & 0.8555 & 23.2226 \\ & (0.9107, 2.6328) & (0.0012, 0.0244) & (0.0010, 1.2280) & (9.3743, 35.6867) \\ \hline Intercept & 9.4973 & 4.8503 & 6.1120 & 37.1315 \\ & (5.9750, 13.3704) & (3.1392, 6.8308) & (3.4615, 8.1910) & (25.0903, 53.0870) \\ elev & -0.7672 & -0.4065 & -0.5413 & -2.8781 \\ & (-1.2531, -0.3574) & (-0.7418, -0.1656) & (-0.7853, -0.1442) & (-4.7805, -1.2834) \\ om & -0.4011 & 0.4293 & 0.3434 & 0.8606 \\ & (0.2616, 0.5233) & (0.3276, 0.4728) & (0.2490, 0.4253) & (0.3681, 1.3166) \\ dist & **-0.0033** & **-0.0025** & **-0.0011** & **-0.0081** \\ & (-0.0061, 0.0000) & (-0.0043, -0.0014) & (-0.0029, 0.0006) & (-0.0197, 0.0038) \\ ffreq (=2) & -1.4176 & -2.4727 & -0.8483 & -4.3182 \\ & (-2.3202, -0.3432) & (-3.1794, -1.6716) & (-1.6109, -0.2598) & (-7.9184, -0.6220) \\ ffreq (=3) & **-0.7322** & -1.4298 & **-0.1865** & **-3.3159** \\ & (-2.0520, 0.6248) & (-2.4443, -0.5157) & (-1.2972, 0.6861) & (-7.9307, 1.9128) \\ soil (=2) & **-0.3337** & **0.2236** & **0.5988** & **-2.2213** \\ & (-1.4661, 0.7491) & (-0.7248, 0.9799) & (-0.0345, 1.2956) & (-6.1446, 2.0831) \\ soil (=3) & **-0.3884** & **0.6344** & **0.3707** & **-2.9922** \\ & (-2.0891, 1.2628) & (-0.2309, 1.8474) & (-0.7108, 1.4029) & (-9.0918, 3.6289) \\ lime (=1) & **0.5752** & 1.3223 & 0.7759 & **-0.4759** \\ & (-0.3509, 1.4341) & (0.7152, 1.9427) & (0.1173, 1.4645) & (-3.9057, 2.6510) \\ \hline \hline \end{tabular} \end{table} Table 4: Posterior estimates of process parameters and covariates for the Meuse river data accompanied by their corresponding HPD intervals in brackets below. Effects with HPDs containing 0 are marked in bold.
A more generalized theoretical framework can be developed for studying the joint behavior of the principal curvature (direction of maximum (or minimum) curvature) and the aspect (direction of maximum gradient) (see, e.g., Wang et al. 2018) leveraging dependent circular uniform distributions (see, e.g., Kent et al. 2008).

\begin{table} \begin{tabular}{l|c c c c} \hline \hline \multicolumn{1}{c|}{Wombling Measures} & Cd & Cu & Pb & Zn \\ \hline \(\overline{\Gamma}^{(1)}(>0)\) & 0.0510 & 0.1273 & 0.0375 & 0.1984 \\ & (0.0298, 0.07401) & (0.0913, 0.1729) & (0.0019, 0.1561) & (0.0876, 0.3162) \\ \(\overline{\Gamma}^{(1)}(<0)\) & -0.0400 & -0.2561 & – & -0.1890 \\ & (-0.0635, -0.0170) & (-0.3187, -0.1997) & – & (-0.2967, -0.0669) \\ \(\overline{\Gamma}^{(2)}(>0)\) & 0.0074 & – & – & 0.0422 \\ & (0.0019, 0.0158) & – & – & (0.0111, 0.0879) \\ \(\overline{\Gamma}^{(2)}(<0)\) & -0.0078 & -0.1247 & -0.0039 & -0.0473 \\ & (-0.0223, -0.0024) & (-0.1979, -0.0860) & (-0.1076, -0.0006) & (-0.1114, -0.0095) \\ \hline \hline \end{tabular} \end{table} Table 5: Curvature wombling measures for the Meuse, separated by zones of positive and negative signs; they are accompanied by their corresponding HPD intervals in brackets below.

Figure 7: Plots showing results for curvature wombling on the Meuse river for Cadmium (Cd) concentration: (left) the resulting fitted process, (center) the contiguous segments that display significant gradients, and (right) the contiguous segments with significant curvature.

We offer some brief remarks. To obtain the direction of maximum curvature for a spatial surface, we solve \(\max\limits_{\mathbf{u}\in\mathbb{R}^{2}}\left|\mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{u}\right|\), such that \(||\mathbf{u}||=1\), at an arbitrary point \(\mathbf{s}\). Using Lagrange multipliers and denoting \(\kappa(\mathbf{u})=|\mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{u}|\), define \(\mathcal{O}(\mathbf{u})=\kappa(\mathbf{u})-\lambda(||\mathbf{u}||^{2}-1)\); hence \(\partial\mathcal{O}(\mathbf{u})/\partial u_{i}=\kappa(\mathbf{u})^{-1}(\nabla_{ii}^{2}Y(\mathbf{s})u_{i}+\nabla_{ij}^{2}Y(\mathbf{s})u_{j})-\lambda u_{i}=0\), \(i,j=1,2\). With \(u_{2}/u_{1}=\tan\theta_{pc}\), eliminating \(\lambda\) we get \(\tan\theta_{pc}=\frac{\nabla_{22}^{2}Y(\mathbf{s})\tan\theta_{pc}+\nabla_{12}^{2}Y(\mathbf{s})}{\nabla_{11}^{2}Y(\mathbf{s})+\nabla_{12}^{2}Y(\mathbf{s})\tan\theta_{pc}}\). Defining \(h_{1}=h_{1}(\mathbf{s})=(\nabla_{11}^{2}Y(\mathbf{s})-\nabla_{22}^{2}Y(\mathbf{s}))/\nabla_{12}^{2}Y(\mathbf{s})\), provided \(\nabla_{12}^{2}Y(\mathbf{s})\neq 0\), and solving, we obtain \(\theta_{pc}=\tan^{-1}\frac{1}{2}\left[-h_{1}\pm\sqrt{h_{1}^{2}+4}\right]\) (a short numerical check is sketched below). If \(\nabla_{12}^{2}Y(\mathbf{s})=0\) then \(\nabla^{2}Y(\mathbf{s})\) is diagonal and \(\theta_{pc}\) corresponds to the direction of \(\max\{\nabla_{11}^{2}Y(\mathbf{s}),\nabla_{22}^{2}Y(\mathbf{s})\}\). We propose that \(\Theta=(\theta_{asp},\theta_{pc})^{\top}\) follows a dependent circular uniform distribution over \([0,2\pi]\times[0,2\pi]\). Further developments with circular regression methods can proceed to examine the effect of covariates on \(\Theta\). Multivariate extensions would involve formulating these differential processes on arbitrary manifolds. This requires simulating a Gaussian process on manifolds and inspecting the covariant derivative. Bayesian curvilinear wombling could then be implemented on curves of interest to the investigator.
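As the short numerical check promised above, the R sketch below evaluates both roots of the \(\theta_{pc}\) expression for a hypothetical Hessian and compares the selected direction with the eigenvector of the Hessian having the largest absolute eigenvalue.

```r
# Quick check of theta_pc: tan(theta) = (-h1 +/- sqrt(h1^2 + 4)) / 2, with h1 as defined above.
H  <- matrix(c(-2.0, 0.7, 0.7, 1.3), 2, 2)             # hypothetical Hessian, nabla^2 Y(s)
h1 <- (H[1, 1] - H[2, 2]) / H[1, 2]
theta <- atan((-h1 + c(-1, 1) * sqrt(h1^2 + 4)) / 2)   # the two candidate roots
U  <- rbind(cos(theta), sin(theta))                    # candidate unit directions (columns)
curv <- abs(diag(t(U) %*% H %*% U))                    # |u' H u| for each candidate
theta_pc <- theta[which.max(curv)]
e <- eigen(H)                                          # dominant eigen-direction of H
v <- e$vectors[, which.max(abs(e$values))]
c(theta_pc = theta_pc, eigen_direction = atan(v[2] / v[1]))   # the two angles agree
```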
This would not only involve an inferential framework for normal curvature, but also geodesic curvature for such curves. Spatiotemporal curvature processes can build upon Quick et al. (2015) to study evolutionary behavior of the curvature processes with respect to variations in the response across time. Finally, we remark that while there have been substantial recent developments in scalable spatial processes for massive data sets--a comprehensive review is beyond the scope of the current article (see, e.g., Heaton et al.2019)--not all scalable processes admit the correct degree of smoothness for curvature processes to exist. Constructing scalable processes for curvilinear wombling, and subsequent inference, remains a problem of interest in the wombling community. ## Supplementary Materials The following supplement includes additional theoretical derivations, computing details, additional simulation experiments and wombling for Northeastern US temperatures. **Supplementary Materials for** _"Bayesian Modeling with Spatial Curvature Processes"_ Aritra Halder\({}^{a}\), Sudipto Banerjee\({}^{b}\) and Dipak K. Dey\({}^{c}\) \({}^{a}\)Department of Biostatistics, Drexel University, Philadelphia, PA, USA. \({}^{b}\)Department of Biostatistics, University of California, Los Angeles, CA, USA. \({}^{c}\)Department of Statistics, University of Connecticut, Storrs, CT, USA. ## 8 Review of Directional Gradients and Wombling ### Directional Gradients For the scalar \(h\) and unit vector \(\mathbf{u}\) we define \(Y_{\mathbf{u},h}^{(1)}=\left(Y(\mathbf{s}+h\mathbf{u})-Y(\mathbf{s})\right)/h\) to be the first order finite difference processes at location \(\mathbf{s}\) in the directions of \(\mathbf{u}\). Being a linear function of stationary processes this is well-defined. Passing to limits, we define \(D_{\mathbf{u}}^{(1)}Y(\mathbf{s})=\lim_{h\to 0}Y_{\mathbf{u},h}^{(1)}( \mathbf{s})\). Provided the limit exist, \(D_{\mathbf{u}}^{(1)}Y(\mathbf{s})\) is defined as the directional gradient process. If \(Y(\mathbf{s})\) is a mean square differentiable process in \(\mathbb{R}^{d}\) for every \(\mathbf{s}_{0}\in\mathbb{R}^{d}\) then \(D_{\mathbf{u}}^{(1)}Y(\mathbf{s})=\mathbf{u}^{\top}\nabla Y(\mathbf{s})\). Then, if \(\mathbf{u}=\sum_{i=1}^{d}u_{i}\mathbf{e}_{i}\), we can compute \(D_{\mathbf{u}}^{(1)}Y(\mathbf{s})=\sum_{i=1}^{d}u_{i}D_{\mathbf{e}_{i}}^{(1)}Y (\mathbf{s})\). The directional gradient process is linear in \(\mathbf{u}\), hence \(D_{-\mathbf{u}}^{(1)}Y(\mathbf{s})=-D_{\mathbf{u}}^{(1)}Y(\mathbf{s})\) and for any vector \(\mathbf{w}=||\mathbf{w}||\mathbf{u}\), \(D_{\mathbf{w}}^{(1)}Y(\mathbf{s})=||\mathbf{w}||D_{\mathbf{u}}^{(1)}Y(\mathbf{ s})\). The directional gradient at \(\mathbf{s}_{0}\) in the direction \(\mathbf{w}\) is the slope at \(\mathbf{s}_{0}\) of the curve traced out by slicing \(Y(\mathbf{s})\) in the direction \(\mathbf{w}\) (see e.g., Banerjee & Gelfand 2006, Section 2, for more details). ### Wombling Measures Wombling measures constructed from (7) for total and average gradient are associated with curves to characterize the magnitude of change. To each point \(\mathbf{s}\in C\), a directional gradient is associated, \(g\left(\mathcal{L}Y(\mathbf{s})\right)=D_{\mathbf{n}}^{(1)}Y(\mathbf{s})= \mathbf{n}(\mathbf{s})^{\top}\nabla Y(\mathbf{s})\) (also a linear function of \(\mathcal{L}_{\mathbf{n}}Y(\mathbf{s})\)), along the direction of a unit normal \(\mathbf{u}=\mathbf{n}(\mathbf{s})\) to the curve. 
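As a small illustration of these measures, the R sketch below evaluates the total gradient wombling measure \(\Gamma^{(1)}(C)\) for a known test surface along the quarter-circle arc used in Section 3; the surface \(Y(\mathbf{s})=s_{1}^{2}+s_{2}^{2}\) is purely illustrative and is not part of the analyses above.

```r
# Illustrative only: Gamma^(1)(C) = int_C n(s)' grad Y(s) dl, evaluated by quadrature in t
# along the quarter-circle arc s(t) = (r cos t, r sin t), t in [0, pi/4], with r = 1.
Y.grad <- function(s) c(2 * s[1], 2 * s[2])           # gradient of Y(s) = s1^2 + s2^2
r   <- 1
s.t <- function(t) c(r * cos(t), r * sin(t))          # the parameterized arc s(t)
n.t <- function(t) c(cos(t), sin(t))                  # unit normal (s2', -s1')/||s'||
integrand <- function(t)
  sapply(t, function(tt) sum(n.t(tt) * Y.grad(s.t(tt))) * r)   # ||s'(t)|| = r
integrate(integrand, lower = 0, upper = pi / 4)$value  # equals 2 * r^2 * pi / 4 here
```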
For a curve tracking rapid change in the surface, choice of the normal direction to a curve is motivated by sharp directional gradients orthogonal to the curve; \(\ell\) is chosen to be the arc-length measure. The rationale behind this choice is to measure change in response with respect to distance traversed on the curve. With reference to (7) the total and average gradients are, \(\Gamma^{(1)}(C)=\int_{t_{0}}^{t_{1}}\nabla Y(\mathbf{s}(t))^{\top}\mathbf{n}( \mathbf{s}(t))||\mathbf{s}^{\prime}(t)||dt\) and \(\overline{\Gamma}^{(1)}(C)=\Gamma^{(1)}(C)/\ell(C)\) respectively (see e.g., Banerjee & Gelfand 2006, Section 3, for more details). ## 9 Interpretation of Spatial Curvature Pursuits in geospatial analysis generally encounter surfaces which have canonical coordinate systems (e.g. latitude-longitude, easting-northing etc.). This facilitates a parameterization for the surface that leverages the coordinate system, commonly known as the Monge parameterization (also called a Monge patch, named after Gaspard Monge, see e.g., O'Neill 2006, Pressley 2010)--a surface, \(S\), embedded in \(\mathbb{R}^{3}\), is parameterized by giving its height \(Y\) over some plane as a function of the orthonormal co-ordinates \(s_{1}\) and \(s_{2}\) in the plane, \(S=\{\mathcal{S}\subset\mathbb{R}^{2}\mapsto\mathbb{R}^{3}:\mathbf{s}=(s_{1},s_ {2})\mapsto Y(s_{1},s_{2})=Y(\mathbf{s})\}\). A point is then, \((\mathbf{s},Y(\mathbf{s}))=(s_{1},s_{2},Y(s_{1},s_{2}))\). The two tangent vectors at \(\mathbf{s}\) are, \(\mathbf{E}_{1}(\mathbf{s})=(1,0,\nabla_{1}Y(\mathbf{s}))^{\top}\) and \(\mathbf{E}_{2}(\mathbf{s})=(0,1,\nabla_{2}Y(\mathbf{s}))^{\top}\), where \(\nabla_{i}Y(\mathbf{s})=\dfrac{\partial}{\partial\mathbf{s}_{i}}Y( \mathbf{s}),\ i=1,2\). Let \(\nabla Y(\mathbf{s})=(\nabla_{1}Y(\mathbf{s}),\nabla_{2}Y(\mathbf{s}))^{\top}\) denote the gradient vector, consider a unit direction vector, \(\mathbf{u}=(u_{1},u_{2})^{\top}\in S\subset\mathbb{R}^{2}\), then \(u_{1}\mathbf{E}_{1}(\mathbf{s})+u_{2}\mathbf{E}_{2}(\mathbf{s})=(\mathbf{u}^ {\top},\mathbf{u}^{\top}\nabla Y(\mathbf{s}))^{\top}\in T_{S}(\mathbf{s})\) corresponds to the _directional derivative_ of \(Y\) along the direction \(\mathbf{u}\), where \(T_{S}(\mathbf{s})\) is the local tangent plane at \(\mathbf{s}\), that is gen erated by \(\{\mathbf{E}_{1}(\mathbf{s}),\mathbf{E}_{2}(\mathbf{s})\}\). The outward pointing normal to the surface \(S\), denoted by \(\mathbf{N}(\mathbf{s})=\mathbf{E}_{1}(\mathbf{s})\times\mathbf{E}_{2}(\mathbf{ s})=(-\nabla_{1}Y(\mathbf{s}),-\nabla_{2}Y(\mathbf{s}),1)^{\top}\), where \(\times\) denotes the usual cross-product of vectors. Evidently, \(\mathbf{N}(\mathbf{s})\) is orthogonal to the local tangent plane at that point, \(T_{S}(\mathbf{s})\). Quantifying the local geometry of a surface, we are interested in how \(\mathbf{N}(\mathbf{s})\) changes ("tips") as we move in the direction \(\mathbf{u}\) from the point \(\mathbf{s}\) on the surface--derivatives for \(\mathbf{N}(\mathbf{s})\) at the point \(\mathbf{s}\), which lie in \(T_{S}(\mathbf{s})\). This is quantified by the _normal curvature_ of \(S\) along a direction \(\mathbf{u}\). Before defining normal curvature for a surface, we digress briefly to investigate effects of surface curvature on curves--for a curve parameterized by \(t\), \(C=\{\mathbf{s}(t)=(s_{1}(t),s_{2}(t)):t\in[a,b]\}\), passing through \(\mathbf{s}\), if the curvature of \(C\) is, \(\kappa\), the tangent to \(C\) at \(\mathbf{s}\), \(\mathbf{t}(\mathbf{s})\), and the principal unit normal, i.e. 
the normal to \(C\) on \(S\). Then \(\mathbf{n}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})=\cos(\theta)\) and \(\mathbf{t}^{\prime}(\mathbf{s})=\kappa\mathbf{n}(\mathbf{s})\), which implies \(\kappa\cos(\theta)=\mathbf{t}^{\prime}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})\). We observe that,

\[\mathbf{t}^{\prime}(\mathbf{s}(t))=\frac{\partial^{2}}{\partial t^{2}}\left(\mathbf{s}(t),Y(\mathbf{s}(t))\right)^{\top}=\frac{\partial}{\partial t}\mathbf{E}_{i}(\mathbf{s}(t))s^{\prime}_{i}(t)=\mathbf{E}_{ij}(\mathbf{s}(t))s^{\prime}_{i}(t)s^{\prime}_{j}(t)+\mathbf{E}_{i}(\mathbf{s}(t))s^{\prime\prime}_{i}(t)\;,\]

and since \(\mathbf{E}_{i}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})=0\), \(\kappa\cos(\theta)=\mathbf{t}^{\prime}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})=(\mathbf{E}_{ij}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s}))\partial s_{i}\partial s_{j}\), \(i,j=1,2\), where the expression in the parentheses is a property of the surface, independent of the curve \(C\), and is defined as the _second fundamental form_,

\[\Pi(\mathbf{s})=\begin{pmatrix}\mathbf{E}_{11}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})&\mathbf{E}_{12}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})\\ \mathbf{E}_{21}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})&\mathbf{E}_{22}(\mathbf{s})\cdot\mathbf{N}(\mathbf{s})\end{pmatrix}=\begin{pmatrix}\nabla_{11}^{2}Y(\mathbf{s})&\nabla_{12}^{2}Y(\mathbf{s})\\ \nabla_{21}^{2}Y(\mathbf{s})&\nabla_{22}^{2}Y(\mathbf{s})\end{pmatrix}=\nabla^{2}Y(\mathbf{s})\;,\]

where \(\mathbf{E}_{ij}\) and \(\nabla_{ij}^{2}Y\) denote the partial derivatives of \(\mathbf{E}\) and \(Y\) with respect to \(s_{i},s_{j}\) respectively, \(\cdot\) is the usual dot product for vectors, and \(\nabla_{12}^{2}Y=\nabla_{21}^{2}Y\). The second to last equality is obtained under the Monge parameterization. The second fundamental form is invariant with respect to transformations of the local coordinates which preserve the sense of \(\mathbf{N}\), i.e. transformations that do not change an outward (inward) pointing normal to an inward (outward) pointing normal for \(S\). Surfaces admitting such a consistent choice of normal are termed _orientable surfaces_; the Möbius strip is an example of a non-orientable surface. The individual terms of \(\Pi\) quantify the local geometry of a surface (or curvature) along orthonormal coordinates. The curvature of \(C\) can be attributed to (a) the curvature of the curve itself, and (b) the curvature of the surface on which \(C\) lies. Here \(\kappa\) is the curvature of \(C\), termed the _geodesic curvature_. The curvature of the surface is termed the _normal curvature_; computed along a direction \(\mathbf{u}=\mathbf{u}(\mathbf{s})\), it is denoted by \(\kappa_{n}(\mathbf{u})\).

Figure 8: A Monge patch, \(S=(s_{1},s_{2},Y(s_{1},s_{2}))^{\top}=(\mathbf{s}^{\top},Y(\mathbf{s}))^{\top}\), showing a point \(\mathbf{s}\), a curve \(C\) passing through \(\mathbf{s}\), the normal to the surface, \(\mathbf{N}(\mathbf{s})\), the normal to the curve, \(\mathbf{n}(\mathbf{s})\), the angle \(\theta\) between them, and the local tangent plane to the surface, \(T_{S}(\mathbf{s})\). The thin perpendicular pink arrows are tangent vectors, \(\mathbf{E}_{1}(\mathbf{s})\) and \(\mathbf{E}_{2}(\mathbf{s})\). The thin outward pointing black arrows around \(\mathbf{N}(\mathbf{s})\) demonstrate change in \(\mathbf{N}\) as we move along the direction (dotted line) on the surface.
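
To make the second fundamental form concrete (an illustrative aside, not part of the original text), the sketch below evaluates \(\Pi(\mathbf{s})=\nabla^{2}Y(\mathbf{s})\) for the same synthetic Monge patch used earlier and contracts it with a unit direction, anticipating the normal curvature formula \(\kappa_{n}(\mathbf{u})=\mathbf{u}^{\top}\Pi\mathbf{u}\) stated next; the eigenvalues of \(\Pi\) give the local classification via \(K=\det\Pi\) discussed below.

```python
import numpy as np

# Same illustrative Monge patch Y(s1, s2) = 10[sin(3*pi*s1) + cos(3*pi*s2)].
def hess_Y(s):
    s1, s2 = s
    return np.array([[-90.0 * np.pi**2 * np.sin(3 * np.pi * s1), 0.0],
                     [0.0, -90.0 * np.pi**2 * np.cos(3 * np.pi * s2)]])

s = np.array([0.21, 0.47])
Pi = hess_Y(s)                      # second fundamental form Pi(s) = Hessian of Y (Monge patch convention in the text)

n = np.array([0.6, 0.8])            # a unit direction (e.g. a curve normal n(s)); ||n|| = 1
kappa_n = n @ Pi @ n                # normal (directional) curvature kappa_n(n) = n^T Pi n

# Eigenvalues of Pi classify the local geometry (cf. K = det Pi = kappa1 * kappa2).
kappa1, kappa2 = np.linalg.eigvalsh(Pi)
print("kappa_n(n) =", kappa_n)
print("principal curvatures:", kappa1, kappa2, " Gaussian curvature K =", kappa1 * kappa2)
```
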
The normal curvature, which is an intrinsic property of the surface independent of \(C\), is of primary interest to us: \(\kappa_{n}(\mathbf{u})=\mathbf{u}^{\top}\Pi(\mathbf{s})\mathbf{u}/\mathbf{u}^{\top}\mathbf{u}=\mathbf{u}^{\top}\Pi\mathbf{u}\), under \(\mathbf{u}^{\top}\mathbf{u}=1\). \(\kappa_{n}(\mathbf{u})\) is also the _directional curvature_ of \(Y\) along \(\mathbf{u}\). For our purposes, \(\mathbf{u}=\mathbf{n}\), the normal direction to \(C\). The sign of \(\kappa_{n}(\mathbf{u})\), or equivalently the eigenvalues of \(\Pi\), inform us about the nature of curvature at \(\mathbf{s}\). For example, if \(\kappa_{1}\) and \(\kappa_{2}\) denote the eigenvalues of \(\Pi\), with \(K=\det\Pi=\kappa_{1}\kappa_{2}\), then \(K>0\) implies that the surface bends away from \(T_{S}(\mathbf{s})\); depending on whether \(\kappa_{1},\kappa_{2}<0\) (or \(>0\)), \(\mathbf{s}\) can be locally classified as a concave (convex) ellipsoid (for more details see Stevens 1981). For a purely differential geometric treatment of this discussion see Gauss (1902), Spivak (1999), Do Carmo (2016), and Kreyszig (2019). Figure 8 illustrates this discussion.

## 10 Examples for selected Covariance Functions

The detailed calculations for closed form expressions of selected covariance functions are presented. We start with the power exponential family of isotropic covariance functions, \(\widetilde{K}(||\Delta||)=\alpha\exp(-\phi||\Delta||^{\nu})\), \(0<\nu\leq 2\). It is clear that \(\nabla^{4}\widetilde{K}(||\Delta||)\) exists only for \(\nu=2\); in that case we have \(\widetilde{K}(||\Delta||)=\sigma^{2}\exp(-\phi||\Delta||^{2})\). For this choice we have,

\[\left(\nabla\widetilde{K}(\Delta)\right)_{i}=-2\sigma^{2}\phi\exp(-\phi||\Delta||^{2})\delta_{i},\]
\[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ii}=-2\sigma^{2}\phi\exp(-\phi||\Delta||^{2})(1-2\phi\delta_{i}^{2}),\]
\[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ij}=4\sigma^{2}\phi^{2}\exp(-\phi||\Delta||^{2})\delta_{i}\delta_{j},\]
\[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{iii}=4\sigma^{2}\phi^{2}\exp(-\phi||\Delta||^{2})(3-2\phi\delta_{i}^{2})\delta_{i},\]
\[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{iij}=4\sigma^{2}\phi^{2}\exp(-\phi||\Delta||^{2})(1-2\phi\delta_{i}^{2})\delta_{j},\]
\[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{ijk}=-8\sigma^{2}\phi^{3}\exp(-\phi||\Delta||^{2})\delta_{i}\delta_{j}\delta_{k},\]
\[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iiii}=4\sigma^{2}\phi^{2}\exp(-\phi||\Delta||^{2})(3-12\phi\delta_{i}^{2}+4\phi^{2}\delta_{i}^{4}),\]
\[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iiij}=-8\sigma^{2}\phi^{3}\exp(-\phi||\Delta||^{2})(3-2\phi\delta_{i}^{2})\delta_{i}\delta_{j},\]
\[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iijj}=4\sigma^{2}\phi^{2}\exp(-\phi||\Delta||^{2})(1-2\phi\delta_{i}^{2})(1-2\phi\delta_{j}^{2}),\]
\[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{ijkl}=16\sigma^{2}\phi^{4}\exp(-\phi||\Delta||^{2})\delta_{i}\delta_{j}\delta_{k}\delta_{l},\]

where \(i,j,k,l=1,2,\ldots,d\). The squared exponential, or Gaussian, covariance kernel is the only member of its class that admits such derivatives, although it has been critiqued for producing realizations that are too smooth to be of practical use in modeling (see Stein (1999)).
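
These closed forms lend themselves to a quick numerical spot check. The sketch below (illustrative only; \(\sigma^{2}\), \(\phi\) and \(\Delta\) are arbitrary choices, not values used in the paper) compares the stated gradient and Hessian of the squared exponential kernel against central finite differences.

```python
import numpy as np

sigma2, phi = 1.3, 0.7                       # illustrative values
K = lambda d: sigma2 * np.exp(-phi * np.dot(d, d))   # K(Delta) = sigma^2 exp(-phi ||Delta||^2)

def grad_K(d):                                # (grad K)_i = -2 sigma^2 phi exp(-phi||d||^2) d_i
    return -2.0 * sigma2 * phi * np.exp(-phi * np.dot(d, d)) * d

def hess_K(d):                                # assembled from the (ii) and (ij) expressions above
    e = np.exp(-phi * np.dot(d, d))
    return 4.0 * sigma2 * phi**2 * e * np.outer(d, d) - 2.0 * sigma2 * phi * e * np.eye(len(d))

d = np.array([0.3, -0.5])
h = 1e-5
num_grad = np.array([(K(d + h * e) - K(d - h * e)) / (2 * h) for e in np.eye(2)])
num_hess = np.array([[(K(d + h*ei + h*ej) - K(d + h*ei - h*ej)
                       - K(d - h*ei + h*ej) + K(d - h*ei - h*ej)) / (4 * h * h)
                      for ej in np.eye(2)] for ei in np.eye(2)])
print(np.max(np.abs(num_grad - grad_K(d))), np.max(np.abs(num_hess - hess_K(d))))   # both ~ 0
```
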
Turning to the Matern class we see that with, \(\widetilde{K}||\Delta||=\alpha(\phi||\Delta||)^{\nu}K_{\nu}(\phi||\Delta||)\), where \(\nu\) is a parameter controlling the smoothness of realizations, that is mean square differentiability and \(K_{\nu}\) is the modified Bessel function of order \(\nu\). At \(\nu=3/2\) and \(\nu=5/2\), \(\widetilde{K}(||\Delta||)\) takes the forms, \[\widetilde{K}(||\Delta||)=\begin{cases}\sigma^{2}(1+\sqrt{3}\phi||\Delta||)e^{- \sqrt{3}\phi||\Delta||},&\nu=3/2\\ \sigma^{2}\left(1+\sqrt{5}\phi||\Delta||+\frac{5}{3}\phi^{2}||\Delta||^{2} \right)e^{-\sqrt{5}\phi||\Delta||},&\nu=5/2\end{cases},\] where \(\sigma^{2}\) is the overall process variance. Matern with \(\nu=3/2\) is once mean square differentiable, where as Matern with \(\nu=5/2\) is twice mean square differentiable at \(0\). As \(\nu\rightarrow\infty\), Matern covariances tend to the Gaussian covariance. Unlike the Gaussian covariance, they do not yield overly smoothed process realizations. For \(\nu=3/2\) we have, \[\left(\nabla\widetilde{K}(\Delta)\right)_{i}=-3\sigma^{2}\phi^{2} e^{-\sqrt{3}\phi||\Delta||}\delta_{i},\] \[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ii}=-3\sigma^{2} \phi^{2}e^{-\sqrt{3}\phi||\Delta||}\left(1-\sqrt{3}\phi\frac{\delta_{i}^{2}} {||\Delta||}\right),\] \[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ij}=3\sqrt{3} \sigma^{2}\phi^{3}e^{-\sqrt{3}\phi||\Delta||}\frac{\delta_{i}\delta_{j}}{|| \Delta||}.\] where \(i,j=1,2,\ldots,d\). Since the process is just once mean square differentiable higher order derivatives do not exist. However, for \(\nu=5/2\) we have, \[\left(\nabla\widetilde{K}(\Delta)\right)_{i}=-\frac{5}{3}\sigma^{ 2}\phi^{2}e^{-\sqrt{5}\phi||\Delta||}\left(1+\sqrt{5}\phi||\Delta||\right) \delta_{i},\] \[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ii}=-\frac{5}{3} \sigma^{2}\phi^{2}e^{-\sqrt{5}\phi||\Delta||}\left(1+\sqrt{5}\phi||\Delta||-5 \phi^{2}\delta_{i}^{2}\right),\] \[\left(\nabla^{2}\widetilde{K}(\Delta)\right)_{ij}=\frac{25}{3} \sigma^{2}\phi^{4}e^{-\sqrt{5}\phi||\Delta||}\delta_{i}\delta_{j},\] \[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{iii}=\frac{25}{3} \sigma^{2}\phi^{4}e^{-\sqrt{5}\phi||\Delta||}\left(3-\sqrt{5}\phi\frac{\delta _{i}^{2}}{||\Delta||}\right)\delta_{i},\] \[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{iij}=\frac{25}{3} \sigma^{2}\phi^{4}e^{-\sqrt{5}\phi||\Delta||}\left(1-\sqrt{5}\phi\frac{\delta _{i}^{2}}{||\Delta||}\right)\delta_{j},\] \[\left(\nabla^{3}\widetilde{K}(\Delta)\right)_{ijk}=-\frac{25\sqrt {5}}{3}\sigma^{2}\phi^{5}e^{-\sqrt{5}\phi||\Delta||}\frac{\delta_{i}\delta_{j }\delta_{k}}{||\Delta||},\] \[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iiii}=\frac{25}{3} \sigma^{2}\phi^{4}e^{-\sqrt{5}\phi||\Delta||}\left[3-6\sqrt{5}\phi\frac{ \delta_{i}^{2}}{||\Delta||}+\sqrt{5}\phi\left(\sqrt{5}\phi+\frac{1}{||\Delta ||}\right)\frac{\delta_{i}^{4}}{||\Delta||^{2}}\right],\] \[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iijj} =\frac{25\sqrt{5}}{3}\sigma^{2}\phi^{5}e^{-\sqrt{5}\phi||\Delta||} \left[\frac{\delta_{i}}{||\Delta||}-\frac{1}{||\Delta||}\left(3-\sqrt{5}\phi \frac{\delta_{i}^{2}}{||\Delta||}\right)\right]\delta_{i}^{2}\delta_{j},\] \[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{iijj} =\frac{25}{3}\sigma^{2}\phi^{4}e^{-\sqrt{5}\phi||\Delta||}\left[ \left(1-\sqrt{5}\phi\frac{\delta_{i}^{2}}{||\Delta||}\right)\left(1-\sqrt{5} \phi\frac{\delta_{j}^{2}}{||\Delta||}\right)+\sqrt{5}\phi\frac{\delta_{i}^{2} \delta_{j}^{2}}{||\Delta||^{3}}\right],\] \[\left(\nabla^{4}\widetilde{K}(\Delta)\right)_{ijkl} 
=-\frac{25\sqrt{5}}{3}\sigma^{2}\phi^{5}e^{-\sqrt{5}\phi||\Delta ||}\left(\sqrt{5}\phi+\frac{1}{||\Delta||}\right)\frac{\delta_{i}\delta_{j} \delta_{k}\delta_{l}}{||\Delta||^{2}},\] where \(i,j,k,l=1,2,\ldots,d\). The above expressions for entries of the cross-covariance matrices correspond to the joint process, \(\mathcal{L}Y(\mathbf{s})=(Y(\mathbf{s}),\nabla Y(\mathbf{s})^{\top},vech( \nabla^{2}Y(\mathbf{s}))^{\top})^{\top}\) with respect to our kernel choices. The following cross-covariance matrices are evaluated at \(||\Delta||\to 0\). 1. _Squared Exponential:_ In \(\mathbb{R}^{d}\), we have, \[V_{\mathcal{L}Y}(\mathbf{0})=\sigma^{2}\begin{pmatrix}1&\mathbf{0}^{\top}&-2 \phi vech(I_{d})^{\top}\\ \mathbf{0}&2\phi I_{d}&\mathbf{O}\\ -2\phi vech(I_{d})&\mathbf{O}&4\phi^{2}\text{diag}\{3,1\ldots,1,3,1,\ldots,1, \ldots,3\}\end{pmatrix},\] 2. _Matern_ (\(\nu=3/2\)): For this kernel the existence of only the gradient process is guaranteed, therefore the covariance is for the process, \(\mathcal{L}Y(\mathbf{s})=(Y(\mathbf{s}),\nabla Y(\mathbf{s})^{\top})^{\top}\). In \(\mathbb{R}^{d}\) we have, \(V_{\mathcal{L}Y}(\mathbf{0})=\sigma^{2}\begin{pmatrix}1&\mathbf{0}^{\top}\\ \mathbf{0}&3\phi^{2}I_{d}\end{pmatrix}\). 3. _Matern_ (\(\nu=5/2\)): In \(\mathbb{R}^{d}\) we have, \[V_{\mathcal{L}Y}(\mathbf{0})=\sigma^{2}\begin{pmatrix}1&\mathbf{0}^{\top}&- \frac{5\phi^{2}}{3}vech(I_{d})^{\top}\\ \mathbf{0}&\frac{5\phi^{2}}{3}I_{d}&\mathbf{O}\\ -\frac{5\phi^{2}}{3}vech(I_{d})&\mathbf{O}&\frac{25\phi^{4}}{3}\text{diag}\{3, 1\ldots,1,3,1,\ldots,1,\ldots,3\}\end{pmatrix}.\] Algorithms In what follows, we provide the required algorithms for sampling gradients and wombling measures. Although listed separately to highlight the requirement of only posterior samples, the required steps could be included within the MCMC subroutine devised for spatial learning (or fitting the model) of \(Y(\mathbf{s})\). Sampling Gradients and Curvature:The choice for \(K\) varies between Gaussian, Matern with \(\nu=3/2\) and \(\nu=5/2\). There is scope for parallel computation across grid locations. Additionally if the inverse of estimated covariance matrices are stored for the MCMC runs from the model fit, sufficient gains in compilation can be achieved while sampling gradients. If (\(\nu=3/2\)) is chosen the \(\nabla^{2}\) terms are not computed. ``` Input:\(\mathcal{S}\), A Grid \(\mathcal{G}\) spanning \(\mathcal{S}\), posterior MCMC samples \(\boldsymbol{\beta}\), \(\boldsymbol{\theta}_{K}=\{\sigma^{2},\phi\}\), \(\mathbf{Z}\) Result: Posterior samples for gradients \(\nabla Y(\mathbf{s}_{g})\) and curvature \(\nabla^{2}Y(\mathbf{s}_{g})\) for \(\mathbf{s}_{g}\in\mathcal{G}\) for\(i=1,2,\ldots,L\)do for\(j=1,2,\ldots,n_{G}\)do \(\Delta[i,j]=\mathbf{s}_{g}[j]-\mathbf{s}[i]\)\(\triangleright\) Compute distances of grid locations to observed process for\(i=1,2,\ldots,n_{\text{MCMC}}\)do \(K[i]=K(\cdot;\boldsymbol{\theta}_{K}[i])\) \(K.inv[i]=(K(\cdot;\boldsymbol{\theta}_{K}[i]))^{-1}\) for\(j=1,2,\ldots,n_{G}\)do \(\nabla K[i,j]=(\nabla K(\Delta[,j];\boldsymbol{\theta}_{K}[i])^{\top},vech( \nabla^{2}K(\Delta[,j];\boldsymbol{\theta}_{K}[i]))^{\top})^{\top}\) \(V[i,j]=V_{\mathcal{L}Y}(\mathbf{0})\) \(\mu[i,j]=\nabla\mu(\mathbf{s};\boldsymbol{\beta}[i])-\nabla K[i,j]^{\top}K. 
inv[i]\mathbf{Z}[i]\)\(\triangleright\) *[r]\(\mu(\mathbf{s};\boldsymbol{\beta}[i])=\mathbf{X}\boldsymbol{\beta}[i]\) \(\Sigma[i,j]=V[i,j]-\nabla K[i,j]^{\top}K.inv[i]\nabla K[i,j]\) \(\mathcal{L}Y[i,j]=\mathcal{N}(\mu[i,j],\Sigma[i,j])\)\(\triangleright\) Posterior sample of Gradients and Curvature return\(\mathcal{L}Y\) ``` **Algorithm 1**Algorithm for Sampling Gradients and Curvature Sampling Wombling MeasuresThe choices for \(K\) are again between Gaussian, Matern with \(\nu=3/2\) and \(\nu=5/2\). Choices for curves to be evaluated for wombling boundaries range from those outlined in Section 5.2. In case \(\nu=3/2\) wombling measures for curvature are not computed. Choices for approximations include computing Riemann sums replacing quadrature for line integrals. There is scope for parallel computation with the curve being broken into segments evaluated in parallel for wombling boundaries. The functions \(\mathsf{q}_{1}\) and \(\mathsf{q}_{2}\) denote one and two-dimensional quadrature respectively. In case a Riemann sum (see (15), Section 12) approximation is chosen, the points partitioning \(C\) are treated as grid points and the algorithm for sampling gradients and curvature is used for predictive inference on the differential process. The Riemann sums are computed using \(\mathbf{t}\) and \(\mathbf{U}\) and returned. ## 12 Proofs and Discussion For the curvature process formulated in Section 2, we aim to show that the covariance matrix associated with the process \(D^{(2)}_{\mathbf{U},\mathbf{V}}Y(\mathbf{s})=\mathbf{c}^{\top}_{\mathbf{U}, \mathbf{V}}vech\nabla^{2}Y(\mathbf{s})\) is valid (pg. 5 last paragraph). We obtain the expression for the covariance matrix by leveraging the directional finite difference process \(Y^{(2)}_{\mathbf{u},\mathbf{V},h}(\mathbf{s})\). 
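
Before carrying out the computation, a small numerical aside (not part of the proof) may help fix ideas about this second-order finite difference process. The explicit form used below, \(Y^{(2)}_{\mathbf{u},\mathbf{v},h}(\mathbf{s})=h^{-2}\left[Y(\mathbf{s}+h(\mathbf{u}+\mathbf{v}))-Y(\mathbf{s}+h\mathbf{u})-Y(\mathbf{s}+h\mathbf{v})+Y(\mathbf{s})\right]\), matches the \(\mathbf{e}_{1},\mathbf{e}_{2}\) construction displayed later in this section; the deterministic test surface is the synthetic mean from the simulations, and all names are illustrative.

```python
import numpy as np

def Y(s):   # illustrative smooth surface (the synthetic mean used in the simulations)
    return 10.0 * (np.sin(3 * np.pi * s[0]) + np.cos(3 * np.pi * s[1]))

def hess_Y(s):
    return np.diag([-90.0 * np.pi**2 * np.sin(3 * np.pi * s[0]),
                    -90.0 * np.pi**2 * np.cos(3 * np.pi * s[1])])

s = np.array([0.21, 0.47])
u = np.array([1.0, 0.0])
v = np.array([0.6, 0.8])

for h in [1e-1, 1e-2, 1e-3]:
    # second-order directional finite difference Y^(2)_{u,v,h}(s)
    fd2 = (Y(s + h * (u + v)) - Y(s + h * u) - Y(s + h * v) + Y(s)) / h**2
    print(f"h={h:.0e}  Y^(2)_{{u,v,h}}={fd2: .4f}  u^T Hess Y v={u @ hess_Y(s) @ v: .4f}")
```

As \(h\to 0\) the finite difference converges to the mixed directional curvature \(\mathbf{u}^{\top}\nabla^{2}Y(\mathbf{s})\mathbf{v}\), which is the limit whose covariance structure is derived next.
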
For points \(\mathbf{s},\mathbf{s}^{\prime}\), we denote \(\Delta=\mathbf{s}-\mathbf{s}^{\prime}\) and \(\mathbf{u}\), \(\mathbf{v}\) are unit vectors specifying direction and \(\boldsymbol{\delta}(x,y)=\Delta+x\mathbf{u}+y\mathbf{v}\) as a map from \(\mathbb{R}^{2}\rightarrow\mathbb{R}^{d}\), after suppressing dependence on \(\Delta\) and \(\mathbf{u}\), \(\mathbf{v}\), let \(g(x,y)=K(\Delta(x,y))=K(\Delta+x\mathbf{u}+y\mathbf{v})\) denote a map from \(\mathbb{R}^{2}\rightarrow\mathbb{R}\) we compute the covariance, \[C^{(2)}_{\mathbf{u},\mathbf{v}}(\mathbf{s},\mathbf{s}^{\prime}) =\lim_{h\to 0}\lim_{k\to 0}E\left[Y^{(2)}_{ \mathbf{u},\mathbf{v},h}(\mathbf{s})Y^{(2)}_{\mathbf{u},\mathbf{v},k}( \mathbf{s}^{\prime})\right],\] \[=\lim_{h\to 0}\lim_{k\to 0}\frac{1}{h^{2}k^{2}}[g(h-k,h-k)-g(h -k,h)-g(h,h-k)+g(h,h)\] \[\qquad\qquad\qquad\qquad-g(h-k,-k)+g(h-k,0)+g(h,-k)-g(h,0)\] \[\qquad\qquad\qquad\qquad-g(-k,h-k)+g(-k,h)+g(0,h-k)-g(0,h)\] \[\qquad\qquad\qquad\qquad+g(-k,-k)-g(-k,0)-g(0,-k)+g(0,0)],\] \[=\lim_{h\to 0}\frac{g^{\prime\prime}(h,h)-g^{\prime\prime}(h, 0)-g^{\prime\prime}(0,h)+g^{\prime\prime}(0,0)}{h^{2}}=g^{(iv)}(0,0).\] On repeated application of the chain rule and noting that \(\boldsymbol{\delta}_{x}(x,y)=\mathbf{u}^{\top}\), \(\boldsymbol{\delta}_{y}(x,y)=\mathbf{v}^{\top}\) with all other higher order derivatives being \(0\) we have, \[g_{x}(x,y) =\boldsymbol{\delta}_{x}(x,y)\nabla K(\Delta(x,y))=\mathbf{u}^{ \top}\nabla K(\Delta(x,y)),\] \[g_{y}(x,y) =\boldsymbol{\delta}_{y}(x,y)\nabla K(\Delta(x,y))=\mathbf{v}^{ \top}\nabla K(\Delta(x,y)),\] \[g_{xx}(x,y) =\boldsymbol{\delta}_{x}(x,y)\nabla^{2}K(\Delta(x,y))\boldsymbol {\delta}_{x}(x,y)^{\top}=\mathbf{u}^{\top}\nabla^{2}K(\Delta(x,y))\mathbf{u},\] \[g_{xy}(x,y) =\boldsymbol{\delta}_{x}(x,y)\nabla^{2}K(\Delta(x,y))\boldsymbol {\delta}_{y}(x,y)^{\top}=\mathbf{u}^{\top}\nabla^{2}K(\Delta(x,y))\mathbf{v},\] \[g_{yy}(x,y) =\boldsymbol{\delta}_{y}(x,y)\nabla^{2}K(\Delta(x,y))\boldsymbol {\delta}_{y}(x,y)^{\top}=\mathbf{v}^{\top}\nabla^{2}K(\Delta(x,y))\mathbf{v},\] \[g_{xxx}(x) =\sum_{i=1}^{d}\delta_{i,xxx}(x,y)\left(\frac{\partial K}{ \partial\delta_{i}}\right)+3\sum_{i,j=1}^{d}\delta_{i,xx}(x,y)\frac{\partial^ {2}K}{\partial\delta_{i}\partial\delta_{j}}\delta_{j,x}(x,y)\] \[\qquad\qquad\qquad+\sum_{i,j,k=1}^{d}\delta_{i,x}(x,y)\delta_{j, x}(x,y)\delta_{k,x}(x,y)\frac{\partial^{3}K}{\partial\delta_{i}\partial \delta_{j}\partial\delta_{k}},\] \[=\sum_{i,j,k=1}^{d}\delta_{i,x}(x,y)\delta_{j,x}(x,y)\delta_{k,x} (x,y)\frac{\partial^{3}K}{\partial\delta_{i}\partial\delta_{j}\partial\delta_{k }}=\mathbf{c}_{\mathbf{u},\mathbf{u}}^{\top}\nabla^{3}K(\Delta(x,y))\mathbf{u}.\] Similarly \(g_{xxy}(x)=\mathbf{c}_{\mathbf{u},\mathbf{u}}^{\top}\nabla^{3}K(\Delta(x,y)) \mathbf{v}\), \(g_{yyx}(x)=\mathbf{c}_{\mathbf{V},\mathbf{V}}^{\top}\nabla^{3}K(\Delta(x,y)) \mathbf{u}\) and \(g_{yyy}(x)=\mathbf{c}_{\mathbf{V},\mathbf{V}}^{\top}\nabla^{3}K(\Delta(x,y)) \mathbf{v}\). 
Next, \[g_{xxxx}(x,y) =\sum_{i=1}^{d}\delta_{i,xxxx}(x,y)\left(\frac{\partial K}{ \partial\delta_{i}}\right)+\sum_{i,j=1}^{d}\delta_{i,xxx}(x,y)\frac{\partial^{ 2}K}{\partial\delta_{i}\partial\delta_{j}}\delta_{j,x}(x,y)\] \[+3\bigg{[}\sum_{i,j=1}^{d}\delta_{i,xxx}(x,y)\frac{\partial^{2}K }{\partial\delta_{i}\partial\delta_{j}}\delta_{j,x}(x,y)+\sum_{i,j=1}^{d} \delta_{i,xx}(x,y)\frac{\partial^{2}K}{\partial\delta_{i}\partial\delta_{j}} \delta_{j,xx}(x,y)\] \[\qquad\quad+\sum_{i,j,k=1}^{d}\delta_{i,xx}(x,y)\delta_{j,x}(x,y) \delta_{k,x}(x,y)\frac{\partial^{3}K}{\partial\delta_{i}\partial\delta_{j} \partial\delta_{k}}\bigg{]}\] \[\quad+3\sum_{i,j,k=1}^{d}\delta_{i,xx}(x,y)\delta_{j,x}(x,y) \delta_{k,x}(x,y)\frac{\partial^{3}K}{\partial\delta_{i}\partial\delta_{j} \partial\delta_{k}}\] \[\quad+\sum_{i,j,k,l=1}^{d}\delta_{i,x}(x,y)\delta_{j,x}(x,y) \delta_{k,x}(x,y)\delta_{l}^{\prime}(x)\frac{\partial^{4}K}{\partial\delta_{i} \partial\delta_{j}\partial\delta_{k}\partial\delta_{l}},\] \[=\sum_{i,j,k,l=1}^{d}\delta_{i,x}(x,y)\delta_{j,x}(x,y)\delta_{k, x}(x,y)\delta_{l}^{\prime}(x)\frac{\partial^{4}K}{\partial\delta_{i} \partial\delta_{j}\partial\delta_{k}\partial\delta_{l}}=\mathbf{c}_{\mathbf{ u},\mathbf{u}}^{\top}\nabla^{4}K(\mathbf{\delta}(x))\mathbf{c}_{\mathbf{u},\mathbf{u}}.\] Similarly, \(g_{xxxy}(x,y)=\mathbf{c}_{\mathbf{u},\mathbf{u}}^{\top}\nabla^{4}K(\mathbf{\delta }(x))\mathbf{c}_{\mathbf{u},\mathbf{V}}\), \(g_{yyyx}(x,y)=\mathbf{c}_{\mathbf{V},\mathbf{V}}^{\top}\nabla^{4}K(\mathbf{\delta }(x))\mathbf{c}_{\mathbf{V},\mathbf{u}}\), \(g_{xxyy}(x,y)=\mathbf{c}_{\mathbf{u},\mathbf{V}}^{\top}\nabla^{4}K(\mathbf{\delta }(x))\mathbf{c}_{\mathbf{u},\mathbf{V}}\) and \(g_{yyyy}(x,y)=\mathbf{c}_{\mathbf{V},\mathbf{V}}^{\top}\nabla^{4}K(\mathbf{\delta }(x))\mathbf{c}_{\mathbf{V},\mathbf{V}}\). Evaluated at \(x,y=0\), i.e. \(\mathbf{\delta}(0,0)=\Delta\), \(g_{xxyy}\) provides us with the required expression, \(C_{\mathbf{u},\mathbf{V}}^{(2)}(\mathbf{s},\mathbf{s}^{\prime})=\mathbf{c}_{ \mathbf{u},\mathbf{V}}^{\top}\nabla^{4}K(\Delta)\mathbf{c}_{\mathbf{u},\mathbf{V}}\) and \(var(D_{\mathbf{u},\mathbf{V}}^{(2)}Y(\mathbf{s}))=\lim_{h\to 0}E(Y_{\mathbf{u}, \mathbf{V},h}^{(2)}(\mathbf{s}),Y_{\mathbf{u},\mathbf{V},k}^{(2)}(\mathbf{s} ))=\mathbf{c}_{\mathbf{u},\mathbf{V}}^{\top}\nabla^{4}K(\mathbf{0})\mathbf{c} _{\mathbf{u},\mathbf{V}}\) which exists if \(K^{(iv)}(\Delta)\) exists for all \(\Delta\), including \(\Delta=\mathbf{0}\). Note:_In the above proof we make some abuse of notation for brevity of mathematical expressions involved. To clarify, \(\delta_{x}=\frac{\partial}{\partial x}\delta(x,y)\), \(\delta_{xx}=\frac{\partial^{2}}{\partial x^{2}}\delta(x,y)\) and so on, \(g_{x}=\frac{\partial}{\partial x}g(x,y)\), \(g_{xx}=\frac{\partial^{2}}{\partial x^{2}}g(x,y)\) etc., \(\sum_{i,j=1}^{d}=\sum_{i=1}^{d}\sum_{j=1}^{d}\) etc._ To derive (2) and the covariance for the directional curvature process, we assume that \(Y(\mathbf{s})\) is isotropic, i.e. 
\(K(\Delta)=\widetilde{K}(||\Delta||)\) therefore, \[C^{(2)}_{\mathbf{u},\mathbf{u}}(\mathbf{s},\mathbf{s}^{\prime}) =\lim_{h\to 0}\lim_{k\to 0}E\left[Y^{(2)}_{\mathbf{u},\mathbf{V},h}( \mathbf{s})Y^{(2)}_{\mathbf{u},\mathbf{V},k}(\mathbf{s}^{\prime})\right],\] \[=\lim_{h\to 0}\lim_{k\to 0}\frac{1}{h^{2}k^{2}}\bigg{[}E\left(Y( \mathbf{s}+h(\mathbf{u}+\mathbf{v}))Y^{(2)}_{\mathbf{u},\mathbf{V},k}( \mathbf{s}^{\prime})\right)-E(Y(\mathbf{s}+h\mathbf{u})Y^{(2)}_{\mathbf{u}, \mathbf{V},k}(\mathbf{s}^{\prime}))\] \[\qquad\qquad\qquad-E(Y(\mathbf{s}+k\mathbf{v})Y^{(2)}_{\mathbf{u },\mathbf{V},k}(\mathbf{s}^{\prime}))+E(Y(\mathbf{s})Y^{(2)}_{\mathbf{u}, \mathbf{V},k}(\mathbf{s}^{\prime}))\bigg{]}\] where, \[E\left(Y(\mathbf{s}+h(\mathbf{u}+\mathbf{v}))Y^{(2)}_{\mathbf{u },\mathbf{V},k}(\mathbf{s}^{\prime})\right) =\widetilde{K}(||\Delta+(h-k)(\mathbf{u}+\mathbf{v})||)- \widetilde{K}(||\Delta+(h-k)\mathbf{u}+h\mathbf{v}||)-\] \[\widetilde{K}(||\Delta+h\mathbf{u}+(h-k)\mathbf{v}||)+\widetilde {K}(||\Delta+h(\mathbf{u}+\mathbf{v})||)\] \[-\widetilde{K}(||\Delta+(h-k)\mathbf{u}-k\mathbf{v}||),\] \[E(Y(\mathbf{s}+h\mathbf{u})Y^{(2)}_{\mathbf{u},\mathbf{V},k}( \mathbf{s}^{\prime})) =\widetilde{K}(||\Delta+(h-k)\mathbf{u}-k\mathbf{v}||)- \widetilde{K}(||\Delta+(h-k)\mathbf{u}||)\] \[-\widetilde{K}(||\Delta+h\mathbf{u}-k\mathbf{v}||)+\widetilde{K }(||\Delta+h\mathbf{u}||)\] \[E(Y(\mathbf{s}+k\mathbf{v})Y^{(2)}_{\mathbf{u},\mathbf{V},k}( \mathbf{s}^{\prime})) =\widetilde{K}(||\Delta-k\mathbf{u}+(h-k)\mathbf{v}||)-\widetilde {K}(||\Delta-k\mathbf{u}+h\mathbf{v}||)\] \[-\widetilde{K}(||\Delta+(h-k)\mathbf{v}||)+\widetilde{K}(|| \Delta+h\mathbf{v}||)\] \[E(Y(\mathbf{s})Y^{(2)}_{\mathbf{u},\mathbf{V},k}(\mathbf{s}^{ \prime})) =\widetilde{K}(||\Delta-k(\mathbf{u}+\mathbf{v})||)-\widetilde{K}( ||\Delta-k\mathbf{u}||)-\widetilde{K}(||\Delta-k\mathbf{v}||)+\widetilde{K}( ||\Delta||)\] suppressing dependence on \(\Delta\), \(\mathbf{u}\) and \(\mathbf{v}\), we define \(\rho(h,k)=||\Delta(h,k)||=||\Delta+h\mathbf{u}+k\mathbf{v}||\) and let \(g(h,k)=K(\rho(h,k))\). Hence, \[C^{(2)}_{\mathbf{u},\mathbf{V}}(\mathbf{s},\mathbf{s}^{\prime}) =\lim_{h\to 0}\lim_{k\to 0}\frac{1}{h^{2}k^{2}}[ \rho(h-k,h-k)-\rho(h-k,h)-\rho(h,h-k)+\rho(h,h)\] \[-\rho(h-k,-k)+\rho(h-k,0)+\rho(h,-k)-\rho(h,0)\] \[-\rho(-k,h-k)+\rho(-k,h)+\rho(0,h-k)-\rho(0,h)\] \[+\rho(-k,-k)-\rho(-k,0)-\rho(0,-k)+\rho(0,0)],\] \[=\lim_{h\to 0}\frac{\rho^{\prime\prime}(h,h)-\rho^{\prime\prime}(h,0)- \rho^{\prime\prime}(0,h)+\rho^{\prime\prime}(0,0)}{h^{2}}=\rho^{(iv)}(0,0)\.\] Since, \(\rho(0,0)=||\Delta||\), from the previous proof we can see that, \(\rho^{(iv)}(0,0)=\nabla\widetilde{K}(||\Delta||)\rho^{(iv)}(0,0)+\nabla^{2} \widetilde{K}(||\Delta||)\left(4\rho^{\prime\prime\prime}(0,0)\rho^{\prime}(0,0)+ 3[\rho^{\prime\prime}(0,0)]^{2}\right)+6\nabla^{3}\widetilde{K}(||\Delta||) \rho^{\prime\prime}(0,0)[\rho^{\prime}(0,0)]^{2}+\sqrt{4}\widetilde{K}(|| \Delta||)[\rho^{\prime}(0,0)]^{4}\). After some algebra, \(\rho^{\prime}(0)=\frac{\mathbf{u}^{\top}\Delta}{||\Delta||}\), \(\rho^{\prime\prime}(0)=\frac{1}{||\Delta||}\left(1-\frac{(\mathbf{u}^{\top} \Delta)^{2}}{||\Delta||^{2}}\right)\), \(\rho^{\prime\prime\prime}(0)=-3\frac{\mathbf{u}^{\top}\Delta}{||\Delta||^{3}} \left(1-\frac{(\mathbf{u}^{\top}\Delta)^{2}}{||\Delta||^{2}}\right)\) and \(\rho^{(iv)}(0)=\frac{3}{||\Delta||^{3}}\left(5\frac{(\mathbf{u}^{\top}\Delta)^ {2}}{||\Delta||^{2}}-1\right)\left(1-\frac{(\mathbf{u}^{\top}\Delta)^{2}}{|| \Delta||^{2}}\right)\). 
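
These univariate derivatives can be verified symbolically. The sketch below (an illustrative check, not part of the proof) treats \(\rho\) as a function of a single perturbation \(h\) along \(\mathbf{u}\), i.e. \(\rho(h)=||\Delta+h\mathbf{u}||\), and confirms the four expressions with sympy for an arbitrary choice of \(\Delta\) and unit \(\mathbf{u}\).

```python
import sympy as sp

h = sp.symbols('h', real=True)
Delta = sp.Matrix([2, 1])                                 # illustrative nonzero Delta
u = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5)])     # illustrative unit direction, ||u|| = 1

rho = sp.sqrt((Delta + h * u).dot(Delta + h * u))         # rho(h) = ||Delta + h u||
r = sp.sqrt(Delta.dot(Delta))                             # ||Delta||
a = u.dot(Delta)                                          # u^T Delta

claimed = [a / r,                                         # rho'(0)
           (1 - a**2 / r**2) / r,                         # rho''(0)
           -3 * a / r**3 * (1 - a**2 / r**2),             # rho'''(0)
           3 / r**3 * (5 * a**2 / r**2 - 1) * (1 - a**2 / r**2)]   # rho^(iv)(0)

for n, c in enumerate(claimed, start=1):
    d = sp.diff(rho, h, n).subs(h, 0)
    print(n, sp.simplify(d - c) == 0)                     # expect True for each order
```
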
Substituting and grouping terms corresponding to \(\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta|| )}{||\Delta||}\right)\), \(\nabla^{3}\widetilde{K}(||\Delta||)\) and \(\nabla^{4}\widetilde{K}(||\Delta||)\) we get, \[g^{(iv)}(0) =\frac{3}{||\Delta||^{2}}\left\{1-5\frac{(\mathbf{u}^{\top}\Delta )^{2}}{||\Delta||^{2}}\right\}\left[1-\frac{(\mathbf{u}^{\top}\Delta)^{2}}{|| \Delta||^{2}}\right]\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla \widetilde{K}(||\Delta||)}{||\Delta||}\right)\] \[\quad+\frac{6}{||\Delta||}\frac{(\mathbf{u}^{\top}\Delta)^{2}}{|| \Delta||^{2}}\left[1-\frac{(\mathbf{u}^{\top}\Delta)^{2}}{||\Delta||^{2}} \right]\nabla^{3}\widetilde{K}(||\Delta||)+\left(\frac{(\mathbf{u}^{\top} \Delta)^{2}}{||\Delta||^{2}}\right)^{2}\nabla^{4}\widetilde{K}(||\Delta||),\] which is the required expression. We discuss validity of the curvature process for surfaces in \(\mathbb{R}^{3}\). It can be easily extended to surfaces in \(\mathbb{R}^{d}\). Define, \[\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y(\mathbf{s})=\begin{pmatrix}1&0& 0&0&0&0\\ -\frac{1}{h}&\frac{1}{h}&0&0&0&0\\ -\frac{1}{h}&0&\frac{1}{h}&0&0&0\\ \frac{1}{h^{2}}&-\frac{2}{h^{2}}&0&\frac{1}{h^{2}}&0&0\\ \frac{1}{h^{2}}&-\frac{1}{h^{2}}&-\frac{1}{h^{2}}&0&\frac{1}{h^{2}}&0\\ \frac{1}{h^{2}}&0&-\frac{2}{h^{2}}&0&0&\frac{1}{h^{2}}\end{pmatrix}\begin{pmatrix} Y(\mathbf{s})\\ Y(\mathbf{s}+h\mathbf{e}_{1})\\ Y(\mathbf{s}+h\mathbf{e}_{2})\\ Y(\mathbf{s}+2h\mathbf{e}_{1})\\ Y(\mathbf{s}+h(\mathbf{e}_{1}+\mathbf{e}_{2}))\\ Y(\mathbf{s}+2h\mathbf{e}_{2})\end{pmatrix}\\ =\mathbf{A}_{h}\mathbf{L}_{h}(\mathbf{s})=\left(Y(\mathbf{s}),Y^{(1)}_{ \mathbf{e}_{1},h}(\mathbf{s}),Y^{(1)}_{\mathbf{e}_{2},h}(\mathbf{s}),Y^{(2)}_{ \mathbf{e}_{1},\mathbf{e}_{1},h}(\mathbf{s}),Y^{(2)}_{\mathbf{e}_{1},\mathbf{e }_{2},h}(\mathbf{s}),Y^{(2)}_{\mathbf{e}_{2},\mathbf{e}_{2},h}(\mathbf{s}) \right)^{\top},\] as finite difference corresponding to the differential operator \(\mathcal{L}Y\) using the expressions of \(Y^{(1)}_{\mathbf{e}_{i},h}(\mathbf{s})\) and \(Y^{(2)}_{\mathbf{e}_{i},\mathbf{e}_{j},h}(\mathbf{s})\), \(i,j=1,2\). For every \(h>0\) this defines a linear transformation, since the determinant of \(\mathbf{A}_{h}\) is \(h^{-8}\). We denote the differenced differential process on the right of \(\mathbf{A}_{h}\), suppressing dependence on \(\mathbf{e}_{1},\mathbf{e}_{2}\) by \(\mathbf{L}_{h}(\mathbf{s})\). The associated covariance matrix is given by, \(Cov(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2}h}Y(\mathbf{s}),\mathcal{L}_{ \mathbf{e}_{1},\mathbf{e}_{2},k}Y(\mathbf{s}^{\prime}))=\mathbf{A}_{h}\mathcal{K} _{\mathbf{e}_{1},\mathbf{e}_{2},h,k}(\Delta)\mathbf{A}_{h}^{\top}\), where elements of \(\mathcal{K}_{\mathbf{e}_{1},\mathbf{e}_{2},h,k}(\Delta)\) are obtained from \(Cov(\mathbf{L}_{h}(\mathbf{s}),\mathbf{L}_{k}(\mathbf{s}^{\prime}))\). Hence, as \(h\downarrow 0\), \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y(\mathbf{s})\rightarrow\mathcal{L}Y (\mathbf{s})\) and \(\lim_{h\downarrow 0,k\downarrow 0}Cov(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2}h}Y( \mathbf{s}),\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},k}Y(\mathbf{s}^{\prime}) )=V_{\mathcal{L}Y}(\Delta)\), where the limits operate element wise on the matrix \(\mathbf{A}_{h}\mathcal{K}_{\mathbf{e}_{1},\mathbf{e}_{2},h,k}(\Delta)\mathbf{A}_{k}^{\top}\) and, the expression for each element is obtained from previous computations, by setting \(\mathbf{u}=\mathbf{e}_{1}\) and \(\mathbf{v}=\mathbf{e}_{2}\). 
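
A quick numerical check of this construction (illustrative, not part of the argument): the sketch below builds \(\mathbf{A}_{h}\) for \(d=2\), confirms that \(\det\mathbf{A}_{h}=h^{-8}\), and shows that \(\mathbf{A}_{h}\mathbf{L}_{h}(\mathbf{s})\) approximates \(\left(Y,\nabla Y^{\top},vech(\nabla^{2}Y)^{\top}\right)^{\top}\) for a smooth deterministic test function.

```python
import numpy as np

def A(h):
    return np.array([
        [1,        0,        0,       0,       0,       0      ],
        [-1/h,     1/h,      0,       0,       0,       0      ],
        [-1/h,     0,        1/h,     0,       0,       0      ],
        [1/h**2,  -2/h**2,   0,       1/h**2,  0,       0      ],
        [1/h**2,  -1/h**2,  -1/h**2,  0,       1/h**2,  0      ],
        [1/h**2,   0,       -2/h**2,  0,       0,       1/h**2 ],
    ])

def Y(s):   # illustrative smooth test function
    return 10.0 * (np.sin(3 * np.pi * s[0]) + np.cos(3 * np.pi * s[1]))

s, e1, e2 = np.array([0.21, 0.47]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
h = 1e-4

# det(A_h) = h^{-8}, so the map L_h -> A_h L_h is non-singular for every h > 0.
print(np.log(abs(np.linalg.det(A(h)))) / np.log(1 / h))   # ~ 8

L_h = np.array([Y(s), Y(s + h*e1), Y(s + h*e2), Y(s + 2*h*e1), Y(s + h*(e1 + e2)), Y(s + 2*h*e2)])
print(A(h) @ L_h)   # ~ (Y, dY/ds1, dY/ds2, d2Y/ds1^2, d2Y/ds1 ds2, d2Y/ds2^2) at s
```
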
For the directional operator, this can be established by observing that the directional differential operator is obtained as follows, \[\mathcal{L}_{\mathbf{u},h}Y(\mathbf{s})=\begin{pmatrix}1&0&0&0&0&0\\ 0&u_{1}&u_{2}&0&0&0\\ 0&0&0&u_{1}^{2}&2u_{1}u_{2}&u_{2}^{2}\end{pmatrix}\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y(\mathbf{s})=\left(1\oplus\mathbf{u}^{\top}\oplus\mathbf{c }_{\mathbf{u},\mathbf{u}}^{\top}\right)\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_ {2},h}Y(\mathbf{s}),\] where \(\oplus\) denotes the direct sum for matrices, then as \(h\to 0\), \(\mathcal{L}_{\mathbf{u},h}Y(\mathbf{s})\rightarrow\mathcal{L}_{\mathbf{u}}Y( \mathbf{s})\). The covariance matrix is obtained following similar arguments presented in the proof for the previous result. In case the covariance is isotropic we have \(K(\Delta)=\widetilde{K}(||\Delta||)\), on repeated differentiation and noting that \(\frac{\partial}{\partial\Delta}||\Delta||=\frac{\Delta}{||\Delta||}\) we have, \[\nabla K(\Delta) =\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\Delta,\] \[\nabla^{2}K(\Delta) =\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}I-\frac{\nabla \widetilde{K}(||\Delta||)}{||\Delta||^{3}}\Delta\Delta^{\top}+\frac{\nabla^{2} \widetilde{K}(||\Delta||)}{||\Delta||^{2}}\Delta\Delta^{\top}\] \[=\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}I+\left(\nabla ^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta ||}\right)\frac{\Delta\Delta^{\top}}{||\Delta||^{2}}.\] Differentiating \(\nabla^{2}K(\Delta)\) w.r.t. \(\Delta\) we obtain, \[\nabla^{3}K(\Delta) =\frac{\nabla^{2}\widetilde{K}(||\Delta||)}{||\Delta||^{2}}vech(I) ^{\top}\otimes\Delta-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||^{3}} vech(I)^{\top}\otimes\Delta\] \[\quad+\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla \widetilde{K}(||\Delta||)}{||\Delta||}\right)\frac{1}{||\Delta||^{2}}\frac{ \partial vech(\Delta\Delta^{\top})}{\partial\Delta}\] \[\quad-2\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla \widetilde{K}(||\Delta||)}{||\Delta||}\right)\frac{1}{||\Delta||^{4}}\frac{ \partial vech(\Delta\Delta^{\top})}{\partial\Delta}\] \[\quad-\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla \widetilde{K}(||\Delta||)}{||\Delta||}\right)\frac{1}{||\Delta||^{4}}\frac{ \partial vech(\Delta\Delta^{\top})}{\partial\Delta}\] \[\quad+\nabla^{3}\widetilde{K}(||\Delta||)\cdot\frac{vech(\Delta \Delta^{\top})^{\top}\otimes\Delta}{||\Delta||^{3}}.\] On grouping terms for \(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{|| \Delta||}\) and \(\nabla^{3}\widetilde{K}(||\Delta||)\) we obtain the required expression, \[\nabla^{3}K(\Delta)=\left(\nabla^{2}\widetilde{K}(||\Delta||)- \frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\right)\left\{\frac{vech(I)^{ \top}\otimes\Delta}{||\Delta||^{2}}-3\frac{vech(\Delta\Delta^{\top})^{\top} \otimes\Delta}{||\Delta||^{4}}\right.\] \[\left.\hskip 14.226378pt+\frac{1}{||\Delta||^{2}}\frac{\partial vech (\Delta\Delta^{\top})}{\partial\Delta}\right\}\] \[+\nabla^{3}\widetilde{K}(||\Delta||)\cdot\frac{vech(\Delta\Delta^{ \top})^{\top}\otimes\Delta}{||\Delta||^{3}}\] To obtain \(\nabla^{4}K(\Delta)\) we differentiate \(\nabla^{3}K(\Delta)\) w.r.t. 
\(\Delta\), we use the notations \(A_{1}=\frac{\partial\Delta\otimes vech(I)^{\top}}{\partial\Delta}\), \(A_{2}=\frac{\partial\Delta\otimes vech(\Delta\Delta^{\top})^{\top}}{\partial\Delta}\), \(A_{3}=\frac{\partial}{\partial\Delta}\left(\frac{\partial vech(\Delta\Delta^{\top})}{\partial\Delta}\right)\) for matricized tensors of order \(d(d+1)/2\times d(d+1)/2\), where the order of matricization conforms to the listing order of the half-vectorization operator \(vech\), and \(A_{4}\) denotes the element-wise product of \(\Delta\) with \(\left(\frac{\partial vech(\Delta\Delta^{\top})}{\partial\Delta}\right)\) in the same order as the matricized tensor. On differentiating the factor corresponding to the coefficient \(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\) we obtain

\[\frac{1}{||\Delta||^{2}}A_{1}-3\frac{1}{||\Delta||^{4}}A_{2}+\frac{1}{||\Delta||^{2}}A_{3}-\frac{2}{||\Delta||^{4}}vech(\Delta\Delta^{\top})vech(I)^{\top}+\frac{12}{||\Delta||^{6}}vech(\Delta\Delta^{\top})vech(\Delta\Delta^{\top})^{\top}-\frac{2}{||\Delta||^{4}}A_{4}.\]

Differentiating \(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\) gives

\[\frac{\nabla^{3}\widetilde{K}(||\Delta||)}{||\Delta||}\Delta-\left(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\right)\frac{\Delta\Delta^{\top}}{||\Delta||^{2}}.\]

Differentiating the factor corresponding to the coefficient \(\nabla^{3}\widetilde{K}(||\Delta||)\) we obtain

\[-3\frac{vech(\Delta\Delta^{\top})vech(\Delta\Delta^{\top})^{\top}}{||\Delta||^{5}}+\frac{1}{||\Delta||^{3}}A_{2},\]

and finally, differentiating \(\nabla^{3}\widetilde{K}(||\Delta||)\) gives \(\nabla^{4}\widetilde{K}(||\Delta||)\frac{\Delta}{||\Delta||}\). Grouping the coefficients of \(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{||\Delta||}\), \(\nabla^{3}\widetilde{K}(||\Delta||)\) and \(\nabla^{4}\widetilde{K}(||\Delta||)\), we obtain the required expression, thereby completing the proof.

For proving the result focusing on spectral theory, note that since \(f\) is symmetric about \(0\) by hypothesis,

\[K(t)=\int_{\mathbbm{R}}e^{i\lambda t}f(\lambda)d\lambda=\int_{\mathbbm{R}}\cos(\lambda t)f(\lambda)d\lambda+i\int_{\mathbbm{R}}\sin(\lambda t)f(\lambda)d\lambda=\int\cos(\lambda t)f(\lambda)d\lambda.\]

Differentiating w.r.t. \(t\) on both sides we have,

\[\nabla K(t)=-\int\sin(\lambda t)\lambda f(\lambda)d\lambda.\]

Since \(|\sin(\lambda t)\lambda f(\lambda)|\leq|\lambda|f(\lambda)\) and \(\int|\lambda|f(\lambda)d\lambda<\infty\) under the hypothesis, differentiation under the integral sign is valid. We repeat the process to obtain,

\[\nabla^{2}K(t)=-\int\cos(\lambda t)\lambda^{2}f(\lambda)d\lambda,\ \ \nabla^{3}K(t)=\int\sin(\lambda t)\lambda^{3}f(\lambda)d\lambda,\ \ \nabla^{4}K(t)=\int\cos(\lambda t)\lambda^{4}f(\lambda)d\lambda.\]

Next we make the following observations for the limits of these derivatives,

\[\lim_{t\to 0}\left(\nabla^{2}K(t)-\frac{\nabla K(t)}{t}\right)=\lim_{t\to 0}\int\left(\cos(\lambda t)-\frac{\sin(\lambda t)}{\lambda t}\right)\lambda^{2}f(\lambda)d\lambda=0,\]
\[\lim_{t\to 0}\nabla^{2}K(t)=-\lim_{t\to 0}\int\cos(\lambda t)\lambda^{2}f(\lambda)d\lambda=-\int\lambda^{2}f(\lambda)d\lambda<\infty,\]
\[\lim_{t\to 0}\nabla^{3}K(t)=\lim_{t\to 0}\int\sin(\lambda t)\lambda^{3}f(\lambda)d\lambda=0,\]
\[\lim_{t\to 0}\nabla^{4}K(t)=\lim_{t\to 0}\int\cos(\lambda t)\lambda^{4}f(\lambda)d\lambda=\int\lambda^{4}f(\lambda)d\lambda<\infty.\]

We evaluate the results obtained above under these observations. Making note that

\[\bigg\{\frac{vech(I)^{\top}\otimes\Delta}{||\Delta||^{2}}-3\frac{vech(\Delta\Delta^{\top})^{\top}\otimes\Delta}{||\Delta||^{4}}+\frac{1}{||\Delta||^{2}}\frac{\partial vech(\Delta\Delta^{\top})}{\partial\Delta}\bigg\},\ \ \ \ \frac{vech(\Delta\Delta^{\top})^{\top}\otimes\Delta}{||\Delta||^{3}}\]

stay bounded as \(\Delta\to 0\), it follows that \(\nabla^{3}K(\Delta)\to 0\) as \(\Delta\to 0\).
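
As a numerical aside (not part of the argument), these spectral identities can be checked directly for the squared exponential kernel \(K(t)=\sigma^{2}e^{-\phi t^{2}}\) on \(\mathbb{R}\); its spectral density, written below, is a Gaussian density in \(\lambda\) and is stated here as an assumption for the check.

```python
import numpy as np

sigma2, phi, t = 1.3, 0.7, 0.4          # illustrative values
# Assumed spectral density of K(t) = sigma^2 exp(-phi t^2) on R (Gaussian in lambda).
f = lambda lam: sigma2 / (2 * np.sqrt(np.pi * phi)) * np.exp(-lam**2 / (4 * phi))

lam = np.linspace(-40, 40, 200001)
d2K_spec = -np.trapz(np.cos(lam * t) * lam**2 * f(lam), lam)   # -int cos(lambda t) lambda^2 f(lambda) d lambda
d2K_closed = -2 * sigma2 * phi * np.exp(-phi * t**2) * (1 - 2 * phi * t**2)   # closed form from Section 10
print(d2K_spec, d2K_closed)             # agree up to quadrature error
```
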
For \(\nabla^{4}K(\Delta)\), we observe that the factors corresponding to \(\nabla^{2}\widetilde{K}(||\Delta||)-\frac{\nabla\widetilde{K}(||\Delta||)}{|| \Delta||}\), \(\nabla^{3}\widetilde{K}(||\Delta||)\) and \(\nabla^{4}\widetilde{K}(||\Delta||)\) remain bounded as \(\Delta\to 0\), additionally \(\frac{vech(\Delta\Delta^{\top})vech(\Delta\Delta^{\top})^{\top}}{||\Delta||^{4}} \to I_{d(d+1)/2}\) as \(\Delta\to 0\). For each diagonal element of \((\nabla^{4}K)_{ii}=a_{i}=\int\lambda^{4}f_{i}(\lambda)d\lambda\) which completes the proof. For results in page 8, we prove this for \(\mathbf{s}\in\mathbb{R}^{2}\), the proof can be extended to \(\mathbb{R}^{d}\) analogously. Under the hypothesis, \[Y_{1}(\mathbf{s}),\sim GP(\mathbf{0},K(\cdot,\boldsymbol{\theta}_{K}^{1})) \text{ and }Y_{2}(\mathbf{s})\sim GP(\mathbf{0},K(\cdot,\boldsymbol{\theta}_{K}^{2}))\] independently. Without loss of generality, consider the finite difference differential process, \[\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})=\boldsymbol{A} _{h}\mathbf{L}_{h}^{1}(\mathbf{s})=\begin{pmatrix}1&0&0&0&0&0\\ -\frac{1}{h}&\frac{1}{h}&0&0&0&0\\ -\frac{1}{h}&0&\frac{1}{h}&0&0&0\\ \frac{1}{h^{2}}&-\frac{2}{h^{2}}&0&\frac{1}{h^{2}}&0&0\\ \frac{1}{h^{2}}&-\frac{1}{h^{2}}&-\frac{1}{h^{2}}&0&\frac{1}{h^{2}}&0\\ \frac{1}{h^{2}}&0&-\frac{2}{h^{2}}&0&0&\frac{1}{h^{2}}\end{pmatrix}\left( \begin{array}{c}Y_{1}(\mathbf{s})\\ Y_{1}(\mathbf{s}+h\mathbf{e}_{1})\\ Y_{1}(\mathbf{s}+h\mathbf{e}_{2})\\ Y_{1}(\mathbf{s}+2h\mathbf{e}_{1})\\ Y_{1}(\mathbf{s}+h(\mathbf{e}_{1}+\mathbf{e}_{2}))\\ Y_{1}(\mathbf{s}+2h\mathbf{e}_{2})\end{array}\right),\] noting that for every \(h>0\), \(\mathbf{L}_{h}^{1}(\mathbf{s})\) follows a 6-dimensional normal distribution and \(|\boldsymbol{A}_{h}|=h^{-8}\neq 0\), making the above linear transformation non-singular. We know from properties of multivariate normal distributions that \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})\sim\mathcal{N }_{5}(\boldsymbol{A}_{h}.\mathbf{0},\boldsymbol{A}_{h}\mathbf{K}_{h}(\mathbf{ 0},\boldsymbol{\theta}_{K}^{1})\boldsymbol{A}_{h}^{\top})\), where \(\mathbf{K}_{h}(\mathbf{0},\boldsymbol{\theta}_{K}^{1})=Var(\mathbf{L}_{h}^{1}( \mathbf{s}))\), the cross-covariance matrix for the process \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})\) is \(\boldsymbol{A}_{h}\mathcal{K}_{h,k}(\Delta)\boldsymbol{A}_{k}^{\top}\), where \(\mathcal{K}_{h,k}(\Delta)=Cov(\mathbf{L}_{h}^{1}(\mathbf{s}),\mathbf{L}_{k}^{1} (\mathbf{s}^{\prime}))\), with \(\Delta=\mathbf{s}-\mathbf{s}^{\prime}\). As \(h\to 0\), \(\boldsymbol{A}_{h}\mathbf{K}_{h}(\mathbf{0},\boldsymbol{\theta}_{K}^{1}) \boldsymbol{A}_{h}^{\top}\to V_{CY_{1}}(\mathbf{0})\) and \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})\overset{d}{ \rightarrow}\mathcal{L}Y_{1}(\mathbf{s})\sim\mathcal{N}_{5}(\mathbf{0},V_{ CY_{1}}(\mathbf{0}))\), where \(\overset{d}{\rightarrow}\) indicates convergence in distribution. As \(h,k\downarrow 0\)\(\boldsymbol{A}_{h}\mathcal{K}_{h,k}(\Delta)\boldsymbol{A}_{k}^{\top}\to V_{ \mathcal{L}Y_{1}}(\Delta)\), implying that \(\mathcal{L}Y_{1}(\mathbf{s})\sim GP(\mathbf{0},V_{\mathcal{L}Y_{1}}(\cdot, \boldsymbol{\theta}_{K}^{1}))\). The same arguments can be followed for showing \(\mathcal{L}Y_{2}(\mathbf{s})\sim GP(\mathbf{0},V_{\mathcal{L}Y_{2}}(\cdot, \boldsymbol{\theta}_{K}^{2}))\). 1. 
for (a) consider the associated finite differential operators, \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})=\boldsymbol{A} _{h}\mathbf{L}_{h}^{1}(\mathbf{s})\) and \(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{2}(\mathbf{s})=\boldsymbol{A} _{h}\mathbf{L}_{h}^{2}(\mathbf{s})\). We note that for every \(h>0\), \(Cov(\mathbf{L}_{h}^{1}(\mathbf{s}),\mathbf{L}_{h}^{2}(\mathbf{s}))=\mathbf{O}\) which is the zero matrix (of order \(6\times 6\)). Now consider the process, \[\mathcal{L}_{h}^{1,2} =(\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{1}(\mathbf{s})^{ \top},\mathcal{L}_{\mathbf{e}_{1},\mathbf{e}_{2},h}Y_{2}(\mathbf{s})^{\top})^{ \top}=\mathbf{A}_{h}\oplus\mathbf{A}_{h}(\mathbf{L}_{h}^{1}(\mathbf{s})^{\top},\mathbf{ L}_{h}^{2}(\mathbf{s})^{\top})^{\top}\] \[\mathcal{L}_{h}^{1,2} \sim\mathcal{N}_{12}(\mathbf{0},(\mathbf{A}_{h}\oplus\mathbf{A}_{h})( \mathbf{K}_{h}(\cdot,\mathbf{\theta}_{K}^{1})\oplus\mathbf{K}_{h}(\cdot,\mathbf{\theta }_{K}^{2}))(\mathbf{A}_{h}\oplus\mathbf{A}_{h})^{\top})\] as \(h\downarrow 0\), \(\mathcal{L}_{h}^{1,2}\overset{d}{\rightarrow}(\mathcal{L}Y_{1}^{\top}, \mathcal{L}Y_{2}^{\top})^{\top}\sim\mathcal{N}_{12}(\mathbf{0},V_{\mathcal{L}Y _{1}}(\mathbf{0})\oplus V_{\mathcal{L}Y_{2}}(\mathbf{0}))\) which implies \(Cov(\mathcal{L}Y_{1},\mathcal{L}Y_{2})=\mathbf{O}\), and since they jointly follow a multivariate Gaussian this implies that they are independent. Observing that the cross covariance for all \(h,k\downarrow 0\), \(Cov(\mathbf{L}_{h}^{1}(\mathbf{s}),\mathbf{L}_{k}^{2}(\mathbf{s}^{\prime}))= \mathbf{O}\), we can establish that they are independent Gaussian processes. 2. (b) and (c) follow from standard properties of the Gaussian processes. For the discussion on page 14 preceding (14) we suppress dependence on \(\mathbf{s}\) and \(Y\), we denote \(g(t)=g\left(\mathcal{L}Y(\mathbf{s}(t))\right)\). By definition of the integral, given \(\epsilon>0\), there exists a \(\delta_{0}>0\) such that if \(|P|<\delta_{0}\), then \[\left|\int_{a}^{b}g(t)||\mathbf{s}^{\prime}(t)||dt-\sum_{i=1}^{n_{P}}(t_{i}^{ \prime}-t_{i-1}^{\prime})g(t_{i}^{\prime})||\mathbf{s}^{\prime}(t_{i}^{\prime })||\right|<\frac{\epsilon}{2}, \tag{15}\] where \(\sum\limits_{i=1}^{n_{P}}(t_{i}^{\prime}-t_{i-1}^{\prime})g(t_{i}^{\prime})|| \mathbf{s}^{\prime}(t_{i}^{\prime})||\) is a _Riemann sum approximation_ of the integral. On the other hand, since \(g(t)\) is uniformly continuous over \([a,b]\), given \(\epsilon>0\), there exists \(\delta_{1}>0\) such that if \(x,y\in[a,b]\) with \(|x-y|<\delta_{1}\), \[\left|g(x)||\mathbf{s}^{\prime}(x)||-g(y)||\mathbf{s}^{\prime}(y)||\right|< \frac{\epsilon}{2(b-a)}\] Set \(\delta=\min\{\delta_{0},\delta_{1}\}\), then \(|P|<\delta\), using the mean value theorem we obtain, \[\left|\sum_{i=1}^{n_{P}}\int_{C_{t_{i}}}g(t)||\mathbf{s}^{\prime}(t )||dt-\sum_{i=1}^{n_{P}}(t_{i}^{\prime}-t_{i-1}^{\prime})g(t_{i}^{\prime})|| \mathbf{s}^{\prime}(t_{i}^{\prime})||\right|\] \[\leq\left|\sum_{i=1}^{n_{P}}(t_{i}^{\prime}-t_{i-1}^{\prime}) \sup g(t)||\mathbf{s}^{\prime}(t)||-\sum_{i=1}^{n_{P}}(t_{i}^{\prime}-t_{i-1}^ {\prime})g(t_{i}^{\prime})||\mathbf{s}^{\prime}(t_{i}^{\prime})||\right|\] \[\leq\left|\sum_{i=1}^{n_{P}}(t_{i}^{\prime}-t_{i-1}^{\prime}) \sup\big{|}g(t)||\mathbf{s}^{\prime}(t)||-g(t^{\prime})||\mathbf{s}^{\prime} (t^{\prime})||\right|\Bigg{|}\leq\frac{\epsilon}{2}\;,\] where the first inequality follows from the assumption of \(C\) being regular. 
Together with the inequality in (15) we have, \[\left|\int_{a}^{b}g(t)||\mathbf{s}^{\prime}(t)||dt-\sum_{i=1}^{n_{P}}\int_{C_ {t_{i}}}g(t)||\mathbf{s}^{\prime}(t)||dt\right|<\epsilon.\] Finally, almost sure convergence yields for every \(\epsilon>0\), \[P\left[\left|\int_{a}^{b}g(t)||\mathbf{s}^{\prime}(t)||dt-\sum_{i=1}^{n_{P}} \int_{C_{t_{i}}}g(t)||\mathbf{s}^{\prime}(t)||dt\right|<\epsilon\right]=1.\] Considering a sequence of \(\epsilon\downarrow 0\), and using the preceding arguments for each \(\epsilon\) we can find \(\delta\downarrow 0\) such that \(|P|<\delta\) which concludes the proof. Tables ## 14 Plots Figure 9: Illustration showing geometric interpretation of curvilinear wombling. Local tangent planes are shaded around points \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\). Normals to the surface are marked as \(\mathbf{N}(\mathbf{s}_{1})\) and \(\mathbf{N}(\mathbf{s}_{2})\), locally projected principal unit normals to the projected curve \(\mathbf{C}\) are marked as \(\mathbf{n}(\mathbf{s}(t_{1}))\) and \(\mathbf{n}(\mathbf{s}(t_{2}))\) respectively. The tangent vectors spanning the local tangent planes are shown with arrows. Figure 11: Plots showing the true surfaces for the (a) process, (b) gradients along \(x\)-axis, (c) gradients along \(y\)-axis, and estimated surfaces for the (d) process, gradients (e), (f) w.r.t. synthetic response generated, \(y\sim N(10[\sin(3\pi s_{1})+\cos(3\pi s_{2})],1)\) Figure 10: Illustration of rectilinear wombling, showing a curve \(C\), an initial starting point \(\mathbf{s}_{0}\) on the curve, with following points, \(\{\mathbf{s}_{1},\mathbf{s}_{2},\dots,\mathbf{s}_{8}\}\) corresponding a partition \(\mathcal{T}\), of the parameterized curve. Each linear segment consists of a norm-direction pair \((t,\mathbf{u})\), where \(t\) specifies the length of the segment and \(\mathbf{u}\) the direction of movement. The normal direction for each segment is indicated as \(\mathbf{u}^{\perp}\). Figure 12: Plots showing the true surfaces for the (a) curvature along \(x\)-axis, (b) curvature along \(x\)-\(y\)-axis, (c) curvature along \(y\)-axis, and estimated surfaces for (d) curvature along \(x\)-axis, (e) mixed curvature along \(x\)-\(y\)-axis, (f) curvature along \(y\)-axis, for synthetic response generated from, \(y\sim N(10[\sin(3\pi s_{1})+\cos(3\pi s_{2})],1)\). Figure 13: Plots showing the true surfaces for the (a) process, (b) gradients along \(x\)-axis, (c) gradients along \(y\)-axis, and estimated surfaces for the (d) process, gradients (e), (f) for synthetic response generated from, \(y\sim N(10[\sin(3\pi s_{1})\cdot\cos(3\pi s_{2})],1)\). Figure 14: Plots showing the true surfaces for the (a) curvature along \(x\)-axis, (b) curvature along \(x\)-\(y\)-axis, (c) curvature along \(y\)-axis, and estimated surfaces for (d) curvature along \(x\)-axis, (e) mixed curvature along \(x\)-\(y\)-axis, (f) curvature along \(y\)-axis, for synthetic response generated from, \(y\sim N(10[\sin(3\pi s_{1})\cdot\cos(3\pi s_{2})],1)\). Figure 15: Plots showing the true surfaces for (a) eigen value, \(\lambda_{1}\) (b) eigen value \(\lambda_{2}\), (c) Gaussian curvature (scales in \(\times 10^{4}\)) (d) divergence (e) Laplacian (scales in \(\times 10^{1}\)) over grid points, and (f) fitted process. This is shown for synthetic response generated from, \(y\sim N(10[\sin(3\pi s_{1})+\cos(3\pi s_{2})],1)\). 

Figure 16: Plots showing the estimated surfaces for (a) eigen value, \(\lambda_{1}\) (b) eigen value \(\lambda_{2}\), (c) Gaussian curvature (scales in \(\times 10^{4}\)) (d) divergence (e) Laplacian (scales in \(\times 10^{1}\)) (f) fitted process over grid points. Each point is color coded; green denoting the HPD intervals not containing 0, with positive end points, while cyan denotes HPD intervals not containing 0, with negative end points.

Figure 17: Plots showing the true surfaces for (a) eigen value, \(\lambda_{1}\) (b) eigen value \(\lambda_{2}\), (c) Gaussian curvature (scales in \(\times 10^{4}\)) (d) divergence (e) Laplacian (scales in \(\times 10^{1}\)) (f) fitted process over grid points. This is shown for synthetic response generated from \(y\sim N(10[\sin(3\pi s_{1})\cdot\cos(3\pi s_{2})],1)\).

Figure 19: Plots showing observed versus fitted values for (a) the response variable \(Y(s)\); (b) gradients with respect to \(x\)-axis; (c) gradients with respect to \(y\)-axis; (d) curvature with respect to \(x\)-axis; (e) mixed curvature over \(x\)-\(y\); (f) curvature for \(y\sim N(10[\sin(3\pi s_{1})+\cos(3\pi s_{2})],1)\) with respect to \(y\)-axis. The gray shades represent the 95% HPD regions for each estimate.

Figure 18: Plots showing the estimated surfaces for (a) eigen value, \(\lambda_{1}\) (b) eigen value \(\lambda_{2}\), (c) Gaussian curvature (scales in \(\times 10^{4}\)) (d) divergence (e) Laplacian (scales in \(\times 10^{1}\)) (f) fitted process over grid points. Each point is color coded; green denoting the HPD intervals not containing 0, with positive end points, while cyan denotes HPD intervals not containing 0, with negative end points.

Figure 20: Plots showing observed versus fitted values for (a) the response variable \(Y(s)\); (b) gradients with respect to \(x\)-axis; (c) gradients with respect to \(y\)-axis; (d) curvature with respect to \(x\)-axis; (e) mixed curvature over \(x\)-\(y\); (f) curvature for \(y\sim N(10[\sin(3\pi s_{1})\cdot\cos(3\pi s_{2})],1)\) with respect to \(y\)-axis. The gray shades represent the 95% HPD regions for each estimate.

Figure 21: Plots showing results for curvature wombling on the Meuse river (first row) Copper (Cu) (second row) Lead (Pb) (third row) Zinc (Zn).

## 15 Further Application

### Temperatures in Northeastern US

Temperatures are historically known to exhibit spatial variation. We focus on a data set that records monthly temperatures across weather monitoring stations in the Northeastern United States during January, 2000, from the R-package spBayes (Finley et al., 2007). Temperature gradients and curvature are of interest from an environmental science perspective to track and perform boundary analysis on zones that exhibit significant changes in the surface during a month. Curvature wombling performed on temperature reveals climate zones featuring rapid atmospheric changes. Quantifying such variations in atmospheric conditions is central to statistical modeling in environmental applications. The data consists of temperatures (in degrees Celsius) from 356 weather monitoring stations. The probability distribution for temperatures and an interpolated spatial plot are shown in Figure 22. We model the data using the hierarchical model outlined in (13) in the manuscript.
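
Before turning to the prior specification and results, the following Python/NumPy sketch illustrates the gradient-and-curvature sampling step (cf. Algorithm 1) as it would be applied to a fit of this kind. It is schematic: the kernel is taken to be squared exponential so that the closed forms from Section 10 apply, the data are simulated, and the sign and ordering conventions are chosen self-consistently here rather than copied from the paper's internal notation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, phi, tau2 = 21.44, 0.38, 0.92            # mimics one posterior draw of (sigma^2, phi, tau^2)

n = 80
S = rng.uniform(0, 3, size=(n, 2))               # observed locations
D2 = ((S[:, None, :] - S[None, :, :])**2).sum(-1)
K_obs = sigma2 * np.exp(-phi * D2) + tau2 * np.eye(n)          # covariance of centred data Z
Z = rng.multivariate_normal(np.zeros(n), K_obs)                # stand-in for one draw of centred data

s0 = np.array([1.5, 1.5])                        # grid location where LY = (Y, grad Y, vech Hess Y) is wanted
delta = s0 - S                                   # delta_i = s0 - s_i
e = sigma2 * np.exp(-phi * (delta**2).sum(-1))

# cross-covariances Cov(LY(s0), Y(s_i)) from the squared exponential closed forms (Section 10)
cross = np.column_stack([
    e,                                           # Y
    -2 * phi * e * delta[:, 0],                  # dY/ds1
    -2 * phi * e * delta[:, 1],                  # dY/ds2
    -2 * phi * e * (1 - 2 * phi * delta[:, 0]**2),   # d2Y/ds1^2
    4 * phi**2 * e * delta[:, 0] * delta[:, 1],      # d2Y/ds1 ds2
    -2 * phi * e * (1 - 2 * phi * delta[:, 1]**2),   # d2Y/ds2^2
]).T

# V_{LY}(0) for the squared exponential kernel in d = 2
V0 = sigma2 * np.block([
    [np.array([[1.0]]),                          np.zeros((1, 2)),    -2 * phi * np.array([[1.0, 0.0, 1.0]])],
    [np.zeros((2, 1)),                           2 * phi * np.eye(2),  np.zeros((2, 3))],
    [-2 * phi * np.array([[1.0], [0.0], [1.0]]), np.zeros((3, 2)),     4 * phi**2 * np.diag([3.0, 1.0, 3.0])],
])

Kinv = np.linalg.inv(K_obs)
mean = cross @ Kinv @ Z                          # conditional mean of LY(s0) given Z
cov = V0 - cross @ Kinv @ cross.T + 1e-8 * np.eye(6)   # conditional covariance (small jitter for stability)
draw = rng.multivariate_normal(mean, cov)        # one posterior sample of (Y, gradient, vech Hessian) at s0
print(np.round(draw, 3))
```

In a full analysis this step is repeated for every retained MCMC draw and every grid location, and the resulting samples feed the curvature, divergence, Laplacian, and wombling summaries reported below.
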
We used the following hyperparameters for the model: \(\phi\sim\text{Unif}\left(\frac{3}{\max_{\mathbf{s}\in S}||\Delta||},300\right)\), \(\sigma^{2}\sim IG(2,1)\) (mean 1, variance infinite), \(\tau^{2}\sim IG(2,1)\) (mean 1, variance infinite), \(\beta_{0}\sim N(0,10^{6})\), and \(\nu=5/2\) for the Matern kernel. We consider \(10^{4}\) iterations for the MCMC chains, with burn-in diagnosed at \(5\times 10^{3}\). The posterior estimates from the model fit are shown in Table 8. We fit the model with only an intercept, which allows the spatial process \(Z(\mathbf{s})\) to capture most of the variation in the data. We observe from Table 8 that \(\sigma^{2}/(\sigma^{2}+\tau^{2})\approx 95.86\%\). Significance in process parameter estimates is assessed by checking containment of 0 within the HPD intervals. The fitted spatial process, along with significance, is shown in Figure 23 (top row, left). For temperature observations falling in the middle, no significant spatial effect is observed, with stations located in the northern and southern regions showing positive and negative spatial effects. This indicates a clear variation in the north-south direction for temperatures in January. The variance of the nugget process \(\tau^{2}\) is small compared to \(\sigma^{2}\). The average estimated temperature is \(-3.78\ ^{\circ}\mathrm{C}\) \((-5.92,-1.69)\).

Figure 22: Plots showing (left) probability density of temperatures (in \({}^{\circ}\)C) (right) spatial plot of temperatures in Northeastern US during January 2000.

\begin{table} \begin{tabular}{l|c c} \hline \hline Parameters (\(\mathbf{\theta}\)) & Posterior Estimates (\(\widehat{\mathbf{\theta}}\)) & Highest Posterior Density intervals (HPD) \\ \hline \(\phi\) & 0.38 & (0.27, 0.51) \\ \(\sigma^{2}\) & 21.44 & (10.11, 41.03) \\ \(\tau^{2}\) & 0.92 & (0.76, 1.10) \\ \(\beta_{0}\) & -3.78 & (-5.92, -1.69) \\ \hline \hline \end{tabular} \end{table} Table 8: Posterior Estimates for Temperatures in Northeastern US. The highest posterior density intervals are shown alongside for respective estimates.

Posterior surfaces are obtained on the same grid \(\mathcal{G}\) using the posterior estimates for \(\nabla^{2}_{xx}\), \(\nabla^{2}_{xy}\) and \(\nabla^{2}_{yy}\). We leverage these to produce estimated surfaces for the Gaussian curvature (determinant) (shown on the left of Figure 24), the divergence operator (center of Figure 24) and the Laplacian (right of Figure 24). Posterior surfaces for the Gaussian curvature are indicative of the locations/presence of maxima and saddle points; divergence surfaces show regions of rapid change in temperatures, while the Laplacian shows regions of maximum change in gradients. Gaussian calibration for these estimates attaches significance, allowing us to distinguish between contiguous zones housing significant change. We perform curvature wombling using inference obtained from the posterior analysis of the surface.

Figure 24: Plots showing surfaces for (left) Gaussian curvature (center) divergence and (right) Laplacian of temperatures in Northeastern US during January 2000.

Figure 23: Plots showing (top row) (left) the fitted spatial process (center) the estimated gradient, \(\nabla_{x}\) process along \(x\)-axis (right) the estimated gradient, \(\nabla_{y}\) process along \(y\)-axis (bottom row) (left) estimated curvature \(\nabla^{2}_{xx}\) along \(x\)-axis (center) estimated curvature, \(\nabla^{2}_{xy}\) (right) estimated curvature, \(\nabla^{2}_{yy}\) along \(y\)-axis.

Figure 25 and Table 9 show the results for curvilinear wombling on the resulting posterior estimates of the surface. The curves chosen are shown in the plots on the left of Figure 25. We begin with curves (level sets) "1" and "2" that delineate zones of significant (positive in the south and negative in the north) spatial effects, iteratively proceeding to higher (lower) level sets while inspecting them for curvilinear gradients and curvature. Referring to Table 9, we observe that all curves are located as wombling boundaries with respect to curvilinear gradients, as seen from their significant average gradients. With respect to directional curvature, we refer to the significant segments located in Figure 25, which show considerable heterogeneity, i.e. changes in directional concavity when traversing the curve, with separated contiguous segments indicating significant changes in concavity. For instance, traversing curve "2" in the west-east direction, we observe this clearly. This naturally renders the average directional curvature (shown in Table 9) insignificant when considered along the entirety of curve "2", which is also the case for the other level sets. To detect significance we could only summarize across significant segments (as was done for the Meuse river data).

\begin{table} \begin{tabular}{l|c c} \hline \hline Curve (\(C\)) & Average Gradient (\(\overline{\Gamma}^{(1)}(C)\)) & Average Curvature (\(\overline{\Gamma}^{(2)}(C)\)) \\ \hline Boundary 1 & 2.44 & -0.38 \\ & (1.90, 2.97) & (-1.48, 0.74) \\ Boundary 2 & 2.70 & -0.40 \\ & (2.24, 3.18) & (-1.67, 0.79) \\ \hline Boundary 3 & 2.69 & -0.34 \\ & (2.10, 3.28) & (-1.71, 0.84) \\ Boundary 4 & 2.23 & -0.31 \\ & (1.74, 2.72) & (-1.67, 1.11) \\ \hline Boundary 5.1 & -2.96 & 1.13 \\ & (-4.22, -1.73) & (-0.88, 3.16) \\ Boundary 5.2 & -3.24 & -0.23 \\ & (-4.22, -2.25) & (-2.19, 1.69) \\ Boundary 6.1 & 1.63 & -0.23 \\ & (0.70, 2.58) & (-2.96, 2.62) \\ Boundary 6.2 & 2.90 & -0.38 \\ & (1.93, 3.92) & (-3.00, 2.31) \\ \hline \hline \end{tabular} \end{table} Table 9: Curvilinear wombling measures for boundaries in Northeastern US temperatures; each measure is accompanied by its corresponding HPD interval in brackets below.

Figure 25: Plots showing curvature wombling on the temperature data. The curves marked in each row on the figure to the left are to be referenced with Table 9, which shows average wombling measures for gradient and curvature for the respective curves.
2302.03989
KK-duality for self-similar groupoid actions on graphs
We extend Nekrashevych's $KK$-duality for $C^*$-algebras of regular, recurrent, contracting self-similar group actions to regular, contracting self-similar groupoid actions on a graph, removing the recurrence condition entirely and generalising from a finite alphabet to a finite graph. More precisely, given a regular and contracting self-similar groupoid $(G,E)$ acting faithfully on a finite directed graph $E$, we associate two $C^*$-algebras, $\mathcal{O}(G,E)$ and $\widehat{\mathcal{O}}(G,E)$, to it and prove that they are strongly Morita equivalent to the stable and unstable Ruelle C*-algebras of a Smale space arising from a Wieler solenoid of the self-similar limit space. That these algebras are Spanier-Whitehead dual in $KK$-theory follows from the general result for Ruelle algebras of irreducible Smale spaces proved by Kaminker, Putnam, and the last author.
Nathan Brownlowe, Alcides Buss, Daniel Gonçalves, Jeremy B. Hume, Aidan Sims, Michael F. Whittaker
2023-02-08T10:56:05Z
http://arxiv.org/abs/2302.03989v2
# \(Kk\)-duality for self-similar groupoid actions on graphs ###### Abstract. We extend Nekrashevych's \(KK\)-duality for \(C^{*}\)-algebras of regular, recurrent, contracting self-similar group actions to regular, contracting self-similar groupoid actions on a graph, removing the recurrence condition entirely and generalising from a finite alphabet to a finite graph. More precisely, given a regular and contracting self-similar groupoid \((G,E)\) acting faithfully on a finite directed graph \(E\), we associate two \(C^{*}\)-algebras, \(\mathcal{O}(G,E)\) and \(\widehat{\mathcal{O}}(G,E)\), to it and prove that they are strongly Morita equivalent to the stable and unstable Ruelle C*-algebras of a Smale space arising from a Wieler solenoid of the self-similar limit space. That these algebras are Spanier-Whitehead dual in \(KK\)-theory follows from the general result for Ruelle algebras of irreducible Smale spaces proved by Kaminker, Putnam, and the last author. Key words and phrases:\(C^{*}\)-algebra; self-similar group; limit space; Poincare duality; Spanier-Whitehead duality; KK-duality; Smale space 2020 Mathematics Subject Classification: 47L05, 19K35 (Primary); 37B05 (Secondary) Sims was supported by Australian Research Council grant DP220101631 and by CAPES grant 88887.370640. The second and third authors were partially supported by CNPq and CAPES - Brazil. Hume was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 817597). We thank Isnie Yusnitha for careful reading and helpful comments and Volodia Nekrashevych for insightful conversations. In this paper we extend this duality result to the self-similar groupoids defined in [17], and simultaneously extend Nekrashevych's Smale space result [22] to self-similar actions that are not necessarily recurrent. We also note that our proof is completely different; in particular, we show that the underlying Smale space is a Wieler solenoid [31]. Our main theorem, Theorem 8.1, states that the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\) of a contracting self-similar groupoid action as defined in [17], and the \(C^{*}\)-algebra, suggestively denoted \(\widehat{\mathcal{O}}(G,E)\), of the Deaconu-Renault groupoid of the canonical local homeomorphism on an associated limit space are \(KK\)-dual. In conjunction with consequences of classification theory for \(C^{*}\)-algebras (see [12, Theorem 1.1 and Section 4.4] and [24, Theorem 5.11]), our result implies that \(\mathcal{O}(G,E)\cong\widehat{\mathcal{O}}(G,E)\), so our \(KK\)-duality mimics Poincare duality in topology. To prove our main theorem, we generalise the results of Nekrashevych [22] to the self-similar groupoid setting of [17]. We extend Nekrashevych's construction of the limit space \(\mathcal{J}\) of a self-similar group to the setting of self-similar groupoids, and show that the shift map on \(\mathcal{J}\) is open and expansive. We employ Wieler's classification of Smale spaces with totally disconnected stable sets to see that the projective limit of \(\mathcal{J}\) with respect to the shift map, which we identify with a natural limit solenoid \(\mathcal{S}\), is a Smale space with respect to a homeomorphism \(\tilde{\tau}\) induced by the shift map on \(\mathcal{J}\). Nekrashevych's construction of Smale spaces from self-similar groups is subtle, so we are careful to include all the details in extending to the situation of self-similar groupoids. 
In doing so, we are able to weaken the existing hypotheses, even for self-similar groups. The remainder of our work goes into proving that the Cuntz-Pimsner algebra \(\mathcal{O}(G,E)\) is Morita equivalent (i.e. stably isomorphic) to the unstable Ruelle algebra of \((\mathcal{S},\tilde{\tau})\) and that the Deaconu-Renault groupoid \(C^{*}\)-algebra \(\widehat{\mathcal{O}}(G,E)\) is Morita equivalent to the stable Ruelle algebra of \((\mathcal{S},\tilde{\tau})\). We illustrate our results via several examples. In particular, the main example from [17] is the self-similar groupoid \((G,E)\) arising from the graph and automaton described in Example 2.2 below. The limit space of this self-similar action is homeomorphic to the complex unit circle with the map \(z\mapsto z^{2}\). The associated limit solenoid is the classical dyadic solenoid Smale space. Thus, using [32] and Theorem 8.1 we deduce that \[K_{0}(\mathcal{O}_{(G,E)})\cong\mathbb{Z}\oplus\mathbb{Z},\quad K_{1}(\mathcal{O}_{(G,E)})\cong\mathbb{Z},\quad\text{ and }K^{1}(\mathcal{O}_{(G,E)})\cong\mathbb{Z}\oplus\mathbb{Z}.\] The Kirchberg-Phillips Theorem then implies that \(\mathcal{O}_{(G,E)}\) is isomorphic to the Cuntz-Pimsner algebra of the odometer. Another source of interesting examples is the class of Katsura algebras [14]. These were recognised as self-similar actions on graphs by Exel and Pardo [7]. To see how they fit into our framework, see [17, Example 7.7] and [17, Appendix A] for the general translation from the Exel-Pardo situation to self-similar groupoid actions. In [7, Section 18], Exel and Pardo show that all unital Kirchberg algebras in the UCT class can be realised from self-similar groupoid actions of this kind. They also prove that the \(K\)-theory of the Cuntz-Pimsner algebra of a self-similar groupoid of this sort is directly computable from the graph adjacency matrix and the restriction matrix for the self-similar action. An analysis using Schreier graphs shows that the limit space of such a self-similar groupoid is the total space of a bundle over the circle whose fibres are copies of the Cantor set, each with an odometer action specified by the restriction matrix. Once again, a combination of \(KK\)-duality with classification theory shows that the Cuntz-Pimsner algebra of such a system is isomorphic to the \(C^{*}\)-algebra of the Deaconu-Renault groupoid of the limit space. This allows us to compute the \(K\)-theory of these interesting Deaconu-Renault \(C^{*}\)-algebras. Example 3.22 introduces a new example whose limit dynamical system is conjugate to that of the basilica group. By definition, the basilica group is the iterated monodromy group (see [21, Chapter 5]) of the function \(f(z)=z^{2}-1\), viewed as a complex map from \(\mathbb{C}\setminus\{-1,0,1\}\) to \(\mathbb{C}\setminus\{-1,0\}\). The \(K\)-groups of its Cuntz-Pimsner algebra, and those of its dual algebra, are computed in [22, Theorem 4.8] and [22, Theorem 6.6] (see also [11]). Our main theorem therefore allows us also to compute the \(K\)-homology of both algebras. The paper is organised as follows. In Section 2 we give the necessary background for the paper. We begin with directed graphs and their \(C^{*}\)-algebras. This leads to a discussion of self-similar groupoid actions on graphs, where we recall the relevant information from [17]. We conclude the section with Smale spaces and their \(C^{*}\)-algebras. In Section 3, we generalise Nekrashevych's notion of a limit space to self-similar groupoid actions.
Nekrashevych's construction is clever and subtle, so we provide substantial details regarding the metric topology on the limit space that are omitted in Nekrashevych's work. We complete this section by defining the level Schreier graphs of a self-similar groupoid action and how these relate to the limit space. In the following two sections we seek to understand the dynamics on the limit space. In particular, we show that the shift map on the limit space is a Wieler solenoid, and hence the natural extension is a Smale space with totally disconnected stable sets. Sections 6 and 7 define two natural \(C^{*}\)-algebras associated to a self-similar groupoid action. One is the Cuntz-Pimsner algebra of [17]; our main goal is to provide a groupoid model for this algebra, extending that given by Nekrashevych in [22, Section 5]. The other is the \(C^{*}\)-algebra of a generalisation to self-similar groupoids of Nekrashevych's Deaconu-Renault groupoid of the limit space of a self-similar group. This becomes the dual algebra for the Cuntz-Pimsner algebra. Corollary 6.7 establishes the exact condition required for the groupoid of germs to be Hausdorff. Our main result appears in Section 8. We prove that the Cuntz-Pimsner algebra is strongly Morita equivalent to the unstable Ruelle algebra of a Smale space and that the Deaconu-Renault groupoid algebra is strongly Morita equivalent to its stable Ruelle algebra. We then deduce our main \(KK\)-duality result from [12]. ## 2. Background ### Graphs and \(C^{*}\)-algebras In this paper, we use the notation and conventions of [28] for graphs and their \(C^{*}\)-algebras. A _directed graph_\(E\) is a quadruple \(E=(E^{0},E^{1},r,s)\) consisting of sets \(E^{0}\) and \(E^{1}\) and maps \(r,s:E^{1}\to E^{0}\). The elements of \(E^{0}\) are called _vertices_ and we think of them as dots, and elements of \(E^{1}\) are called _edges_, and we think of them as arrows pointing from one vertex to another: \(e\in E^{1}\) points from \(s(e)\) to \(r(e)\). A _path_ in \(E\) is either a vertex, or a string \(\mu=e_{1}\ldots e_{n}\) such that each \(e_{i}\in E^{1}\) and \(s(e_{i})=r(e_{i+1})\). For \(n\geq 2\) we write \(E^{n}=\{e_{1}\ldots e_{n}\mid e_{i}\in E^{1},\ s(e_{i})=r(e_{i+1})\}\) for the set of paths of length \(n\) in \(E\). The _length_\(|\mu|\) of the path \(\mu\) is given by \(|\mu|=n\) if and only if \(\mu\in E^{n}\). The collection \(E^{*}:=\bigcup_{n=0}^{\infty}E^{n}\) of all paths in \(E\) is a small category with identity morphisms \(E^{0}\), composition given by concatenation of paths of nonzero length together with the identity rules \(r(\mu)\mu=\mu=\mu s(\mu)\), and domain and codomain maps \(r,s\). For \(\mu\in E^{*}\) and \(X\subseteq E^{*}\), we write \(\mu X=\{\mu\nu\mid\nu\in X,\ s(\mu)=r(\nu)\}\) and \(X\mu=\{\nu\mu\mid\nu\in X,\ r(\mu)=s(\nu)\}\). We write \(\mu X\nu\) for \(\mu X\cap X\nu\). We say that a graph \(E\) is _row finite_ if \(vE^{1}\) is finite for each \(v\in E^{0}\) and that it has _no sources_ if each \(vE^{1}\) is nonempty. We say that it is _finite_ if both \(E^{0}\) and \(E^{1}\) are finite. We say that \(E\) is _strongly connected_ if for all \(v,w\in E^{0}\), the set \(vE^{*}w\) is nonempty, and \(E\) is not the graph with one vertex and no edges. If \(E\) is strongly connected, then \(vE^{1}\) and \(E^{1}v\) are nonempty for all \(v\) in \(E^{0}\). In this paper, we will need to work with left-infinite, right-infinite and bi-infinite paths in a directed graph \(E\).
We will use the following notation: \[E^{\infty} =\{e_{1}e_{2}e_{3}\cdots|\;e_{i}\in E^{1},s(e_{i})=r(e_{i+1}) \text{ for all }i\},\] \[E^{-\infty} =\{\ldots e_{-3}e_{-2}e_{-1}\mid e_{i}\in E^{1},s(e_{i})=r(e_{i+1 })\text{ for all }i\},\quad\text{ and}\] \[E^{\mathbb{Z}} =\{\ldots e_{-2}e_{-1}e_{0}e_{1}e_{2}\cdots|\;e_{i}\in E^{1},s(e_ {i})=r(e_{i+1})\text{ for all }i\}.\] For \(x=x_{1}x_{2}\cdots\in E^{\infty}\) we write \(r(x)=r(x_{1})\) and for \(x=\ldots x_{-2}x_{-1}\in E^{-\infty}\), we write \(s(x)=s(x_{-1})\). We endow these spaces with the topologies determined by cylinder sets. These cylinder sets are indexed by finite paths in each of the three spaces involved, so we will distinguish them with the following slightly non-standard notation: for \(\mu\in E^{n}\), we define \[Z[\mu] :=\{x\in E^{\infty}\mid x_{1}\ldots x_{n}=\mu\},\quad\text{ and}\] \[Z(\mu] :=\{x\in E^{-\infty}\mid x_{-n}\ldots x_{-1}=\mu\}.\] For \(n\geq 0\) and \(\mu\in E^{2n+1}\), we write \[Z(\mu):=\{x\in E^{\mathbb{Z}}\mid x_{-n}\ldots x_{n}=\mu\}.\] In this paper, all graphs will be finite. The spaces \(E^{\infty}\), \(E^{-\infty}\) and \(E^{\mathbb{Z}}\) are then totally disconnected compact Hausdorff spaces and the collections of cylinder sets are bases for the topologies that are closed under intersections. There are standard metrics realising these topologies. The metric on \(E^{-\infty}\) is given by \[d(x,y)=\begin{cases}\inf\{2^{-n}\mid x,y\in Z(\mu]\text{ for some }\mu\in E^{n}\}&\text{ if }s(x)=s(y)\\ 2&\text{ if }s(x)\neq s(y),\end{cases} \tag{2.1}\] and the other two are defined analogously: for \(E^{\infty}\), we replace \(Z(\mu]\) with \(Z[\mu]\) and the source map with the range map; for \(E^{\mathbb{Z}}\) we replace "\(Z(\mu]\) for some \(\mu\in E^{n}\)" with "\(Z(\mu)\) for some \(\mu\in E^{2n+1}\)," and the conditions "\(s(x)=s(y)\)" and "\(s(x)\neq s(y)\)" with "\(x_{0}=y_{0}\)" and "\(x_{0}\neq y_{0}\)." Given a finite directed graph with no sources, a _Cuntz-Krieger \(E\)-family_ in a \(C^{*}\)-algebra \(A\) is a pair \((p,s)\) of functions \(p:v\mapsto p_{v}\) from \(E^{0}\) to \(A\) and \(s:e\mapsto s_{e}\) from \(E^{1}\) to \(A\) such that the \(p_{v}\) are mutually orthogonal projections, each \(s_{e}^{*}s_{e}=p_{s(e)}\), and \(p_{v}=\sum_{e\in vE^{1}}s_{e}s_{e}^{*}\) for all \(v\in E^{0}\). The _graph \(C^{*}\)-algebra_, denoted \(C^{*}(E)\), is the universal \(C^{*}\)-algebra generated by a Cuntz-Krieger \(E\)-family, see [28]. ### Self similar actions of groupoids on graphs Recall that a _groupoid_ is a small category \(\mathcal{G}\) with inverses. The identity morphisms are called _units_ and the collection of all identity morphisms is called the _unit space_ and denoted \(\mathcal{G}^{(0)}\). The set of composable pairs of elements in \(\mathcal{G}\) is denoted \(\mathcal{G}^{(2)}\). Self-similar actions of groupoids on graphs were introduced in [17], inspired by Exel and Pardo's work in [7]. The precise relationship between the two constructions is detailed in the appendix of [17]. Given a directed graph \(E\) with no sources, and given \(v,w\in E^{0}\), a _partial isomorphism_ of \(E^{*}\) is a bijection \(g:vE^{*}\to wE^{*}\) that preserves length and preserves concatenation in the sense that \(g(\mu e)\in g(\mu)E^{1}\) for all \(\mu\in E^{*}\) and \(e\in E^{1}\). The expected formula \(g(\mu e)=g(\mu)g(e)\) does not even make sense since \(g\) is not typically defined on \(s(\mu)E^{*}\). 
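The cylinder-set metrics such as (2.1) above are easy to evaluate in practice on finite truncations of left-infinite paths. The following Python sketch does this; the edge and vertex labels are hypothetical, a path is stored only through its last few edges, and the truncation means that \(d(x,x)\) is reported as \(2^{-n}\) rather than \(0\). It is an illustrative toy, not part of the constructions in this paper.

```python
def metric(x, y):
    """Metric (2.1) on left-infinite paths, evaluated on finite truncations.

    A path is modelled as (source_vertex, edges), where `edges` lists the last
    edges ..., x_{-2}, x_{-1} in order.  Labels below are hypothetical.
    """
    (sx, ex), (sy, ey) = x, y
    if sx != sy:
        return 2.0
    # length n of the longest common suffix x_{-n} ... x_{-1} = y_{-n} ... y_{-1}
    n = 0
    while n < min(len(ex), len(ey)) and ex[-1 - n] == ey[-1 - n]:
        n += 1
    return 2.0 ** (-n)   # for genuinely infinite equal paths this would tend to 0

x = ("v", ["e2", "e4", "e1", "e3"])
y = ("v", ["e1", "e4", "e1", "e3"])
z = ("w", ["e1", "e4", "e1", "e3"])
print(metric(x, y), metric(x, z))  # 0.125 (common suffix of length 3) and 2.0
```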
For each \(v\in E^{*}\), the identity map \(\operatorname{id}_{v}:vE^{*}\to vE^{*}\) is a partial isomorphism. The set \(\operatorname{PIso}(E^{*})\) of all partial isomorphisms of \(E^{*}\) is a groupoid with units \(\operatorname{id}_{v}\) indexed by the vertices of \(E\) and multiplication given by composition of maps. We will identify the unit space of \(\operatorname{PIso}(E^{*})\) with \(E^{0}\) in the canonical way; this is consistent with our notation for graphs since the map \(\mu\mapsto v\mu\) coincides with \(\operatorname{id}_{v}:vE^{*}\to vE^{*}\). We will write \(c,d:\operatorname{PIso}(E^{*})\to E^{0}\) for the codomain and domain maps on the groupoid \(\operatorname{PIso}(E^{*})\), because the symbols \(s\) and \(r\) are already fairly overloaded. So if \(g:vE^{*}\to wE^{*}\) is a partial isomorphism, then \(c(g)=w\) and \(d(g)=v\). A _faithful action_ of a groupoid \(G\) with unit space \(E^{0}\) on the graph \(E\) is an injective groupoid homomorphism \(\phi:G\to\operatorname{PIso}(E^{*})\) that restricts to the identity map on \(E^{0}\). We will generally write \(g\cdot\mu\) in place of \(\phi(g)(\mu)\). If \(E=(E^{0},E^{1},r,s)\) is a directed graph, and \(G\) is a groupoid with unit space \(E^{0}\), that acts faithfully on \(E^{*}\), then we say that \((G,E)\) is a _self similar groupoid action_ if for every \(g\in G\) and every \(e\in d(g)E^{1}\) there exists \(h\in G\) such that \(c(h)=s(g\cdot e)\) and \[g\cdot(e\mu)=(g\cdot e)(h\cdot\mu)\quad\text{ for all }\mu\in s(g\cdot e)E^{*}. \tag{2.2}\] Since the groupoid action \(G\curvearrowright E^{*}\) is faithful, for each \(g\in G\) and \(e\in d(g)E^{1}\) there is a _unique_\(h\) satisfying (2.2). We denote this element by \(g|_{e}\), and call it the _restriction_ of \(g\) to \(e\). Restriction extends to finite paths by iteration: for \(g\in G\), we define \(g|_{d(g)}=g\), and for \(e\in E^{1}\) and \(\mu\in s(e)E^{*}\), we recursively define \[g|_{e\mu}=(g|_{e})|_{\mu}.\] So \(g|_{e_{1}\dots e_{n}}=(\dots(g|_{e_{1}})|_{e_{2}}\dots)|_{e_{n}}\), and then (2.2) extends to \[g\cdot(\mu\nu)=(g\cdot\mu)(g|_{\mu}\cdot\nu)\] whenever \(g\in G\), \(\mu\in d(g)E^{*}\) and \(\nu\in E^{*}s(\mu)\). We will use the following fundamental formulas without comment throughout the paper. **Lemma 2.1** ([17, Lemma 3.4 and Proposition 3.6]).: _Let \((G,E)\) be a self-similar groupoid action on a finite directed graph \(E\). For \((g,h)\in G^{(2)}\), \(\mu\in d(g)E^{*}\), \(\nu\in E^{*}s(\mu)\) and \(\eta\in c(g)E^{*}\), we have_ 1. \(r(g\cdot\mu)=c(g)\) _and_ \(s(g\cdot\mu)=g|_{\mu}\cdot s(\mu)\)_;_ 2. \(g|_{\mu\nu}=(g|_{\mu})|_{\nu}\)_;_ 3. \(\operatorname{id}_{r(\mu)}|_{\mu}=\operatorname{id}_{s(\mu)}\)_;_ 4. \((hg)|_{\mu}=(h|_{g\cdot\mu})(g|_{\mu})\)_; and_ 5. \(g^{-1}|_{\eta}=(g|_{g^{-1}\cdot\eta})^{-1}\)_._ We now give an example of a self-similar groupoid action by defining an \(E\)-automaton as described in [17, Definition 3.7, Proposition 3.9 and Theorem 3.9]. The key point of an \(E\)-automaton is that an action on the edges of the graph and a restriction map satisfying specified range and source conditions ensures that the action extends to a self-similar groupoid action on finite paths of the graph. _Example 2.2_.: The following example is carried through [17]. Consider the graph \(E\) in Figure 1, and define \[a\cdot 1=4,\ a|_{1}=v; b\cdot 3=1,\ b|_{3}=v;\] \[a\cdot 2=3,\ a|_{2}=b; b\cdot 4=2,\ b|_{4}=a. \tag{2.3}\] See [17, Example 3.10] for a detailed exposition. 
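The action in (2.3) extends to paths recursively via \(g\cdot(e\mu)=(g\cdot e)(g|_{e}\cdot\mu)\), and this recursion can be run mechanically. The short Python sketch below encodes the automaton (2.3), with the units \(v,w\) collapsed into a single symbol acting trivially and with no domain/codomain bookkeeping; it reproduces the computation displayed just below. It is an illustrative toy, not code from [17] or from this paper.

```python
# The automaton (2.3): action on edges and restrictions.  Units are collapsed
# to the single symbol "id", which acts trivially and restricts to "id".
act_edge = {"a": {"1": "4", "2": "3"}, "b": {"3": "1", "4": "2"}}
restrict = {"a": {"1": "id", "2": "b"}, "b": {"3": "id", "4": "a"}}

def act(g, path):
    """Compute g . path using the recursion g.(e mu) = (g.e)(g|_e . mu)."""
    if g == "id" or not path:
        return path
    e, rest = path[0], path[1:]
    return act_edge[g][e] + act(restrict[g][e], rest)

print(act("a", "242312"))  # prints 323112, matching the displayed computation
```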
Figure 1. Graph \(E\) for Example 2.2
To see explicitly how the groupoid action on \(E^{*}\) manifests we compute \[a\cdot 242312=3(b\cdot 42312)=32(a\cdot 2312)=323(b\cdot 312)=3231(v\cdot 12)=323112.\] ### Smale spaces and \(C^{*}\)-algebras A Smale space \((X,\varphi)\) consists of a compact metric space \(X\) and a homeomorphism \(\varphi:X\to X\) along with constants \(\varepsilon_{X}>0\) and \(0<\lambda<1\) and a locally defined continuous map \[[\cdot,\cdot]:\{(x,y)\in X\times X\mid d(x,y)\leq\varepsilon_{X}\}\to X,\quad(x,y)\mapsto[x,y]\] satisfying 1. \([x,x]=x\), 2. \([x,[y,z]]=[x,z]\) if both sides are defined, 3. \([[x,y],z]=[x,z]\) if both sides are defined, 4. \(\varphi[x,y]=[\varphi(x),\varphi(y)]\) if both sides are defined, 5. For \(x,y\in X\) such that \([x,y]=y\), we have \(d(\varphi(x),\varphi(y))\leq\lambda d(x,y)\), and 6. For \(x,y\in X\) such that \([x,y]=x\), we have \(d(\varphi^{-1}(x),\varphi^{-1}(y))\leq\lambda d(x,y)\). The bracket map defines a local product structure on a Smale space as follows: for \(x\in X\) and \(0<\varepsilon\leq\varepsilon_{X}\), define \[X^{s}(x,\varepsilon):=\{y\in X\mid d(x,y)<\varepsilon,[y,x]=x\}\quad\text{and}\] \[X^{u}(x,\varepsilon):=\{y\in X\mid d(x,y)<\varepsilon,[x,y]=x\}.\] We call \(X^{s}(x,\varepsilon)\) a _local stable set_ of \(x\) and \(X^{u}(x,\varepsilon)\) a _local unstable set_ of \(x\). Figure 2 gives a pictorial representation of the local stable sets and their interactions (provided \(d(x,y)<\varepsilon_{X}/2\)). Suppose \((X,\varphi)\) is a Smale space. Then for \(x,y\in X\) the global stable and unstable equivalence relations are given by \[x\sim_{s}y\text{ whenever }d(\varphi^{n}(x),\varphi^{n}(y))\to 0\text{ as }n\to\infty\text{ and}\] \[x\sim_{u}y\text{ whenever }d(\varphi^{-n}(x),\varphi^{-n}(y))\to 0\text{ as }n\to\infty.\] The stable equivalence class of \(x\in X\) is denoted \(X^{s}(x)\) and we have \(X^{s}(x,\varepsilon)\subset X^{s}(x)\). Similarly, the unstable equivalence class of \(x\in X\) is denoted by \(X^{u}(x)\) and \(X^{u}(x,\varepsilon)\subset X^{u}(x)\). We consider each of the stable equivalence classes as locally compact and Hausdorff topological spaces whose topology is generated by \(\{X^{s}(y,\varepsilon)\mid y\in X^{s}(x),0<\varepsilon<\varepsilon_{X}\}\). A similar topology is defined in the unstable case.
Figure 2. The local stable and unstable sets of \(x,y\in X\) and their bracket maps
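A standard example, not taken from this paper, may help to fix the axioms above. Take \(X=\{0,1\}^{\mathbb{Z}}\) with the metric \(d(x,y)=2^{-\min\{|i|\,:\,x_{i}\neq y_{i}\}}\) (and \(d(x,x)=0\)), let \(\varphi\) be the left shift \(\varphi(x)_{i}=x_{i+1}\), and for \(d(x,y)\leq\varepsilon_{X}:=1/2\) (so that \(x_{0}=y_{0}\)) set
\[[x,y]_{i}=\begin{cases}x_{i}&i\geq 0,\\ y_{i}&i\leq 0.\end{cases}\]
Axioms (1)-(4) are direct checks, and (5)-(6) hold with \(\lambda=1/2\): for instance, if \([x,y]=y\) then \(x\) and \(y\) agree in all coordinates \(i\geq 0\), and applying \(\varphi\) shifts the first disagreement one step further to the left, halving the distance. In this example the local stable set \(X^{s}(x,\varepsilon)\) consists of sequences agreeing with \(x\) in all coordinates \(i\geq 0\), and the local unstable set of sequences agreeing with \(x\) in all coordinates \(i\leq 0\).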
For the remainder of this section we will outline the construction of the stable algebra \(S(X,P)\) and the stable Ruelle algebra \(S(X,P)\rtimes\mathbb{Z}\). A more detailed version of these constructions is given in [27] and [12, Section 3]. Given an irreducible Smale space \((X,\varphi)\), we fix a non-empty finite \(\varphi\)-invariant set of periodic points \(P\) (in the irreducible case periodic points are dense). Then we define \(X^{u}(P)=\bigcup_{p\in P}X^{u}(p)\), which is given a locally compact and Hausdorff topology generated by the collection \(\{X^{u}(x,\varepsilon)\mid x\in X^{u}(P),\;\varepsilon\in(0,\varepsilon_{X}]\}\). The groupoid of the stable equivalence relation is \[G^{s}(P):=\{(v,w)\in X\times X\mid v\sim_{s}w\text{ and }v,w\in X^{u}(P)\}. \tag{2.4}\] The stable groupoid can be endowed with an etale topology; see [12, Lemma 3.1] for details. With this structure \(G^{s}(P)\) is an amenable locally compact Hausdorff etale groupoid. The stable \(C^{*}\)-algebra \(S(X,P)\) is defined to be the groupoid \(C^{*}\)-algebra associated with \(G^{s}(P)\). There is a canonical automorphism of the \(C^{*}\)-algebra \(S(X,P)\) induced by the automorphism of the underlying groupoid \(G^{s}(P)\) defined by \(\alpha:=\varphi\times\varphi\). This automorphism of \(G^{s}(P)\) gives rise to a semidirect product groupoid \(G^{s}(P)\rtimes_{\alpha}\mathbb{Z}\), which is again an amenable locally compact Hausdorff etale groupoid. The stable Ruelle algebra is the crossed product \(S(X,P)\rtimes_{\alpha}\mathbb{Z}\cong C^{*}(G^{s}(P)\rtimes_{\alpha}\mathbb{Z})\), where we also write \(\alpha\) for the automorphism of \(S(X,P)\) induced by \(\alpha\in\operatorname{Aut}(G^{s}(P))\). Putnam explains in [26, Section 2] how \(S(X,P)\rtimes_{\alpha}\mathbb{Z}\) is strongly Morita equivalent to the Ruelle algebra originally defined by Putnam in [25], building from the similar result of Putnam and Spielberg in the mixing case ([27]). The result of [27] that the Ruelle algebra is separable, simple, stable, nuclear, purely infinite, and satisfies the UCT extends readily to the irreducible case. A similar construction gives the unstable groupoid \(G^{u}(P)\), the unstable algebra \(U(X,P)\) and the associated unstable Ruelle algebra \(U(X,P)\rtimes\mathbb{Z}\cong C^{*}(G^{u}(P)\rtimes\mathbb{Z})\). Alternatively, the stable algebras for the Smale space \((X,\varphi^{-1})\) with the opposite bracket map are isomorphic to the relevant unstable algebras for \((X,\varphi)\). ## 3. The limit space of a self-similar groupoid action In this section we generalise Nekrashevych's construction of the limit space of a self-similar group [21, Chapter 3] to the situation of self-similar groupoid actions. **Definition 3.1**.: Let \(E\) be a finite directed graph. Let \((G,E)\) be a self-similar groupoid action. We say that left-infinite paths \(x,y\in E^{-\infty}\) are _asymptotically equivalent_, and write \(x\sim_{\mathrm{ae}}y\), if there is a sequence \((g_{n})_{n<0}\) in \(G\) such that \(\{g_{n}\mid n<0\}\) is a finite set, and such that \[g_{n}\cdot x_{n}\ldots x_{-1}=y_{n}\ldots y_{-1}\quad\text{ for all }n<0.\] If the sequence \((g_{n})\) implements an asymptotic equivalence \(x\sim_{\mathrm{ae}}y\) and the sequence \((h_{n})\) an asymptotic equivalence \(y\sim_{\mathrm{ae}}z\), then \(d(h_{n})=c(g_{n})\) for all \(n\), and \((h_{n}g_{n})\) implements an asymptotic equivalence \(x\sim_{\mathrm{ae}}z\).
Moreover, the sequence \(g_{n}^{-1}\) implements an asymptotic equivalence \(y\sim_{\mathrm{ae}}x\), and the sequence \((r(x_{n}))_{n<0}\) implements an asymptotic equivalence \(x\sim_{\mathrm{ae}}x\). So \(\sim_{\mathrm{ae}}\) is an equivalence relation. **Definition 3.2**.: Let \(E\) be a finite directed graph. Let \((G,E)\) be a self-similar groupoid action. The _limit space_ of \((G,E)\) is defined to be the quotient space \(\mathcal{J}_{G,E}:=E^{-\infty}/\!\!\sim_{\mathrm{ae}}\). The limit space is typically not a Hausdorff space, but, just as in the setting of [21], it is guaranteed to be Hausdorff if the self-similar action is contracting in the following sense, introduced in [21, 17]. **Definition 3.3**.: We say that a self-similar groupoid action \((G,E)\) on a finite directed graph \(E\) is _contracting_ if there is a finite subset \(F\subseteq G\) such that for every \(g\in G\) there exists \(n\geq 0\) such that \(\{g|_{\mu}:\mu\in d(g)E^{n}\}\subseteq F\). Any such finite set \(F\) is called a _contracting core_ for \((G,E)\). The _nucleus_ of \(G\) is the set \[\mathcal{N}_{G,E}:=\bigcap\{F\subseteq G\mid F\text{ is a contracting core for }(G,E)\}.\] We will frequently just write \(\mathcal{N}\) rather than \(\mathcal{N}_{G,E}\) when the self-similar groupoid action in question is clear from context. Just as in the setting of self similar groups, the nucleus is the minimal contracting core for \((G,E)\), and is symmetric, closed under restriction and contains \(G^{(0)}\): **Lemma 3.4**.: _Let \(E\) be a finite directed graph. Let \((G,E)\) be a contracting self-similar groupoid action. Then \(\mathcal{N}\) is a contracting core for \((G,E)\) and is contained in any other contracting core for \((G,E)\). We have_ \[\mathcal{N}=\bigcup_{g\in G}\bigcap_{n=1}^{\infty}\{g|_{\mu}\mid\mu\in d(g)E^ {*},|\mu|\geq n\}. \tag{3.1}\] _We have \(\mathcal{N}=\mathcal{N}^{-1}\), \(\mathcal{N}\) is closed under restriction, and if \(E\) has no sinks, then \(G^{(0)}\subseteq\mathcal{N}\)._ Proof.: For the first statement, we first show that the collection of contracting cores for \((G,E)\) is closed under intersections. If \(F,K\) are contracting cores for \(G\) and \(g\in G\), then there exist \(M,N\) such that \(g|_{\mu}\in F\) whenever \(|\mu|\geq M\) and \(g|_{\nu}\in K\) whenever \(|\nu|\geq N\). In particular, if \(|\mu|\geq\max\{M,N\}\) then \(g|_{\mu}\in F\cap K\). So \(F\cap K\) is a contracting core. Now since contracting cores are, by definition, finite, there is a finite collection \(\mathcal{F}\) of contracting cores such that \(\mathcal{N}=\bigcap\mathcal{F}\). So the preceding paragraph shows that \(\mathcal{N}\) is a contracting core. It is then contained in any other contracting core by definition. Let \(\mathcal{M}:=\bigcup_{g\in G}\bigcap_{n=1}^{\infty}\{g|_{\mu}\mid\mu\in d(g) E^{*},|\mu|\geq n\}\). Fix \(h\in\mathcal{M}\). Then there exists \(g\in G\) and a sequence \((\mu_{i})_{i=1}^{\infty}\) of finite paths such that \(|\mu_{i}|\to\infty\) and \(g|_{\mu_{i}}=h\) for all \(i\). By definition of \(\mathcal{N}\) there exists \(N\) such that \(g|_{\mu}\in\mathcal{N}\) whenever \(|\mu|\geq N\). Since \(n_{i}\to\infty\) we have \(n_{i}>N\) for some \(i\), and so \(h=g|_{\mu_{i}}\in\mathcal{N}\). So \(\mathcal{M}\subseteq\mathcal{N}\). For the reverse containment, observe that we have just seen that \(\mathcal{M}\) is finite. Fix \(g\in G\). 
Since \(G\) is contracting, the sets \(R_{n}:=\{g|_{\mu}\mid|\mu|\geq n\}\) indexed by \(n\in\mathbb{N}\) are all finite, and they are decreasing with respect to set containment. So there exists \(N\) such that \(R_{n}=R_{N}\) for all \(n\geq N\). It follows that \(g|_{\mu}\in R_{N}\subseteq\mathcal{M}\) whenever \(|\mu|\geq N\). So \(\mathcal{M}\) is a contracting core for \((G,E)\) and therefore \(\mathcal{N}\subseteq\mathcal{M}\) by the first assertion of the lemma. To see that \(\mathcal{N}=\mathcal{N}^{-1}\), fix \(h\in\mathcal{N}\). Then (3.1) shows that there exists \(g\in G\) and a sequence \((\mu_{i})_{i=1}^{\infty}\) in \(d(g)E^{*}\) such that \(|\mu_{i}|\to\infty\) and \(g|_{\mu_{i}}=h\) for all \(i\). We then have \(h^{-1}=(g|_{\mu_{i}})^{-1}=g^{-1}|_{g\cdot\mu_{i}}\) for all \(i\), and so (3.1) gives that \(h^{-1}\in\mathcal{N}\). That \(\mathcal{N}\) is closed under restriction follows immediately from (3.1). Finally, if \(E\) has no sinks, then for each \(v\in E^{0}\) and \(n\geq 0\), \(E^{n}v\neq\emptyset\). Let \(w\) be a vertex such that \(wE^{n}v\neq\emptyset\) for infinitely many \(n\in\mathbb{N}\). Then, \(v\in\{w|_{\mu}\ :\ \mu\in wE^{n},|\mu|\geq n\}\) for all \(n\in\mathbb{N}\), so that \(v\in\mathcal{N}\). **Notation 3.5**.: Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting self-similar groupoid action with nucleus \(\mathcal{N}\). Since \(\mathcal{N}\) is finite, so are the sets \[\mathcal{N}^{k}:=\big{\{}\prod_{i=1}^{k}g_{i}\mid g_{1},\ldots,g_{k}\in \mathcal{N}\text{ and }d(g_{i})=c(g_{i+1})\text{ for all }i\big{\}}.\] We write \(R_{k}\) for the integer \[R_{k}:=\min\{j\in\mathbb{N}\mid h|_{\mu}\in\mathcal{N}\text{ for all }h\in \mathcal{N}^{k}\text{ and }\mu\in E^{j}\}.\] So \(R_{1}=0\), and \(R_{i}\leq R_{i+1}\) for all \(i\). We now show that if \(x,y\in E^{-\infty}\) are asymptotically equivalent, then the sequence \(g_{i}\) implementing the asymptotic equivalence can be taken to belong to \(\mathcal{N}\) and to be consistent with respect to restriction. **Lemma 3.6**.: _Let \(E\) be a finite directed graph. Let \((G,E)\) be a contracting self-similar groupoid action with nucleus \(\mathcal{N}\). Then \(x,y\in E^{-\infty}\) are asymptotically equivalent if and only if there exists a sequence \((h_{n})_{n<0}\) of elements of \(\mathcal{N}\) such that \(h_{n}\cdot x_{n}=y_{n}\) and \(h_{n}|_{x_{n}}=h_{n+1}\) for all \(n\)._ Proof.: If there is such a sequence \((h_{n})\) of elements of \(\mathcal{N}\), then for each \(n\) we have \[h_{n}\cdot(x_{n}\ldots x_{-1})=y_{n}(h_{n}|_{x_{n}}\cdot x_{n+1}\ldots x_{-1})= y_{n}(h_{n+1}\cdot x_{n+1}\ldots x_{-1})=\cdots=y_{n}\ldots y_{-1}.\] So \(x\sim_{\mathrm{ae}}y\). Conversely suppose that \(x\sim_{\mathrm{ae}}y\), and fix a sequence \((g_{n})_{n<0}\) in \(G\) with just finitely many distinct terms and satisfying \(g_{n}\cdot x_{n}\ldots x_{-1}=y_{n}\ldots y_{-1}\) for all \(n\). Let \(S=\{g_{n}\mid n<0\}\) be the finite set of elements appearing in the sequence \((g_{n})\). Since \((G,E)\) is contracting, for each \(g\in S\) there exists \(k_{g}\) such that \(g|_{\mu}\in\mathcal{N}\) whenever \(|\mu|\geq k_{g}\). Let \(k:=\max_{g\in S}k_{g}\). We construct a sequence \((h_{n})_{n<\infty}\) in \(\mathcal{N}\) iteratively as follows. Consider the sequence \((g_{n}|_{x_{n}\dots x_{-2}})_{n<-k-1}\). By the choice of \(k\), every term of this sequence belongs to \(\mathcal{N}\). 
Since \(\mathcal{N}\) is finite, there exists \(h_{-1}\in\mathcal{N}\) and a strictly decreasing infinite sequence \((n_{i}^{1})_{i=1}^{\infty}\) of integers \(n_{i}^{1}<-2-k\) such that \(g_{n_{i}^{1}}|_{x_{n_{i}^{1}}\dots x_{-2}}=h_{-1}\) for all \(i\). Since each \(g_{n_{i}^{1}}\cdot(x_{n_{i}^{1}}\dots x_{-1})=y_{n_{i}^{1}}\dots y_{-1}\), we have \(h_{-1}\cdot x_{-1}=g_{n_{i}^{1}}|_{x_{n_{i}^{1}}\dots x_{-2}}\cdot x_{-1}=(g_{ n_{i}^{1}}\cdot x_{n_{i}^{1}}\dots x_{-1})_{-1}=y_{-1}\). Now suppose that we have chosen \(h_{m},\dots,h_{-1}\in\mathcal{N}\) such that each \(h_{j}\cdot x_{j}=y_{j}\) and \(h_{j}|_{x_{j}}=h_{j+1}\), and a strictly decreasing sequence \((n_{i}^{m})_{i=1}^{\infty}\) of integers \(n_{i}^{m}<m-k-1\) such that \(g_{n_{i}^{m}}|_{x_{n_{i}^{m}}\dots x_{m-1}}=h_{m}\) for all \(i\). Then the sequence \((g_{n_{i}^{m}}|_{x_{n_{i}^{m}}\dots x_{m-2}})_{i}\) is contained in \(\mathcal{N}\), so there exists \(h_{m-1}\in\mathcal{N}\) and a subsequence \(n_{i}^{m-1}\) of the sequence \(n_{i}^{m}\) with the property that \(n_{i}^{m-1}|_{x_{n_{i}^{m}}\dots x_{m-2}}=h_{m-1}\) for all \(i\). We have \[h_{m-1}|_{m-1}=(g_{n_{1}^{m-1}}|_{x_{n_{i}^{m}}\dots x_{m-2}})|_{x_{m-1}}=g_{n_ {1}^{m-1}}|_{x_{n_{i}^{m}}\dots x_{m-1}}=h_{m},\] and \(h_{m-1}\cdot x_{m-1}=y_{m-1}\) by a calculation just like the one we used to see that \(h_{-1}\cdot x_{-1}=y_{-1}\). The above procedure produces a sequence \((h_{n})_{n<1}\) in \(\mathcal{N}\) with the desired properties. **Corollary 3.7**.: _Let \(E\) be a finite directed graph. Let \((G,E)\) be a contracting self-similar groupoid action with limit space \(\mathcal{J}=\mathcal{J}_{G,E}\). Let \(q:E^{-\infty}\to\mathcal{J}\) be the quotient map. For each \(x\in E^{-\infty}\), the equivalence class \([x]:=q^{-1}(q(x))\) satisfies \(|[x]|\leq|\mathcal{N}|\)._ Proof.: Fix \(x\in E^{-\infty}\). Let \(y^{1},\dots,y^{l}\) be distinct elements of \([x]\). We must show that \(l\leq|\mathcal{N}|\). Fix \(m<0\) such that the finite paths \(\mu^{i}:=y_{m}^{i}\dots y_{-1}^{i}\) for \(i\leq l\) are all distinct. Lemma 3.6 implies that there are elements \(n^{1},\dots,n^{l}\in\mathcal{N}\) such that \(\mu^{i}=n^{i}\cdot x_{m}\dots x_{-1}\) for all \(i\). Since the \(\mu^{i}\) are distinct, the \(n^{i}\) are distinct, forcing \(l\leq|\mathcal{N}|\). To construct a Smale space from the limit space \(\mathcal{J}\) we will show that the shift map on \(E^{-\infty}\) descends to a self-mapping of \(\mathcal{J}\), and that under a regularity hypothesis similar to that used by Nekrashevych [21], this self-mapping is locally expanding and hence a local homeomorphism. To do this, we need to describe a basis for the topology on \(\mathcal{J}\) and then a metric that induces that topology. We start with a preliminary lemma about quotient topologies. **Lemma 3.8**.: _Let \(X\) be a compact and metrisable Hausdorff space and let \(\sim\) be an equivalence relation on \(X\). Let \(Y:=X/\!\!\sim\) be the quotient space, and \(q:X\to Y\) the quotient map. For each \(A\subseteq X\), let \(U_{A}:=\{y\in Y\mid q^{-1}(y)\subseteq A\}\). If \(A\) is open in \(X\), then \(U_{A}\) is open in \(Y\). If \(|q^{-1}(y)|<\infty\) for each \(y\in Y\), then for any basis \(\mathcal{B}\) for the topology on \(X\), the set_ \[\mathcal{U}_{\mathcal{B}}:=\{U_{B}\mid B\text{ is a finite union of elements of }\mathcal{B}\}\] _is a basis for the quotient topology on \(Y\). 
If \(q:X\to Y\) is a closed map, then \(Y\) is metrisable._ Proof.: By definition of the quotient topology, \(U_{A}\) is open in \(Y\) if and only if \(q^{-1}(U_{A})\) is open in \(X\). By definition of \(U_{A}\), we have \([x]\in U_{A}\) if and only if \([x]\subseteq A\), and so \[q^{-1}(U_{A})=\{x\in X\mid[x]\subseteq A\}=X\setminus\{x\in X\mid[x]\setminus A \neq\varnothing\}.\] So it suffices to show that if a net \((x_{i})_{i\in I}\) in \(X\) converges to some \(x\in X\), and if each \([x_{i}]\setminus A\) is nonempty, then \([x]\setminus A\) is nonempty. To see this, note that for each \(i\), there exists \(y_{i}\in[x_{i}]\setminus A\). Since \(X\) is compact we can pass to a subnet \((y_{i_{j}})\) that converges in \(X\). Since \(A\) is open, we have that \(y:=\lim_{j}y_{i_{j}}\not\in A\). Since the quotient map is continuous we have \(q(y)=\lim_{j}q(y_{i_{j}})=\lim_{j}q(x_{i_{j}})=\lim_{i}q(x_{i})=[x]\), and so \(y\in[x]\setminus A\). Now suppose that \(\mathcal{B}\) is a basis for the topology on \(X\). Let \(V\) be an open subset of \(Y\) and fix \(y\in V\). Since \(q^{-1}(V)\) is open in \(X\), for each point \(x\in q^{-1}(y)\), we can find \(B_{x}\in\mathcal{B}\) such that \(x\in B_{x}\subseteq q^{-1}(V)\). Let \(B:=\bigcup_{x\in q^{-1}(y)}B_{x}\). Since \(q^{-1}(y)\) is finite, this is a finite union of elements of \(\mathcal{B}\), so it suffices to show that \(y\in U_{B}\subseteq V\). By definition of \(B\) we have \(q^{-1}(y)\subseteq B\) and so \(y\in U_{B}\). To see that \(U_{B}\subseteq V\), take \(y^{\prime}\in U_{B}\). Then \(q^{-1}(y)\subseteq B\) by definition of \(U_{B}\). Since each \(B_{x}\subseteq q^{-1}(V)\), we have \(B\subseteq q^{-1}(V)\) and hence \(q(B)\subseteq V\). So \(y\in q(B)\subseteq V\), as required. The last statement follows from [5, Theorem 4.2.13]. Our next lemma describes how asymptotic equivalence interacts with the action of the nucleus on cylinder sets. **Lemma 3.9**.: _Let \(E\) be a finite directed graph. Let \((G,E)\) be a contracting self-similar groupoid action. If there exists \(g\in\mathcal{N}\) and \(\mu,\nu\) in \(E^{*}\) such that \(g\cdot\mu=\nu\), then \(q(Z(\mu)]\cap q(Z(\nu])\neq\varnothing\)._ Proof.: Fix \(\mu,\nu\), and \(g\). Since \(\mathcal{N}\) is closed under restriction, there exist \(e\in E^{1}\) and \(h\in\mathcal{N}\) such that \(h|_{e}=g\). Let \(f:=h\cdot e\). We claim that \(s(e)=r(\mu)\) and \(s(f)=r(\nu)\) so that \(h\cdot e\mu=f\nu\). Indeed, by Lemma 2.1(1) we have \[s(f)=s(h\cdot e)=h|_{e}\cdot s(e)=g\cdot s(e).\] Since \(d(g)=r(\mu)\) we have that \(s(e)=r(\mu)\) and then \(s(f)=g\cdot r(\mu)=r(\nu)\), proving the claim. By applying the above procedure recursively, we can construct paths \(x\in Z(\mu]\) and \(y\in Z(\nu]\) such that \(x\sim_{\mathrm{ae}}y\). We can now describe a basis for the topology on the limit space of a contracting self-similar groupoid action. **Corollary 3.10**.: _Let \(E\) be a finite directed graph with no sources or sinks. Let \((G,E)\) be contracting self-similar groupoid action. The sets_ \[U_{\mu}:=\Big{\{}y\in\mathcal{J}\mid q^{-1}(y)\subseteq\bigcup_{g\in \mathcal{N}\cap d^{-1}(r(\mu))}Z(g\cdot\mu]\Big{\}},\] _indexed by \(\mu\in E^{*}\), are a basis for the topology on \(\mathcal{J}\)._ Proof.: By Lemma 3.8 we know that \(U_{\mu}\) is open. Now fix an open set \(V\subseteq\mathcal{J}\) and \(y\in V\). 
If \(x\in Z(\mu]\) for some \(\mu\in E^{n}\), and \(x^{\prime}\sim_{a.e}x\), then by Lemma 3.6 there exists \(g\in\mathcal{N}\cap d^{-1}(r(\mu))\) such that \(x^{\prime}\in Z(g\cdot\mu]\). So, if \(q(x)=y\), then \(y\in U_{x_{-n}\ldots x_{-1}}\) for all \(n\in\mathbb{N}\). Let \(X_{n}=\bigcup_{g\in\mathcal{N}\cap d^{-1}(r(x_{-n}))}Z(g\cdot x_{-n}...x_{-1}]\). Since \(\mathcal{N}\) is closed under restriction, \(X_{n+1}\subseteq X_{n}\) for all \(n\in\mathbb{N}\). By Lemma 3.6, \(\bigcap_{n\in\mathbb{N}}X_{n}=q^{-1}(y)\). Hence, the compact sets \(Y_{n}:=q(X_{n})\) satisfy \(Y_{n+1}\subseteq Y_{n}\) and \(\bigcap_{n\in\mathbb{N}}Y_{n}=\{y\}\). Therefore, there exists \(k\in\mathbb{N}\) such that \(Y_{k}\subseteq V\). Since \(y\in U_{x_{-k}\ldots x_{-1}}\subseteq Y_{k}\), the result follows. Our eventual goal is to show that the projective limit of copies of \(\mathcal{J}\) with respect to the endomorphism induced by the shift map on \(E^{-\infty}\), is a Smale space. This requires a metric that induces the topology in \(\mathcal{J}\). We will build this from the following semi-metric. **Definition 3.11** ([3, Definition 3.1.2]).: Suppose \((X,d)\) is a metric space and \(R\) is an equivalence relation on \(X\). The _quotient semi-metric_\(d_{R}\) is defined by \[d_{R}(x,y)=\inf\Big{\{}\sum_{i=0}^{k}d(p_{i},q_{i})\mid p_{i},q_{i}\in X,x=p_{0},y=q_{k}\text{ and }q_{i}\sim p_{i+1}\text{ for }i<k\Big{\}}. \tag{3.2}\] The following fairly straightforward diagonal argument shows that if \(X\) is compact and \(R\) is a closed equivalence relation, then \(d_{R}\) is a metric; this is surely known, but we could not find the result in the literature so we give a proof. **Lemma 3.12**.: _Suppose \((X,d)\) is a compact metric space and \(R\) is a closed equivalence relation on \(X\). Then there is a metric \(\tilde{d}_{R}\) on \(X/R\) such that \(\tilde{d}_{R}([x],[y])=d_{R}(x,y)\) for all \(x,y\in X\)._ Proof.: To see that the formula for \(\tilde{d}_{R}\) is well defined, suppose that \((x,x^{\prime})\) and \((y,y^{\prime})\) belong to \(R\). We must show that \(d_{R}(x,y)=d_{R}(x^{\prime},y^{\prime})\). By symmetry it suffices to show that \(d_{R}(x,y)\leq d_{R}(x^{\prime},y^{\prime})\). For this, observe that \(p_{0}=x\), \(q_{0}=x^{\prime}\), \(p_{1}=y^{\prime}\) and \(q_{1}=y\) determines a term in the infimum in (3.2) that defines \(d_{R}(x,y)\) with value \(d(x^{\prime},y^{\prime})\). Since \(d_{R}\) is a semi-metric [3, Definition 3.1.2], we now only need to show that if \(d_{R}(x,y)=0\) then \((x,y)\in R\). Suppose that \(d_{R}(x,y)=0\). Choose sequences \((p_{i,n})_{i=1}^{k_{n}}\) and \((q_{i,n})_{i=1}^{k_{n}}\) such that \(p_{0,n}=x\), \(q_{0,n}=y\), each \(q_{i,n}\sim p_{i+1,n}\) and \(\sum_{i=0}^{k_{n}}d(p_{i,n},q_{i,n})<\frac{1}{2^{n}}\). For \(n\in\mathbb{N}\) and \(i>k_{n}\), define \(p_{i,n}=q_{i,n}=y\), so that \(\sum_{i=0}^{\infty}d(p_{i,n},q_{i,n})<\frac{1}{2^{n}}\) for each \(n\). The closed subset \(R\) of the compact set \(X\times X\) is itself compact, so the sequence \(\big{(}(q_{0,n},p_{1,n})\big{)}_{n=1}^{\infty}\) has a subsequence, say \((q_{0,n_{1,j}},p_{1,n_{1,j}})\), that converges to some \((q_{0},p_{1})\in R\). Recursively, given \(i\geq 1\) and a sequence \((n_{i,j})_{j=1}^{\infty}\) such that \((q_{i-1,n_{i,j}},p_{i,n_{i,j}})\) converges in \(R\), choose a subsequence \((n_{i+1,j})_{j=1}^{\infty}\) such that \(n_{i+1,1}>n_{i,1}\) and \((q_{i,n_{i+1,j}},p_{i+1,n_{i+1,j}})\) converges to some \((q_{i},p_{i+1})\in R\). 
The resulting sequence \((q_{i},p_{i+1})_{i=0}^{\infty}\) converges to \((y,y)\). We have \(p_{0}=x\), and for each \(i\), since \(d(p_{i+1,n_{i+1,j}},q_{i+1,n_{i+1,j}})<2^{-n_{i+1,j}}\), we have \(p_{i}=q_{i}\) for each \(i\). That is \(x=p_{0}\sim p_{1}\sim p_{2}\cdots\to y\). In particular, each \((x,p_{i})\in R\) and we have \((x,p_{i})\to(x,y)\). So using once more that \(R\) is closed, we see that \((x,y)\in R\). **Corollary 3.13**.: _Let \((G,E)\) be a contracting self-similar groupoid action. Then its limit space \((\mathcal{J},d_{\mathcal{J}})\) is a compact metric space._ Proof.: Recall that \((E^{-\infty},d)\) is compact in the product metric \(d\) of (2.1). By Lemma 3.8 and Lemma 3.12, it suffices to show that \(\sim_{\mathrm{ae}}\) is a closed equivalence relation. For this, suppose that \(C\) is a closed subset of \(E^{-\infty}\), and fix a net \((x_{\alpha})\in C\) and a point \(x\) in \(E^{-\infty}\) such that \(q(x_{\alpha})\) converges to \(q(x)\). We must show that \(q(x)\in q(C)\). Since \(E^{-\infty}\) is compact, so is \(C\), so we may assume that \(x_{\alpha}\) converges to some \(z\in C\). Hence \(q(x_{\alpha})\to q(z)\). So we must show that \(x\sim_{\mathrm{ae}}z\). Fix an integer \(n<0\). By Corollary 3.10, we have that \(q(x_{\alpha})\in U_{x_{n}\dots x_{-1}}\) for large \(\alpha\), and since \(x_{\alpha}\) converges to \(z\) we have that \(x_{\alpha}\in Z(z_{n}\dots z_{-1}]\) for large \(\alpha\). It follows that there exist \(g_{n}\in\mathcal{N}\) such that \(Z(g_{n}\cdot x_{n}\dots x_{-1}]\cap Z(z_{n}\dots z_{-1}]\neq\varnothing\). Since \(Z(\mu]\cap Z(\nu]=\varnothing\) for distinct \(\mu,\nu\in E^{n}\), we deduce that \(g_{n}\cdot x_{n}\dots x_{-1}=z_{n}\dots z_{-1}\). Since \(n<0\) was arbitrary, it follows that \(x\sim_{\mathrm{ae}}z\) as claimed. ### Schreier graphs and recurrent self-similar actions Schreier graphs define useful combinatorial approximations to the limit space of self-similar actions. We begin with the definition that suits our situation. Note that Schreier graphs of groups have a rather general definition that generalises Cayley graphs. **Definition 3.14**.: Let \((G,E)\) be a finitely generated self-similar groupoid, and let \(A\) be a generating set for \(G\) that is closed under inverses and restriction. The level-\(n\) Schreier graph \(\Gamma_{n}:=\Gamma_{n}(G,A)\) is the (undirected) graph with vertex set \(\Gamma_{n}^{0}:=E^{n}\) and an edge labelled by \(a\in A\) between \(\mu\) and \(\nu\) if and only if \(d(a)=r(\mu)\) and \(a\cdot\mu=\nu\). Note that we could label an edge in \(\Gamma_{n}\) by either \(a\in A\) or \(a^{-1}\) since \(A\) is closed under inverses and if \(a\cdot\mu=\nu\), then \(a^{-1}\cdot\nu=\mu\). We will make use of the geodesic distance, \(d_{\mathrm{geo}}\), on the vertex set of an undirected graph: \(d_{\mathrm{geo}}(v,w)\) is the minimum length of a path between \(v\) and \(w\). The following generalises [21, Proposition 3.6.6]; the proof is virtually identical. **Proposition 3.15**.: _Let \((G,E)\) be a finitely generated, contracting self-similar groupoid action on a finite directed graph \(E\). Let \(A\) be a finite generating set for \(G\) that is closed under inverses and restriction. Let \(\Gamma\) be the level-\(n\) Schreier graph \(\Gamma_{n}(G,A)\). 
There is a map \(\psi_{n}:\Gamma_{n}\to\Gamma_{n-1}\) defined by_ \[\psi_{n}(e\mu)=\mu\quad\text{for $e$ in $E^{1}$ and $\mu$ in $s(e)E^{n-1}$}\] \[\psi_{n}(a:e\mu\to f\nu)=a|_{e}:\mu\to\nu.\] _For \(x,y\in E^{-\infty}\), the sequence \(\big{(}d_{\mathrm{geo}}(x_{-n}\dots x_{-1},y_{-n}\dots y_{-1})\big{)}_{n=1}^{\infty}\) is bounded if and only if \(x\) and \(y\) are asymptotically equivalent._ We now generalise Nekrashevych's notion of a recurrent self-similar group action to groupoid actions. While we do not require recurrence for the main results of this paper, it does illuminate interesting topological properties of the dynamics and the limit space. We note that Nekrashevych synonymously uses _recurrence_, _self-replicating_, and _fractal_ for the notion below. **Definition 3.16**.: A self-similar groupoid action \((G,E)\) is said to be _recurrent_ if, for any \(e,f\in E^{1}\) and \(h\in G\) with \(d(h)=s(e)\) and \(c(h)=s(f)\), there is \(g\in G\) with \(d(g)=r(e)\) such that \(g\cdot e=f\) and \(g|_{e}=h\). Recurrence of a self-similar groupoid action is obviously a rather strong condition. For example, if \((G,E)\) is recurrent, then we immediately see that the in-degree of all vertices of the graph must be equal. Another immediate consequence of recurrence is the following. **Proposition 3.17**.: _Suppose \((G,E)\) is a recurrent self-similar groupoid action on a finite directed graph \(E\). Then the action of \(G\) on \(E^{*}\) is level-transitive._ Proof.: For paths of length one, transitivity follows immediately from recurrence. Now suppose that for any paths \(\mu\) and \(\nu\) of length \(n\) and any \(h\in G\) with \(d(h)=s(\mu)\) and \(c(h)=s(\nu)\) there exists \(g\in G\) with \(d(g)=r(\mu)\) such that \(g\cdot\mu=\nu\) and \(g|_{\mu}=h\); that is, \(G\) acts transitively on paths of length \(n\) with specified restriction as in the definition of recurrence. We now consider paths \(\lambda\) and \(\rho\) of length \(n+1\) and aim to show that there exists \(g\in G\) such that \(g\cdot\lambda=\rho\). Recurrence implies that there exists \(g_{n+1}\in G\) with \(d(g_{n+1})=r(\lambda_{n+1})\) such that \(g_{n+1}\cdot\lambda_{n+1}=\rho_{n+1}\). Now the inductive hypothesis implies that there exists \(g\in G\) with \(d(g)=r(\lambda)\) such that \(g\cdot\lambda_{1}\cdots\lambda_{n}=\rho_{1}\cdots\rho_{n}\) and \(g|_{\lambda_{1}\cdots\lambda_{n}}=g_{n+1}\). Thus we have \[g\cdot\lambda=g\cdot\lambda_{1}\cdots\lambda_{n}\lambda_{n+1}=(\rho_{1}\cdots\rho_{n})(g_{n+1}\cdot\lambda_{n+1})=\rho_{1}\cdots\rho_{n}\rho_{n+1}=\rho,\] the desired result. Following Nekrashevych, we now turn to connectedness of the limit space, but first we will generalise [21, Proposition 2.11.3]. **Proposition 3.18**.: _Suppose that \((G,E)\) is a contracting, recurrent self-similar groupoid action on a finite directed graph \(E\) and that \(G\) is finitely generated. Then the nucleus \(\mathcal{N}\) of \((G,E)\) is a generating set._ Proof.: Let \(H\) be a finite generating set for \(G\). Then there exists \(m\in\mathbb{N}\) such that for every \(h\in H\), the set \(\{h|_{\mu}\mid|\mu|\geq m\}\) is contained in \(\mathcal{N}\). Given \(g\in G\), we have \(g=h_{1}h_{2}\cdots h_{n}\) with \(h_{i}\in H\) for \(1\leq i\leq n\). Since \((G,E)\) is recurrent, for each \(h_{i}\), there exists \(a_{i}\in G\) and \(\mu_{i}\in d(a_{i})E^{m}\) such that \(a_{i}|_{\mu_{i}}=h_{i}\). Since \(a_{i}\) is a product of elements of \(H\) it follows that \(a_{i}|_{\mu_{i}}=h_{i}\) is a product of elements of \(\mathcal{N}\). Thus \(G\) is generated by \(\mathcal{N}\).
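The level-\(n\) Schreier graphs of Definition 3.14 are finite combinatorial objects and can be generated mechanically from an automaton presentation of the action. The Python sketch below does this for a toy self-similar action, the binary odometer on a one-vertex graph with two loop edges; the input is hypothetical (it is not the graph or action of Example 2.2), and composability of paths and of groupoid elements is not checked.

```python
from itertools import product

# Toy self-similar action: the binary odometer on loop edges "x", "y".
# "A" stands for a^{-1}; the generating set is closed under inverses and restriction.
act_edge = {
    "a":  {"x": "y", "y": "x"},
    "A":  {"x": "y", "y": "x"},
    "id": {"x": "x", "y": "y"},
}
restrict = {
    "a":  {"x": "id", "y": "a"},
    "A":  {"x": "A",  "y": "id"},
    "id": {"x": "id", "y": "id"},
}
generators = ["a", "A", "id"]

def act(g, path):
    if not path:
        return path
    e, rest = path[0], path[1:]
    return act_edge[g][e] + act(restrict[g][e], rest)

def schreier_graph(n):
    """Vertices are the paths of length n; edges join mu and g.mu for g in the generators."""
    vertices = ["".join(w) for w in product("xy", repeat=n)]
    edges = {(mu, act(g, mu)) for mu in vertices for g in generators}
    return vertices, edges

vertices, edges = schreier_graph(4)
# each vertex carries a loop (from "id") and has exactly two other neighbours,
# so this level-4 Schreier graph is a 16-cycle with loops
neighbours = {v: {w for (u, w) in edges if u == v and w != v} for v in vertices}
print(all(len(nbrs) == 2 for nbrs in neighbours.values()))   # True
```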
The following proof follows Nekrashevych's [21, Proposition 3.3.10 and Theorem 3.5.1], which he in turn partially attributes to K. Pilgrim and P. Haissinsky (private communication). **Theorem 3.19**.: _Suppose \((G,E)\) is a contracting self-similar groupoid action on a finite directed graph \(E\) and that \(G\) is finitely generated. Then the limit space \(\mathcal{J}_{(G,E)}\) is connected if and only if \((G,E)\) is level-transitive._ Proof.: First suppose that \((G,E)\) is level-transitive. We suppose that \(\mathcal{J}=\mathcal{J}_{(G,E)}\) is not connected, and derive a contradiction. Let \(H\) be a finite generating set for \(G\). Fix closed, non-empty subsets \(A,B\subset\mathcal{J}\) such that \(A\cup B=\mathcal{J}\) and \(A\cap B=\varnothing\). Let \(X_{A}=q^{-1}(A)\) and \(X_{B}=q^{-1}(B)\). Then \(X_{A}\) and \(X_{B}\) are closed, non-empty subsets of \(E^{-\infty}\) such that \(X_{A}\cup X_{B}=E^{-\infty}\) and \(X_{A}\cap X_{B}=\varnothing\). Define \[X_{A}^{n}:=\{a_{-n}\cdots a_{-1}\mid a=\ldots a_{-n-1}a_{-n}\ldots a_{-1}\in X_{A}\}.\] Since \(X_{A}\) is open (its complement \(X_{B}\) is closed), we write it as a union \(X_{A}=\bigcup_{\mu\in I}Z(\mu]\) of cylinder sets. Since \(X_{A}\) is also compact, there is a finite \(F\subseteq I\) such that \(X_{A}=\bigcup_{\mu\in F}Z(\mu]\). Put \(N=\max\{|\mu|:\mu\in F\}\). For each \(\mu\in F\) and each \(n\geq N\), we have \(Z(\mu]=\bigcup_{\nu\in E^{*}\mu\cap E^{n}}Z(\nu]\). So \(Z(\nu]\subseteq X_{A}\) for any \(\nu\in X_{A}^{n}\) with \(n\geq N\). Since \((G,E)\) is level-transitive and contracting, for each \(n\geq N\), there exist \(a_{-n}\ldots a_{-1}\) in \(X_{A}^{n}\) and \(g_{n}\in\mathcal{N}\) such that \(g_{n}\cdot a_{-n}\cdots a_{-1}\in X_{B}^{n}\). By Lemma 3.9, there exist \(\mu_{n}\in X_{A}\cap Z(a_{-n}\cdots a_{-1}]\) and \(\nu_{n}\in X_{B}\cap Z(g_{n}\cdot a_{-n}\cdots a_{-1}]\). Since \(X_{A},X_{B}\) are compact, there is an increasing sequence \((n_{k})_{k=1}^{\infty}\) of natural numbers such that \((\mu_{n_{k}})_{k}\) and \((\nu_{n_{k}})_{k}\) both converge, say to \(\mu_{n_{k}}\to\mu\in X_{A}\) and \(\nu_{n_{k}}\to\nu\in X_{B}\). Since \(\mathcal{N}\) is finite and closed under restriction, for every \(l\) some element of \(\mathcal{N}\) carries the length-\(l\) tail of \(\mu\) to the length-\(l\) tail of \(\nu\), and so \(\mu\sim_{\rm ae}\nu\). So \(q(\mu)=q(\nu)\in A\cap B\), a contradiction. Thus \(\mathcal{J}\) is connected. Now suppose that \((G,E)\) is not level-transitive. Fix \(n\in\mathbb{N}\) and \(a\in E^{n}\) such that \(G\cdot a\neq E^{n}\). Define \(A^{\prime}:=\bigcup_{a^{\prime}\in G\cdot a}Z(a^{\prime}]\) and \(B^{\prime}:=\bigcup_{b^{\prime}\in E^{n}\setminus G\cdot a}Z(b^{\prime}]\). Then \(A:=q(A^{\prime})\) and \(B:=q(B^{\prime})\) are disjoint compact sets in \(\mathcal{J}\) such that \(A\cup B=\mathcal{J}\), so \(\mathcal{J}\) is not connected. **Corollary 3.20**.: _Suppose that \((G,E)\) is a contracting and recurrent self-similar groupoid action on a finite directed graph \(E\) such that \(G\) is finitely generated. Then the limit space \(\mathcal{J}_{(G,E)}\) is connected._ _Example 3.21_.: Consider Example 2.2. We claim that the action is contracting with nucleus \(\mathcal{N}=\{v,w,a,b,a^{-1},b^{-1}\}\). Indeed, since all elements of the automaton appear as restrictions, \(v,w,a,b,a^{-1},b^{-1}\in\mathcal{N}\). To see that this is everything we compute \[(ab)|_{3}=a|_{b\cdot 3}\,b|_{3}=v,\qquad(ba)|_{1}=b|_{a\cdot 1}\,a|_{1}=a,\] \[(ab)|_{4}=a|_{b\cdot 4}\,b|_{4}=ba,\qquad(ba)|_{2}=b|_{a\cdot 2}\,a|_{2}=b,\] and all products of two generators restrict into the nucleus. The first two Schreier graphs are depicted in Figure 3.
Figure 3. The first two Schreier graphs of Example 3.21.
More generally, the \(n\)th Schreier graph is a cycle of length \(2^{n}\) with a loop at each vertex, showing that the action is level-transitive. This also suggests that the limit space is homeomorphic to a circle. One can prove this by showing inductively that the vertices of the \(n\)th Schreier graph can be mapped to the \(n\)th roots of unity on the complex circle, metrised so that it has circumference \(1\), in a way that extends the pictures in Figure 3. Specifically, each vertex is connected in the Schreier graph to its two nearest neighbours on the circle, and for any infinite path \(\mu\in E^{-\infty}\) the images on the unit circle of its truncations \(\mu_{-n}\ldots\mu_{-1}\), regarded as vertices of Schreier graphs, converge. The map that sends \(\mu\) to the limit-point is the desired homeomorphism: it is continuous because it is a contraction; it is surjective because its image is both dense and compact; and one checks that it is injective using the final statement of Proposition 3.15. _Example 3.22_.: Consider the graph \(E\) in Figure 4, and define a self-similar groupoid through the \(E\)-automaton \[\begin{array}{cccccc}a\cdot 0=2,&a|_{0}=v,&b\cdot 2=0,&b|_{2}=v,&c\cdot 2=1,&c|_{2}=v,\\ a\cdot 1=3,&a|_{1}=a,&b\cdot 3=1,&b|_{3}=c,&c\cdot 3=0,&c|_{3}=b.\end{array}\tag{3.3}\] We claim that this action is contracting with nucleus \[\mathcal{N}=\{v,w,a,b,c,ba,ca,a^{-1},b^{-1},c^{-1},(ba)^{-1},(ca)^{-1}\}.\] To see this we note that all elements of the automaton appear as restrictions, so \(v,w,a,b,c\) and their inverses are in the nucleus. That \(ba\) and \(ca\) are in the nucleus follows from the computations \[(ba)|_{1}=b|_{a\cdot 1}a|_{1}=ca\quad\text{ and }\quad(ca)|_{1}=c|_{a\cdot 1}a|_{1}=ba.\]
Figure 4. Graph \(E\) for Example 3.22
One can now compute that all groupoid elements that are products of three generators restrict to elements of the nucleus along words of length \(2\). The first four Schreier graphs are presented in Figure 5. They suggest that the action is level-transitive and that the limit space is homeomorphic to the basilica fractal (the Julia set of \(z^{2}-1\)); one can prove this via an argument of the sort outlined in Example 3.21.
Figure 5. The first four Schreier graphs of Example 3.22.
## 4. Dynamics on the limit space
In this section we describe an action of \(\mathbb{N}\) by locally expansive local homeomorphisms of the limit space \(\mathcal{J}\) of a contracting, regular self-similar groupoid action. We will use this in the next section to construct a Smale space from the self-similar groupoid action. Let \(E\) be a finite directed graph. The _shift map_\(\sigma:E^{-\infty}\to E^{-\infty}\) is defined by \(\sigma(\ldots x_{-3}x_{-2}x_{-1})=\ldots x_{-3}x_{-2}\); that is, \(\sigma\) deletes the right-most edge of a left-infinite path. This \(\sigma\) is a local homeomorphism because it restricts to a homeomorphism \(\sigma:Z(\mu e]\to Z(\mu]\) for any finite path \(\mu\) and any edge \(e\) such that \(s(\mu)=r(e)\). The main result in this section is about self-similar groupoid actions that are _regular_ in the following sense, which is based on the regularity condition used by Nekrashevych in [22]. **Definition 4.1** (cf. [22, Definition 6.1]).: Let \(E\) be a finite directed graph. Let \((G,E)\) be a self-similar groupoid action.
We say that \((G,E)\) is _regular_ if for every \(g\in G\) and every \(y\in E^{\infty}\) such that \(g\cdot y=y\), there exists \(\mu\) in \(E^{*}\) such that \(y\in Z[\mu)\), \(g\cdot\mu=\mu\) and \(g|_{\mu}=s(\mu)\). _Remark 4.2_.: Since, by definition, self-similar groupoid actions are faithful, the regularity condition is equivalent to the condition that if \(y\in E^{\infty}\) and \(g\cdot y=y\), then there is a clopen neighbourhood of \(y\) that is pointwise fixed by \(g\). Our main theorem in this section says that for contracting, regular self-similar groupoid actions, the shift map induces a locally expanding local homeomorphism of \(\mathcal{J}\). **Theorem 4.3**.: _Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action with limit space \(\mathcal{J}\) as in Definition 3.2. Let \(\sigma:E^{-\infty}\to E^{-\infty}\) be the shift map. Then there is a surjective map \(\tilde{\sigma}:\mathcal{J}\to\mathcal{J}\) such that \(\tilde{\sigma}([x])=[\sigma(x)]\) for all \(x\in E^{-\infty}\). Furthermore, there exists \(\varepsilon>0\) such that_ 1. _whenever_ \(d_{\mathcal{J}}([x],[y])<\varepsilon\)_, we have_ \(d_{\mathcal{J}}(\tilde{\sigma}([x]),\tilde{\sigma}([y]))=2d_{\mathcal{J}}([x ],[y])\)_, and_ 2. _whenever_ \(\alpha\leq\varepsilon\)_, we have_ \(\tilde{\sigma}(B([x],\alpha))=B(\tilde{\sigma}([x]),2\alpha)\)_._ _In particular, \(\tilde{\sigma}\) is a locally expanding local homeomorphism._ Before proving the theorem, we need to establish some preliminary results. To get started, observe that if \(x\sim_{\mathrm{ae}}y\), then there is a sequence \((g_{n})_{n<0}\in\mathcal{N}\) such that \(g_{n}\cdot x_{n}\ldots x_{-1}=y_{n}\ldots y_{-1}\) for all \(n\), and it follows that \(g_{n-1}\cdot\sigma(x)_{n}\ldots\sigma(x)_{-1}=g_{n-1}\cdot x_{n-1}\ldots x_{-2 }=y_{n-1}\ldots y_{-2}=\sigma(y)_{n}\ldots\sigma(y)_{-1}\). That is, \[x\sim_{\mathrm{ae}}y\Longrightarrow\sigma(x)\sim_{\mathrm{ae}}\sigma(y). \tag{4.1}\] Therefore, there exists a map \(\tilde{\sigma}:\mathcal{J}\mapsto\mathcal{J}\) as described in Theorem 4.3. **Lemma 4.4**.: _Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a regular self-similar groupoid action. For any finite set \(F\subseteq G\), there exists \(k\in\mathbb{N}\) such that for all \(g,h\in F\) such that \(d(g)=d(h)\) and all \(\mu\in d(g)E^{*}\) with \(|\mu|\geq k\), if \(g\cdot\mu=h\cdot\mu\), then \(g|_{\mu}=h|_{\mu}\)._ Proof.: Fix \(g,h\in F\). For each \(y\in E^{\infty}\) satisfies \(g\cdot y=h\cdot y\), we have \(h^{-1}g\cdot y=y\), and so regularity implies that there exists \(\lambda_{y}\in E^{*}\) such that \(y\in Z[\lambda_{y})\) and \((h^{-1}g)|_{\lambda_{y}}=s(\lambda_{y})\). For each \(x\in E^{\infty}\) such that \(g\cdot x\neq h\cdot x\), we have \(h^{-1}g\cdot x\neq x\), and so there exists \(\lambda_{x}\in E^{*}\) such that \(x\in Z[\lambda_{x})\) and \((h^{-1}g)\cdot\lambda_{x}\neq\lambda_{x}\). Since \(E^{\infty}=\bigcup_{x\in E^{\infty}}Z[\lambda_{x})\) and since \(E^{\infty}\) is compact, there exists a finite \(K\subseteq E^{\infty}\) such that \(E^{\infty}=\bigcup_{x\in K}Z[\lambda_{x})\). Let \(k_{g,h}:=\max\{|\lambda_{x}|:x\in K\}\). Suppose that with \(|\mu|\geq k_{g,h}\) and that \(g\cdot\mu=h\cdot\mu\). Since \(E\) has no sources we have \(Z[\mu)\neq\emptyset\). Since the \(Z[\lambda_{x})\) cover \(E^{\infty}\) we have \(Z[\mu)\cap Z[\lambda_{x})\neq\varnothing\) for some \(x\in K\). 
Since \(|\mu|\geq|\lambda_{x}|\), it follows that \(\mu=\lambda_{x}\mu^{\prime}\) for some \(\mu^{\prime}\) in \(E^{\infty}\). Since \(g\cdot\mu=h\cdot\mu\), we have \(g\cdot\lambda_{x}=h\cdot\lambda_{x}\) and therefore \(h^{-1}g\cdot\lambda_{x}=\lambda_{x}\). By the choice of \(\lambda_{x}\), we have \(h^{-1}g\cdot x=x\) and \((h^{-1}g)|_{\lambda_{x}}=s(\lambda_{x})\). Hence \((h^{-1}g)|_{\mu}=(h^{-1}g)|_{\lambda_{x}\mu^{\prime}}=s(\lambda_{x})|_{\mu^{ \prime}}=s(\mu^{\prime})=s(\mu)\). Hence \(g|_{\mu}=h|_{\mu}\). We have now proved that for each \(g,h\in F\) with \(d(g)=d(h)\), there exists \(k_{g,h}\in\mathbb{N}\) such that whenever \(\mu\in d(g)E^{k_{g,h}}\) satisfies \(g\cdot\mu=h\cdot\mu\), we have \(g|_{\mu}=h|_{\mu}\). So \(k=\max_{g,h\in F}k_{g,h}\) has the required property. Our next result is essentially a version of Theorem 4.3 in which the metric balls and \(\varepsilon\)-approximations are replaced by conditions in terms of the basic open sets from Corollary 3.10. We will bootstrap from this result to prove Theorem 4.3. **Proposition 4.5**.: _Let \(E\) be a finite directed graph with no sources. If \((G,E)\) is a contracting and regular self-similar group action, then_ 1. _for each_ \(z\in\mathcal{J}\)_,_ \(\sigma\) _maps_ \(q^{-1}(z)\) _bijectively onto_ \(q^{-1}(\tilde{\sigma}(z))\)_;_ 2. _there exists_ \(k\in\mathbb{N}\) _such that for every_ \(n\geq k+1\) _and every_ \(\mu\in E^{n}\)_, the map_ \(\tilde{\sigma}\) _restricts to a bijection of_ \(U_{\mu}\) _onto_ \(U_{\sigma(\mu)}\)_; and_ 3. _for every_ \(n\geq k\)_, every_ \(\omega\in E^{n}\)_, every_ \(w\in U_{\omega}\)_, and every_ \(z\in\tilde{\sigma}^{-1}(w)\)_, there exists_ \(\mu\in E^{n+1}\) _such that_ \(z\in U_{\mu}\) _and_ \(\sigma(\mu)=\omega\)_._ _In particular, \(\tilde{\sigma}\) is a local homeomorphism._ Proof.: Applying Lemma 4.4 to the finite set \(F=\mathcal{N}^{2}\cup\mathcal{N}\cup E^{0}\) yields \(k\in\mathbb{N}\) such that for all \(n_{1},n_{2}\in F\), if \(\mu\in E^{*}\) with \(|\mu|\geq k\) satisfies \(n_{1}\cdot\mu=n_{2}\cdot\mu\), then \(n_{1}|_{\mu}=n_{2}|_{\mu}\). We fix \(k\) with this property for the remainder of the proof. (1) Since \(q\circ\sigma=\tilde{\sigma}\circ q\), if \(q(x)=z\) then \(q(\sigma(x))=\tilde{\sigma}(q(x))=\tilde{\sigma}(z)\), and so \(\sigma(q^{-1}(z))\subseteq q^{-1}(\tilde{\sigma}(z))\). So we must prove the reverse inclusion. Suppose that \(q(x)=z\) and \(y^{\prime}\sim_{\mathrm{ae}}\sigma(x)\). Let \((g_{n})_{n\in\mathbb{N}}\) be a sequence in \(\mathcal{N}\) such that \(d(g_{n})=r(x_{-n-1})\) and \(g_{n}\cdot x_{-n-1}x_{-n}...x_{-2}=y^{\prime}_{-n}y^{\prime}_{-n+1}...y^{ \prime}_{-1}\) for every \(n\in\mathbb{N}\). By the choice of \(k\), for all \(n,n^{\prime}\geq k\), it follows that \(g_{n}|_{x_{-n-1}x_{-n}...x_{-2}}=g_{n^{\prime}}|_{x_{-n^{\prime}-1}x_{-n^{ \prime}}...x_{-2}}\). Let \(x^{\prime}_{-1}=(g_{k}|_{x_{-k-1}x_{-k}...x_{-2}})\cdot x_{-1}\) and \(x^{\prime}=y^{\prime}x^{\prime}_{-1}\), and let \(g^{\prime}_{n}=g_{k}|_{x_{-k-1}x_{-k}...x_{-n-1}}\) for \(n<k\), and \(g^{\prime}_{n}=g_{n-1}\) for \(n\geq k\). Then \(g^{\prime}_{n}\cdot x_{-n}x_{-n+1}...x_{-1}=x^{\prime}_{-n}x^{\prime}_{-n+1}...x^{\prime}_{-1}\) for every \(n\in\mathbb{N}\). So, \(x\sim_{\mathrm{ae}}x^{\prime}\) and \(\sigma(x^{\prime})=y^{\prime}\). Therefore, \(\sigma(q^{-1}(z))=q^{-1}(\tilde{\sigma}(z))\). Suppose that \(x^{\prime},x^{\prime\prime}\in q^{-1}(z)\) satisfy \(\sigma(x^{\prime\prime})=\sigma(x^{\prime})=y^{\prime}\). 
Since \(x^{\prime}\sim_{\mathrm{ae}}x\sim_{\mathrm{ae}}x^{\prime\prime}\), there exists \(g\) in \(\mathcal{N}\) such that \(g\cdot x^{\prime}_{-k-1}x^{\prime}_{-k}...x^{\prime}_{-1}=x^{\prime\prime}_{-k-1}x^{\prime\prime}_{-k}...x^{\prime\prime}_{-1}\). By the choice of \(k\) and since \(x^{\prime}_{-k-1}x^{\prime}_{-k}...x^{\prime}_{-2}=x^{\prime\prime}_{-k-1}x^{\prime\prime}_{-k}...x^{\prime\prime}_{-2}\), we have \(g|_{x^{\prime}_{-k-1}x^{\prime}_{-k}...x^{\prime}_{-2}}=s(x^{\prime}_{-2})\). Hence \(x^{\prime\prime}_{-1}=(g|_{x^{\prime}_{-k-1}x^{\prime}_{-k}...x^{\prime}_{-2}})\cdot x^{\prime}_{-1}=x^{\prime}_{-1}\). Therefore, \(x^{\prime}=x^{\prime\prime}\).

(2) Fix \(\mu\in E^{*}\) with \(|\mu|>k\) and \(w\in U_{\sigma(\mu)}\). For each \(y\in q^{-1}(w)\), there exists \(g\in\mathcal{N}\) such that \(y\in Z(g\cdot\sigma(\mu)]\). By the choice of \(k\), the element \(g|_{\sigma(\mu)}\) does not depend on the choice of \(g\). So there is a unique map \(\delta:q^{-1}(U_{\sigma(\mu)})\to E^{-\infty}\) such that \(\delta(y)=y\big{(}(g|_{\sigma(\mu)})\cdot\mu_{-1}\big{)}\) for any \(g\in\mathcal{N}\) such that \(y\in Z(g\cdot\sigma(\mu)]\). We claim that \(\delta\) descends to a map \(\tilde{\delta}:U_{\sigma(\mu)}\to U_{\mu}\) that is an inverse for \(\tilde{\sigma}|_{U_{\mu}}\).

For this, suppose that \(y,y^{\prime}\in q^{-1}(w)\) are asymptotically equivalent. Fix \(g\in\mathcal{N}\) such that \(y\in Z(g\cdot\sigma(\mu)]\) and \(g^{\prime}\in\mathcal{N}\) such that \(y^{\prime}\in Z(g^{\prime}\cdot\sigma(\mu)]\). Then \(x:=\delta(y)\) satisfies \(x=y\big{(}(g|_{\sigma(\mu)})\cdot\mu_{-1}\big{)}\in Z(g\cdot\mu]\). By (1) there is a unique element \(x^{\prime}\in E^{-\infty}\) such that \(x^{\prime}\sim_{\mathrm{ae}}x\) and \(\sigma(x^{\prime})=y^{\prime}\). Since \(x\sim_{\mathrm{ae}}x^{\prime}\), there exists \(h\in\mathcal{N}\) such that \(h\cdot(x_{-k-1}\cdots x_{-1})=x^{\prime}_{-k-1}x^{\prime}_{-k}\cdots x^{\prime}_{-1}\). Hence \((hg)\cdot\sigma(\mu)=x^{\prime}_{-k-1}x^{\prime}_{-k}\cdots x^{\prime}_{-2}=g^{\prime}\cdot\sigma(\mu)\). Since \(hg,g^{\prime}\in F\), by the choice of \(k\) we have \((hg)|_{\sigma(\mu)}=g^{\prime}|_{\sigma(\mu)}\) and we deduce that \(x^{\prime}_{-1}=\big{(}(hg)|_{\sigma(\mu)}\big{)}\cdot\mu_{-1}=\big{(}g^{\prime}|_{\sigma(\mu)}\big{)}\cdot\mu_{-1}\). By definition, we therefore have \(\delta(y^{\prime})=y^{\prime}\big{(}(g^{\prime}|_{\sigma(\mu)})\cdot\mu_{-1}\big{)}=y^{\prime}x^{\prime}_{-1}\), and since \(\sigma(x^{\prime})=y^{\prime}\) by definition of \(x^{\prime}\), we deduce that \(\delta(y^{\prime})=\sigma(x^{\prime})x^{\prime}_{-1}=x^{\prime}\). So \(\delta(y^{\prime})=x^{\prime}\sim_{\mathrm{ae}}x=\delta(y)\), and it follows that \(\delta\) descends to a map \(\tilde{\delta}:U_{\sigma(\mu)}\to\mathcal{J}\).

To see that \(\tilde{\delta}(U_{\sigma(\mu)})\subseteq U_{\mu}\), fix \(y\in q^{-1}(w)\) and let \(x=\delta(y)\). We must show that \([x]\subseteq\bigcup_{\nu\in\mathcal{N}\cdot\mu}Z(\nu]\). Fix \(x^{\prime}\in[x]\), and let \(y^{\prime}=\sigma(x^{\prime})\). Applying the argument of the preceding paragraph we obtain \(x^{\prime}=\delta(y^{\prime})\in Z(g^{\prime}\cdot\mu]\) for some \(g^{\prime}\in\mathcal{N}\) as required.

It remains to show that \(\tilde{\delta}\) is an inverse for \(\tilde{\sigma}|_{U_{\mu}}\). By construction, \(\sigma\delta(y)=y\) for \(y\in q^{-1}(U_{\sigma(\mu)})\), so \(\tilde{\sigma}\tilde{\delta}(q(y))=q(y)\). Therefore, \(\tilde{\sigma}\tilde{\delta}=\operatorname{id}_{U_{\sigma(\mu)}}\). We now show that \(\tilde{\delta}\tilde{\sigma}=\operatorname{id}_{U_{\mu}}\).
If \(q^{-1}(z)\subseteq\bigcup_{g\in\mathcal{N}\cdot d(g)=r(\mu)}Z[g\cdot\mu)\), then \(q^{-1}(\tilde{\sigma}(z))=\sigma(q^{-1}(z))\subseteq\sigma(\bigcup_{g\in \mathcal{N}\cdot d(g)=r(\mu)}Z[g\cdot\mu))=\bigcup_{g\in\mathcal{N}\cdot d(g)= r(\sigma(\mu))}Z[g\cdot\sigma(\mu))\). Hence, \(\tilde{\sigma}(U_{\mu})\subseteq U_{\sigma(\mu)}\), so the composites \(\tilde{\delta}\tilde{\sigma}\) and \(\delta\sigma\) are well defined on \(U_{\mu}\) and \(q^{-1}(U_{\mu})\) respectively. We have \(\delta(y)=y(g|_{\sigma(\mu)})\cdot\mu_{-1}\) for \(y\in q^{-1}(U_{\sigma(\mu)})\cap Z[g\cdot\sigma(\mu))\), and so \(\delta\sigma(x)=x\) for \(x\) in \(q^{-1}(U_{\mu})\cap Z[g\cdot\mu)\). Hence, \(\tilde{\delta}\tilde{\sigma}(q(x))=q(x)\) for \(x\in q^{-1}(U_{\mu})\). Therefore, \(\tilde{\delta}\tilde{\sigma}=\operatorname{id}_{U_{\mu}}\), and we have shown \(\tilde{\sigma}\) maps \(U_{\mu}\) bijectively onto \(U_{\sigma(\mu)}\). (3) Fix \(\omega\in E^{*}\) with \(|\omega|\geq k\) and \(w\in U_{\omega}\), and fix \(z\in\tilde{\sigma}^{-1}(w)\). Choose \(x,y\in E^{-\infty}\) such that \(q(x)=z\), \(q(y)=w\), and \(\sigma(x)=y\). Then there exists \(g\in Gr(\omega)\) such that \(y\in Z[g\cdot\omega)\). Since \(\sigma(x)=y\), it follows that \(x_{-n-1}x_{-n}...x_{-1}=(g\cdot\omega)x_{-1}\). We have \(d((g|_{\omega})^{-1})=c(g|_{\omega})=s(g\cdot\omega)=r(x_{-1})\). Let \(f=(g|_{\omega})^{-1}\cdot x_{-1}\) and \(\mu=\omega f\). Then \(x\in Z(g\cdot\mu)\) and \(\sigma(\mu)=\omega\), and the map \(\delta:q^{-1}(U_{\sigma(\mu)})\to q^{-1}(U_{\mu})\) constructed in the proof of (2) satisfies \(\delta(y)=x\). Hence \(z=q(x)\in U_{\mu}\). To establish Condition (2) in Theorem 4.3, we will use the following technical lemma, which we will need again in the proof of Lemma 8.6. **Lemma 4.6**.: _Resume the hypotheses of Theorem 4.3. Suppose that \(\varepsilon>0\) satisfies Condition (1) of that theorem. Then there exist \(\eta<\varepsilon\) and \(n\in\mathbb{N}\) such that_ 1. _for every_ \(w\in\mathcal{J}\)_, and_ \(\alpha\leq\eta\)_, there exists_ \(\omega\in E^{*}\) _such that_ \(|\omega|\geq n-1\) _and_ \(B(w,2\alpha)\subseteq U_{\omega}\)_; and_ 2. _for every_ \(\mu\in E^{*}\) _with_ \(|\mu|\geq n\)_, the map_ \(\tilde{\sigma}\) _restricts to a homeomorphism of_ \(U_{\mu}\) _onto_ \(U_{\sigma(\mu)}\)_; and if_ \(\tilde{\delta}\) _denotes its inverse, then for all_ \(z\in U_{\mu}\) _and all_ \(\alpha\leq\eta\) _such that_ \(B(\tilde{\sigma}(z),2\alpha)\subseteq U_{\sigma(\mu)}\)_, we have_ \(B(z,\alpha)\subseteq U_{\mu}\)_, and_ \(\tilde{\delta}\) _restricts to a homeomorphism of_ \(B(\tilde{\sigma}(z),2\alpha)\) _onto_ \(B(z,\alpha)\)_._ Proof.: For \(\mu,\omega\in E^{*}\) such that \(s(\mu)=r(\omega)\), we have \(U_{\mu\omega}\subseteq U_{\omega}\), and for any infinite path \(x\in E^{-\infty}\), we have \(\bigcap_{n\in\mathbb{N}}U_{x_{-n}...x_{-1}}=[x]\). Hence \(\lim_{n\to\infty}\sup_{\mu\in E^{n}}\operatorname{diam}(U_{\mu})=0\). Let \(k\) be as in Proposition 4.5(2), and fix \(n>k\) so that \(\operatorname{diam}(U_{\mu})<\varepsilon\) whenever \(|\mu|\geq n\). Using compactness of \(\mathcal{J}\), fix \(K\subseteq\bigcup_{m\geq n-1}E^{m}\) such that \(\mathcal{J}\subseteq\bigcup_{\omega\in K}U_{\omega}\). The Lebesgue Number Lemma yields \(0<\eta<\varepsilon\) such that for every \(w\) in \(\mathcal{J}\), there exist \(\omega\in K\) such that \(B(w,2\eta)\subseteq U_{\omega}\). These values of \(n,\eta\) satisfy (1) by construction, so we just have to establish (2). 
For this, let \(\mu\) be a path such that \(|\mu|\geq n\). Since \(n>k\), by Proposition 4.5(2), \(\tilde{\sigma}\) maps \(U_{\mu}\) homeomorphically onto \(U_{\sigma(\mu)}\). Suppose that \(B(\tilde{\sigma}(z),2\alpha)\subseteq U_{\sigma(\mu)}\) for \(z\in U_{\mu}\) and \(\alpha\leq\eta\). By hypothesis, \(\varepsilon\) satisfies Theorem 4.3(1), and so since \(\operatorname{diam}(U_{\mu})<\varepsilon\), we have

\[2d_{\mathcal{J}}(\tilde{\delta}(\tilde{\sigma}z),\tilde{\delta}(w))=d_{\mathcal{J}}(\tilde{\sigma}\tilde{\delta}(\tilde{\sigma}z),\tilde{\sigma}\tilde{\delta}(w))=d_{\mathcal{J}}(\tilde{\sigma}z,w)\]

whenever \(d_{\mathcal{J}}(\tilde{\sigma}z,w)<2\alpha\). Hence, \(\tilde{\delta}(B(\tilde{\sigma}z,2\alpha))\subseteq B(z,\alpha)\cap U_{\mu}\), so that \(B(\tilde{\sigma}z,2\alpha)\subseteq\tilde{\sigma}(B(z,\alpha)\cap U_{\mu})\). Since \(\alpha\leq\eta<\varepsilon\), another application of Theorem 4.3(1) implies that \(\tilde{\sigma}\) restricts to a homeomorphism of \(B(z,\alpha)\), and that \(\tilde{\sigma}(B(z,\alpha))\subseteq B(\tilde{\sigma}(z),2\alpha)\). Therefore, \(B(z,\alpha)\cap U_{\mu}=B(z,\alpha)\), and \(\tilde{\sigma}(B(z,\alpha))=B(\tilde{\sigma}(z),2\alpha)\). Hence, \(B(z,\alpha)\subseteq U_{\mu}\), so that \(\tilde{\delta}\circ\tilde{\sigma}|_{B(z,\alpha)}\) is well defined, and \(\tilde{\delta}(B(\tilde{\sigma}(z),2\alpha))=\tilde{\delta}\tilde{\sigma}(B(z,\alpha))=B(z,\alpha)\).

Now we prove the main result of this section. For the following proof, given \(x,y\in E^{-\infty}\), a _chain_ from \(x\) to \(y\) is a pair \((P,Q)\) of finite sequences \(P=(p_{i})_{i=0}^{k}\) and \(Q=(q_{i})_{i=0}^{k}\) in \(E^{-\infty}\) such that \(p_{0}=x\), \(q_{k}=y\), and \(q_{i}\sim_{\mathrm{ae}}p_{i+1}\) for all \(0\leq i\leq k-1\).

Proof of Theorem 4.3.: Since \(E\) has no sources, \(\sigma\), and hence also \(\tilde{\sigma}\), is surjective. Let \(R:=\{(x_{1},x_{2})\in E^{-\infty}\times E^{-\infty}\mid x_{1}\sim_{\mathrm{ae}}x_{2}\}\) so that \(\sigma^{*}R=\{(x_{1},x_{2})\in E^{-\infty}\times E^{-\infty}\mid\sigma(x_{1})\sim_{\mathrm{ae}}\sigma(x_{2})\}\). By Proposition 4.5(2), \(\tilde{\sigma}\) is a local injection, so \(R\) is an open subset of \(\sigma^{*}R\). Therefore, \(\sigma^{*}R\setminus R\) is a compact set. It follows that there exists \(\varepsilon^{\prime}>0\) such that \(d_{\mathcal{J}}(x,y)\geq\varepsilon^{\prime}\) for all \((x,y)\in\sigma^{*}R\setminus R\). Let \(\varepsilon:=\min\{\frac{\varepsilon^{\prime}}{2},\frac{1}{4}\}\).

We first show that this \(\varepsilon\) satisfies (1). Fix \(x,y\in E^{-\infty}\) such that \(d_{\mathcal{J}}(x,y)<\varepsilon\). We must show that \(d_{\mathcal{J}}(\tilde{\sigma}([x]),\tilde{\sigma}([y]))=2d_{\mathcal{J}}(x,y)\). We first show that \(d_{\mathcal{J}}(\sigma(x),\sigma(y))\leq 2d_{\mathcal{J}}(x,y)\). To see this, it suffices to show that \(d_{\mathcal{J}}(\sigma(x),\sigma(y))\leq 2d_{\mathcal{J}}(x,y)+2\eta\) for all \(\eta<\varepsilon-d_{\mathcal{J}}(x,y)\). So fix \(\eta<\varepsilon-d_{\mathcal{J}}(x,y)\). Let \((P,Q)\) be a chain from \(x\) to \(y\) such that \(\sum_{i=0}^{k}d(p_{i},q_{i})\leq d_{\mathcal{J}}(x,y)+\eta\). Consider the chain \((\sigma(P),\sigma(Q))\) from \(\sigma(x)\) to \(\sigma(y)\).
Since \(d(p_{i},q_{i})\leq d_{\mathcal{J}}(x,y)+\eta<\varepsilon\leq\frac{1}{2}\), we have \(d(\sigma(p_{i}),\sigma(q_{i}))=2d(p_{i},q_{i})\). Hence, \(d_{\mathcal{J}}(\sigma(x),\sigma(y))\leq\sum_{i=0}^{k}d(\sigma(p_{i}),\sigma(q_{i}))\leq 2d_{\mathcal{J}}(x,y)+2\eta\), proving the claim.

Now we show that \(d_{\mathcal{J}}(\sigma(x),\sigma(y))\geq 2d_{\mathcal{J}}(x,y)\). Fix \(0<\eta<\varepsilon-d_{\mathcal{J}}(x,y)\); it suffices to show that \(d_{\mathcal{J}}(\sigma(x),\sigma(y))+\eta\geq 2d_{\mathcal{J}}(x,y)\). Let \((P^{\prime},Q^{\prime})\) be a chain from \(\sigma(x)\) to \(\sigma(y)\) such that \(\sum_{i=0}^{k}d(p^{\prime}_{i},q^{\prime}_{i})\leq d_{\mathcal{J}}(\sigma(x),\sigma(y))+\eta\). Since \(d_{\mathcal{J}}(p^{\prime}_{0},q^{\prime}_{0})\leq\frac{1}{2}\), we have \(s(p^{\prime}_{0})=s(q^{\prime}_{0})\). Hence \(p_{0}:=x\) and \(q_{0}:=q^{\prime}_{0}x_{-1}\) are paths in \(E^{-\infty}\) with \(d(p_{0},q_{0})=\frac{1}{2}d(p^{\prime}_{0},q^{\prime}_{0})\). By Proposition 4.5(1), there exists a path \(p_{1}\sim_{\mathrm{ae}}q_{0}\) such that \(\sigma(p_{1})=p^{\prime}_{1}\). Again, since \(d(p^{\prime}_{1},q^{\prime}_{1})\leq\frac{1}{2}\), \(q_{1}:=q^{\prime}_{1}(p_{1})_{-1}\) is a path, and \(d(p_{1},q_{1})=\frac{1}{2}d(p^{\prime}_{1},q^{\prime}_{1})\). Proceeding this way, we obtain a chain \((P,Q)\) from \(x\) to a path \(q_{k}\) such that \((\sigma(P),\sigma(Q))=(P^{\prime},Q^{\prime})\) and \(\sum_{i=0}^{k}d_{\mathcal{J}}(p_{i},q_{i})=\frac{1}{2}\sum_{i=0}^{k}d_{\mathcal{J}}(p^{\prime}_{i},q^{\prime}_{i})\).

We claim that \(q_{k}=y\). Since \(d(x,q_{k})<\varepsilon\) and \(d(x,y)<\varepsilon\), the triangle inequality implies that \(d_{\mathcal{J}}(y,q_{k})<\varepsilon^{\prime}\). Since \(\varepsilon^{\prime}\) is a lower bound for \(d_{\mathcal{J}}\) on \(\sigma^{*}R\setminus R\) and \((y,q_{k})\in\sigma^{*}R\), we have \(q_{k}\sim_{\mathrm{ae}}y\). Proposition 4.5(1) implies that \(\sigma:q^{-1}(q(y))\to q^{-1}(q(\sigma(y)))\) is a bijection. So since \(q_{k}\sim_{\mathrm{ae}}y\) and \(\sigma(q_{k})=\sigma(y)\) we have \(q_{k}=y\) as claimed. It follows that \((P,Q)\) is a chain from \(x\) to \(y\) and \(2d_{\mathcal{J}}(x,y)\leq 2\sum_{i=0}^{k}d_{\mathcal{J}}(p_{i},q_{i})=\sum_{i=0}^{k}d_{\mathcal{J}}(p^{\prime}_{i},q^{\prime}_{i})\leq d_{\mathcal{J}}(\sigma(x),\sigma(y))+\eta\) as required.

(2) By Lemma 4.6(1), for any \(z\in\mathcal{J}\) and \(\alpha\leq\varepsilon\), there exists \(\omega\in E^{*}\) with \(|\omega|\geq n-1\) and \(B(\tilde{\sigma}(z),2\alpha)\subseteq U_{\omega}\). By Proposition 4.5(3), there exists \(f\in E^{1}\) such that \(s(\omega)=r(f)\) and \(z\in U_{\omega f}\). Lemma 4.6(2) then gives \(B(z,\alpha)\subseteq U_{\omega f}\) and \(\tilde{\sigma}(B(z,\alpha))=\tilde{\sigma}\tilde{\delta}(B(\tilde{\sigma}(z),2\alpha))=B(\tilde{\sigma}(z),2\alpha)\).

To finish this section, we will show that for strongly-connected graphs, the regularity hypothesis is necessary in the preceding result (we will need to restrict to strongly-connected graphs later in order to apply Kaminker, Putnam and Whittaker's results about \(KK\)-duality for \(C^{*}\)-algebras associated to Smale spaces). Recall that a directed graph \(E\) is _strongly connected_ if it has at least one edge, and if for all \(v,w\in E^{0}\) the set \(vE^{*}w\) is nonempty.

**Lemma 4.7**.: _Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting self-similar groupoid action with nucleus \(\mathcal{N}\).
Then there exists \(\mu\in E^{*}\) such that whenever \(g\in\mathcal{N}\) satisfies \(d(g)=r(\mu)\) and \(g\cdot\mu=\mu\), we have \(g\cdot\mu\nu=\mu\nu\) for all \(\nu\) in \(s(\mu)E^{*}\)._

Proof.: Suppose first that there is \(x\in E^{\infty}\) such that \(g\cdot x\neq x\) for all \(g\in\mathcal{N}\setminus E^{0}\) satisfying \(d(g)=r(x)\). Choose \(n\in\mathbb{N}\) so that \(g\cdot x_{1}...x_{n}\neq x_{1}...x_{n}\) for all \(g\) in \(\mathcal{N}\setminus E^{0}\) satisfying \(d(g)=r(x)\). Then \(\mu=x_{1}...x_{n}\) has the desired property.

Now suppose that for every \(x\in E^{\infty}\), there exists \(g\in\mathcal{N}\setminus E^{0}\) such that \(d(g)=r(x)\) and \(g\cdot x=x\). Then, \(\bigcup_{g\in\mathcal{N}\setminus E^{0}}\{x\in d(g)E^{\infty}:g\cdot x=x\}=E^{\infty}\). Since a finite intersection of open dense sets is itself an open dense set, there exists \(x\in E^{\infty}\) that does not belong to the boundary of \(\{x\in d(g)E^{\infty}:g\cdot x=x\}\) for any \(g\in\mathcal{N}\setminus E^{0}\). So, there is an \(n\in\mathbb{N}\) such that for all \(g\) in \(\mathcal{N}\setminus E^{0}\) with \(d(g)=r(x)\), either \(g\cdot x_{1}...x_{n}\neq x_{1}...x_{n}\) or \(g\cdot y=y\) for all \(y\) in \(Z[x_{1}...x_{n})\). Hence, \(\mu=x_{1}...x_{n}\) has the desired property.

**Proposition 4.8**.: _Let \(E\) be a strongly connected finite directed graph. Let \((G,E)\) be a contracting self-similar groupoid action with nucleus \(\mathcal{N}\). If \(\tilde{\sigma}:\mathcal{J}\rightarrow\mathcal{J}\) is a local homeomorphism, then \((G,E)\) is regular. Hence, \((G,E)\) is regular if and only if \(\tilde{\sigma}\) is a local homeomorphism._

Proof.: Suppose that \((G,E)\) is not regular. It suffices to show that there exists \(k\in\mathbb{N}\) such that \(\tilde{\sigma}^{k}\) is not locally injective. Since \((G,E)\) is not regular, there exist \(x\in E^{\infty}\) and \(g\in G\) such that \(g\cdot x=x\) but \(g\) fixes no neighbourhood of \(x\). So there is a strictly increasing sequence \((n_{j})\) in \(\mathbb{N}\) and paths \(\alpha_{j}\in s(x_{n_{j}})E^{*}\) such that \(g\cdot x_{1}\dots x_{n_{j}}\alpha_{j}\neq x_{1}\dots x_{n_{j}}\alpha_{j}\). In particular, the elements \(h_{j}:=g|_{x_{1}\dots x_{n_{j}}}\) satisfy \(h_{j}\cdot\alpha_{j}\neq\alpha_{j}\) for all \(j\). By Lemma 3.4 we have \(h_{j}\in\mathcal{N}\) for large \(j\). Since \(\mathcal{N}\) is finite, by passing to a subsequence, we can assume that \(h_{j}=h_{1}=:h\) for all \(j\) (and hence \(s(x_{n_{j}})=d(h)\) for all \(j\)). So \(\alpha:=\alpha_{1}\in d(h)E^{*}\) satisfies \(h\cdot\alpha\neq\alpha\); let \(\beta:=h\cdot\alpha\).

For each \(j\), fix \(y_{j}\in Z(x_{1}\dots x_{n_{j}}]\subseteq E^{-\infty}\). Since \(E^{-\infty}\) is compact, by passing to a subsequence, we can assume that \(y_{j}\to y\in E^{-\infty}\). Since \(s(y_{j})=s(x_{n_{j}})=d(h)\) for all \(j\), we have \(s(y)=d(h)\), so \(y\alpha,y\beta\in E^{-\infty}\). By definition of convergence in \(E^{-\infty}\), for each \(N\in\mathbb{N}\) there exists \(j\) such that \(y_{-N}\dots y_{-1}=x_{n_{j}-N+1}\dots x_{n_{j}}\). For this \(j\), the element \(g_{N}:=g|_{x_{1}\dots x_{n_{j}-N}}\) satisfies \(g_{N}\cdot y_{-N}\dots y_{-1}\alpha=y_{-N}\dots y_{-1}\beta\). Hence \(y\alpha\sim_{\mathrm{ae}}y\beta\). That is, \([y\alpha]=[y\beta]\in\mathcal{J}\). Moreover, \(k:=|\alpha|=|\beta|\) satisfies \(\tilde{\sigma}^{k}([y\alpha])=[y]=\tilde{\sigma}^{k}([y\beta])\).
By Lemma 4.7, there exists \(\mu\in E^{*}\) such that every \(g\in\mathcal{N}\) that satisfies \(g\cdot\mu=\mu\) pointwise fixes \(Z[\mu)\). Fix \(z\in Z(r(\mu)]\) so that \(z\mu\in E^{-\infty}\). Since \(E\) is strongly connected, for each \(n\in\mathbb{N}\) there exists \(\nu_{n}\in E^{*}\) such that \(r(\nu_{n})=s(\mu)\) and \(s(\nu_{n})=r(y_{n})\). So for each \(n\) we obtain elements \(z\mu\nu_{n}y_{n}\dots y_{-1}\alpha\) and \(z\mu\nu_{n}y_{n}\dots y_{-1}\beta\) of \(E^{-\infty}\). Since \(\alpha\neq\beta\), our choice of \(\mu\) ensures that \(g\cdot\mu\nu_{n}y_{n}\dots y_{-1}\alpha\neq\mu\nu_{n}y_{n}\dots y_{-1}\beta\) for all \(g\in\mathcal{N}\), and so Lemma 3.6 shows that \(z\mu\nu_{n}y_{n}\dots y_{-1}\alpha\not\sim_{\text{ae}}z\mu\nu_{n}y_{n}\dots y_{ -1}\beta\) for all \(n\). We have \(z\mu\nu_{n}y_{n}\dots y_{-1}\alpha\to y\alpha\) and \(z\mu\nu_{n}y_{n}\dots y_{-1}\beta\to y_{\beta}\), and \(\tilde{\sigma}^{k}([z\mu\nu_{n}y_{n}\dots y_{-1}\alpha])=[z\mu\nu_{n}y_{n} \dots y_{-1}]=\tilde{\sigma}^{k}([z\mu\nu_{n}y_{n}\dots y_{-1}\beta])\) for all \(n\), and therefore \(\tilde{\sigma}^{k}\) is not locally injective. ## 5. The Smale space of a self-similar groupoid action on a graph In this section we describe the Wieler Smale space that arises from the locally expanding dynamics described in the preceding section, and show that this Smale space can be realised as the quotient of \(E^{\mathbb{Z}}\) by the natural extension of asymptotic equivalence. We first recall Wieler's axioms, under which the projective limit of a space \(V\) under iterates of a given continuous surjection \(V\to V\) becomes a Smale space with totally disconnected stable set. Wieler proves more, showing that every Smale space with totally disconnected stable set has this form, but we will not need the full power of her theorem. For the statement of Wieler's Theorem, recall that if \(g:X\to X\) is a continuous self-mapping of a topological space, then \(\varprojlim(X,g)\) is the space \[\varprojlim(X,g):=\{(x_{n})_{n=1}^{\infty}\mid x_{i}\in X\text{ and }g(x_{i+1})=x_{i} \text{ for all }i\}.\] If \(\phi:X\to X\) is a continuous self-mapping of a topological space, we say that a point \(x\in X\) is _non-wandering_ if for every neighbourhood \(U\) of \(x\) there exists \(n\geq 1\) such that \(\phi^{n}(U)\cap U\neq\varnothing\). Finally, recall that the _forward orbit_ of \(x\in X\) is \(\{\phi^{n}(x)\mid n\geq 0\}\). **Theorem 5.1** (Wieler [31, Theorem A]).: _Let \((X,d)\) be a compact metric space, and let \(\varphi:X\to X\) be a continuous surjection. Suppose that there exist \(\varepsilon>0,K\in\mathbb{N}\setminus\{0\}\), and \(\gamma\in(0,1)\) such that_ **Axiom 1:** _For all \(x,y\in X\) such that \(d(x,y)<\varepsilon\), we have_ \[d(\varphi^{K}(x),\varphi^{K}(y))\leq\gamma^{K}d(\varphi^{2K}(x),\varphi^{2K}( y)).\] **Axiom 2:** _For all \(x\in X\) and \(0<\alpha<\varepsilon\),_ \[\varphi^{K}(B(\varphi^{K}(x),\alpha))\subseteq\varphi^{2K}(B(x,\gamma\alpha)).\] _Then_ \[d_{\infty}((x_{n}),(y_{n})):=\sum_{n=1}^{K}\gamma^{-n}\sup_{m\in\mathbb{N}} \gamma^{m}d(x_{m+n},y_{m+n}),\] _defines a metric on \(X_{\infty}:=\varprojlim(X,\varphi)\), the formula_ \[\varphi_{\infty}(x_{1},x_{2},\dots)=(\varphi(x_{1}),x_{1},x_{2},\dots)\] _defines a homeomorphism \(\varphi_{\infty}:X_{\infty}\to X_{\infty}\), and \((X_{\infty},\varphi_{\infty})\) is a Smale space with totally disconnected stable set. 
This Smale space is irreducible if and only if every point in \(X\) is nonwandering, and there is a point in \(X\) whose forward orbit under \(\varphi_{\infty}\) is dense._ The key point for us is that the results of the preceding two sections show that every contracting, regular self-similar groupoid action on a finite directed graph with no sources gives rise to a dynamical system satisfying Wieler's axioms. **Lemma 5.2**.: _Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\mathcal{J}\) be the limit space of Definition 3.2, let \(d_{\mathcal{J}}\) be the equivalence relation metric as in Corollary 3.13, and let \(\tilde{\sigma}\) be the local homeomorphism of Theorem 4.3. Let \(\varepsilon\) be as in the statement of Theorem 4.3. Then the pair \((\mathcal{J},\tilde{\sigma})\) satisfies Wieler's axioms for \(\gamma=\frac{1}{2}\), \(K=1\) and \(\varepsilon^{\prime}=\frac{\varepsilon}{2}\)._ Proof.: Theorem 4.3 (1) shows that if \(d_{\mathcal{J}}([x],[y])<\varepsilon^{\prime}\) then \[d_{\mathcal{J}}(\tilde{\sigma}^{K}([x]),\tilde{\sigma}^{K}([y])) =d_{\mathcal{J}}(\tilde{\sigma}([x]),\tilde{\sigma}([y]))\] \[=\frac{1}{2}d_{\mathcal{J}}(\tilde{\sigma}^{2}([x]),\tilde{\sigma }^{2}([y]))=\gamma d_{\mathcal{J}}(\tilde{\sigma}^{2K}([x]),\tilde{\sigma}^{ 2K}([y])),\] establishing Axiom 1. Theorem 4.3(2) shows that \(\tilde{\sigma}^{2K}(B([x],\gamma\alpha))=\tilde{\sigma}^{K}(B(\tilde{\sigma}^ {K}([x]),\alpha))\) for all \(\alpha\leq\varepsilon^{\prime}\) and \([x]\in\mathcal{J}\), establishing Axiom 2. We now identify the limit space \(\mathcal{J}_{\infty}\) obtained from Theorem 5.1 applied to \((\mathcal{J},\tilde{\sigma})\) with a quotient of the bi-infinite path space of \(E\). We define asymptotic equivalence on bi-infinite paths just as we define it for right-infinite paths. That is, if \((G,E)\) is a self-similar groupoid action and \(x,y\in E^{\mathbb{Z}}\), then \(x\sim_{\text{ae}}y\) if there exists a bi-infinite sequence \((g_{n})_{n\in\mathbb{Z}}\) in \(G\) such that \(\{g_{n}\mid n\in\mathbb{Z}\}\) is a finite set, and such that \(g_{n}\cdot x_{n}x_{n+1}x_{n+2}\dots=y_{n}y_{n+1}y_{n+2}\dots\) for all \(n\in\mathbb{Z}\). The argument of Lemma 3.6 shows that \(x,y\in E^{\mathbb{Z}}\) are asymptotically equivalent if and only if there is a sequence \((g_{n})_{n\in\mathbb{Z}}\) in \(\mathcal{N}\) such that \(g_{n}\cdot x_{n}x_{n+1}x_{n+2}\cdots=y_{n}y_{n+1}y_{n+2}\dots\) for all \(n\) and such that \(g_{n}|_{x_{n}}=g_{n+1}\) for all \(n\). **Definition 5.3**.: Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting, self-similar groupoid action. We write \(\mathcal{S}\) for the quotient space \(E^{\mathbb{Z}}/{\sim_{\mathrm{ae}}}\) and call this the _limit solenoid_ of \((G,E)\). We will need the following notation. Given a directed graph \(E\) with no sinks or sources, we will write \(\tau:E^{\mathbb{Z}}\to E^{\mathbb{Z}}\) for the translation homeomorphism \(\tau(x)_{n}=x_{n-1}\), \(n\in\mathbb{Z}\). For \(n\in\mathbb{Z}\) and \(x\in E^{-\infty}\) we will write \(x(-\infty,n)\) for the element of \(E^{-\infty}\) given by \(x(-\infty,n)=\dots x_{n-2}x_{n-1}x_{n}\). **Proposition 5.4**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, self-similar groupoid action. Let \(\mathcal{J}\) be the limit space of \((G,E)\), and let \(\tilde{\sigma}:\mathcal{J}\to\mathcal{J}\) be the map defined as in Theorem 4.3. 
Let \(\mathcal{J}_{\infty}:=\varprojlim(\mathcal{J},\tilde{\sigma})\), and let \(\tilde{\sigma}_{\infty}:\mathcal{J}_{\infty}\to\mathcal{J}_{\infty}\) be the homeomorphism defined as in Theorem 5.1. Let \(\mathcal{S}=E^{\mathbb{Z}}/{\sim_{\mathrm{ae}}}\) be the limit solenoid of \((G,E)\). Then there is a homeomorphism \(\theta:\mathcal{S}\to\mathcal{J}_{\infty}\) such that \(\theta([x])=([x(-\infty,-1)],[x(-\infty,0)],[x(-\infty,1)],\dots)\) for all \(x\in E^{\mathbb{Z}}\). We have \(\theta([\tau(x)])=\tilde{\sigma}_{\infty}(\theta([x]))\) for all \(x\in E^{\mathbb{Z}}\)._

Proof.: If \(x,y\in E^{\mathbb{Z}}\) satisfy \(x\sim_{\mathrm{ae}}y\), then \(x(-\infty,n)\sim_{\mathrm{ae}}y(-\infty,n)\) for all \(n\in\mathbb{Z}\). So the formula \(\theta([x])=([x(-\infty,-1)],[x(-\infty,0)],[x(-\infty,1)],\dots)\) is well defined and determines a map \(\theta:\mathcal{S}\to\prod_{n=1}^{\infty}\mathcal{J}\). By definition of \(\tilde{\sigma}\), we have \(\tilde{\sigma}([x(-\infty,n)])=[\sigma(x(-\infty,n))]=[x(-\infty,n-1)]\) for all \(n\), and so each \(\theta([x])\in\mathcal{J}_{\infty}\). Since \(E^{\mathbb{Z}}\) is compact, so is its continuous image \(\mathcal{S}\). Projective limits of Hausdorff spaces are Hausdorff, so \(\mathcal{J}_{\infty}\) is Hausdorff. So, to see that \(\theta\) is a homeomorphism, it suffices to show that it is a continuous bijection. The maps \(E^{\mathbb{Z}}\ni x\mapsto x(-\infty,n)\) indexed by \(n\in\mathbb{Z}\) are clearly continuous, and so the maps \(x\mapsto[x(-\infty,n)]\) are also continuous because the quotient map from \(E^{-\infty}\) to \(\mathcal{J}\) is continuous. Hence \(\bar{\theta}(x):=([x(-\infty,-1)],[x(-\infty,0)],[x(-\infty,1)],\dots)\) defines a continuous map \(\bar{\theta}:E^{\mathbb{Z}}\to\mathcal{J}_{\infty}\). Since \(\theta:\mathcal{S}\to\mathcal{J}_{\infty}\) is the map induced by \(\bar{\theta}\), it is also continuous.

To see that \(\theta\) is surjective, fix \((\zeta_{1},\zeta_{2},\zeta_{3},\dots)\in\mathcal{J}_{\infty}\). For each \(j\), choose \(x_{j}\in E^{-\infty}\) such that \([x_{j}]=\zeta_{j}\). Since each \(\tilde{\sigma}(\zeta_{j})=\zeta_{j-1}\), we have \(\sigma(x_{j})\sim_{\mathrm{ae}}x_{j-1}\) for all \(j\). For each \(j\geq 1\), consider the sequence \((\sigma^{n-j}(x_{n}))_{n=j}^{\infty}\) in \(E^{-\infty}\). We just saw that each \(\sigma^{n-j}(x_{n})\sim_{\mathrm{ae}}x_{j}\), and so Corollary 3.7 shows that this sequence contains just finitely many distinct elements of \(E^{-\infty}\). A standard Cantor diagonal argument yields a subsequence \((x_{n_{k}})\) such that \(\left(\sigma^{n_{k}-j}(x_{n_{k}})\right)_{n_{k}\geq j}\) is a constant sequence for each \(j\). Define \(y\in E^{\mathbb{Z}}\) by declaring \(y(-\infty,j)=\lim_{k}\sigma^{n_{k}-j}(x_{n_{k}})\) for each \(j\); this is well defined because each of these sequences is eventually constant, and the resulting left-infinite paths are compatible as \(j\) varies. By construction, \(\sigma(y(-\infty,j))=y(-\infty,j-1)\) for all \(j\). Also, by construction, each \(y(-\infty,n)\sim_{\mathrm{ae}}x_{n}\) and so \(\theta([y])=(\zeta_{1},\zeta_{2},\zeta_{3},\dots)\).

To show that \(\theta\) is injective, suppose that \(\theta([x])=\theta([y])\). We must show that \(x\sim_{\mathrm{ae}}y\). For this, fix \(m\in\mathbb{Z}\). It suffices to find \(g\in\mathcal{N}\) such that \(g\cdot x(m,\infty)=y(m,\infty)\). Since \(\theta([x])=\theta([y])\), we have \(x(-\infty,n)\sim_{\mathrm{ae}}y(-\infty,n)\) for all \(n\). Fix \(n\geq m\). Lemma 3.6 shows that there is a sequence \((g_{k,n})_{k\leq n}\) in \(\mathcal{N}\) such that \(g_{k,n}\cdot x_{k}\dots x_{n}=y_{k}\dots y_{n}\) for all \(k\).
Taking \(k=m\leq n\), we obtain \(g_{n}:=g_{m,n}\in\mathcal{N}\) such that \(g_{n}\cdot x_{m}\dots x_{n}=y_{m}\dots y_{n}\). Since \(\mathcal{N}\) is finite, the sequence \((g_{n})_{n\geq m}\) has a constant subsequence. The constant value \(g\) of this subsequence then satisfies \(g\cdot x_{m}\dots x_{n}=y_{m}\dots y_{n}\) for infinitely many \(n\geq m\). It then follows that \(g\cdot x(m,\infty)=y(m,\infty)\) as required.

It remains to check that \(\theta([\tau(x)])=\tilde{\sigma}_{\infty}(\theta([x]))\) for all \(x\in E^{\mathbb{Z}}\). This follows from direct calculation: for \(x\in E^{\mathbb{Z}}\),

\[\theta([\tau(x)]) =\left([\tau(x)(-\infty,-1)],[\tau(x)(-\infty,0)],[\tau(x)(-\infty,1)],\dots\right)\]
\[=\left([x(-\infty,-2)],[x(-\infty,-1)],[x(-\infty,0)],\dots\right)\]
\[=\left(\tilde{\sigma}([x(-\infty,-1)]),[x(-\infty,-1)],[x(-\infty,0)],\dots\right)\]
\[=\tilde{\sigma}_{\infty}(\theta([x])).\qed\]

**Corollary 5.5**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\mathcal{S}:=E^{\mathbb{Z}}/{\sim_{\mathrm{ae}}}\) be the limit solenoid of \((G,E)\). Then there is a homeomorphism \(\tilde{\tau}:\mathcal{S}\to\mathcal{S}\) such that \(\tilde{\tau}([x])=[\tau(x)]\) for all \(x\in E^{\mathbb{Z}}\). Let \(d_{\mathcal{J}}\) be the quotient metric on \(\mathcal{J}\) as in Corollary 3.13. There is a metric \(d_{\mathcal{S}}\) on \(\mathcal{S}\) such that_

\[d_{\mathcal{S}}([x],[y])=\sup_{m\in\mathbb{N}_{0}}(\frac{1}{2})^{m}d_{\mathcal{J}}([x(-\infty,m)],[y(-\infty,m)])\]

_for all \(x,y\in E^{\mathbb{Z}}\). There is a constant \(\varepsilon_{\mathcal{S}}\) such that \((\mathcal{S},d_{\mathcal{S}},\tilde{\tau},\varepsilon_{\mathcal{S}},\frac{1}{2})\) is a Smale space with totally disconnected stable set._

_If \(E\) is strongly connected, then \((\mathcal{S},\tilde{\tau})\) is irreducible. If \(E\) is primitive, then \((\mathcal{S},\tilde{\tau})\) is topologically mixing._

Proof.: Theorem 5.1 shows that \((\mathcal{J}_{\infty},\tilde{\sigma}_{\infty})\) is a Smale space with totally disconnected stable set, and Proposition 5.4 shows that \((\mathcal{S},\tilde{\tau})\) is conjugate to \((\mathcal{J}_{\infty},\tilde{\sigma}_{\infty})\). Let \(d_{\infty}\) be the metric of Theorem 5.1. For \(x,y\in E^{\mathbb{Z}}\), we have

\[d_{\infty}(\theta([x]),\theta([y]))=2\sup_{m\in\mathbb{N}}(\frac{1}{2})^{m}d(\theta([x])_{m+1},\theta([y])_{m+1}).\]

Since \(\theta([x])_{m+1}=[x(-\infty,m)]\), we have \(d_{\infty}=d_{\mathcal{S}}\circ(\theta\times\theta)\). Hence, there is a constant \(\varepsilon_{\mathcal{S}}>0\) such that \((\mathcal{S},d_{\mathcal{S}},\tilde{\tau},\varepsilon_{\mathcal{S}},\frac{1}{2})\) is a Smale space.

For the irreducibility, by Wieler's Theorem it suffices to show that every point in \(\mathcal{J}\) is non-wandering and that \(\mathcal{J}\) admits a dense orbit. To see that every point is non-wandering, first observe that if \(x\in E^{-\infty}\) is periodic, say \(\sigma^{n}(x)=x\), then \([x]\in\mathcal{J}\) satisfies \(\tilde{\sigma}^{n}([x])=[\sigma^{n}(x)]=[x]\), so \([x]\) is also periodic. Since \(E\) is strongly connected, for each \(\lambda\in E^{*}\), there exists \(\mu\) in \(s(\lambda)E^{*}r(\lambda)\), and then \(x:=(\lambda\mu)^{\infty}\) is a periodic point in \(Z(\lambda)\). So there is a dense set of periodic points. It follows that the periodic points in \(\mathcal{J}\) are dense.
So for any \([y]\in\mathcal{J}\) and any open neighbourhood \(U\) of \([y]\), we can find \([x]\in U\) and \(n\geq 1\) such that \(\tilde{\sigma}^{n}([x])=[x]\), and so \([x]\in\tilde{\sigma}^{n}(U)\cap U\).

To see that \(\mathcal{J}\) has a dense orbit, let \(\lambda_{1},\lambda_{2},\lambda_{3}\dots\) be a listing of \(E^{*}\). For each \(i\geq 1\), choose \(\mu_{i}\in s(\lambda_{i})E^{*}r(\lambda_{i+1})\). Then \(x=\lambda_{1}\mu_{1}\lambda_{2}\mu_{2}\dots\in E^{\infty}\). For each \(\lambda\in E^{*}\), we have \(\lambda=\lambda_{i}\) for some \(i\), and then \(\sigma^{|\lambda_{1}\mu_{1}\dots\lambda_{i-1}\mu_{i-1}|}(x)\in Z(\lambda)\). Hence \(\{\sigma^{n}(x)\mid n\in\mathbb{N}\}\) is dense in \(E^{\infty}\). Hence \(\{\tilde{\sigma}^{n}([x])\mid n\in\mathbb{N}\}=q(\{\sigma^{n}(x)\mid n\in\mathbb{N}\})\) is a dense forward orbit in \(\mathcal{J}\).

If \(E\) is primitive, then \(\tau:E^{\mathbb{Z}}\to E^{\mathbb{Z}}\) is topologically mixing, see [15, Observation 7.2.2]. Since the quotient map \(q:E^{\mathbb{Z}}\to\mathcal{S}\) satisfies \(q\circ\tau=\tilde{\tau}\circ q\) and is surjective, \(\tau\) being topologically mixing implies \(\tilde{\tau}\) is topologically mixing.

## 6. The \(C^{*}\)-algebra of a self-similar groupoid action on a graph

In this section and the next, we will discuss two \(C^{*}\)-algebras associated to self-similar groupoid actions. The first of these is the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\) described by Laca-Raeburn-Ramagge-Whittaker in [17], see also [16, 20, 22]. Our main goal is to provide a groupoid model based on the one developed for self-similar group actions on graphs by Exel and Pardo [7]. This is the subject of the present section. In the next section, we consider the \(C^{*}\)-algebra obtained from the Deaconu-Renault groupoid of the dynamics \((\mathcal{J},\tilde{\sigma})\) of Section 4. Our main result will establish \(KK\)-duality between these two \(C^{*}\)-algebras for contracting, regular self-similar actions.

In [17], the Toeplitz algebra of a self-similar groupoid action is defined as the Toeplitz algebra of an associated Hilbert module. Then Proposition 4.4 of [17] provides an alternative description as the universal \(C^{*}\)-algebra for generators and relations. At the beginning of Section 8 of [17], the Cuntz-Pimsner algebra of the self-similar action is defined as the quotient of the Toeplitz algebra by the ideal determined by an additional Cuntz-Krieger-type relation. We follow [17] and define the \(C^{*}\)-algebra of a self-similar action in terms of generators and relations.

If \(G\) is a discrete groupoid, then a _unitary representation_ of \(G\) is a function \(g\mapsto u_{g}\) from \(G\) to a \(C^{*}\)-algebra such that \(u_{g}u_{h}=\delta_{d(g),c(h)}u_{gh}\) and \(u_{g^{-1}}=u_{g}^{*}\) for all \(g,h\in G\). This is equivalent to the definition presented at the start of [17, Section 4]. If \(E\) is a finite directed graph with no sources, and \((G,E)\) is a self-similar groupoid action, then a _covariant representation_ of \((G,E)\) in a \(C^{*}\)-algebra \(A\) is a triple \((u,p,s)\) consisting of a unitary representation \(u:g\mapsto u_{g}\) of \(G\) in \(A\) and a Cuntz-Krieger \(E\)-family \((p,s)\in A\) such that \(p_{v}=u_{v}\) for all \(v\in E^{0}\), and such that

\[u_{g}s_{e}=s_{g\cdot e}u_{g|_{e}}\quad\text{ for all }g\in G\text{ and }e\in d(g)E^{1}.\]

We have \(p_{v}u_{g}=\delta_{v,c(g)}u_{g}\) and \(u_{g}p_{v}=\delta_{d(g),v}u_{g}\) because \(p_{v}=u_{v}\). If \(d(g)\neq r(e)\), then \(u_{g}s_{e}=u_{g}p_{d(g)}p_{r(e)}s_{e}=0\). So the relations we have just presented are equivalent to those of [17, Proposition 4.4] combined with the additional relation determining the generators of the ideal \(I\) described in [17, Equation (8.1)]. It follows from Proposition 4.4 and the definition of \(\mathcal{O}(G,E)\) in [17] that the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\) is the universal \(C^{*}\)-algebra generated by a covariant representation of \((G,E)\).
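As an aside (this is not one of the relations above, but a routine consequence of them together with the identities \(g\cdot(e\mu)=(g\cdot e)(g|_{e}\cdot\mu)\) and \(g|_{e\mu}=(g|_{e})|_{\mu}\)), the covariance relation propagates from edges to paths: writing \(s_{\mu}:=s_{e_{1}}\cdots s_{e_{n}}\) for \(\mu=e_{1}\dots e_{n}\in d(g)E^{*}\), we have

\[u_{g}s_{e_{1}}s_{e_{2}}=s_{g\cdot e_{1}}u_{g|_{e_{1}}}s_{e_{2}}=s_{g\cdot e_{1}}s_{(g|_{e_{1}})\cdot e_{2}}u_{(g|_{e_{1}})|_{e_{2}}}=s_{g\cdot(e_{1}e_{2})}u_{g|_{e_{1}e_{2}}},\]

and an induction gives \(u_{g}s_{\mu}=s_{g\cdot\mu}u_{g|_{\mu}}\) for every \(\mu\in d(g)E^{*}\). We use this form of the relation implicitly below, for instance in the proof of Proposition 6.5.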
Our first step is to describe a groupoid model for \(\mathcal{O}(G,E)\). Our construction is based on that of [7].

**Lemma 6.1**.: _Let \(E\) be a finite graph, and let \((G,E)\) be a self-similar groupoid action. The set_

\[S_{G,E}:=\{0\}\cup\{(\mu,g,\nu)\in E^{*}\times G\times E^{*}\mid s(\mu)=c(g)\text{ and }s(\nu)=d(g)\}\]

_is an inverse semigroup with respect to the multiplication given by_

\[(\alpha,g,\beta)(\mu,h,\nu)=\begin{cases}(\alpha(g\cdot\mu^{\prime}),g|_{\mu^{\prime}}h,\nu)&\text{ if }\mu=\beta\mu^{\prime}\\ (\alpha,gh|_{h^{-1}\cdot\beta^{\prime}},\nu(h^{-1}\cdot\beta^{\prime}))&\text{ if }\beta=\mu\beta^{\prime}\\ 0&\text{ otherwise.}\end{cases}\]

_There is an action of \(S_{G,E}\) on \(E^{\infty}\) such that \(\operatorname{Dom}(\mu,g,\nu)=Z[\nu)\), and_

\[(\mu,g,\nu)\cdot\nu x=\mu g\cdot x\quad\text{ for all }x\in Z[s(\nu)).\]

Proof.: It is routine, though tedious, to check that this multiplication is associative. For each \(a:=(\mu,g,\nu)\), the element \(a^{*}:=(\nu,g^{-1},\mu)\) satisfies \(aa^{*}a=a\) and \(a^{*}aa^{*}=a^{*}\). Direct computation shows that the formula \((\mu,g,\nu)\cdot\nu x=\mu g\cdot x\) defines a homeomorphism from \(Z[\nu)\) to \(Z[\mu)\). A routine calculation very similar to the associativity calculation shows that \(a\cdot(b\cdot x)=(ab)\cdot x\) whenever both sides are defined.
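To indicate where the case analysis in the multiplication formula comes from (this is a sketch of the verification that the proof calls routine, not a new argument), consider the second case, \(\beta=\mu\beta^{\prime}\). For \(x=(h^{-1}\cdot\beta^{\prime})x^{\prime}\in Z[h^{-1}\cdot\beta^{\prime})\) we have

\[(\mu,h,\nu)\cdot\nu x=\mu(h\cdot x)=\mu\beta^{\prime}\big{(}(h|_{h^{-1}\cdot\beta^{\prime}})\cdot x^{\prime}\big{)}=\beta\big{(}(h|_{h^{-1}\cdot\beta^{\prime}})\cdot x^{\prime}\big{)},\]

and hence

\[(\alpha,g,\beta)\cdot\big{(}(\mu,h,\nu)\cdot\nu x\big{)}=\alpha\big{(}(gh|_{h^{-1}\cdot\beta^{\prime}})\cdot x^{\prime}\big{)}=\big{(}\alpha,gh|_{h^{-1}\cdot\beta^{\prime}},\nu(h^{-1}\cdot\beta^{\prime})\big{)}\cdot\nu x,\]

so on \(Z[\nu(h^{-1}\cdot\beta^{\prime}))\) the composite agrees with \((\alpha,gh|_{h^{-1}\cdot\beta^{\prime}},\nu(h^{-1}\cdot\beta^{\prime}))\); points of \(Z[\nu)\) outside this cylinder are not mapped into \(\operatorname{Dom}(\alpha,g,\beta)=Z[\beta)\) by \((\mu,h,\nu)\), which accounts for the smaller domain of the product. The first case is checked similarly.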
Given any action of an inverse semigroup \(S\) on a locally compact Hausdorff space \(X\), we can form the associated groupoid of germs \(S\ltimes X\) as follows [23, Section 4.3]: we define an equivalence relation on \(\{(s,x)\mid s\in S,x\in\operatorname{Dom}(s)\}\) by \((s,x)\sim(t,y)\) if \(x=y=:z\) and there is an idempotent \(e\in S\) such that \(z\in\operatorname{Dom}(e)\) and \(se=te\). The topology has basic open sets \(W(s,V):=\{[s,x]\mid x\in V\}\) indexed by pairs \((s,V)\) consisting of an element \(s\in S\) and an open set \(V\subseteq\operatorname{Dom}(s)\). The unit space of this groupoid is \(X\), and the groupoid operations are given by

\[s([t,x])=x,\quad r([t,x])=t\cdot x,\quad[t,u\cdot x][u,x]=[tu,x],\quad\text{ and }\quad[t,x]^{-1}=[t^{*},t\cdot x].\]

Though this groupoid need not be Hausdorff, it is always etale with Hausdorff unit space \(X\), and hence locally Hausdorff with a basis of open bisections. The \(C^{*}\)-algebra of this groupoid is the completion of the \({}^{*}\)-algebra

\[\mathcal{C}(S\ltimes X)=\operatorname{span}\{C_{\text{c}}(U)\mid U\text{ is an open bisection}\}\]

in a universal norm. A very nice account of this construction can be found in [6].

**Definition 6.2**.: Let \(E\) be a finite directed graph, and let \((G,E)\) be a self-similar groupoid action. The groupoid of \((G,E)\), denoted \(S_{G,E}\ltimes E^{\infty}\), is defined to be the groupoid of germs for the action of \(S_{G,E}\) on \(E^{\infty}\) as above.

To establish our duality theorem later, we will describe a groupoid equivalence between the groupoid \(S_{G,E}\ltimes E^{\infty}\) and the stable groupoid of the Smale space constructed in Section 5. To do this, it will be helpful first to establish a description of \(S_{G,E}\ltimes E^{\infty}\) as a kind of lag groupoid. A related description for self-similar group actions appears in [7, Section 8], though there the lag takes values in the "sequence group" \(\prod_{i=1}^{\infty}G/\bigoplus_{i=1}^{\infty}G\). We will give yet another description, which is particularly well suited to our application to \(KK\)-duality later. We also characterise exactly when this groupoid is Hausdorff, by characterising exactly which pairs of elements (if any) cannot be separated by disjoint open neighbourhoods.

Our groupoid is based on the left-shift map on \(E^{\infty}\) given by \(x_{1}x_{2}x_{3}\cdots\mapsto x_{2}x_{3}\dots\). In the graph-algebra literature, it is standard to denote this shift map by \(\sigma\), but we have already used that symbol for the right-shift on \(E^{-\infty}\). We will instead use \(\varsigma\) for the left-shift map.

**Lemma 6.3**.: _Let \(E\) be a finite directed graph, and let \((G,E)\) be a self-similar groupoid action. There is an equivalence relation \(\sim\) on_

\[\{(x,m,g,n,y)\in E^{\infty}\times\mathbb{N}\times G\times\mathbb{N}\times E^{\infty}\mid d(g)=r(\varsigma^{n}(y))\text{ and }\varsigma^{m}(x)=g\cdot\varsigma^{n}(y)\}\]

_such that \((x,m,g,n,y)\sim(w,p,h,q,z)\) if and only if_

* \(x=w\)_,_ \(y=z\) _and_ \(m-n=p-q\)_, and_
* _there exists_ \(l\geq\max\{n,q\}\) _such that_ \(g|_{y(n,l)}=h|_{z(q,l)}\)_._

_We write \([x,m,g,n,y]\) for the equivalence class of \((x,m,g,n,y)\) under \(\sim\). The set_

\[\mathcal{G}_{G,E}:=\{[x,m,g,n,y]\mid d(g)=r(\varsigma^{n}(y))\text{ and }\varsigma^{m}(x)=g\cdot\varsigma^{n}(y)\}\]

_is an algebraic groupoid with unit space \(\{[x,0,r(x),0,x]\mid x\in E^{\infty}\}\) identified with \(E^{\infty}\), range and source maps \(r([x,m,g,n,y])=x\) and \(s([x,m,g,n,y])=y\), and operations_

\[[x,m,g,n,y][y,p,h,q,z]=[x,m+p,g|_{y(n,n+p)}h|_{z(q,q+n)},n+q,z],\quad\text{ and }\]
\[[x,m,g,n,y]^{-1}=[y,n,g^{-1},m,x].\]

_There is an injective homomorphism \(\iota\) of the graph groupoid \(\mathcal{G}_{E}\) into \(\mathcal{G}_{G,E}\) given by \(\iota(x,m-n,y)=[x,m,s(x_{m}),n,y]\) whenever \(x,y\in E^{\infty}\) satisfy \(\varsigma^{m}(x)=\varsigma^{n}(y)\)._

Proof.: Reflexivity and symmetry of the relation \(\sim\) are clear. For transitivity, suppose that \((x,m,g,n,y)\sim(x^{\prime},m^{\prime},g^{\prime},n^{\prime},y^{\prime})\) and \((x^{\prime},m^{\prime},g^{\prime},n^{\prime},y^{\prime})\sim(x^{\prime\prime},m^{\prime\prime},g^{\prime\prime},n^{\prime\prime},y^{\prime\prime})\). Then \(x=x^{\prime}=x^{\prime\prime}\), \(y=y^{\prime}=y^{\prime\prime}\) and \(m-n=m^{\prime}-n^{\prime}=m^{\prime\prime}-n^{\prime\prime}\). Choose \(l\geq n,n^{\prime}\) and \(l^{\prime}\geq n^{\prime},n^{\prime\prime}\) with \(g|_{y(n,l)}=g^{\prime}|_{y(n^{\prime},l)}\) and \(g^{\prime}|_{y(n^{\prime},l^{\prime})}=g^{\prime\prime}|_{y(n^{\prime\prime},l^{\prime})}\), and put \(L=\max\{l,l^{\prime}\}\).
Then \(L\geq n,n^{\prime\prime}\), and we have \[g|_{y(n,L)} =\big{(}g|_{y(n,l)}\big{)}|_{y(l,L)}=\big{(}g^{\prime}|_{y(n^{ \prime},l)}\big{)}_{y(l,L)}=g^{\prime}|_{y(n^{\prime},L)}\] \[=\big{(}g^{\prime}|_{y(n^{\prime},l^{\prime})}\big{)}|_{y(l^{ \prime},L)}=\big{(}g^{\prime\prime}|_{y(n^{\prime\prime},l^{\prime})}\big{)}|_ {y(l^{\prime},L)}=g^{\prime\prime}|_{y(n^{\prime\prime},L)}.\] To show that \(\mathcal{G}_{G,E}\) is a groupoid, we first check that if \([x,m,g,n,y]\) and \([y,p,h,q,z]\) belong to \(\mathcal{G}_{G,E}\), then so does \([x,m+p,g|_{y(n,n+p)}h|_{z(q,q+n)},n+q,z]\). For this, just check: \[(g|_{y(n,n+p)})(h|_{z(q,q+n)})\cdot\varsigma^{q+n}(z) =(g|_{y(n,n+p)})\cdot\varsigma^{n}(h\cdot\varsigma^{q}(z))\] \[=(g|_{y(n,n+p)})\cdot\varsigma^{n+p}(y)=\varsigma^{p}(g\cdot \varsigma^{n}(y))=\varsigma^{m+p}(x).\] The range and source maps are well-defined by definition. We must check that multiplication is well-defined. First suppose that \([x,m,g,n,y]=[x^{\prime},m^{\prime},g^{\prime},n^{\prime},y^{\prime}]\); so \(x=x^{\prime}\), \(y=y^{\prime}\) and \(m-n=m^{\prime}-n^{\prime}\), and there exists \(l\geq n,n^{\prime}\) such that \(g|_{y(n,l)}=g^{\prime}|_{y(n^{\prime},l)}\). Fix \([y,p,h,q,z]\in\mathcal{G}_{G,E}\). We must show that \[[x,m+p,g|_{y(n,n+p)}h|_{z(q,q+n)},n+q,z]=[x,m^{\prime}+p,g^{\prime}|_{y(n^{ \prime},n^{\prime}+p)}h|_{z(q,q+n^{\prime})},n^{\prime}+q,z].\] We have \(m+p-(m^{\prime}+p)=n+q-(n^{\prime}+q)\). We will show that \[\big{(}g|_{y(n,n+p)}h|_{z(q,q+n)}\big{)}|_{z(q+n,q+l)}=\big{(}g^{\prime}|_{y( n^{\prime},n^{\prime}+p)}h|_{z(q,q+n^{\prime})}\big{)}|_{z(q+n^{\prime},q+l)}.\] We have \[\big{(}g|_{y(n,n+p)}h|_{z(q,q+n)}\big{)}|_{z(q+n,q+l)}=\big{(}(g|_{y(n,n+p)}) |_{h|_{z(q,q+n)}\cdot z(q+n,q+l)}\big{)}h|_{z(q,q+l)},\] and similarly \[\big{(}g^{\prime}|_{y(n^{\prime},n^{\prime}+p)}h|_{z(q,q+n^{\prime})}\big{)}|_ {z(q+n^{\prime},q+l)}=\big{(}(g^{\prime}|_{y(n^{\prime},n^{\prime}+p)})|_{h|_{ z(q,q+n^{\prime})}\cdot z(q+n^{\prime},q+l)}\big{)}h|_{z(q,q+l)}.\] So we need to check that \((g|_{y(n,n+p)})|_{h|_{z(q,q+n)}\cdot z(q+n,q+l)}=(g^{\prime}|_{y(n^{\prime},n^ {\prime}+p)})|_{h|_{z(q,q+n^{\prime})}\cdot z(q+n^{\prime},q+l)}\). For this, we observe that \(h|_{z(q,q+n)}\cdot z(q+n,q+l)=(h\cdot\varsigma^{q}(z))(n,l)\), which is equal to \(\varsigma^{p}(y)(n,l)\) because \([y,p,h,q,z]\in\mathcal{G}_{G,E}\). Hence \[(g|_{y(n,n+p)})|_{h|_{z(q,q+n)}\cdot z(q+n,q+l)} =(g|_{y(n,n+p)})|_{y(n+p,l+p)}=(g|_{y(n,l)})|_{y(l,l+p)}=(g^{ \prime}|_{y(n^{\prime},l)})|_{y(l,l+p)}\] \[=(g^{\prime}|_{y(n^{\prime},n^{\prime}+p)})|_{y(n^{\prime}+p,l+p)} =(g^{\prime}|_{y(n^{\prime},n^{\prime}+p)})|_{h|_{z(q,q+n^{\prime})}\cdot z(q+n ^{\prime},q+l)}\] as required. A very similar calculation shows that if \([y,p,h,q,z]=[y^{\prime},p^{\prime},h^{\prime},q^{\prime},z^{\prime}]\) and \([x,m,p,n,y]\in\mathcal{G}_{G,E}\), then \[[x,m+p,g|_{y(n,n+p)}h|_{z(q,q+n)},n+q,z]=[x,m+p^{\prime},g|_{y(n,n+p^{\prime})} h^{\prime}|_{z(q^{\prime},q^{\prime}+n)},n+q^{\prime},z],\] so multiplication in \(\mathcal{G}_{G,E}\) is well-defined. It is routine that \([x,0,r(x),0,x][x,m,g,n,y]=[x,m,g,n,y]=[x,m,g,n,y][y,0,r(y),0,y]\) for all \([x,m,g,n,y]\in\mathcal{G}_{G,E}\), so that \(\mathcal{G}_{G,E}\) admits units. 
We have \[[x,m,g,n,y][y,n,g^{-1},m,x]=[x,m+n,g|_{y(n,2n)}g^{-1}|_{x(m,m+n)},m+n,x].\] We calculate: \[g^{-1}|_{x(m,m+n)}=g^{-1}|_{\varsigma^{m}(x)(0,n)}=g^{-1}|_{g\cdot(\varsigma^{n} (y)(0,n))}=\big{(}g|_{\varsigma^{n}(y)(0,n)}\big{)}^{-1}.\] So \([x,m+n,g|_{y(n,2n)}g^{-1}|_{x(m,m+n)},m+n,x]=[x,m+n,r(x),m+n,x]\). We now have that \((x,p,r(x),p,x)\sim(x,q,r(x),q,x)\) for all \(x,p,q\), and we deduce that \([y,n,g^{-1},m,x]\) is an inverse for \([x,m,g,n,y]\). Associativity of the multiplication described follows from straightforward calculations like those above, and we deduce that \(\mathcal{G}_{G,E}\) is a groupoid. Using the definition of \(\sim\), we see that \([x,m,r(x),n,y]\sim[x^{\prime},m^{\prime},r(x^{\prime}),n^{\prime},y^{\prime}]\) if and only if \(x=x^{\prime}\), \(y=y^{\prime}\) and \(m-n=m^{\prime}-n^{\prime}\), and it follows that \(\iota(x,m-n,y)=[x,m,e,n,y]\) defines a groupoid homomorphism \(\mathcal{G}_{E}\to\mathcal{G}_{G,E}\), which is injective by definition of \(\sim\). We now describe an algebraic isomorphism of \(S_{G,E}\ltimes E^{\infty}\) onto the groupoid \(\mathcal{G}_{G,E}\) of Lemma 6.3, and use it to define an etale topology on \(\mathcal{G}_{G,E}\). **Lemma 6.4**.: _Let \(E\) be a finite directed graph, and let \((G,E)\) be a self-similar groupoid action. Let \(\mathcal{G}_{G,E}\) be the groupoid of Lemma 6.3, and let \(S_{G,E}\ltimes E^{\infty}\) be the groupoid of germs described in Definition 6.2. There is an algebraic isomorphism \(\psi:S_{G,E}\ltimes E^{\infty}\to\mathcal{G}_{G,E}\) such that \(\psi([(\mu,g,\nu),\nu x])=[\mu(g\cdot x),|\mu|,g,|\nu|,\nu x]\) for all \((\mu,g,\nu)\) in \(S_{G,E}\) and \(x\in Z[s(\nu))\). The sets_ \[Z(\mu,g,\nu):=\{[\mu(g\cdot y),|\mu|,g,|\nu|,\nu y]\mid y\in Z[s(\nu))\}\] _indexed by triples \((\mu,g,\nu)\in E^{*}\times G\times E^{*}\) such that \(s(\mu)=g\cdot s(\nu)\) constitute a basis of compact open sets for a locally Hausdorff topology on \(\mathcal{G}_{G,E}\) on which the range and source maps are homeomorphisms. Under this topology, \(\mathcal{G}_{G,E}\) is an etale groupoid._ Proof.: Define \[\psi^{0}:\{\big{(}(\mu,g,\nu),\nu x\big{)}\mid(\mu,g,\nu)\in S_{E,G},x\in s( \nu)E^{\infty}\}\to\mathcal{G}_{G,E}\] by \(\psi^{0}\big{(}((\mu,g,\nu),\nu x)\big{)}=[\mu g\cdot x,|\mu|,g,|\nu|,\nu x]\). We claim that \[[(\mu,g,\nu),\nu x]=[(\alpha,h,\beta),\beta y]\quad\Longleftrightarrow\quad \psi^{0}\big{(}(\mu,g,\nu),\nu x\big{)}=\psi^{0}\big{(}(\alpha,h,\beta),\beta y \big{)}. \tag{6.1}\] For this, first suppose that \([(\mu,g,\nu),\nu x]=[(\alpha,h,\beta),\beta y]\). Then \(\nu x=\beta y\), and there is an idempotent \((\lambda,e,\lambda)\) of \(S_{G,E}\) such that \(x\in Z[\lambda)\) and \((\mu,g,\nu)(\lambda,e,\lambda)=(\alpha,h,\beta)(\lambda,e,\lambda)\). Without loss of generality, we may assume that \(\lambda=x(0,n)\) with \(n\geq|\nu|,|\beta|\). So \(\lambda=\nu\nu^{\prime}=\beta\beta^{\prime}\). Since \((\lambda,e,\lambda)=(\lambda,e,\lambda)(\lambda,e,\lambda)=(\lambda,e^{2},\lambda)\), we have \(e^{2}=e\), so \(e=s(\lambda)\). Hence \[(\mu g\cdot\nu^{\prime},g|_{\nu^{\prime}},\nu\nu^{\prime})=(\mu,g,\nu)( \lambda,e,\lambda)=(\alpha,h,\beta)(\lambda,e,\lambda)=(\alpha h\cdot\beta^{ \prime},h|_{\beta^{\prime}},\beta\beta^{\prime}).\] In particular, \(g|_{(\nu x)(|\nu|,|\lambda|)}=g|_{\nu^{\prime}}=h|_{\beta^{\prime}}=h|_{( \alpha y)(|\alpha|,|\lambda|)}\). 
Also, \(\mu(g\cdot x)=\mu(g\cdot\nu^{\prime})g|_{\nu^{\prime}}\cdot\varsigma^{|\nu^{ \prime}|}(x)=\alpha(h\cdot\beta^{\prime})h|_{\beta^{\prime}}\cdot\varsigma^{| \beta^{\prime}|}(y)=\alpha(h\cdot y)\). Hence \[\psi^{0}\big{(}(\mu,g,\nu),\nu x\big{)} =[\mu(g\cdot x),|\mu|,g,|\nu|,\nu x]\] \[=[\mu(g\cdot x),|\mu|+|\nu^{\prime}|,g|_{(\nu x)(|\nu|,|\nu|+|\nu ^{\prime}|)},|\nu|+|\nu^{\prime}|,\nu x]\] \[=[\alpha(h\cdot y),|\alpha|+|\beta^{\prime}|,h|_{(\beta y)(| \beta|,|\beta|+|\beta^{\prime}|)},|\beta|+|\beta^{\prime}|,\beta y]\] \[=\psi^{0}\big{(}(\alpha,h,\beta),\beta y\big{)}.\] Conversely suppose that \(\psi^{0}\big{(}(\mu,g,\nu),\nu x\big{)}=\psi^{0}\big{(}(\alpha,h,\beta),\beta y \big{)}\). Then \(|\mu|-|\nu|=|\alpha|-|\beta|\), \(\nu x=\beta y\), \(\mu g\cdot x=\alpha h\cdot y\), and there exists \(l\geq|\nu|,|\beta|\) such that \(g|_{x(0,l-|\nu|)}=h|_{y(0,l-|\beta|)}\). Let \(e=s(y_{l-|\beta|})=s(x_{l-|\nu|})\). Then \[(\alpha,h,\beta) (\beta y(0,l-|\beta|),e,\beta y(0,l-|\beta|))\] \[=(\alpha h\cdot y(0,l-|\beta|),h|_{y(0,l-|\beta|)},(\beta y)(0,l))\] \[=(\mu g\cdot x(0,l-|\nu|),g|_{x(0,l-|\nu|)},(\nu x)(0,l))\] \[=(\mu,g,\nu)(\nu x(0,l-|\nu|),e,\nu x(0,l-|\nu|)).\] Since \(\nu x(0,l-|\nu|)=(\nu x)(0,l)=(\beta y)(0,l)=\beta y(0,l-|\beta|)\), we obtain \[(\alpha,h,\beta)(\beta y(0,l-|\beta|),e,\beta y(0,l-|\beta|))=(\mu,g,\nu)( \beta y(0,l-|\beta|),e,\beta y(0,l-|\beta|)).\] Hence \([(\mu,g,\nu),\nu x]=[(\alpha,h,\beta),\beta y]\). This completes the proof of (6.1). It follows that \(\psi^{0}\) descends to an injective map \(\psi:S_{G,E}\ltimes E^{\infty}\to\mathcal{G}_{G,E}\). To see that \(\psi\) is surjective, fix \([x,m,g,n,y]\in\mathcal{G}_{G,E}\), let \(z:=\varsigma^{n}(y)\), \(\nu=y(0,n)\) and \(\mu=x(0,m)\), and observe that \([x,m,g,n,y]=[\mu g\cdot z,|\mu|,g,|\nu|,\nu z]=\psi([(\mu,g\cdot\nu),\nu z])\). It is routine to check that \(\psi\) is multiplicative, and hence an algebraic isomorphism of groupoids as claimed. For \((\mu,g,\nu)\) with \(s(\mu)=g\cdot s(\nu)\) and an open set \(U\subseteq Z[\nu]\) in \(E^{\infty}\), let \[Z(\mu,g,\nu,U):=\{[\mu g\cdot x,|\mu|,g,|\nu|,\nu x]\mid\nu x\in U\}.\] Proposition 4.14 of [6] combined with the algebraic isomorphism \(\psi\) above shows that the sets \(Z(\mu,g,\nu,U)\) are a basis for a topology on \(\mathcal{G}_{G,E}\) under which it becomes a topological groupoid. If \([\mu g\cdot x,|\mu|,g,|\nu|,\nu x]\) is in \(Z(\mu,g,\nu,U)\), then by definition of the topology on \(E^{\infty}\) there exists \(n\in\mathbb{N}\) for which \(\nu^{\prime}:=x(0,n)\) is a path such that \(\nu x\in Z[\nu\nu^{\prime})\subseteq U\). Then, we have \[[\mu g\cdot x,|\mu|,g,|\nu|,\nu x]\in Z(\mu g\cdot\nu^{\prime},g|_{\nu^{\prime }},\nu\nu^{\prime})=Z(\mu,g,\nu,Z(\nu^{\prime}))\subseteq Z(\mu,g,\nu,U).\] So the \(Z(\mu,g,\nu)\) are a basis for the same topology as the \(Z(\mu,g,\nu,U)\). Now Proposition 4.15 of [6] shows that the range and source maps restrict to homeomorphisms \(r:Z(\mu,g,\nu)\to Z[\mu)\) and \(s:Z(\mu,g,\nu)\to Z[\nu)\). Since the \(Z[\nu)\) are compact and Hausdorff, we deduce that the \(Z(\mu,g,\nu)\) are also compact and Hausdorff. It follows that \(\mathcal{G}_{G,E}\) is locally Hausdorff and etale as claimed. We now show that the \(C^{*}\)-algebra of the groupoid \(\mathcal{G}_{G,E}\) just constructed coincides with the \(C^{*}\)-algebra of the self-similar action \((G,E)\). 
Note that for each \(\mu\in E^{*}\) and \(g\in G\) with \(c(g)=s(\mu)\), we have \((\mu,g,d(g))\in S_{G,E}\), and \(\operatorname{Dom}((\mu,g,d(g)))=Z[d(g))\). Hence \(W((\mu,g,d(g)),Z[\lambda))\) is a compact open subset of \(S_{G,E}\ltimes E^{\infty}\) for each \(\lambda\in d(g)E^{*}\). **Proposition 6.5**.: _Let \(E\) be a finite directed graph with no sources, and let \((G,E)\) be a self-similar groupoid action. Let \(\mathcal{G}_{G,E}\) be the groupoid described in Lemmas 6.3 and 6.4. There is an isomorphism \(\pi:\mathcal{O}(G,E)\to C^{*}(\mathcal{G}_{G,E})\) such that \(\pi(u_{g})=1_{Z(c(g),g,d(g))}\) and \(\pi(s_{e})=1_{Z(e,s(e),s(e))}\) for all \(g\in G\) and \(e\in E^{1}\)._ Proof.: By Lemma 6.4, it suffices to construct an isomorphism \(\pi:\mathcal{O}(G,E)\to C^{*}(S_{G,E}\ltimes E^{\infty})\) such that each \(\pi(u_{g})=1_{W((c(g),g,d(g)),Z[d(g)))}\), and each \(\pi(s_{e})=1_{W((c,s(e),s(e)),Z[s(e)))}\). For \(g\in G\), \(e\in E^{1}\) and \(v\in E^{0}\), define \[U_{g}:=1_{W((c(g),g,d(g)),Z[d(g)))},\quad S_{e}:=1_{W((e,s(e),s(e)),Z[s(e)))}, \quad\text{ and }\quad P_{v}:=1_{W((v,v,v),Z[v))}.\] Elementary calculations using the definition of multiplication in \(S_{G,E}\) shows that \((U,P,S)\) is a covariant representation of \((G,E)\in C^{*}(S_{G,E}\ltimes E^{\infty})\). It follows that there is a homomorphism \(\pi:\mathcal{O}(G,E)\to C^{*}(S_{G,E}\ltimes E^{\infty})\) satisfying the given formulas. For each \(\lambda\in E^{*}\), write \(\lambda=\lambda_{1}\lambda_{2}\dots\lambda_{n}\) as a concatenation of edges, and then define \(S_{\lambda}:=S_{\lambda_{1}}\dots S_{\lambda_{n}}\). Then \[1_{Z[\lambda)}=S_{\lambda}S_{\lambda}^{*}\in\pi(\mathcal{O}(G,E)).\] Since the \(Z[\lambda)\) constitute a basis for the topology on \(E^{\infty}\), it follows that \(C_{0}(E^{\infty})\subseteq\pi(\mathcal{O}(G,E))\). If \(V\) is a compact open bisection in \(S_{G,E}\ltimes E^{\infty}\), we can write it as a finite disjoint union of bisections of the form \(W((\mu,g,\nu),Z[\nu\alpha))\). We have \[1_{W((\mu,g,\nu),Z[\nu\alpha))}=1_{W((\mu g\cdot\alpha,g|_{\alpha},\nu\alpha),Z [s(\alpha)))}=S_{\mu g\cdot\alpha}U_{g|_{\alpha}}S_{\nu\alpha}^{*}\in\pi( \mathcal{O}(G,E)),\] and we deduce that the indicator function of each compact open bisection belongs to the range of \(\pi\). For each compact open bisection \(V\), indicator functions of this form linearly span a dense sub-algebra of \(C_{c}(V)\). It follows that \(\pi\) is surjective. It remains to show that \(\pi\) is injective. To do this, it suffices to construct a right inverse \(\rho:C^{*}(S_{G,E}\ltimes E^{\infty})\to\mathcal{O}(G,E)\) for \(\pi\). Observe that since \(S_{G,E}\ltimes E^{\infty}\) is the groupoid of germs of the action \(\theta\) of the inverse semigroup \(S_{G,E}\) on \(E^{\infty}\), [6, Theorem 8.5] shows that \(C^{*}(S_{G,E}\ltimes E^{\infty})\) is universal for representations, as defined in [6, Definition 8.1] of \((\theta,S_{G,E},E^{\infty})\). Since the \(s_{e}\) and \(p_{v}\) constitute a Cuntz-Krieger \(E\)-family in \(\mathcal{O}(G,E)\), there is a homomorphism \(\rho_{0}:C_{0}(E^{\infty})\to\mathcal{O}(G,E)\) such that \(\rho_{0}(1_{Z[\lambda)})=s_{\lambda}s_{\lambda}^{*}\) for each \(\lambda\). For each \((\mu,g,\nu)\in S_{G,E}\) define \(S_{(\mu,g,\nu)}:=s_{\mu}u_{g}s_{\nu}^{*}\). 
If \(\lambda=\nu\lambda^{\prime}\) then the relations in \(\mathcal{O}(G,E)\) give \[S_{(\mu,g,\nu)}\rho_{0}(1_{Z[\lambda)})S_{(\mu,g,\nu)}^{*}=s_{\mu}u_{g}s_{\nu }^{*}s_{\lambda}s_{\nu}u_{g^{-1}}s_{\mu}^{*}=s_{\mu g\cdot\lambda^{\prime}}s_{ \mu g\cdot\lambda^{\prime}}^{*},\] and then linearity and continuity imply that \(S_{(\mu,g,\nu)}\rho_{0}(f)S_{(\mu,g,\nu)}^{*}=\rho_{0}(f\circ\theta_{(\mu,g, \nu)^{*}})\) whenever \(f\) is supported on \(\operatorname{Dom}(\theta_{(\mu,g,\nu)})\). Routine calculations show that \(S_{a}S_{b}=S_{ab}\) and \(S_{a^{*}}=S_{a}^{*}\) for all \(a,b\in S_{G,E}\). So \((\rho_{0},S)\) is a representation of \((\theta,S_{G,E},E^{\infty})\) and it follows that there is a homomorphism \(\rho:C^{*}(S_{G,E}\ltimes E^{\infty})\to\mathcal{O}(G,E)\) such that \(\rho(f)=\rho_{0}(f)\) for all \(f\) in \(C_{0}(E^{\infty})\) and \(\rho(1_{W((\mu,g,\nu),Z[\nu))})=s_{\mu}u_{g}s_{\nu}^{*}\) for all \((\mu,g,\nu)\in S_{G,E}\). In particular, \(\rho\circ\pi\) fixes the generators of \(\mathcal{O}(G,E)\) and therefore \(\rho\circ\pi=\operatorname{id}_{\mathcal{O}(G,E)}\) as required. We conclude this section by characterising exactly when \(\mathcal{G}_{G,E}\) is Hausdorff. **Proposition 6.6**.: _Let \(E\) be a finite directed graph, and let \((G,E)\) be a self-similar groupoid action. Let \(\mathcal{G}_{G,E}\) be the groupoid of Lemmas 6.3 and 6.4. Points \([x,m,g,n,y]\) and \([w,p,h,q,z]\in\mathcal{G}_{G,E}\) are distinct but cannot be separated by disjoint open sets if and only if both of the following hold_ 1. \(x=w\)_,_ \(y=z\)_,_ \(m-n=p-q\) _and_ \(g|_{y(n,l)}\neq h|_{y(q,l)}\) _for all_ \(l\geq n,q\)_; and_ 2. _for every_ \(l\geq n,q\) _there exists_ \(\lambda\in s(y_{l})E^{*}\) _such that_ \(g|_{y(n,l)}\cdot\lambda=h|_{y(q,l)}\cdot\lambda\) _and_ \(g|_{y(n,l)\lambda}=h|_{y(q,l)\lambda}\)_._ Proof.: First, suppose that \([x,m,g,n,y]\) and \([w,p,h,q,z]\) are distinct but cannot be separated by open neighbourhoods. The range map \(r\), the source map \(s\) and the co-cycle map \(c:\mathcal{G}_{(G,E)}\mapsto\mathbb{Z}\), defined by \(c([x,m,g,n,y])=m-n\), are continuous mappings onto Hausdorff spaces, so since \([x,m,g,n,y]\) and \([w,p,h,q,z]\) cannot be separated by open neighbourhoods, their images under \(r,s\) and \(c\) coincide. Hence \(x=w\), \(y=z\) and \(m-n=p-q\). Since \([x,m,g,n,y]\neq[w,p,h,q,z]\), the definition of the equivalence relation of Lemma 6.3 forces \(g|_{y(n,l)}\neq h|_{y(q,l)}\) for all \(l\geq\max(n,q)\). Therefore, (1) is satisfied. For (2), let \(\xi=y(n,l)\), and \(\tau=y(q,l)\). Then \([x,m,g,n,y]\in Z(x(0,m)g\cdot\xi,g|_{\xi},y(0,n)\xi)\), and \([x,p,h,q,y]\) is in \(Z(x(0,p)h\cdot\tau,h|_{\tau},y(0,q)\tau)\). Let \(\gamma:=y(0,n)\xi=y(0,q)\tau\) and \(\omega:=x(0,m)g\cdot\xi=x(0,p)h\cdot\tau\). By assumption, \(Z(\omega,g|_{\xi},\gamma)\cap Z(\omega,h|_{\tau},\gamma)\neq\emptyset\), so there exist \(u\in s(\gamma)E^{\infty}\) and \(v\in s(\omega)E^{\infty}\) such that \(g|_{\xi}\cdot u=h|_{\tau}\cdot u=v\) and \([\omega v,|\omega|,g|_{\xi},|\gamma|,\gamma u]=[\omega v,|\omega|,h|_{\tau},| \gamma|,\gamma u].\) Hence, there exists \(k\) in \(\mathbb{N}\) such that \((g|_{\xi})|_{u(0,k)}=(h|_{\tau})|_{u(0,k)}\). Thus \(\lambda:=u(0,k)\in s(y_{l})E^{*}\) satisfies (2). Now suppose that (1) and (2) hold. The last part of (1) and the definition of \(\sim\) imply that \([x,m,g,n,y]\neq[w,p,h,q,z]\). Fix neighbourhoods \(U\ni[x,m,g,n,y]\) and \(V\ni[w,p,h,q,z]\). We must show that \(U\cap V\neq\varnothing\). 
By definition of the topology, there exists \(l\geq n,q\) such that \(Z(x(0,m-n+l),g|_{y(n,l)},y(0,l))\subseteq U\) and \(Z(x(0,p-q+l),h|_{y(q,l)},y(0,l))\subseteq V\). Fix \(\lambda\) as in condition (2) for this \(l\). Then \[U \ni\big{[}x(0,m-n+l)g|_{y(n,l)}\cdot(\lambda z),m-n+l+|\lambda|,g |_{y(n,l)\lambda},l+|\lambda|,y(0,l)\lambda z\big{]}\] \[=\big{[}x(0,p-q+l)h|_{y(q,l)}\cdot(\lambda z),p-q+l+|\lambda|,h |_{y(q,l)\lambda}),l+|\lambda|,y(0,l)\lambda z\big{]}\in V\] for all \(z\in Z[s(\lambda))\), so \(U\cap V\neq\emptyset\) For the following corollary, we use the following terminology adapted from [7, Definition 5.2]: if \((G,E)\) is a self-similar groupoid action on a finite graph \(E\), we say that a path \(\lambda\in E^{*}\) is _strongly fixed_ by an element \(g\in Gr(\lambda)\) if \(g\cdot\lambda=\lambda\) and \(g|_{\lambda}=s(\lambda)\). **Corollary 6.7**.: _Let \(E\) be a finite directed graph, and let \((G,E)\) be a self-similar groupoid action. Then the following are equivalent._ 1. _The groupoid_ \(\mathcal{G}_{G,E}\) _is Hausdorff._ 2. _The subgroupoid_ \(\mathcal{G}_{G,E}^{\mathbb{T}}:=\{[x,m,g,n,y]\in\mathcal{G}_{G,E}\mid m=n\}\) _is Hausdorff._ 3. _The subgroupoid_ \(\{[g\cdot x,0,g,0,x]\mid x\in E^{\infty}\text{ and }d(g)=r(x)\}\) _is Hausdorff._ 4. _If_ \(g\in G\) _and_ \(y\in E^{\infty}\) _satisfy_ \(g\cdot y=y\) _and_ \(g|_{y(0,n)}\neq s(y_{n})\) _for all_ \(n\)_, then there exists_ \(\lambda\in E^{*}\) _such that_ \(y\in Z[\lambda)\) _and no element of_ \(\lambda E^{*}\) _is strongly fixed by_ \(g\)_._ _In particular, if \((G,E)\) is regular then \(\mathcal{G}_{G,E}\) is Hausdorff._ Proof.: Since \(\mathcal{G}_{G,E}^{\mathbb{T}}\subseteq\mathcal{G}_{G,E}\) and \(\mathcal{G}_{G,E}^{\mathbb{T}}\) contains the subgroupoid of (3), we have \((1)\implies(2)\implies(3)\). For \((3)\implies(4)\), we prove the contrapositive. So suppose that (4) fails with respect to \(g\in G\) and \(y\in Z[d(g))\). That is \(g\cdot y=y\) and \(g|_{y(0,n)}\neq s(y_{n})\) for all \(n\), but for every \(\lambda\) such that \(y\in Z[\lambda)\) there exists \(\mu\in s(\lambda)E^{*}\) such that \(\lambda\mu\) is strongly fixed by \(g\). Equivalently, for every \(l\geq 0\), there exists \(\mu\in s(y_{l})E^{*}\) such that \(g\cdot(y(0,l)\mu)=y(0,l)\mu\) and \(g|_{y(0,l)\mu}=s(\mu)\). Hence Proposition 6.6 implies that \([y,0,g,0,y]\) and \([y,0,d(g),0,y]\) are distinct but cannot be separated by open sets. Hence the open subgroupoid \(\{[g\cdot x,0,g,0,x]\mid x\in E^{\infty}\text{ and }d(g)=r(x)\}\) is not Hausdorff. For \((4)\implies(1)\), suppose that (4) holds. Fix \([x,m,g,n,y]\) and \([w,p,h,q,z]\in\mathcal{G}_{G,E}\). It suffices to show that the conditions of Proposition 6.6 do not both hold for these points. To do this, we suppose that Proposition 6.6(1) holds, and show that Proposition 6.6(2) fails. Let \(l:=\max{(n,q)}\), let \(k:=(g|_{y(n,l)})^{-1}h|_{y(q,l)}\), and put \(y^{\prime}=\varsigma^{l}(y)\). Since \(x=w\), we have \((g|_{y(n,l)})\cdot y^{\prime}=h|_{y(q,l)}\cdot y^{\prime}\) and so \(k\cdot y^{\prime}=y^{\prime}\). Moreover, \(k|_{y^{\prime}(0,a)}=\big{(}(g|_{y(n,l)})^{-1}h|_{y(q,l)}\big{)}|_{y^{\prime} (0,a)}=\big{(}(g|_{y(n,l+a)})^{-1}h|_{y(q,l+a)}\big{)}\neq s(y_{(l+a)})\) for all \(a\). So \((4)\) implies that for large \(a\in\mathbb{N}\) and \(\lambda\in s(y^{\prime}_{a})E^{*}\), either \(k|_{y^{\prime}(0,a)}\cdot\lambda\neq\lambda\) or \(k|_{y^{\prime}(0,a)\lambda}\neq s(\lambda)\). 
That is, for large \(a\), for every \(\lambda\in s(y_{a})E^{*}\), either \(g|_{y(n,l+a)}\cdot\lambda\neq h|_{y(q,l+a)}\cdot\lambda\), or \(g|_{y(n,l+a)\lambda}\neq h|_{y(q,l+a)\lambda}\). Thus condition (2) of Proposition 6.6 fails for \([x,m,g,n,y]\) and \([w,p,h,q,z]\) as required. For the final statement, we show that if \((G,E)\) is regular, then (4) holds. Suppose that \(g\in G\) and \(y\in E^{\infty}\) satisfy \(g\cdot y=y\). Regularity gives \(n\in\mathbb{N}\) such that \(g\) pointwise fixes \(Z[y(0,n))\). Hence \(g|_{y(0,n)}\) pointwise fixes \(Z[y(n))\). Since self-similar groupoid actions are, by definition, faithful this implies that \(g|_{y(0,n)}=y(n)\in G^{(0)}\). So the hypothesis of (4) is never satisfied, and so (4) holds vacuously. We can characterise regularity of \((G,E)\) in terms of of \(\mathcal{G}_{G,E}\). Recall that a groupoid \(\mathcal{G}\) is _principal_ if its isotropy bundle \(\operatorname{Iso}(\mathcal{G})=\{g\in\mathcal{G}:r(g)=s(g)\}\) is equal to the unit space \(\mathcal{G}^{(0)}\). **Proposition 6.8**.: _Let \((G,E)\) be a self-similar groupoid action on a finite directed graph \(E\). Then, \((G,E)\) is regular if and only if \(\mathcal{G}_{G,E}^{\mathbb{T}}\) is principal._ Proof.: Suppose that \((G,E)\) is regular. Fix \(\gamma\in\operatorname{Iso}(\mathcal{G}_{G,E})\). Then \(\gamma=[x,n,g,n,x]\) for some \(x\in E^{\infty}\), \(n\geq 0\) and \(g\in G\) such that \(d(g)=r(\varsigma^{n}(x))\) and \(g\cdot\varsigma^{n}(x)=\varsigma^{n}(x)\). By regularity, there exists \(k\in\mathbb{N}\) such that \(g|_{x(n+1,n+k)}=s(x_{n+k})\). By definition of the equivalence relation \(\sim\) defining \(\mathcal{G}_{G,E}\) (see Lemma 6.3), we have \([x,n,g,n,x]=[x,n+k,s(x_{n+k}),n+k,x]\in\mathcal{G}_{G,E}^{(0)}\). Therefore, \(\mathcal{G}_{G,E}^{\mathbb{T}}\) is principal. Now, suppose that \(\mathcal{G}_{G,E}^{\mathbb{T}}\) is principal. If \(g\in G\) and \(x\in E^{\infty}\) satisfy \(d(g)=r(x)\) and \(g\cdot x=x\), then \([x,0,g,0,x]\in\text{Iso}(\mathcal{G}_{G,E})\). Therefore, \((x,0,g,0,x)\sim(x,n,s(x_{n}),n,x)\) for some \(n\). Hence there exists \(k\geq n\) such that \(g|_{x(0,k)}=s(x_{n})|_{x(n,k)}=s(x_{k})\). Therefore, \((G,E)\) is regular. ## 7. The dual algebra of a self-similar graph We now describe a second \(C^{*}\)-algebra associated to a contracting, regular self-similar groupoid action on a finite directed graph with no sources; namely the \(C^{*}\)-algebra of the Deaconu-Renault groupoid of the homeomorphism \(\tilde{\sigma}:\mathcal{J}\to\mathcal{J}\) of Section 4. Recall that if \(X\) is a locally compact Hausdorff space, and \(T:X\to X\) is a local homeomorphism, then \(\mathcal{G}_{X,T}\) is the set \[\mathcal{G}_{X,T}:=\{(x,m-n,y)\in X\times\mathbb{Z}\times X\mid m,n\in\mathbb{ N}_{0},T^{m}(x)=T^{n}(y)\},\] endowed with the topology arising from the basic open sets \[Z(U,m,n,V):=\{(x,m-n,y)\in U\times\{m-n\}\times V\mid T^{m}(x)=T^{n}(y)\}.\] The unit space is \(\mathcal{G}_{X,T}^{(0)}:=\{(x,0,x)\mid x\in X\}\) and is identified with \(X\). The groupoid structure is given by \[r(x,p,y):=x,\qquad s(x,p,y):=y,\] \[(x,p,y)(y,q,z):=x(p+q,z)\qquad\text{and}\qquad(x,p,y)^{-1}:=(y,-p,x).\] It is not hard to check that \[\{Z(U,m,n,V)\mid T^{m}|_{U}\text{ and }T^{n}|_{V}\text{ are homeomorphisms and }T^{m}(U)=T^{n}(V)\}\] is a basis of open bisections for the topology, so \(\mathcal{G}_{X,T}\) is etale. It is easy to see that it is Hausdorff, and it is locally compact because \(X\) is. 
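As a quick check that this multiplication (read as \((x,p,y)(y,q,z)=(x,p+q,z)\)) is well defined: if \(T^{m}(x)=T^{n}(y)\) and \(T^{a}(y)=T^{b}(z)\), then
\[T^{m+a}(x)=T^{a}(T^{m}(x))=T^{a}(T^{n}(y))=T^{n}(T^{a}(y))=T^{n}(T^{b}(z))=T^{n+b}(z),\]
so \((x,(m-n)+(a-b),z)=(x,(m+a)-(n+b),z)\) again belongs to \(\mathcal{G}_{X,T}\). No properties of \(T\) beyond being a map of \(X\) to itself are needed for this step.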
**Definition 7.1**.: Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\mathcal{J}\) and \(\tilde{\sigma}\) be the space and local homeomorphism of Definition 3.2 and Proposition 4.3. We define \(\widehat{\mathcal{G}}_{G,E}\) to be the Deaconu-Renault groupoid \(\widehat{\mathcal{G}}_{G,E}:=\mathcal{G}_{\mathcal{J},\tilde{\sigma}}\), and we define \(\widehat{\mathcal{O}}(G,E):=C^{*}(\widehat{\mathcal{G}}_{G,E})\), and call this the _dual \(C^{*}\)-algebra_ of \((G,E)\). Let \(\mathcal{G}_{\sigma}\) be the Deaconu-Renault groupoid associated to \(\sigma:E^{-\infty}\mapsto E^{-\infty}\). Recall from Section 3 the quotient map \(q:E^{-\infty}\mapsto\mathcal{J}\). Since \(q\circ\tilde{\sigma}=\sigma\circ q\), we see that \(q\) extends to a groupoid homomorphism \(q:\mathcal{G}_{\sigma}\mapsto\widehat{\mathcal{G}}_{G,E}\) which sends \((x,k,y)\in\mathcal{G}_{\sigma}\) to \(q(x,k,y)=(q(x),k,q(y))\). The next result will allow us to deduce properties of \(\widehat{\mathcal{G}}_{G,E}\) from those of \(\mathcal{G}_{\sigma}\). **Proposition 7.2**.: _Let \(E\) be a finite directed graph with no sources and \((G,E)\) a contracting regular self-similar groupoid action. For every \(y\in E^{-\infty}\), \(q:(\mathcal{G}_{\sigma})y\mapsto(\hat{\mathcal{G}}_{G,E})q(y)\) is a bijection, and \(q:\mathcal{G}_{\sigma}\mapsto\hat{\mathcal{G}}_{G,E}\) is proper._ Proof.: Suppose \(z\in\mathcal{J}\) and \(m,n\in\mathbb{N}\) satisfy \(\tilde{\sigma}^{n}(q(y))=\tilde{\sigma}^{m}(z):=w\). Proposition 4.5(1) implies that \(\sigma^{m+k}:q^{-1}(z)\mapsto q^{-1}(\tilde{\sigma}^{k}(w))\) and \(\sigma^{n+k}:q^{-1}(q(y))\mapsto q^{-1}(\tilde{\sigma}^{k}(w))\) are bijective for all \(k\geq 0\). Thus there is a unique \(x^{\prime}\) in \(q^{-1}(z)\) such that \(\sigma^{m+k}(x^{\prime})=\sigma^{n+k}(y)\), for all \(k\geq 0\). Hence, \((x^{\prime},m-n,y)\) is the unique element of \((\mathcal{G}_{\sigma})y\) such that \(q((x^{\prime},m-n,y))=(z,m-n,q(y))\). The same uniqueness property shows that if \(x,y\in E^{-\infty}\) and \(m,n,k\geq 0\) satisfy \(\sigma^{m+k}(x)=\sigma^{n+k}(y)\) and \(\tilde{\sigma}^{m}(q(x))=\tilde{\sigma}^{n}(q(y))\), then \(\sigma^{m}(x)=\sigma^{n}(y)\). It follows that \(q^{-1}(Z(\mathcal{J},m,n,\mathcal{J}))=Z(E^{-\infty},m,n,E^{-\infty})\) for any \(m,n\geq 0\). Since these sets form compact open coverings of the respective groupoids, \(q\) is proper. We investigate when \(\widehat{\mathcal{O}}(G,E)\) is simple. A groupoid \(\mathcal{G}\) is _minimal_ if \(\{r(\gamma)\mid s(\gamma)=x\}\) is dense in \(\mathcal{G}^{(0)}\) for every \(x\in\mathcal{G}^{(0)}\), and that an etale groupoid \(\mathcal{G}\) is _effective_ if the interior of \(\operatorname{Iso}(\mathcal{G})=\{\gamma\in\mathcal{G}\mid r(\gamma)=s(\gamma)\}\) is \(\mathcal{G}^{(0)}\). **Lemma 7.3**.: _Let \(E\) be a finite directed graph. Let \((G,E)\) be a contracting, regular self-similar groupoid action. If \(E\) is strongly connected and not a simple cycle, then \(\widehat{\mathcal{G}}_{G,E}\) is minimal and effective, and \(\widehat{\mathcal{O}}(G,E)\) is simple._ Proof.: Fix \(x\in E^{-\infty}\). Then \(\{x\mu\mid\mu\in E^{*}\text{ and }s(x)=r(\mu)\}\subseteq\{r(\gamma)\mid\gamma\in \mathcal{G}_{\sigma}\text{ and }s(\gamma)=x\}\). Since \(E\) is strongly connected, for every \(\nu\in E^{*}\), there is \(\mu\in E^{*}\) such that \(s(\mu)=r(\nu)\) and \(r(\mu)=s(x)\). Hence \(x\mu\nu\in E^{-\infty}\). 
Therefore, \(Z(\nu)\cap\{r(\gamma)\mid s(\gamma)=x\}\neq\emptyset\) and consequently \(\mathcal{G}_{\sigma}\) is minimal. By Proposition 7.2, for every \(x\in E^{-\infty}\), \(q:(\mathcal{G}_{\sigma})x\mapsto(\hat{\mathcal{G}}_{G,E})q(x)\) is surjective. Hence, \(q(r((\mathcal{G}_{\sigma})x)=r((\hat{\mathcal{G}}_{G,E})q(x))\). Since \(q\) is surjective and continuous, \(r((\hat{\mathcal{G}}_{G,E})q(x))\) is dense whenever \(r((\mathcal{G}_{\sigma})x)\) is dense. Therefore, minimality of \(\mathcal{G}_{\sigma}\) implies minimality of \(\hat{\mathcal{G}}_{G,E}\). To see that \(\hat{\mathcal{G}}_{G,E}\) is effective, suppose that \(w\in\mathcal{J}\) satisfies \(\tilde{\sigma}^{m}(w)=w\). By Proposition 4.5(1), \(\sigma^{m}\) maps \(q^{-1}(w)\) bijectively onto \(q^{-1}(w)\). Since \(q^{-1}(w)\) is finite, there exists \(k\in\mathbb{N}\) such that \(\sigma^{mk}(x)=x\) for all \(x\in q^{-1}(w)\). Let \(P_{\sigma}\) and \(P_{\tilde{\sigma}}\) denote the sets of periodic points for \(\sigma\) and \(\tilde{\sigma}\) respectively. Then \(q^{-1}(P_{\tilde{\sigma}})=P_{\sigma}\). Hence, \(q^{-1}(\bigcup_{n\geq 0}\tilde{\sigma}^{-n}(P_{\tilde{\sigma}}))=\bigcup_{n \geq 0}\sigma^{-n}(P_{\sigma}).\) We have \(P_{\sigma}=\bigcup_{\lambda\in E^{*}}\bigcup_{\mu\in s(\lambda)E^{*}s(\lambda )\setminus\{\lambda\}}\{\lambda\mu^{\infty}\}\), and so \(P_{\sigma}\), and hence \(\bigcup_{n\geq 0}\tilde{\sigma}^{-n}(P_{\tilde{\sigma}})\), is countable. If \(g\in\hat{\mathcal{G}}_{G,E}\setminus\mathcal{J}\) satisfies \(r(g)=s(g)\), then \(r(g)\in\bigcup_{n\geq 0}\tilde{\sigma}^{-n}(P_{\tilde{\sigma}})\). Thus \(r\big{(}\operatorname{Iso}(\hat{\mathcal{G}}_{G,E})\setminus\mathcal{J}\big{)}\) is countable. Since \(r\) is an open map, to show \(\operatorname{Iso}(\hat{\mathcal{G}}_{G,E})\setminus\mathcal{J}\) has empty interior, it suffices to show no countable set in \(\mathcal{J}\) is open. By the Baire Category Theorem, it suffices to show that \(\mathcal{J}\) has no isolated points. Since \(E\) is strongly connected and not a simple cycle, every open subset of \(E^{-\infty}\) is infinite. By continuity and surjectivity of \(q\), the preimage of every nonempty open subset of \(\mathcal{J}\) is open and hence infinite. Since \(q\) is finite-to-one, it follows that no singleton in \(\mathcal{J}\) is open. Hence \(\mathcal{J}\) has no isolated points, and consequently \(\hat{\mathcal{G}}_{G,E}\) is effective. It now follows from [2, Theorem 5.1] that \(\widehat{\mathcal{O}}(G,E)=C^{*}(\widehat{\mathcal{G}}_{G,E})\) is simple. ## 8. \(Kk\)-duality via Smale spaces In this section we establish our \(KK\)-duality result. We do this using a general result of Kaminker-Putnam-Whittaker [12], which says that the stable and unstable Ruelle algebras of any irreducible Smale space are \(KK\)-dual. We show that the stable Ruelle algebra of the Smale space of Section 5 is Morita equivalent to the \(C^{*}\)-algebra \(\widehat{\mathcal{O}}(G,E)\) of Section 7, and that the unstable Ruelle algebra is Morita equivalent to the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\) of Section 6. This, combined with the duality of the Ruelle algebras, gives our main result. **Theorem 8.1**.: _Let \(E\) be a strongly connected finite directed graph. Let \((G,E)\) be a contracting, regular self-similar groupoid action. 
Then \(\mathcal{O}(G,E)\) and \(\widehat{\mathcal{O}}(G,E)\) are \(KK\)-dual in the sense that there are classes \(\mu\in KK^{1}(\mathcal{O}(G,E)\otimes\widehat{\mathcal{O}}(G,E),\mathbb{C})\) and \(\beta\in KK^{1}(\mathbb{C},\mathcal{O}(G,E)\otimes\widehat{\mathcal{O}}(G,E))\) such that_ \[\beta\mathbin{\widehat{\otimes}}_{\mathcal{O}(G,E)}\mu=\operatorname{id}_{KK( \widehat{\mathcal{O}}(G,E),\widehat{\mathcal{O}}(G,E))}\quad\text{ and }\quad\beta\mathbin{\widehat{\otimes}}_{\widehat{\mathcal{O}}(G,E)}\mu=- \operatorname{id}_{KK(\mathcal{O}(G,E),\mathcal{O}(G,E))}.\] _In particular, \(K^{*}(\mathcal{O}(G,E))\cong K_{*+1}(\widehat{\mathcal{O}}(G,E))\) and \(K_{*}(\mathcal{O}(G,E))\cong K^{*+1}(\widehat{\mathcal{O}}(G,E))\)._ ### The stable algebra We will show that the groupoid \(\widehat{\mathcal{G}}_{G,E}\) of Lemmas 6.3 and 6.4 is equivalent to the stable Ruelle groupoid \(G^{s}\rtimes\mathbb{Z}\) of the Smale space \(\mathcal{S}\) of Section 5. The idea is to show that in fact \(G^{s}\rtimes\mathbb{Z}\) is equal to the amplification of \(\widehat{\mathcal{G}}_{G,E}\) with respect to the surjection \(\tilde{\pi}:\mathcal{S}\to\mathcal{J}\) induced by the natural surjection of \(E^{\mathbb{Z}}\) onto \(E^{-\infty}\). For this, we first need to show that this \(\tilde{\pi}\) makes sense and is an open map. **Lemma 8.2**.: _Let \(E\) be a finite directed graph with no sources. Let \((G,E)\) be a contracting regular self-similar groupoid action. Let \(\mathcal{J}\) be the limit space of Definition 3.2, and let \(\mathcal{S}\) be the limit solenoid of Definition 5.3. There is a continuous open surjection \(\tilde{\pi}:\mathcal{S}\to\mathcal{J}\) such that \(\tilde{\pi}([x])=[\dots x_{-3}x_{-2}x_{-1}]\) for all \(x\in E^{\mathbb{Z}}\)._ Proof.: Let \(\theta:\mathcal{S}\to\mathcal{J}_{\infty}\) be the homeomorphism of Proposition 5.4. Let \(P_{1}\) be the projection map \(P_{1}:\mathcal{J}_{\infty}\to\mathcal{J}\) given by \(P_{1}([x_{1}],[x_{2}],[x_{3}],\dots)=[x_{1}]\). Then \(P_{1}\) is continuous by definition of the projective-limit topology, and surjective because \(\tilde{\sigma}\) is surjective. By definition of the topology on \(\mathcal{J}_{\infty}\), the sets \(Z(W,n):=\{([x_{1}],[x_{2}],[x_{3}],\dots)\mid[x_{n}]\in W\}\) indexed by pairs \((W,n)\) consisting of an open \(W\subseteq J\) and an element \(n\in\mathbb{N}\) constitute a basis for the topology on \(\mathcal{J}_{\infty}\). Since \(\tilde{\sigma}\) is surjective, each \(P_{1}(Z(W,n))=\tilde{\sigma}^{n}(W)\), which is open. So \(P_{1}\) is an open map. Hence \(\tilde{\pi}:=P_{1}\circ\theta\) is a continuous open surjection from \(\mathcal{S}\) to \(\mathcal{J}\). It satisfies \(\tilde{\pi}([x])=[\dots x_{-3}x_{-2}x_{-1}]\) by definition. If \(X\) is a locally compact Hausdorff space, \(\mathcal{G}\) is an etale groupoid and \(\pi:X\to\mathcal{G}^{(0)}\) is a continuous open surjection, then we can form the _amplification_ of \(\mathcal{G}\) by \(\pi\) which, as a topological space, is \[\mathcal{G}^{\pi}:=\{(x,\gamma,y)\in X\times\mathcal{G}\times X\mid r(\gamma) =\pi(x)\text{ and }s(\gamma)=\pi(y)\}\] under the topology inherited from the product topology. Its unit space is \((\mathcal{G}^{\pi})^{(0)}=\{(x,\pi(x),x)\mid x\in X\}\) which we identify with \(X\). The range and source maps are given by \(r(x,\gamma,y)=x\) and \(s(x,\gamma,y)=y\). The multiplication and inversion are given by \((x,\gamma,y)(y,\eta,z)=(x,\gamma\eta,z)\) and \((x,\gamma,y)^{-1}=(y,\gamma^{-1},x)\). 
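For instance, the multiplication on \(\mathcal{G}^{\pi}\) is well defined: if \((x,\gamma,y),(y,\eta,z)\in\mathcal{G}^{\pi}\), then \(s(\gamma)=\pi(y)=r(\eta)\), so \(\gamma\eta\) makes sense in \(\mathcal{G}\), and
\[r(\gamma\eta)=r(\gamma)=\pi(x)\quad\text{ and }\quad s(\gamma\eta)=s(\eta)=\pi(z),\]
so \((x,\gamma\eta,z)\in\mathcal{G}^{\pi}\). Likewise \((y,\gamma^{-1},x)\in\mathcal{G}^{\pi}\) because \(r(\gamma^{-1})=s(\gamma)=\pi(y)\) and \(s(\gamma^{-1})=r(\gamma)=\pi(x)\); only the fibred-product condition is used here, not openness of \(\pi\).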
By, for example, [8, \((4)\Longrightarrow(1)\) of Proposition 3.10], the groupoids \(\mathcal{G}\) and \(\mathcal{G}^{\pi}\) are equivalent groupoids, and hence \(C^{*}(\mathcal{G})\) and \(C^{*}(\mathcal{G}^{\pi})\) are Morita equivalent by [19, Theorem 2.8]. Recall that the stable equivalence relation associated to a Smale space \((\mathcal{S},d,\tau,\varepsilon_{S},\lambda)\) is the equivalence relation \[G^{s}:=\{(\xi,\eta)\in\mathcal{S}\times\mathcal{S}\mid\ \lim_{m\to\infty}d(\tau^{m}(\xi),\tau^{m}(\eta))=0\}.\] By [25, pg. 179], there exists \(\varepsilon_{\mathcal{S}}^{\prime}\leq\varepsilon_{\mathcal{S}}\) such that for any \(\delta\leq\varepsilon_{\mathcal{S}}^{\prime}\), \[G^{s}:=\{(\xi,\eta)\in\mathcal{S}\times\mathcal{S}\mid\text{ there exists }M\in\mathbb{N}\text{ such that}\] \[d(\tau^{m}(\xi),\tau^{m}(\eta))<\delta\text{ for all }m\geq M\} \tag{8.1}\] For \(M\geq 0\) we define \[G^{s}_{\varepsilon,M}=\{(\xi,\eta)\mid d(\tau^{m}(\xi),\tau^{m}(\eta))< \varepsilon\text{ for all }m\geq M\},\] endowed with the subspace topology inherited from \(\mathcal{S}\times\mathcal{S}\). We endow \(G^{s}\) with the inductive-limit topology obtained from the inductive limit decomposition \(G^{s}=\bigcup_{M}(G^{s}_{\varepsilon,M})\). It is straightforward to check this agrees with the topology on \(G^{s}\) described on [27, pg. 282]. We now give a description of the stable equivalence relation and its topology for the Smale space \((\mathcal{S},d_{\mathcal{S}},\tilde{\tau},\varepsilon_{\mathcal{S}},\frac{1}{2})\) that will help us in proving the amplification \(\widehat{\mathcal{G}}^{\tilde{\pi}}_{G,E}\) and the stable Ruelle groupoid \(G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\) are isomorphic. **Lemma 8.3**.: _Let \(E\) be a finite directed graph with no sinks or sources and \((G,E)\) be a contracting, regular self-similar groupoid. Let \((\mathcal{S},d_{\mathcal{S}},\tilde{\tau},\varepsilon_{\mathcal{S}},\frac{1}{2})\) be the Smale space of Corollary 5.5. Let \(\varepsilon\) be as in Theorem 4.3 and let \(\varepsilon^{\prime}_{\mathcal{S}}\) be a constant such that (8.1) holds for all \(\delta<\varepsilon^{\prime}_{\mathcal{S}}\). Let \(\beta=\min\{\varepsilon,\varepsilon^{\prime}_{\mathcal{S}}\}\). For each \(m\in\mathbb{N}\), let_ \[G^{s}_{m}=\{([x],[y])\in\mathcal{S}\times\mathcal{S}\mid\ [x(-\infty,-m)]=[y(- \infty,-m)]\}.\] _Then there exists \(k\in\mathbb{N}\) such that, for every \(m\in\mathbb{N}\), we have \(G^{s}_{\beta,m}\subseteq G^{s}_{m}\subseteq G^{s}_{\beta,m+k}\). Points \([x],[y]\in\mathcal{S}\) are stably equivalent if and only if there exists \(m\in\mathbb{N}\) such that \([x(-\infty,-m)]=[y(-\infty,-m)]\). The topology on \(G^{s}\) is equal to the inductive limit topology for the decomposition \(G^{s}=\bigcup_{m}G^{s}_{m}\)._ Proof.: To see that \(G^{s}_{\beta,m}\subseteq G^{s}_{m}\), fix \(x,y\in E^{\mathbb{Z}}\) such that \(d_{\mathcal{S}}([\tau^{n}(x)],[\tau^{n}(y)])<\beta\) for all \(n\geq m\). Fix \(n\geq m\). Then \[d([x(-\infty,-n)],[x(-\infty,-n)]) =d([\tau^{n}(x)(-\infty,0)],[\tau^{n}(x)(-\infty,0)])\] \[\leq d_{\mathcal{S}}([\tau^{n}(x)],[\tau^{n}(y)])<\beta<\varepsilon.\] By definition of \(\epsilon\), \[2d([x(-\infty,-n)],[x(-\infty,-n)]) =d(\tilde{\sigma}([x(-\infty,-n)]),\tilde{\sigma}([y(-\infty,-n)]))\] \[=d([x(-\infty,-(n+1))],[x(-\infty,-(n+1))]).\] Hence \[d([x(-\infty,-m)],[x(-\infty,-m)])=2^{m-n}d([x(-\infty,-n)],[x(- \infty,-n)])\leq 2^{m-n}.\] Since \(n\geq m\) was arbitrary, we deduce that \([x(-\infty,-m)]=[y(-\infty,-m)]\). 
Fix \(k\in\mathbb{N}\) such that \((\frac{1}{2})^{k}<\beta\). We show that \(G^{s}_{m}\subseteq G^{s}_{\beta,m+k}\). Fix \(x,y\in E^{\mathbb{Z}}\) such that \([x(-\infty,-m)]=[y(-\infty,-m)]\). Then, \([\tau^{n}(x)(-\infty,l)]=[\tau^{n}(y)(-\infty,l)]\) for all \(l-n\leq-m\). Therefore, \[d_{\mathcal{S}}(\tilde{\tau}^{n}[x],\tilde{\tau}^{n}[y])=\sup_{l>n-m}(2^{-l}d ([\tau^{n}(x)(-\infty,l)],[\tau^{n}(y)(-\infty,l)])\leq 2^{-(n-m)}.\] So, whenever \(n\geq m+k\), we have \(d_{\mathcal{S}}(\tilde{\tau}^{n}[x],\tilde{\tau}^{n}[y])<2^{-k}<\beta\). Therefore, \(([x],[y])\in G^{s}_{\beta,m+k}\). Recall that the stable Ruelle groupoid is the skew groupoid for the action of \(\mathbb{Z}\) on the unit space of \(G^{s}\). That is, \[G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}=\{(\xi,n,\eta)\in\mathcal{S}\times \mathbb{Z}\times\mathcal{S}\mid(\tilde{\tau}^{n}(\xi),\eta)\in G^{s}\}.\] **Theorem 8.4**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\tilde{\pi}:\mathcal{S}\to\mathcal{J}\) be the continuous open surjection of Lemma 8.2, and let \(\tilde{\tau}:\mathcal{S}\to\mathcal{S}\) be the homeomorphism of Corollary 5.5. Then there is an isomorphism \(\kappa\) of the stable Ruelle groupoid \(G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\) onto the amplification \(\widehat{G}^{\tilde{\pi}}_{G,E}\) of the dual groupoid of \((G,E)\) by \(\tilde{\pi}\) satisfying \(\kappa([x],n,[y])=([x],(\tilde{\pi}([x]),n,\tilde{\pi}([y])),[y])\) for all \(([x],n,[y])\in G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\)._ Proof.: For \(z\in E^{\mathbb{Z}}\) and \(n\in\mathbb{N}\), we have \(\tilde{\pi}(\tilde{\tau}^{n}([z]))=[z(-\infty,-n)]\). Hence Lemma 8.3 gives \(([x],n,[y])\in G^{s}\) if and only if \(\tilde{\pi}(\tilde{\tau}^{m+n}([x]))=\tilde{\pi}(\tilde{\tau}^{m}([y]))\) for some \(m\in\mathbb{N}\) such that \(m+n\geq 0\). Since \(\tilde{\pi}\circ\tilde{\tau}^{m}=\tilde{\sigma}^{m}\circ\tilde{\pi}\) for any \(m\in\mathbb{N}\), \[([x],n,[y])\in G^{s}\quad\Longleftrightarrow\quad([x],(\tilde{\pi}([x]),n, \tilde{\pi}([y])),[y])\in\widehat{G}^{\tilde{\pi}}_{G,E}. \tag{8.2}\] Hence there is a bijection \(\kappa:G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\to\widehat{\mathcal{G}}^{\tilde{ \pi}}_{G,E}\) satisfying the desired formula. We show \(\kappa\) is continuous. Extend of \(\kappa\) to a map \(\tilde{\kappa}:\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\to\mathcal{S} \times\mathcal{J}\times\mathbb{Z}\times\mathcal{J}\times\mathcal{S}\) by \(\tilde{\kappa}((\xi,n,\eta))=(\xi,(\tilde{\pi}(\xi),n,\tilde{\pi}(\eta)),\eta)\). Then \(\tilde{\kappa}\) is continuous. 
For \(M\in\mathbb{N}\) and \(n\in\mathbb{Z}\) such that \(M+n\geq 0\) the restriction of \(\kappa\) to \(G^{s}_{M}\rtimes_{\tilde{\tau}}\{n\}=\{([x],n,[y])\mid(\tilde{\tau}^{n}([x]),[y]) \in G^{s}_{M}\}\) has co-domain contained in \[\mathcal{S}*\widehat{\mathcal{G}}^{(M+n,M)}_{G,E}*\mathcal{S}:=\{([x],(\tilde{ \pi}([x]),n,\tilde{\pi}([y])),[y])\mid\ \tilde{\sigma}^{M+n}(\tilde{\pi}([x]))=\tilde{\sigma}^{M}(\tilde{\pi}([y]))\}.\] The subspace topology of \(G^{s}_{M}\rtimes_{\tilde{\tau}}\{n\}\) relative to \(G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\) is equal to the subspace topology relative to \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\), and the subspace topology of \(\mathcal{S}*\widehat{\mathcal{G}}^{(M+n,M)}_{G,E}*\mathcal{S}\) relative to \(\widehat{\mathcal{G}}_{G,E}\) is equal to the subspace topology relative to \(\mathcal{S}\times\mathcal{J}\times\mathbb{Z}\times\mathcal{J}\times\mathcal{S}\). So, continuity of \(\tilde{\kappa}\) implies that \(\kappa:G^{s}_{M}\rtimes_{\tilde{\tau}}\{n\}\to\mathcal{S}*\widehat{\mathcal{G }}^{(M+n,M)}_{G,E}*\mathcal{S}\) is continuous. Fix \(n\in\mathbb{Z}\). The universal property of the inductive limit topology on \(G^{s}\rtimes_{\tilde{\tau}}\{n\}=\bigcup_{M+n\geq 0}G^{s}_{M}\rtimes_{\tilde{ \tau}}\{n\}\) implies that \(\kappa\) is continuous on the clopen subspace \(G^{s}\rtimes_{\tilde{\tau}}\{n\}\subseteq G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z}\), for each \(n\) in \(\mathbb{Z}\). Hence, \(\kappa\) is continuous. Since \(\mathcal{S}*\widehat{\mathcal{G}}^{(M+n,M)}_{G,E}*\mathcal{S}\) and \(G^{s}_{M}\rtimes_{\tilde{\tau}}\{n\}\) are compact, and since \(\kappa^{-1}(\mathcal{S}*\widehat{\mathcal{G}}^{(M+n,M)}_{G,E}*\mathcal{S}) \subseteq G^{s}_{M}\rtimes_{\tilde{\tau}}\{n\}\), and \(\widehat{\mathcal{G}}^{\tilde{\pi}}_{G,E}=\bigcup_{M+n\geq 0}\mathcal{S}* \widehat{\mathcal{G}}^{(M+n,M)}_{G,E}*\mathcal{S}\), the map \(\kappa\) is proper. Since proper continuous maps between locally compact Hausdorff spaces are closed, \(\kappa\) is a homeomorphism. **Corollary 8.5**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Then the dual \(C^{*}\)-algebra \(\widehat{\mathcal{O}}(G,E)\) of Definition 7.1 is Morita equivalent to the stable Ruelle algebra \(C^{*}(G^{s}\rtimes\mathbb{Z})\) of the Smale space \((\mathcal{S},\tilde{\tau})\) of Corollary 5.5._ Proof.: Since \(\tilde{\pi}:\mathcal{S}\to\mathcal{J}\) is an open map by Lemma 8.2, [8, Proposition 3.10] shows that \(\widehat{\mathcal{G}}^{\tilde{\pi}}_{G,E}\) is groupoid equivalent to \(\widehat{\mathcal{G}}_{G,E}\). Therefore [19, Theorem 2.8] shows that \(C^{*}(\widehat{\mathcal{G}}_{G,E})\) is Morita equivalent to \(C^{*}(\widehat{\mathcal{G}}^{\tilde{\pi}}_{G,E})\). Theorem 8.4 shows that \(C^{*}(\widehat{\mathcal{G}}^{\tilde{\pi}}_{G,E})\) is isomorphic to \(C^{*}(G^{s}\rtimes_{\tilde{\tau}}\mathbb{Z})\), which is precisely the stable Ruelle algebra of \((\mathcal{S},\tilde{\tau})\). ### The unstable algebra We now need to show that the unstable Ruelle algebra of the Smale space \((\mathcal{S},\tilde{\tau})\) is Morita equivalent to the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\). Our approach again is via groupoid equivalence. We use Putnam and Spielberg's construction of an etale groupoid \(G^{u}_{[x]}\rtimes_{\tilde{\tau}}\mathbb{Z}\) corresponding to a choice of orbit in \(\mathcal{S}\). 
We will show that this groupoid is isomorphic to a suitable amplification of the groupoid \(\mathcal{G}_{G,E}\) of Lemma 6.3. To do this, we shall need an alternative description of the unstable equivalence relation and its topology, which we provide in the next lemma. For \(M\in\mathbb{N}\) and \(\varepsilon>0\), let \(G^{u}_{\varepsilon,M}=\{(\eta,\xi)\in\mathcal{S}\times\mathcal{S}\mid d_{ \mathcal{S}}(\tilde{\tau}^{-m}(\eta),\tilde{\tau}^{-m}(\xi))<\varepsilon\text{ for all }m\geq M\}\). By [25, pg. 179], there exists \(\varepsilon^{\prime}_{\mathcal{S}}\leq\varepsilon_{\mathcal{S}}\) such that for every \(\varepsilon<\varepsilon^{\prime}_{\mathcal{S}}\), we have \[G^{u}=\bigcup_{M}G^{u}_{\varepsilon,M}\] in the inductive-limit topology. This agrees with the topology on \(G^{u}\) on [27, pg. 282]. **Lemma 8.6**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\varepsilon^{\prime}_{\mathcal{S}}\) be as above, and let \(\varepsilon\) be as in Theorem 4.3. Let \(\beta=\min\{\varepsilon,\varepsilon^{\prime}_{\mathcal{S}}\}\). Let \(F\) be the smallest finite set containing \(\mathcal{N}\cup\mathcal{N}^{2}\) that is closed under restriction. Then, there is an \(l\in\mathbb{N}\) such that for every \(M\in\mathbb{N}\), we have_ \(G^{u}_{\beta,M}\subseteq G^{u}_{M}\subseteq G^{u}_{\beta,M+l}\)_, where_ \[G^{u}_{M}=\{([y],[x])\in\mathcal{S}\times\mathcal{S}\mid\ \exists g\in F:g \cdot x(M+1,\infty)=y(M+1,\infty)\}.\] _More precisely, if \((\eta,\xi)\in G^{u}_{\beta,M}\), then for any representatives \(y,x\) such that \([y]=\eta\), \([x]=\xi\), there is an element \(g\in F\) such that \(g\cdot x(M+1,\infty)=y(M+1,\infty)\)._ _In particular, \([x]\) and \([y]\) in \(\mathcal{S}\) are unstably equivalent if and only if there is an \(M\in\mathbb{N}\) and \(g\in G^{s}(x_{M})\) such that \(g\cdot x(M+1,\infty)=y(M+1,\infty)\), and the topology on \(G^{u}\) is equal to the inductive limit topology provided by the decomposition \(G^{u}=\bigcup_{M}G^{u}_{M}\)._ Proof.: Fix \(M\in\mathbb{N}\). We first show \(G^{u}_{\beta,M}\subseteq G^{u}_{M}\). Suppose \(([x],[y])\in G^{u}_{\beta,M}\). For \(m\in\mathbb{Z}\), let \(x^{m}:=[x(-\infty,m)]\) and \(y^{m}:=[y(-\infty,m)]\). By definition of the metric \(d_{\mathcal{S}}\), for every \(m\geq M\), \[d(x^{m},y^{m})=d([\tau^{-m}(x)(-\infty,0)],[\tau^{-m}(y)(-\infty,0)])\leq d_{ \mathcal{S}}(\tilde{\tau}^{-m}([x]),\tilde{\tau}^{-m}([y]))\ <\beta.\] Since \(\beta\leq\varepsilon\), Theorem 4.3(1) gives \(d(x^{m},y^{m})=d(\tilde{\sigma}(x^{m+1}),\tilde{\sigma}(y^{m+1}))=2d(x^{m+1}, y^{m+1})\) for all \(m\geq M\). So \(d(x^{M+s},y^{M+s})=(\frac{1}{2})^{s}d(x^{M},y^{M})\) for all \(s\geq 0\). Hence \(\alpha:=\frac{d(x^{M},y^{M})+\beta}{2}\), satisfies \(y^{M+s}\in B(x^{M+s},(\frac{1}{2})^{s}\alpha)\) for every \(s\in\geq 0\). Fix \(n\in\mathbb{N}\) as in Lemma 4.6 and \(k\in\mathbb{N}\) as in Proposition 4.5. Since \(\alpha<\varepsilon\), Lemma 4.6(1) yields \(\omega\in E^{*}\) such that \(|\omega|\geq n-1\) and \(B(x^{M},\alpha)\subseteq U_{\omega}\). Since \(\tilde{\sigma}(x^{M+1})=x^{M}\) and \(|\omega|\geq n-1\geq k-1\), Proposition 4.5(3) gives \(e\in E^{1}\) such that \(s(\omega)=r(e)\) and \(x^{M+1}\in U_{\omega e}\). Since \(B(x^{M},\alpha)\subseteq U_{\omega}\), Lemma 4.6 implies that \(B(x^{M+1},\frac{1}{2}\alpha)\subseteq U_{\omega e}\). 
Applying the argument in the above paragraph inductively gives paths \(\{\mu_{s}\}_{s\in\mathbb{N}}\) such that \(\mu_{s+1}=\mu_{s}e_{s+1}\), for some edge \(e_{s+1}\), and that \(B(x^{M+s},(\frac{1}{2})^{s}\alpha)\subseteq U_{\omega\mu_{s}}\), for all \(s\in\mathbb{N}\). Since \(y^{M+s}\in B(x^{M+s},(\frac{1}{2})^{s}\alpha)\) for each \(s\in\mathbb{N}\), we obtain \(x^{M+s},y^{M+s}\in U_{\omega\mu_{s}}\) for each \(s\in\mathbb{N}\). Therefore, there exist sequences \(\{g_{s}\}_{s\in\mathbb{N}}\), \(\{h_{s}\}_{s\in\mathbb{N}}\) in \(\mathcal{N}\) such that \(d(g_{s})=d(h_{s})=r(\mu_{s})\), \(g_{s}\cdot\mu_{s}=x(M+1,M+s)\), and \(h_{s}\cdot\mu_{s}=y(M+1,M+s)\) for all \(s\in\mathbb{N}\). Let \(z\in E^{\infty}\) be the unique element of \(\bigcap_{s}Z[\mu_{s})\). Choose an increasing subsequence \(\{n_{s}\}_{s\in\mathbb{N}}\) such that \((g_{n_{s}})_{s}\) and \((h_{n_{s}})_{s}\) are constant sequence, with constant values \(g\) and \(h\), say. Then \(g\cdot z=x(M+1,\infty)\) and \(h\cdot z=y(M+1,\infty)\). So \(t:=hg^{-1}\in\mathcal{N}^{2}\subseteq F\) satisfies \(t\cdot x(M+1,\infty)=y(M+1,\infty)\). Therefore, \(([y],[x])\in G^{u}_{M}\). Take \(l_{1}\in\mathbb{N}\) such that for all \(g\in F\) and \(\mu\in d(g)E^{*}\) such that \(|\mu|\geq l_{1}\) we have that \(g|_{\mu}\in\mathcal{N}\). Fix \(l_{2}\in\mathbb{N}\) such that \(\operatorname{diam}(U_{\nu})<\frac{1}{2}\beta\) for every path \(\nu\) with \(|\nu|\geq l_{2}\). Let \(l:=l_{1}+l_{2}\). We show that \(G^{u}_{M}\subseteq G^{u}_{\beta,M+l}\). Take \([x],[y]\in\mathcal{S}\) such that there exists \(g\in F\) such that \(d(g)=s(x_{M})\) and \(g\cdot x(M+1,\infty)=y(M+1,\infty)\). As above, let \(x^{m}:=[x(-\infty,m)]\) and \(y^{m}:=[y(-\infty,m)]\) for all \(m\in\mathbb{Z}\). Let \(h=g|_{x(M+1,M+l_{1})}\). By the choice of \(l_{1}\), we have \(h\in\mathcal{N}\). Since \(h\cdot x(M+l_{1},\infty)=y(M+l_{1},\infty)\), it follows that \(x^{M+l_{1}+m},y^{M+l_{1}+m}\in U_{x(M+l_{1}+1,M+l_{1}+m)}\) for all \(m\in\mathbb{N}\). By the choice of \(l_{2}\), we have \(d(x^{M+l_{1}+m},y^{M+l_{1}+m})<\frac{1}{2}\beta\) for all \(m\geq l_{2}\). So, for all \(s\geq M+l\) and \(k\geq 0\), we have \((\frac{1}{2})^{k}d(x^{k+s},y^{k+s})<\frac{1}{2}\beta\). Therefore, \(d_{\mathcal{S}}(\tilde{\tau}^{-s}([x]),\tilde{\tau}^{-s}([y]))=\sup_{k\in \mathbb{N}_{0}}(\frac{1}{2})^{k}d(x^{k+s},y^{k+s})\leq\frac{1}{2}\beta<\beta\) for all \(s\geq M+l\), forcing \(([y],[x])\in G^{u}_{\beta,M+l}\). Hence \(G^{u}_{M}\subseteq G^{u}_{\beta,M+l}\). We have established that \([x]\) and \([y]\) are unstably equivalent if and only if there exist \(M\in\mathbb{N}\) and \(g\in Fs(x_{M})\) such that \(g\cdot x(M+1,\infty)=y(M+1,\infty)\). If \([x],[y]\in\mathcal{S}\), \(N\) in \(\mathbb{N}\) and \(g\in Gr(x_{N})\) satisfy \(g\cdot x(N+1,\infty)=y(N+1,\infty)\), then, since \((G,E)\) is contracting, there exists \(M\geq N\) such that \(h:=g|_{x(N+1,M)}\in\mathcal{N}\subseteq F\) and \(h\cdot x(M+1,\infty)=y(M+1,\infty)\). This proves the penultimate statement of the lemma. Since \(F\) is closed under restriction, \(G^{u}_{M}\subseteq G^{u}_{M+1}\) for all \(M\in\mathbb{N}\). Hence the inductive limit topology with respect to the decomposition \(G^{u}=\bigcup_{M}G^{u}_{M}\) is well defined. Since \(G^{u}_{\beta,M}\subseteq G^{u}_{M}\subseteq G^{u}_{\beta,M+l}\) for all \(M\in\mathbb{N}\), this topology is equal to the one provide by the decomposition \(G^{u}=\bigcup_{M}G^{u}_{\beta,M}\). 
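The nesting \(G^{u}_{M}\subseteq G^{u}_{M+1}\) used in the final paragraph of the proof is worth spelling out; it uses only that \(F\) is closed under restriction together with the self-similarity identity \(g\cdot(e\mu)=(g\cdot e)(g|_{e}\cdot\mu)\), which we take as given from the definition of a self-similar action. If \(g\in F\) satisfies \(g\cdot x(M+1,\infty)=y(M+1,\infty)\), then writing \(x(M+1,\infty)=x_{M+1}\,x(M+2,\infty)\) gives
\[(g\cdot x_{M+1})\big(g|_{x_{M+1}}\cdot x(M+2,\infty)\big)=y_{M+1}\,y(M+2,\infty),\]
so \(g|_{x_{M+1}}\in F\) satisfies \(g|_{x_{M+1}}\cdot x(M+2,\infty)=y(M+2,\infty)\), and hence \(([y],[x])\in G^{u}_{M+1}\).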
Let \((X,\tau)\) be an irreducible Smale space, and recall that \(G^{u}\rtimes\mathbb{Z}=\{(\eta,l,\xi)\in X\times\mathbb{Z}\times X:(\tau^{-l}( \eta),\xi)\in G^{u}\}\). In [27], Putnam and Spielberg show that, given any point \(x\in X\), the groupoid \(G^{u}_{x}\rtimes\mathbb{Z}\) defined as \[G^{u}_{x}\rtimes\mathbb{Z}:=\{(\eta,n,\varepsilon)\in G^{u}\rtimes\mathbb{Z}\mid (x,\eta),(x,\varepsilon)\in G^{s}\}, \tag{8.3}\] endowed with a suitable topology, is an etale groupoid that is equivalent to \(G^{u}\rtimes\mathbb{Z}\) when \((X,\tau)\) is mixing, and use this to study the unstable \(C^{*}\)-algebra of a Smale space up to Morita equivalence. We will make use of the same technique here. Let \(E\) be a finite directed graph with no sinks or sources, and let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \((\mathcal{S},d_{\mathcal{S}},\tilde{\tau},\varepsilon_{S},\frac{1}{2})\) be the Smale space of Corollary 5.5. Let \(\beta>0\) and \(k\in\mathbb{N}\) be as in Lemma 8.3. Consider \(x\in E^{\mathbb{Z}}\). In line with [27], the global stable equivalence class \[\mathcal{S}^{s}([x]):=\{\xi\in\mathcal{S}:([x],\xi)\in G^{s}\} \tag{8.4}\] is endowed with the inductive-limit topology coming from the decomposition \(\mathcal{S}^{s}([x])=\bigcup_{M}\mathcal{S}^{s}_{\beta,M}([x])\), where \[\mathcal{S}^{s}_{\beta,M}([x])=\{\xi\in\mathcal{S}\mid d_{\mathcal{S}}(\tilde {\tau}^{n}([x]),\tilde{\tau}^{n}(\xi))<\beta,\ \forall\ n\geq M\}\] is given the subspace topology relative to \(\mathcal{S}\). For \(M\in\mathbb{N}\), define \[\mathcal{S}^{s}_{M}([x])=\{[y]\in\mathcal{S}:[x(-\infty,M)]=[y(-\infty,M)]\}.\] Then Lemma 8.3 implies that \(\mathcal{S}^{s}_{\beta,M}([x])\subseteq\mathcal{S}^{s}_{M}([x])\subseteq \mathcal{S}^{s}_{\beta,M+k}([x])\). Hence, the inductive-limit topology on \(\mathcal{S}^{s}([x])\) is equivalent to the inductive-limit topology for the decomposition \(\mathcal{S}^{s}([x])=\bigcup_{M}\mathcal{S}^{s}_{M}([x])\). Note that \(\mathcal{S}^{s}([x])\) is not compact in this topology even though \(\mathcal{S}\) is compact. We equip the groupoid \(G^{u}_{[x]}\rtimes\mathbb{Z}=(G^{u}\rtimes\mathbb{Z})\cap(\mathcal{S}^{s}([x ])\times\mathbb{Z}\times\mathcal{S}^{s}([x]))\) with the topology with sub-basis \[\big{\{}U\cap r^{-1}(V)\cap s^{-1}(W)\mid U\subseteq G^{u}\rtimes\mathbb{Z} \text{ and }V,W\subseteq\mathcal{S}^{s}([x])\text{ are open}\big{\}}.\] Fix a periodic orbit \(P=\{\tilde{\tau}^{l}(p_{0}):l\in\mathbb{Z}\}=\{\tilde{\tau}^{l}(p_{0}):0\leq l <N\}\) of \(\tilde{\tau}\). Then \(\mathcal{S}^{s}(p)\cap S^{s}(q)=\emptyset\) for distinct \(p,q\in P\). So \(\mathcal{S}^{s}(P):=\bigcup_{p\in P}\mathcal{S}^{s}(p)\) is the topological disjoint union of the sets \(\mathcal{S}^{s}(p)\). Consider the groupoid \(G^{u}(P)\rtimes\mathbb{Z}:=(G^{u}\rtimes\mathbb{Z})\cap(\mathcal{S}^{s}(P) \times\mathbb{Z}\times\mathcal{S}^{s}(P))\). As in the preceding paragraph, we give \(G^{u}(P)\rtimes\mathbb{Z}\) the topology with sub-basis \[\big{\{}U\cap r^{-1}(V)\cap s^{-1}(W)\mid U\subseteq G^{u}\rtimes\mathbb{Z} \text{ and }V,W\subseteq\mathcal{S}^{s}(P)\text{ are open}\big{\}}.\] Then \(G^{u}(P)\rtimes\mathbb{Z}\) is a locally compact etale Hausdorff groupoid [12, Section 3], and \(G^{u}(P)\rtimes\mathbb{Z}\) is groupoid equivalent to \(G^{u}\rtimes\mathbb{Z}\)[26, Section 2]. We claim that for any \(p\in P\), the subset \(G^{u}_{p}\rtimes\mathbb{Z}\) is also a locally compact Hausdorff groupoid that is equivalent to \(G^{u}\rtimes\mathbb{Z}\). 
Indeed, the open subgroupoid \(G^{u}(P)\rtimes\mathbb{Z}|_{\mathcal{S}^{s}(p)}=\{g\in G^{u}(P)\rtimes \mathbb{Z}:r(g),s(g)\in\mathcal{S}^{s}(p)\}\) is equal to \(G^{u}_{p}\rtimes\mathbb{Z}\), and the relative topology of \(G^{u}_{p}\rtimes\mathbb{Z}\) inherited from \(G^{u}(P)\rtimes\mathbb{Z}\) is equal to the topology on \(G^{u}_{p}\rtimes\mathbb{Z}\) described above. Hence \(G^{u}_{p}\rtimes\mathbb{Z}\) is a locally compact etale Hausdorff groupoid. We show \(G^{u}_{p}\rtimes\mathbb{Z}\) is equivalent to \(G^{u}(P)\rtimes\mathbb{Z}\), and hence to \(G^{u}\rtimes\mathbb{Z}\). Since \(H=\{g\in G^{u}(P)\rtimes\mathbb{Z}:r(g)\in\mathcal{S}^{s}(p)\}\) is a clopen subset, it follows from [19, Example 2.7] that \(H\) implements a groupoid equivalence between \(G^{u}_{p}\rtimes\mathbb{Z}\) and \(G^{u}(P)\rtimes\mathbb{Z}\) if and only if the source map restricted to \(H\) surjects onto \(\mathcal{S}^{s}(P)\). Fox \(q\in P\) and \(z\in\mathcal{S}^{s}(q)\). There exists \(n\in\mathbb{N}\) such that \(\tilde{\tau}^{n}(q)=p\). So \((\tilde{\tau}^{n}(z),n,z)\in H\), proving surjectivity. We describe an amplification of \(\mathcal{G}_{G,E}\) which we will prove in Theorem 8.12 is Morita equivalent to \(G^{u}\rtimes\mathbb{Z}\). For \(x\in E^{\mathbb{Z}}\), we write \[E^{\mathbb{Z}}_{x}:=\{y\in E^{\mathbb{Z}}\mid y(-\infty,-M)=x(-\infty,-M)\text{ for some }M\in\mathbb{N}\}. \tag{8.5}\] For \(M\in\mathbb{N}\), let \[E_{x}^{\mathbb{Z}}(M):=\{y\in E^{\mathbb{Z}}:y(-\infty,-M)=x(-\infty,-M)\},\] endowed with the relative topology inherited from \(E^{\mathbb{Z}}\). We endow \(E_{x}^{\mathbb{Z}}\) with the inductive-limit topology determined by this decomposition. The map \(\pi_{x}:E_{x}^{\mathbb{Z}}\mapsto E^{\infty}\) sending \(z\in E_{x}^{\mathbb{Z}}\) to \(\pi_{x}(z)=z(1,\infty)\) is an open continuous map (in fact, it is a local homeomorphism onto an open set in \(E^{\infty}\)). It is a surjection whenever \(E\) is strongly connected. We show that the amplification \(\mathcal{G}_{G,E}^{\pi_{x}}\) is isomorphic to \(G_{[x]}^{u}\rtimes\mathbb{Z}\). We start by analysing the space \(E_{x}^{\mathbb{Z}}\). We first prove that an element \(x\in E^{\mathbb{Z}}\) is completely determined by its class \([x]\) in the limit solenoid \(\mathcal{S}\) of Definition 5.3 together with the tail \(\ldots x_{n-3}x_{n-2}x_{n-1}\) for any \(n\in\mathbb{Z}\). **Lemma 8.7**.: _Let \(E\) be a finite directed graph with no sinks or sources. Let \((G,E)\) be a contracting, regular self-similar groupoid action. Let \(\mathcal{S}\) be the limit solenoid of Definition 5.3. Suppose that \(x,y\in E^{\mathbb{Z}}\) satisfy \([x]=[y]\in\mathcal{S}\), and suppose that there exists \(n\) in \(\mathbb{Z}\) such that \(x_{m}=y_{m}\) for all \(m\leq n\). Then \(x=y\)._ Proof.: Fix \(n\) satisfying \(x_{m}=y_{m}\) for all \(m\leq n\). Let \(k\) be as in Lemma 4.4, with respect to the finite set \(F^{\prime}=\mathcal{N}\cup G^{0}\). Since \(x\sim_{\text{\rm{ae}}}y\), Lemma 3.6 shows that there is a sequence \((g_{m})_{m\in\mathbb{Z}}\) in \(\mathcal{N}\) such that \(g_{m}\cdot x_{m}x_{m+1}\cdots=y_{m}y_{m+1}\dots\) for all \(m\in\mathbb{Z}\) and \(g_{m}|_{x_{m}\dots x_{l-1}}=g_{l}\) for all \(m\leq l\). In particular, \[g_{m}\cdot x_{-n-k}\dots x_{-n-1}=y_{-n-k}\dots y_{-n-1}=x_{-n-k}\dots x_{-n-1}.\] Since \(v:=r(x_{-n-k})\in G^{(0)}\subseteq F^{\prime}\) and also satisfies \(v\cdot x_{-n-k}\dots x_{-n-1}=x_{-n-k}\dots x_{-n-1}\), the choice of \(k\) guarantees that \(g_{m}|_{x_{-n-k}\dots x_{-n-1}}=v|_{x_{-n-k}\dots x_{-n-1}}=r(x_{n})\). 
We then have \[y_{n+1}y_{n+2}\dots =g_{n-1}\cdot x_{n+1}x_{n+2}\dots\] \[=g_{m}|_{x_{-n-k}\dots x_{-n-1}}\cdot x_{n+1}x_{n+2}\dots=v\cdot x _{n+1}x_{n+2}\dots=x_{n+1}x_{n+2}\dots\] Since we already have \(x_{m}=y_{m}\) for \(m\leq n\), we conclude that \(x=y\). Lemma 8.7 shows that for \(x\in E^{\mathbb{Z}}\) the quotient map \(y\mapsto[y]\) from \(E^{\mathbb{Z}}\) to \(\mathcal{S}\) restricts to an injection \(E_{x}^{\mathbb{Z}}\to\mathcal{S}^{*}([x])\). We show that this is a homeomorphism with respect to the inductive-limit topologies. **Lemma 8.8**.: _Let \(E\) be a finite directed graph with no sinks or sources, and let \((G,E)\) be a contracting, regular self-similar groupoid action. Fix \(x\in E^{\mathbb{Z}}\). The map \(y\mapsto[y]\) is a homeomorphism from \(E_{x}^{\mathbb{Z}}\) onto \(\mathcal{S}^{*}([x])\) with respect to the inductive-limit topologies described at (8.4) and (8.5)._ Proof.: Fix \(n\in\mathbb{N}\). Then \(y\mapsto[y]\) restricts to a bijection of \(E_{x}^{\mathbb{Z}}(n)\) onto \(\mathcal{S}_{n}^{s}([x])\). By definition of the inductive-limit topologies it suffices to show that this map is continuous and open. For fixed \(n\) the set \(E_{x}^{\mathbb{Z}}(n)\) is compact because it is closed in \(E^{\mathbb{Z}}\), and the set \(\mathcal{S}_{n}^{*}([x])\) is Hausdorff because \(\mathcal{S}\) is. Since \(y\mapsto[y]\) is the quotient map, it is continuous on \(E_{x}^{\mathbb{Z}}(n)\), and we deduce that it is a homeomorphism. We now analyse the topology of \(G_{[x]}^{u}\rtimes\mathbb{Z}\), for any \(x\in E^{\mathbb{Z}}\). _Remark 8.9_.: For any \(m,M\in\mathbb{N}\) such that \(m\leq M\), we have \[G_{\beta,m}^{u}=G_{\beta,M}^{u}\bigcap_{k:m\leq k\leq M}\{(\eta,\xi)\in \mathcal{S}\times\mathcal{S}:d_{\mathcal{S}}(\tau^{-k}(\eta),\tau^{-k}(\xi))< \beta\}.\] Therefore, \(G_{\beta,m}^{u}\) is open in \(G^{u}\), and consequently \(G_{\beta,m}^{u}\rtimes\{l\}:=\{(\eta,l,\xi):(\tilde{\tau}^{-l}(\eta),\xi)\in G _{\beta,m}^{u}\}\) is open in \(G^{u}\rtimes\mathbb{Z}\). **Lemma 8.10**.: _Let \(E\) be a finite directed graph with no sources, and let \((G,E)\) be a contracting, regular self-similar action. Fix \(x\in E^{\mathbb{Z}}\). For \(k,m\in\mathbb{N}\) and \(l\in\mathbb{Z}\), let_ \[X_{k,l,m}:=\{([z],l,[y])\in G^{u}_{\beta,m}\rtimes\{l\}:x(-\infty,-m)=y(-\infty,-m)=z(-\infty,-m)\}.\] _Then, \(X_{k,l,m}\) is an open subset of \(G^{u}_{[x]}\rtimes\mathbb{Z}\), and the relative topology it inherits from \(G^{u}_{[x]}\rtimes\mathbb{Z}\) coincides with the relative topology inherited from \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\)._ Proof.: We have \(X_{k,l,m}=G^{u}_{\beta,k}\rtimes\{l\}\bigcap r^{-1}(q(E^{\mathbb{Z}}_{x}(m))) \cap s^{-1}(q(E^{\mathbb{Z}}_{x}(m)))\). The first factor is open in \(G^{u}\rtimes\mathbb{Z}\) by Remark 8.9. The image \(q(E^{\mathbb{Z}}_{x}(m))\) is open in \(\mathcal{S}^{*}([x])\) by Lemma 8.8. Therefore, \(X_{k,l,m}\) is open in \(G^{u}_{[x]}\rtimes\mathbb{Z}\). The topologies on \(G^{u}_{\beta,k}\rtimes\{l\}\) and \(q(E^{\mathbb{Z}}_{x}(m))\) are the subspace topologies relative to \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\) and to \(\mathcal{S}\) respectively. 
Hence, for all triples of open sets \(U\subseteq G^{u}\rtimes\mathbb{Z}\) and \(V,W\subseteq\mathcal{S}^{s}([x])\), there exist open sets \(U^{\prime}\subseteq\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\) and \(V^{\prime},W^{\prime}\subseteq\mathcal{S}\) such that \[X_{k,l,m} \cap U\cap r^{-1}(V)\cap s^{-1}(W)\] \[=\left(U\cap G^{u}_{\beta,k}\rtimes\{l\}\right)\cap\left(r^{-1}( V)\cap r^{-1}(q(E^{\mathbb{Z}}_{x}(m)))\right)\cap\left(s^{-1}(W)\cap s^{-1}(q(E^ {\mathbb{Z}}_{x}(m)))\right)\] \[=X_{k,l,m}\cap U^{\prime}\cap r^{-1}(V^{\prime})\cap s^{-1}(W^{ \prime}).\] Since \(r,s\) are continuous with respect to the subspace topologies, \(U^{\prime}\cap r^{-1}(V^{\prime})\cap s^{-1}(W^{\prime})\) is open in \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\). Hence \(X_{k,l,m}\) has the subspace topology relative to \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\). To lighten notation, for all \(z,y\in E^{\mathbb{Z}}_{x}\), all \(m,n\in\mathbb{N}\) and all \(g\in G\) such that \(g\cdot\sigma^{n}(\pi_{x}(y))=\sigma^{m}(\pi_{x}(z))\), we define \[[z,m,g,n,y]:=(z,[\pi_{x}(z),m,g,n,\pi_{x}(y)],y)\in\mathcal{G}^{\pi_{x}}_{G,E}. \tag{8.6}\] **Notation 8.11**.: Give finite paths \(\mu_{-},\mu_{+},\nu_{-},\nu_{+}\in E^{*}\) such that \(s(\mu_{-})=r(\mu_{+})\), \(s(\nu_{-})=r(\nu_{+})\), \(|\mu_{-}|=|\nu_{-}|\) and an element \(g\in G\) with \(s(\nu_{+})=d(g)\), \(s(\mu_{+})=c(g)\), we define \[\mathcal{Z}(\mu_{-}\mu_{+})=\{z\in E^{\mathbb{Z}}_{x}: \;z(-|\mu_{-}|,|\mu_{+}|)=\mu_{-}\mu_{+}\] \[\text{ and }z(-\infty,-|\mu_{-}|-1)=x(-\infty,-|\mu_{-}|-1)\},\] and \[\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})=\{[z,|\mu_{+}|,g,|\nu_{-}|,y]\in \mathcal{G}^{\pi_{x}}_{G,E}:z\in\mathcal{Z}(\mu_{-}\mu_{+}),y\in\mathcal{Z}( \nu_{-}\nu_{+})\}.\] These two collections form a basis for the topologies on \(E^{\mathbb{Z}}_{x}\), \(\mathcal{G}^{\pi_{x}}_{G,E}\), respectively. We show that \(\mathcal{G}^{\pi_{x}}_{G,E}\) and \(G^{u}_{[x]}\rtimes\mathbb{Z}\) are isomorphic. **Theorem 8.12**.: _Let \(E\) be a finite directed graph with no sinks or sources, and let \((G,E)\) be a contracting, regular self-similar groupoid action. Fix \(x\in E^{\mathbb{Z}}\). The map \(\theta:\mathcal{G}^{\pi_{x}}_{G,E}\to G^{u}_{[x]}\rtimes\mathbb{Z}\) such that \(\theta([z,m,g,n,y])=([z],m-n,[y])\) is an isomorphism of topological groupoids._ Proof.: If \([z,m,g,n,y]=[z^{\prime},m^{\prime},g^{\prime},n^{\prime}y^{\prime}]\), then \(z=z^{\prime}\) and \(y=y^{\prime}\) by (8.6), and \(m-n=m^{\prime}-n^{\prime}\) by definition of the equivalence relation defining \(\mathcal{G}_{G,E}\). So there is a well-defined map \(\theta\) satisfying \(\theta([z,m,g,n,y])=([z],m-n,[y])\). If \([z,m,h,n,y]\in\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})\) for \(g\) in the nucleus \(\mathcal{N}\), then \(g\cdot y(|\nu_{+}|+1,\infty)=z(|\nu_{+}|+1+(m-n),\infty)\). Lemma 8.6 yields \(l\in\mathbb{N}\) such that \(([z],m-n,[y])\in G^{u}_{\beta,|\nu_{+}|+l}\rtimes\{n-m\}\). We have \(([z],m-n,[y])\in r^{-1}(q(\mathcal{Z}(\mu_{-}\mu_{+})))\cap s^{-1}(q(\mathcal{ Z}(\nu_{-}\nu_{+})))\), so \(([z],m-n,[y])\in X_{|\nu_{+}|+l,|\mu_{+}|-|\nu_{+}|,|\nu_{-}|}\). Hence, \[\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\subseteq X_{|\nu_{+}|+l,| \mu_{+}|-|\nu_{+}|,|\nu_{-}|},\ g\in\mathcal{N}. \tag{8.7}\] Hence \(\theta(\mathcal{G}_{G,E}^{\pi_{x}})\subseteq G_{[x]}^{u}\rtimes\mathbb{Z}\). It is straightforward that \(\theta\) is a groupoid homomorphism. We show that \(\theta\) is continuous. 
It is enough to show that \(\theta\) restricts to a continuous map from \(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})\) to \(X_{|\nu_{+}|+l,|\mu_{+}|-|\nu_{+}|,|\nu_{-}|}\) for all \(\mu_{+},\mu_{-},\nu_{+},\nu_{-}\) as in Notation 8.11. By Lemma 8.10, \(X_{|\nu_{+}|+l,|\mu_{+}|-|\nu_{+}|,|\nu_{-}|}\) has the subspace topology inherited from \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\), so it suffices to show that \(\theta:\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})\mapsto\mathcal{S}\times \mathbb{Z}\times\mathcal{S}\) is continuous. Let \(([z_{\lambda},m_{\lambda},h_{\lambda},n_{\lambda},y_{\lambda}])_{\lambda\in \Lambda}\) be a net in \(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})\) converging to \([z,m,h,n,y]\). Since \(r,s\) are continuous, it follows that \(z_{\lambda}\to z\) in \(\mathcal{Z}(\mu_{-}\mu_{+})\) and \(y_{\lambda}\to y\) in \(\mathcal{Z}(\nu_{-}\nu_{+})\). Since \(\mathcal{Z}(\mu_{-}\mu_{+})\) and \(\mathcal{Z}(\nu_{-}\nu_{+})\) carry the subspace topologies relative to \(E^{\mathbb{Z}}\), we have \(y_{n}\to y\) and \(z_{n}\to z\) in \(E^{\mathbb{Z}}\). Since the quotient map \(q:E^{\mathbb{Z}}\mapsto\mathcal{S}\) is continuous, \[([z_{\lambda}],m_{\lambda}-n_{\lambda},[y_{\lambda}])\to([z],m_{\lambda}-n_{ \lambda},[y])=([z],m-n,[y])\] in \(\mathcal{S}\times\mathbb{Z}\times\mathcal{S}\). Hence, \(\theta\) is continuous. Now, we show that \(\theta\) is an open map. It suffices to show that each \(\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\) is open. By (8.7) and Lemma 8.10, it is enough to show that \(\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\) is an open subset of \(X_{|\nu_{+}|+l,|\mu_{+}|-|\nu_{+}|,|\nu_{-}|}\). Write \(n=|\nu_{+}|\), \(m=|\mu_{+}|\) and \(k=|\nu_{-}|\). Let \[\tilde{X}=\{([z],m-n,[y])\mid\exists h\in F:h\cdot z(m+l+1, \infty)=y(n+l+1,\infty)\] \[\text{and }x(-\infty,-k)=z(-\infty,-k)=y(-\infty,-k)\}.\] Then Lemma 8.6 gives \(X_{n+l,m-n,k}\subseteq\tilde{X}\). So it suffices to show that \(\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\) is open in the relative topology inherited from \(\mathcal{S}_{k}^{s}([x])\times\{m-n\}\times\mathcal{S}_{k}^{s}([x])\). Let \(c\) be the number from Lemma 4.4 applied to the set \(F=E^{0}\cup\mathcal{N}\cup\mathcal{N}^{2}\). Let \(L=\max\{(m+l+1+c),k,(n+l+1+c)\}\). Fix \((u,m,g,n,v)\in\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+})\). Let \(W=q(\mathcal{Z}(u(-L,-1)u(1,L)))\times\{m-n\}\times q(\mathcal{Z}(v(-L,-1)v(l, L)))\). Then \(W\) is open in \(\mathcal{S}_{k}^{s}([x])\times\{m-n\}\times\mathcal{S}_{k}^{s}([x])\). We show that \(W\cap\tilde{X}\subseteq\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\). Fix \(a,b\in E_{x}^{\mathbb{Z}}\), \(m,n\in\mathbb{N}\) and \(h\in F\) such that \(h\cdot a(m+l+1,\infty)=b(n+l+1,\infty)\) and \(x(-\infty,-k)=a(-\infty,-k)=b(-\infty,-k)\). Suppose that \(([a],m-n,[b])\in W\cap\tilde{X}\). For \(L\geq m,n,k\), the paths \(\mu_{-}\mu_{+},\nu_{-}\nu_{+}\) are subwords of \(u(-L,-1)u(1,L)\), \(v(-L,-1)v(1,L)\), respectively. Therefore, since \(q:E_{x}^{\mathbb{Z}}\mapsto\mathcal{S}^{s}([x])\) is bijective (Lemma 8.8), that \(([a],m-n,[b])\in W\) implies that \(a\in\mathcal{Z}(\mu_{-}\mu_{+})\) and \(b\in\mathcal{Z}(\nu_{-}\nu_{+})\). It only remains to show that \(g\cdot a(m+1,\infty)=b(n+1,\infty)\). By the choice of \(L\), we have \(a(m+l+1,m+l+c)=u(m+l+1,m+l+c)=:p\) and \(g|_{u(m+1,m+l+1)}\cdot p=h\cdot p\). By the choice of \(c\), we have \(g|_{u(m+1,m+l+c)}=h|_{a(m+l+1,m+l+c)}\). 
Therefore, \[g\cdot a(m+1,\infty) =v(n+1,n+l+c)(g|_{u(m+1,m+l+c)})\cdot a(m+l+1+c,\infty)\] \[=b(n+1,n+l+c)(h|_{a(m+l+1,m+l+c)})\cdot a(m+l+1+c,\infty)\] \[=b(n+1,\infty).\] We have shown that the open neighbourhood \(W\cap\tilde{X}\) of \(\theta([u,m,g,n,v])\) is contained in \(\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\). Therefore, \(\theta(\mathcal{Z}(\mu_{-}\mu_{+},g,\nu_{-}\nu_{+}))\) is open, and hence \(\theta\) is an open map. Now, we show that \(\theta\) is a bijection. We first show injectivity. Since \(q:E_{x}^{\mathbb{Z}}\mapsto\mathcal{S}^{s}([x])\) is a bijection, \(\theta([z,m,g,n,y])=\theta([z^{\prime},m^{\prime},g^{\prime},n^{\prime},y^{ \prime}])\) implies \(z=z^{\prime}\) and \(y=y^{\prime}\). Since \(m-n=m^{\prime}-n^{\prime}=l\), for \(k\) large enough, we have \(g|_{y(n,k)}\cdot y(k+1,\infty)=g^{\prime}|_{y(n^{\prime},k)}\cdot y(k+1,\infty)\). By regularity, \(g|_{y(n,K)}=g^{\prime}|_{y(n,K)}\) for some \(K\geq k\). Therefore, \([z,m,g,n,y]=[z^{\prime},m^{\prime},g^{\prime},n^{\prime},y^{\prime}]\). Finally, we show surjectivity. Fix \(([z],l,[y])\in G^{u}_{[x]}\rtimes\mathbb{Z}\). By Lemma 8.6, there exist \(M\in\mathbb{N}\) and \(g\in F\) such that \(M+l\geq 0\) and \(g\cdot y(M+1,\infty)=z(M+l+1,\infty)\). Hence \([z,M+l,g,M,y]\in\mathcal{G}^{\pi_{x}}_{G,E}\) satisfies \(\theta([z,M+l,g,M,y])=([z],l,[y])\). **Corollary 8.13**.: _Let \(E\) be a finite strongly connected directed graph, and let \((G,E)\) be a contracting, regular self-similar groupoid action. Then the \(C^{*}\)-algebra \(\mathcal{O}(G,E)\) of Section 6 is Morita equivalent to the unstable Ruelle algebra \(C^{*}(G^{u}\rtimes\mathbb{Z})\) of the Smale space \((\mathcal{S},\tilde{\tau})\) of Corollary 5.5._ Proof.: Fix \(x\in E^{\mathbb{Z}}\) such that \([x]\) is periodic under the action of \(\tilde{\tau}\). Since \(E\) is assumed strongly connected, Corollary 5.5 implies \((\mathcal{S},\tilde{\tau})\) is irreducible. Thus, as shown above Lemma 8.10, \(G^{u}\rtimes\mathbb{Z}\) is groupoid equivalent to \(G^{u}_{[x]}\rtimes\mathbb{Z}\). Hence \(C^{*}(G^{u}\rtimes\mathbb{Z})\) is Morita equivalent to \(C^{*}(G^{u}_{[x]}\rtimes\mathbb{Z})\). Theorem 8.12 implies that \(C^{*}(G^{u}_{[x]}\rtimes\mathbb{Z})\cong C^{*}\big{(}\mathcal{G}^{\pi_{x}}_{G,E})\). Since \(E\) is strongly connected, \(\pi_{x}\) is an open surjection, so [8, Proposition 3.10] implies that \(C^{*}(\mathcal{G}_{G,E})\) is Morita equivalent to \(C^{*}(\mathcal{G}^{\pi_{x}}_{G,E})\), and Proposition 6.5 shows that \(\mathcal{O}(G,E)\cong C^{*}(\mathcal{G}_{G,E})\). Stringing these isomorphisms and Morita equivalences together gives the desired Morita equivalence \(C^{*}(G^{u}\rtimes\mathbb{Z})\sim_{\mathrm{Me}}\mathcal{O}(G,E)\). _Remark 8.14_.: If \(E\) is not strongly connected, then \(\pi_{x}\) is not necessarily surjective, and \(\mathcal{G}^{\pi_{x}}_{G,E}\) is only groupoid equivalent to the reduction \(\mathcal{G}_{G,E}|_{\pi_{x}(E^{\mathbb{Z}}_{x})}\) of \(\mathcal{G}_{G,E}\) to the image of \(\pi_{x}\), which is open. However, if \(\mathcal{G}_{G,E}\) is minimal, then \(\mathcal{G}_{G,E}|_{\pi_{x}(E^{\mathbb{Z}}_{x})}\) is still groupoid equivalent to \(\mathcal{G}_{G,E}\). We can now prove our main theorem. Proof of Theorem 8.1.: Corollary 5.5 shows that \((\mathcal{S},\tilde{\tau})\) is an irreducible Smale space. 
So Theorem 1.1 of [12] shows that there are classes \(\delta\) and \(\Delta\) such that \(\delta\widehat{\otimes}_{C^{*}(G^{u}\rtimes\mathbb{Z})}\Delta=\operatorname{id}_{\mathbb{K}(C^{*}(G^{s}\rtimes\mathbb{Z}),C^{*}(G^{s}\rtimes\mathbb{Z}))}\) and \(\delta\widehat{\otimes}_{C^{*}(G^{s}\rtimes\mathbb{Z})}\Delta=-\operatorname{id}_{\mathbb{K}(C^{*}(G^{s}\rtimes\mathbb{Z}),C^{*}(G^{s}\rtimes\mathbb{Z}))}\). Corollary 8.5 gives a Morita equivalence bimodule between \(\widehat{\mathcal{O}}(G,E)\) and \(C^{*}(G^{s}\rtimes\mathbb{Z})\), which induces a \(KK\)-equivalence \(\hat{\alpha}\in KK^{0}\big{(}\widehat{\mathcal{O}}(G,E),C^{*}(G^{s}\rtimes\mathbb{Z})\big{)}\). Likewise, Corollary 8.13 gives a \(KK\)-equivalence \(\alpha\) in \(KK\big{(}\mathcal{O}(G,E),C^{*}(G^{u}\rtimes\mathbb{Z})\big{)}\). So the Kasparov products \[\beta:=(\hat{\alpha}\otimes\alpha)\,\widehat{\otimes}\,\delta\,\widehat{\otimes}\,(\hat{\alpha}\otimes\alpha)^{-1}\quad\text{ and }\quad\mu:=(\hat{\alpha}\otimes\alpha)\,\widehat{\otimes}\,\Delta\,\widehat{\otimes}\,(\hat{\alpha}\otimes\alpha)^{-1}\] implement the desired duality.
2305.17783
Visual Affordance Prediction for Guiding Robot Exploration
Motivated by the intuitive understanding humans have about the space of possible interactions, and the ease with which they can generalize this understanding to previously unseen scenes, we develop an approach for learning visual affordances for guiding robot exploration. Given an input image of a scene, we infer a distribution over plausible future states that can be achieved via interactions with it. We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE and show that these models can be trained using large-scale and diverse passive data, and that the learned models exhibit compositional generalization to diverse objects beyond the training distribution. We show how the trained affordance model can be used for guiding exploration by acting as a goal-sampling distribution, during visual goal-conditioned policy learning in robotic manipulation.
Homanga Bharadhwaj, Abhinav Gupta, Shubham Tulsiani
2023-05-28T17:53:09Z
http://arxiv.org/abs/2305.17783v1
# Visual Affordance Prediction for Guiding Robot Exploration ###### Abstract Motivated by the intuitive understanding humans have about the space of possible interactions, and the ease with which they can generalize this understanding to previously unseen scenes, we develop an approach for learning visual affordances for guiding robot exploration. Given an input image of a scene, we infer a distribution over plausible future states that can be achieved via interactions with it. We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE and show that these models can be trained using large-scale and diverse passive data, and that the learned models exhibit compositional generalization to diverse objects beyond the training distribution. We show how the trained affordance model can be used for guiding exploration by acting as a goal-sampling distribution, during visual goal-conditioned policy learning in robotic manipulation. ## I Introduction Consider the images \(o_{g}\) in Figure 1. We humans can effortlessly understand the depicted scenes e.g. a bottle lying on the table in the top image, or a teddy lying next to a pot. More importantly, we can infer not only what _is_, but also what _can be_ e.g. one can imagine the bottle being placed at a different location on the table, or turned to lie horizontally, or perhaps taking the teddy to place it inside the pan. We instinctively make such judgements across a myriad of everyday scenarios, understanding that an open cabinet can be closed, or that an egg can be broken, or spilled liquid wiped away. In this work, our goal is to build a computational system with similar capabilities, which can be used directly as a goal-sampling distribution for robot exploration. Given just a single image depicting generic (possibly unseen) objects, we wish to predict possible configurations that may occur as a result of a human (or a robot) interacting in the scene. We are inspired by Gibson's seminal work on _affordances_ which argues for developing intelligent agents with an intuitive understanding of possible interactions they can have with their environment [13]. Initial attempts in the vision community formalized this as a pixelwise labeling task (e.g.'sittable' surface), but these do not explicitly model the actions or their effects [6, 42]. An alternative approach has been to use geometric models [15, 11] to predict human-centric affordances. Towards a richer parametrization, recent approaches have pursued 'visuo-motor' affordances where the space of possible interactions is modeled via predicting possible low-level actions that an agent can execute to affect its environment [32, 36, 28, 43]. While such visuo-motor affordances can be directly translated to agent behavior, we argue that the requirement of inferring precise actions associated can be restrictive e.g. babies may understand that a fruit can be cut even if they can't do it themselves, as perhaps an adult may know that a bike's tire can be removed even if not sure how to precisely do so. In addition, training models for such visuo-motor affordances requires annotated data, which can be restrictive to scale for diverse settings. In our work, we therefore pursue 'visual affordances', where instead of modeling the action trajectories, we aim to model what their _effects_ can be. 
More concretely, given a single input image, we pursue the task of conditionally generating diverse and plausible images of the same scene that can be achieved as a result of an agent's interaction. This task of inferring visual affordances is an increasingly common one in the robot learning community, where an understanding of interesting plausible actions can help guide exploration. However, the typical methods only learn domain-specific affordances -- relying on active interaction in a specific (lab) setting, they learn models for interactions with the particular objects in the particular environment [32, 36, 22, 31]. In addition to their limited generalizability, these active interaction-based methods also do not learn more complex behaviors (e.g. stacking blocks) as random interactions from an agent are unlikely to lead to such interesting transitions. Our key insight in this work is that instead of only relying on active interactions with limited variation, such visual affordances can be learned from large-scale diverse passive data. Just as we humans can learn from watching others (e.g. instructional videos), our system learns visual affordances using interaction videos depicting generic human and robot interactions across varied settings. Using this trained affordance model, when a robot is placed in a certain scene, we can generate goals corresponding to plausible manipulations of the objects in the scene, and learn a goal-conditioned policy through self-supervised exploration. Specifically, we consider the setting of exploration in goal-conditioned policy learning and use the affordance model for sampling goals that the policy tries to reach during training. The affordance model consists of two stages: 1) we learn discrete latent embeddings from images by training a VQ-VAE [41] on the entire dataset 2) we learn conditional prediction of latent codes through an auto-regressive generation procedure using a Transformer architecture [5]. Because of the diversity seen in training, the affordance model is able to generalize to new environments and unseen objects, and predict interesting plausible configurations in settings never seen in training. Due to the diversity of sampled goals, the policy is incentivized to explore broadly and learn interesting behaviors like stacking, compared to curiosity-based exploration strategies. ## II Related Works **Actionable Information with Visual Observations.** Prior work has explored the problem of learning how to interact with objects in the scene from visual observations, in different settings like dextrous manipulation [27] and mobile navigation [30]. These approaches typically use either passive observations (for example human videos) [29, 2, 11] or active interactions with a robotic agent [35]. Some prior works also leverage robot simulators to randomize the generation of different types of objects in order to learn affordances for tasks like grasping [21] and pushing [45] objects. The unifying idea in all these approaches is learning _how_ to interact with objects in the scene. They tackle a problem slightly orthogonal to ours, where by learning affordances, we refer to visualizing the _result_ of interactions, and not specifically how to obtain those interactions. **Generative Modeling.** There have been significant recent advances in deep generative models, with high-quality realistic images being produced by StyleGAN [19, 20] and BigGAN [3, 7] among others that use transformers [41, 40, 33, 5, 9]. 
While most of these models tackle the problem of unconditional generation, a closely related problem to our setting is that of conditional image generation. This is typically defined as the task of generating new images from a dataset conditional on certain class attributes [39, 10]. Video generation methods [24] using transformers perform per-timestep prediction, whereas our affordance model samples (long-horizon) goal images that we show are useful for robot exploration. Some style transfer approaches [18, 46] also have a similar conditional generation formulation, where the conditioning is done on a source image as opposed to a class attribute. Our affordance model is an approach for better conditioning on images, where we want the generations to capture configurational changes in the attributes of objects in the source image. **Robot Exploration.** Generative models have been used for goal-sampling in visual robot learning [32, 31, 36, 26]. Learning a good goal-sampling strategy for diverse goals and training goal-conditioned policies to try and reach sampled goals is a paradigm for exploration without the use of explicit heuristics like curiosity [34, 38] or reduction of uncertainty [4, 8]. Most prior work in this vein has learned goal-sampling models only in the experiment domain, either by collecting large robot interaction datasets, or by training the goal-sampler online jointly with policy learning [32, 26, 36, 43]. Collecting large datasets in the lab is expensive, and a goal-sampling model trained solely on a specific lab setting is unlikely to generalize to other settings without requiring re-training. In contrast to these approaches, we learn a model for generating visual affordances from various internet data - _not_ data collected in the experimenter's lab. We show how the learned affordance model can be used to guide exploration for goal-conditioned policy learning in environments with unseen objects.

## III Approach

We develop an approach for learning visual affordances from passive data that can be used as goals for guiding robot exploration. We use the term affordance to mean the set of possible interactions in a particular scene. Given an image of a scene, we want the affordance model to generate a new image with different configurations of the same objects in the scene. This model can be used for goal-directed exploration, such that in a new scene, the agent can sample a goal conditioned on the current image, and execute actions to try and reach the sampled goal. Since collecting data in-lab is expensive due to a lot of manual effort, we make use of the rich diversity of passive data existing on the internet to learn the affordance model, and use it directly for goal-sampling. The affordance model is two-staged: we first learn a VQ-VAE [41] to discretize the space of continuous images, and then learn a transformer-based auto-regressive model that (in latent space) predicts a possible goal given the input. We then project the generated latent goal back to the image-space with the decoder of the VQ-VAE. This framework lets us both: a) generate diverse goals, and b) produce realistic and high-resolution images.

Fig. 1: We develop an approach for goal-directed robot exploration by learning a goal-sampling distribution followed by self behavior cloning with exploration trajectories. The goal-sampling distribution is a visual affordance model trained from large passive datasets of image pairs to predict a distribution over goal images given an initial image of the scene. We show that this affordance model enables goal-directed robot exploration, for learning diverse behaviors like pushing, grasping, and stacking in a table-top manipulation setting.

For goal-conditioned policy learning, self-supervised approaches typically adopt a learning pipeline consisting of two (often inter-leaved) components: 1) exploration to collect data of interactions with the environment, and 2) policy learning using the interaction trajectories. We show that it is possible to drive better exploration by using the trained affordance model as a conditional goal-sampling distribution that samples diverse realistic goals in an unseen scene. Concretely, we learn a goal-conditioned policy through exploration, where the goals are sampled from a learned affordance model at the beginning of the trajectory. In order to learn such a policy through self-supervised exploration, the sampled goals must be _plausible_ and _diverse_, corresponding to different arrangements of the objects in the scene. Depending on the objects in the scene, the different arrangements could be putting a lid on a pot, pushing a cup across the table, grasping an object and rotating it, etc. We show that such behaviors emerge from the trained affordance model, and describe how we can learn a goal-conditioned policy through hindsight re-labeling of exploration trajectories. In the next sub-sections, we first describe the architecture and training details of the affordance model, and then discuss how we use the model as a goal-sampler for guiding robot exploration in manipulation tasks.

### _Learning to Imagine Goals_

Instead of modeling in the image-space directly, in order to reduce spatial redundancies and implicitly abstract out interesting objects in the scene, we first learn a down-sampled encoding of high resolution images. To learn these discrete latent embeddings, we employ a VQ-VAE to learn lossy encoder and decoder models that can transform latents to generate realistic image samples. Having a discrete latent space allows flexibility in modeling features (such as objects in the scene) that span multiple pixels in the image, while not encoding local imperceptible differences that are not significant.

**Training details of the VQ-VAE.** Let the images be \(o\in\mathbb{R}^{H\times W\times 3}\). The VQ-VAE model consists of an encoder \(E(o)\in\mathbb{R}^{h\times w\times L}\), a codebook \(C=\{e_{k}\}_{k=1}^{K}\) with elements \(e_{i}\) of size \(L\) and a decoder \(D(e)\in\mathbb{R}^{H\times W\times 3}\). The quantization happens in the channel space, where all the \(h\times w\) vectors of size \(L\) are replaced by their nearest codebook vectors \(e_{i}\), and the resulting quantized latent code of dimension \(h\times w\times L\) is fed to the decoder. The training objective for this model, following prior work [41], is as follows: \[\mathcal{L}_{\text{vqvae}}=\mathbb{E}_{o\sim D}\left[||o-D(e)||_{2}+||E(o)-\text{sg}[e]||_{2}\right]\] Here, sg refers to the stop-gradient operator, which is defined as the identity during the forward computation. After training the VQ-VAE, corresponding to each image \(o_{c}\), we obtain a downsampled encoding \(z_{c}\). We then learn an auto-regressive prior using a Transformer architecture to model \(p(z_{g}|z_{c})\). Finally, with the decoder of the VQ-VAE we obtain a corresponding image \(o_{g}\) from the latent \(z_{g}\).
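As a concrete reference, the quantization step and the objective above can be sketched in a few lines of PyTorch. This is not the authors' implementation: the codebook size, latent width, and the use of mean-squared errors in place of the stated norms are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour lookup into a learned codebook C = {e_k} (illustrative sizes)."""
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z_e):
        # z_e: (B, h, w, L) continuous encoder output E(o)
        flat = z_e.reshape(-1, z_e.shape[-1])                 # (B*h*w, L)
        dists = torch.cdist(flat, self.codebook.weight)       # distance to every e_k
        idx = dists.argmin(dim=1)                             # nearest codebook entry
        z_q = self.codebook(idx).view_as(z_e)                 # quantized latent e
        z_q_st = z_e + (z_q - z_e).detach()                   # straight-through gradients
        return z_q_st, z_q, idx.view(z_e.shape[:-1])

def vqvae_step(encoder, decoder, quantizer, o):
    """One training step computing the two terms of L_vqvae."""
    z_e = encoder(o)
    z_q_st, z_q, codes = quantizer(z_e)
    recon = F.mse_loss(decoder(z_q_st), o)        # reconstruction term: o vs D(e)
    commit = F.mse_loss(z_e, z_q.detach())        # commitment term: E(o) vs sg[e]
    return recon + commit, codes                  # codes are the discrete z used by the prior
```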
Learning this conditional generation model in the latent space with a powerful Transformer architecture enables us to achieve compositional generalization, generating plausible goals in scenes where there are multiple objects. The stochasticity of the model helps ensure that the generated samples are diverse.

**Auto-regressive goal sampling.** In Fig. 3, we show a forward pass through our affordance model that generates a goal image \(o_{g}\), given a conditioning image \(o_{c}\). The initial image \(o_{c}\) is first encoded by the VQ-VAE to a discrete latent code \(z_{c}\). The Transformer then generates the latent code corresponding to the goal image, \(z_{g}\sim p(z_{g}|z_{c})\). The generation process happens pixel by pixel in raster-scan order and channel by channel for one pixel. Hence, we can denote the auto-regressive generation process as: \[p(z_{g})=\prod_{i}p\left(z_{(g,i)}|z_{(g,<i)},z_{c}\right)\] This equation shows that the prediction of the \(i^{\text{th}}\) pixel in the latent code \(z_{(g,i)}\) is conditioned on the already predicted pixels \(z_{(g,<i)}\) and the initial latent code \(z_{c}\).

Fig. 2: Affordance-driven exploration and policy learning through hindsight goal re-labeling. Given an initial configuration, we sample a goal with the affordance model, execute rollouts with the current policy, and store the transitions in the replay buffer. For updating the policy, we sample transitions from the replay buffer and re-label goals to be the final frames in the corresponding trajectory. The process is described in detail in section III-B. On the right we show two different views of the robot workspace, with a Franka Panda arm and an overhead Realsense camera.

**Learning objective.** We instantiate the affordance model as \(p_{\psi}(o_{g}|o_{c})\) and aim to maximize \(\mathbb{E}_{(o_{g},o_{c})\sim\mathcal{D}}[\log p_{\psi}(o_{g}|o_{c})]\). Given image pairs \((o_{c},o_{g})\sim\mathcal{D}\), we transform each image into its respective quantized latent code \(z_{c}\) and \(z_{g}\). The autoregressive transformer model is used to generate \(\hat{z}_{g}\) conditioned on \(z_{c}\), i.e. \(\hat{z}_{g}\sim p(z_{g}|z_{c})\). We train the model to maximize \(\mathbb{E}_{(z_{g},z_{c})\sim\mathcal{D}}[\log p(z_{g}|z_{c})]\).

**Training Details of the Transformer.** For the above generation, we use an architecture similar to Image-GPT [5]. It consists of an encoder-decoder model with masked convolutions and self-attention layers in the decoder. The auto-regressive property is preserved by padding the input, replacing the padded pixel values with the generated values, and repeating the process recursively. Finally, all the values in \(z_{g}\) correspond to valid generations that become more and more accurate as training progresses, and so we eventually have a model of the form \(p(z_{g}|z_{c})\). We then feed \(z_{g}\) to the VQ-VAE decoder, obtaining the decoded image frame \(o_{g}\). This completes the structure of the affordance model \(p_{\psi}(o_{g}|o_{c})\). After training the model, given a new initial image \(\hat{o}_{c}\), we can sample a plausible goal \(o_{g}\sim p_{\psi}(o_{g}|\hat{o}_{c})\). We empirically analyze this model in the next section, and show results of goal generations in unseen scenes.

### _Affordance-driven Robot Exploration_

We use the affordance model trained solely on passive data for goal-conditioning and do not fine-tune on any lab-specific data.
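As a concrete illustration of how a goal is sampled from this model, the factorization above corresponds to the following token-by-token loop. This is a minimal PyTorch sketch; the `prior` interface, the temperature parameter, and all names are assumptions for illustration, not the paper's API.

```python
import torch

@torch.no_grad()
def sample_goal_codes(prior, z_c_idx, num_positions, temperature=1.0):
    """Sample z_g autoregressively in raster-scan order, conditioned on z_c.

    prior(z_c_idx, z_g_so_far) is assumed to return (B, K) logits over the
    codebook for the next position; z_c_idx is the flattened code of o_c.
    """
    B = z_c_idx.shape[0]
    z_g_idx = torch.empty(B, 0, dtype=torch.long, device=z_c_idx.device)
    for _ in range(num_positions):
        logits = prior(z_c_idx, z_g_idx)                 # p(z_{g,i} | z_{g,<i}, z_c)
        probs = torch.softmax(logits / temperature, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)    # stochastic draw -> diverse goals
        z_g_idx = torch.cat([z_g_idx, nxt], dim=1)
    return z_g_idx  # decode with the VQ-VAE decoder to obtain the goal image o_g
```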
Because we trained the model on diverse passive data, we observe good generalization for affordance prediction in the robot learning setting. We obtain exploration trajectories with interesting behaviors, and are eventually able to learn better goal-conditioned policies compared to alternate exploration strategies like curiosity [34] that learn from scratch in the environment, or those that sample goals from other sampling distributions like a VAE-prior [32]. Given the affordance model, we train a policy that chooses actions for executing a particular task specified by a goal image. We refer to the observation the robot sees at time-step \(t\) of an episode as \(o_{t}\), and denote the goal image as \(o_{g}\). We denote the goal-conditioned policy as \(\pi(a_{t}|o_{t},o_{g})\). We learn this policy by simple behavior cloning with goal-relabeling. After sampling a goal \(o_{g}\sim p_{\psi}(o_{g}|o_{1})\) and executing a trajectory \((o_{1},a_{1},o_{2},a_{2},...,o_{T};o_{g})\) to reach the goal, even if the final configuration \(o_{T}\) is not similar to the goal configuration, the executed trajectory provides useful information for reaching the configuration the system ended up in. We describe the overall framework in Fig. 2. During deployment, test goal images are sampled from some distribution \(o_{g}\sim p(\mathcal{G})\), and the robot must interact with the objects in the scene to reach the goal configuration. **Training the goal-conditioned policy.** Given observation \(o_{1}\) corresponding to the initial scene, we sample a goal from the affordance model \(o_{g}\sim p_{\psi}(o_{g}|o_{i})\) and execute a trajectory \((o_{1},a_{1},o_{2},a_{2},...,o_{T};o_{g})\), which we store in the replay buffer \(\mathcal{R}\). In Fig. 2 we see examples of goals sampled in a scene. The affordance model enables sampling diverse goals and encourages the policy to try and reach them, thereby ensuring interesting interactions. Given the data of interactions in the replay buffer \(\mathcal{R}\), we perform goal relabeling and use tuples \((o_{t},a_{t},o_{t+1},o_{g}=o_{T})\) to update the policy \(\pi(a_{t}|o_{t},o_{g})\). This corresponds to a variation of hindsight-experience replay, that has been shown to be useful in prior works [1, 12, 25]. The rationale for this is that during exploration, although the incompletely trained policy might not reach the sampled goal, the final state it reaches is still a _potential_ goal, and the executed sequence of actions provides guidance on reaching this goal. The policy update method is to do simple behavior cloning that maximizes the the likelihood \(\mathbb{E}_{(o_{t},a_{t},o_{t+1},o_{g}=o_{T})\sim\mathcal{R}}[\log\pi(a_{t}|o_ {t},o_{g})]\). Note that we do not need samples to be on-policy for this update and so we can interleave exploration of a few trajectories with a policy update phase where we randomly sample tuples from different trajectories in the replay buffer. ## IV Experiments Through experiments, we aim to understand the following research questions: * How effective is the affordance model in generating diverse scenes with plausible object manipulations? * How effective are the generated plausible and diverse affordances in guiding robot exploration? ### _Setting_ **Data.** For training the affordance model, we use data from three different internet sources: somethingsomethingv2 [14], Berkeley [22], and JHU CoStar [17]. We did not collect any additional data ourselves, and use only pairwise frames extracted from these datasets for training. 
We use the trained visual affordance model as-is for robot experiments and do not do any fine-tuning in the lab.

**Baselines.** We compare our model against two relevant baselines, a Conditional VAE (CVAE) [39], and a Conditional GAN (Pix2Pix) [18] for conditional generation and robot exploration, and in addition with a curiosity baseline [34] for robot exploration. We train these models on the same data as our model, and perform qualitative evaluations as well as quantitative comparisons through several metrics described in the next sub-section. After analyzing the visual generations of the baselines in section IV-B, we evaluate policy learning behaviors in section IV-C.

Fig. 3: Illustration of a forward pass through the affordance model. The initial scene \(o_{c}\) is first encoded by a VQ-VAE encoder, and converted to a discretized latent code by swapping nearest neighbor encodings from a codebook \(C\). The resulting discrete latent code \(z_{c}\) is passed through the Transformer. The autoregressive generation yields a latent code \(z_{g}\) which is decoded by the VQ-VAE decoder to a plausible goal image \(o_{g}\).

### _Evaluating Predicted Affordances_

As highlighted by prior work, evaluating the quality of synthesized images is challenging [18, 46]. Typical metrics used in assessing image reconstruction (like the pixel mean-squared error) do not translate well to assessing the quality of novel image generations. Further, these metrics will not be useful for understanding how diverse and plausible the generated affordances are for novel scenes. Hence we consider a metric based on _human perceptual evaluation_. Motivated by evaluation protocols in prior works [16, 18, 46, 44], we conduct a perceptual study with Amazon Mechanical Turk (MTurk). We run the MTurk perceptual study following the protocol from prior work [18, 46], where images are shown on the screen for 3 seconds and workers are asked to guess certain properties of the images. In our setting, we show three images per screen and ask the workers to choose one among the two rightmost images. We mention in the instructions that the task is to guess which of the two images on the right corresponds to plausible manipulations of objects in the left image. We randomize the ordering of images such that one image is from our model and the other is from a baseline, and compare the average number of times workers choose ours over the baselines.

**Qualitative results.** In Fig. 4 we perform a qualitative analysis of the proposed affordance model with the baselines CVAE and Pix2Pix. For each of the three initial scenes \(o_{c}\) in the first column, we generate three affordances with our model, three with the CVAE model, and one with the Pix2Pix model (since the model is not stochastic). The initial scenes are unseen during training. We can see that the affordances generated by our model are diverse and involve plausible manipulations of the different objects in the scene. We observe the emergence of interesting affordances like stacking of blocks and re-orientation of the bottle. For the baselines, in contrast, certain objects are sometimes omitted from the scene or generated non-realistically, and the samples do not involve diverse manipulations of the objects. Given that the objects have never appeared in the training data before, this provides evidence that our method can generalize to new scenes and compose affordances for different objects.

**Quantitative results.** In Table I, we report results from the MTurk perceptual study.
40 workers participated in our study. In each trial, workers saw an image on the left, and were asked to choose which of the two images on the right corresponded to plausible manipulations of objects on the left image. The numbers show the \(\%\) of trials where workers chose a sample from our model as opposed to that from the baselines, averaged over all the workers. Since the numbers are \(>50\%\), we see that workers preferred samples from our model compared to the baselines. This confirms the plausibility of our generated affordances.

\begin{table} \begin{tabular}{l c c} \hline \hline & **Pix2Pix** & **CVAE** \\ \hline **Our** & \(69.8\pm 11.9\) & \(75.5\pm 10.8\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Pairwise comparison of results for the MTurk perceptual study. We tabulate the \(\%\) of trials where workers chose a sample from our model as opposed to that from the baselines, averaged over 40 workers who participated in our study. Higher is better. A number \(>50\%\) shows that workers chose our model's sample more often than the baselines'.

Fig. 4: Qualitative analysis of the proposed affordance model with the baselines CVAE and Pix2Pix. Given initial conditioning frame \(o_{c}\) that corresponds to an unseen configuration during training, we see that the sampled affordances \(o_{g}\) from our model are diverse and correspond to plausible interactions in the scene. For the baselines, we see that the generated frames are not diverse and sometimes omit certain objects from the scene or introduce different artifacts.

Fig. 5: We show visualizations of exploration during training, corresponding to different episodes. On the right, we show the respective goal sampled from the affordance model. Towards the end, we see the evolution of interesting behaviors like grasping and the policy leading to behaviors that reach the goal image.

### _Benchmarking Affordance-Guided Exploration_

In this section, we empirically analyze the framework in terms of generating diverse affordances that serve as goals for aiding exploration in robot policy learning. We consider a goal-directed policy learning setting, where the robot needs to set its own goals, and explore the environment to try and reach those goals. There is no notion of _tasks_ during the exploration phase, but for evaluation, the experimenters provide goal images corresponding to stacking that can be performed in the scene. The overall aim is to evaluate goal-reaching behaviors that can be learned by training a goal-conditioned policy described in section III-B.

**Setup.** Our setup is shown in Fig. 2, where we use a Franka Emika Panda robot arm, with an overhead Intel Realsense camera for observations. The robot is controlled through end-effector (EE) control, and the action-space is four dimensional - (x,y,z) position of the EE and opening/closing of the gripper. We place certain objects in the scene, and let the robot interact with them. We choose diverse everyday objects like teddy bears, cloth, ketchup bottle, blocks etc. such that different behaviors like pushing, pick and place, stacking are plausible to emerge through interaction. We reset the scene after \(T\) steps of interaction while introducing new objects and/or changing the position of existing objects. As is standard in evaluating goal-conditioned policies, after training, we measure % success in reaching a set of test goal-images.
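For reference, the training loop behind these evaluations — affordance-based goal sampling, rollout, hindsight relabeling and behavior cloning, as described in Section III-B — can be summarized by the sketch below. The environment, policy and affordance interfaces are illustrative assumptions, and the MSE term stands in for maximizing the action log-likelihood.

```python
import random
import torch
import torch.nn.functional as F

def explore_and_relabel(env, policy, affordance, replay, horizon):
    """One exploration episode: sample o_g ~ p_psi(.|o_1), roll out the current
    policy, then relabel the goal as the final frame o_T (hindsight relabeling)."""
    o = env.reset()                      # env/policy/affordance are assumed interfaces
    o_g = affordance.sample_goal(o)
    traj = []
    for _ in range(horizon):
        a = policy.act(o, o_g)
        traj.append((o, a))              # store (o_t, a_t)
        o = env.step(a)
    o_T = o                              # final frame becomes the relabeled goal
    replay.extend((o_t, a_t, o_T) for o_t, a_t in traj)

def bc_update(policy, optimizer, replay, batch_size=64):
    """Behavior cloning on relabeled tuples (o_t, a_t, o_g = o_T) from the buffer."""
    obs, act, goal = map(torch.stack, zip(*random.sample(replay, batch_size)))
    loss = F.mse_loss(policy(obs, goal), act)   # surrogate for log pi(a_t | o_t, o_g)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```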
For comparison, in addition to goal-sampling from CVAE and Pix2Pix baselines, we consider a curiosity-based exploration baseline [34] for data collection, and follow the same training protocol as our method to train a goal-conditioned policy with the exploration data. **Results.** We visualize the progression of exploration during training in Fig. 5, for different episodes corresponding to different sampled goals. Towards the end, we see the evolution of interesting behaviors like grasping and the policy leading to behaviors that reach the goal image. This illustrates that the proposed affordance-guided exploration described in section III-B is helpful in training a policy with emergent manipulation capabilities. After training the policy through affordance-driven exploration, for evaluation we consider a set of goal images, and evaluate the fraction success of the policy in reaching goal configurations. We emphasize that only a _single_ goal-conditioned policy is trained and evaluated, but for ease of analysis we show results in Table. II split across 3 different type of tasks, with the same trained policy. In Table. II, we show comparisons of the success rates of the two approaches, across three types of tasks - pushing, pick and place, and stacking. For each task, we consider two goal-images, and tabulate % success across 10 trials. In Fig. 6 we visualize some evaluation runs for our method and the curiosity baseline. Please refer to the supplementary video for additional visualizations. We can see an average of 25% higher success rates compared to the baselines, demonstrating the efficacy of the affordance model for goal-directed exploration. We observe significantly lower success rates for the CVAE and Pix2Pix baselines, which we believe is because the images generated either do not correspond to realistic reachable goals for the robot or do not change the configuration at all (see examples in Fig. 4), and thus are unable to help in guiding exploration for policy learning. This confirms the generalization of the proposed affordance model trained on diverse passive data, for aiding in goal-directed exploration. ## V Discussion In this paper we developed a framework for learning visual affordances from passive data such that the learned model can be directly used for goal-directed exploration in unseen scenes. Given an image of an initial scene, our affordance model could generate diverse images corresponding to plausible interactions. Further, its ability to generalize allowed us to directly leverage this affordance model to drive exploration in robot learning. We believe our approach is indicative of the broader potential of large-scale and diverse visual data in developing intelligent agents that can act in generic environments. However, ours is but a first step towards this and we believe several exciting questions remain to be addressed. First, our notion of 'visual affordances', while capturing high-level changes, did not model the actions required. In extending this approach to capture 'visuomotor affordances', one must overcome the challenges of varying morphology variation across humans and hands, while also developing action representations beyond low-level trajectories. It would also be interesting to generate intermediate 'checkpoints' to continuously depict the transition, but these may not be directly useful for robots due to the presence of human hands. 
Broadly, we are hopeful that extending on our work, there will be future approaches that can leverage increasingly diverse and multi-modal passive datasets for generalization in robot learning, alleviating the necessity of in-domain data collection that has greatly bottle-necked the field. \begin{table} \begin{tabular}{c c c c} \hline & **Pushing** & **Pick and Place** & **Stacking** \\ \hline **Curiosity** & 50\% & 40\% & 30\% \\ **CVAE** & 30\% & 20\% & 10\% \\ **Pix2Pix** & 30\% & 10\% & 10\% \\ **Our** & 70\% & 60\% & 60\% \\ \hline \end{tabular} \end{table} TABLE II: We show results of tests for various robot manipulation tasks. For each task, we have two test goal configurations, and report success rate over 10 trials each. Fig. 6: We show examples of two evaluation runs of our method and the curiosity baseline corresponding to two test goals in stacking and pick and place respectively. On the right are the goal images that the policy is conditioned on for evaluation, and the sequence of observations \(o_{1},...,o_{T}\) corresponds to the executed trajectory.
2302.13542
Continuous descriptor-based control for deep audio synthesis
Despite significant advances in deep models for music generation, the use of these techniques remains restricted to expert users. Before being democratized among musicians, generative models must first provide expressive control over the generation, as this conditions the integration of deep generative models in creative workflows. In this paper, we tackle this issue by introducing a deep generative audio model providing expressive and continuous descriptor-based control, while remaining lightweight enough to be embedded in a hardware synthesizer. We enforce the controllability of real-time generation by explicitly removing salient musical features in the latent space using an adversarial confusion criterion. User-specified features are then reintroduced as additional conditioning information, allowing for continuous control of the generation, akin to a synthesizer knob. We assess the performance of our method on a wide variety of sounds including instrumental, percussive and speech recordings while providing both timbre and attributes transfer, allowing new ways of generating sounds.
Ninon Devis, Nils Demerlé, Sarah Nabi, David Genova, Philippe Esling
2023-02-27T06:40:11Z
http://arxiv.org/abs/2302.13542v1
# Continuous Descriptor-Based Control for Deep Audio Synthesis ###### Abstract Despite significant advances in deep models for music generation, the use of these techniques remains restricted to expert users. Before being democratized among musicians, generative models must first provide _expressive control_ over the generation, as this conditions the integration of deep generative models in creative workflows. In this paper, we tackle this issue by introducing a deep generative audio model providing expressive and continuous descriptor-based control, while remaining lightweight enough to be embedded in a hardware synthesizer. We enforce the controllability of real-time generation by explicitly removing salient musical features in the latent space using an adversarial confusion criterion. User-specified features are then reintroduced as additional conditioning information, allowing for _continuous_ control of the generation, akin to a synthesizer knob. We assess the performance of our method on a wide variety of sounds including instrumental, percussive and speech recordings while providing both _timbre_ and _attributes_ transfer, allowing new ways of generating sounds. Ninon Devis*, Nils Demerle*, Sarah Nabi*, David Genova, Philippe Esling IRCAM - Sorbonne Universite, CNRS UMR 9912, 1, place Igor Stravinsky, Paris, France Deep audio synthesis, continuous control, timbre transfer, representation learning, adversarial training Footnote *: These authors contributed equally to this work ## 1 Introduction In recent years, deep generative models have offered exciting new ways to synthesize sound and accomplished impressive results in generating high-quality audio samples. Early works on auto-regressive (AR) models successfully achieved high-quality raw waveform synthesis [1], but at the cost of expensive computation [2]. Subsequent approaches have leveraged a time-frequency representation of the signals and reduce significantly the inference cost [3], but remain hard to control. These methods mainly rely on Generative Adversarial Networks (GAN) which produces highly realistic samples [4] or Variational AutoEncoders (VAE) [5] which provides a latent representation that captures high-level signal features. The RAVE model [6] leverages both VAE representation abilities and GAN training to achieve high-quality waveform synthesis in real-time on a standard laptop CPU. However, controlling the generation process over non-differentiable attributes remains a challenging task. Prior works tried to enhance generative models with control attributes [7, 8], notably the Conditional VAE (C-VAE) [9], which adds an attribute vector as input to the encoder and decoder to condition the generation [9]. However, it can lead to poor control abilities as the model could ignore the conditioning and sample directly from the latent variables. To address this, Fader Networks [10] force the latent representation to be invariant to given target attributes through adversarial learning. This work has been extended for VAEs to control real-valued attributes instead of binary tags for symbolic generation [11]. Yet, providing _high-level, continuous and time-varying_ controls with perceptually meaningful parameters over the raw audio waveform remains an open challenge. In this paper, we propose to tackle this issue by applying the _Fader Networks_ approach on top of the state-of-art deep audio model RAVE [6] in order to provide intuitive controls for sound synthesis. 
After explicitly disentangling salient musical features in the RAVE latent space by relying on the _Fader_ adversarial confusion criterion, we reintroduce them as additional inputs to the RAVE decoder to condition the generation, allowing for continuous time-varying control akin to a synthesizer knob. We show that our model provides more accurate control compared to our baselines on various speech and instrumental datasets. We evaluate the impact of control on the generation quality using various quality metrics. Unlike prior works, our method orthogonalizes descriptor-based control attributes from the latent representation. Hence, it provides independent priors on both the latent representation and the control attributes, allowing to separately (or jointly) perform _timbre transfer_ and _attribute transfer_. Hence, the user can choose any set of audio descriptors to condition the generation process. Our approach remains lightweight and can be embedded into highly-constrained hardware synthesizers1. Footnote 1: All of our source code and experiments are available on a supplementary webpage: [https://github.com/neuroave/neuroave](https://github.com/neuroave/neuroave) ## 2 Proposed Model We aim to provide _continuous attributes control_ on any type of audio descriptor for _raw waveform generation_. To do so, we propose to regularize the latent space of a generative model with an adversarial criterion and condition the generation with this given set of continuous controls. This overall architecture is depicted in Figure 1. Generative models.We base our work on RAVE [6] as it allows fast and high quality audio waveform synthesis (with sampling rates up to 48kHz) on any type of signal while being usable with real-time constraints on a laptop CPU. To do so, it leverages a multi-band decomposition of the raw waveform with a _Pseudo Quadrature Mirror Filter bank_ (PQMF) [12, 8], which allows to decrease the temporal dimensionality of the data. Similarly to the RAVE [6] model (depicted in blue on Figure 1), the training is split between a _representation learning_ stage and an _adversarial fine-tuning_ stage. The first stage aims to learn relevant features for the latent space by optimizing a multiscale spectral distance [13]. The _encoder_ and _decoder_ are trained to minimize the loss \(\mathcal{L}_{vae}\) derived from the ELBO [5]. Hence, for a given signal \(\mathbf{x}\in\mathbb{R}^{n}\) (where \(n\) is the initial discrete time length), we train a VAE to reconstruct this input signal by learning an informative and compact latent representation. This representation keeps the temporal dimension as it produces a matrix \(\mathbf{z}\in\mathbb{R}^{d\times m}\), where \(d<n\) is the number of latent dimensions and \(m\in\mathbb{N}\) is the compressed discrete time length. This dimension depends on the sampling rate and compression ratio applied by the encoder. Therefore, the embedded representation can be seen as a temporal trajectory in a \(d\)-dimensional space. Once the latent representation is learned, the _encoder_ is frozen to perform the _adversarial fine-tuning_ stage. During the second stage, the latent space is seen as a base GAN distribution and the _decoder_ learns to generate more realistic signals by relying on a _discriminator_\(D\) optimizing the Hinge loss \(\mathcal{L}_{g}\)[14]. 
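For reference, a multiscale spectral distance of the kind used in the representation learning stage can be sketched as below (PyTorch). The FFT sizes, hop length and the linear-plus-log magnitude combination are common choices and are assumptions here, not necessarily the exact settings of RAVE.

```python
import torch

def multiscale_spectral_distance(x, x_hat, fft_sizes=(2048, 1024, 512, 256)):
    """Spectral distance between a batch of signals x and reconstructions x_hat,
    summed over several STFT resolutions. x, x_hat: (batch, samples)."""
    loss = 0.0
    for n_fft in fft_sizes:
        win = torch.hann_window(n_fft, device=x.device)
        S = torch.stft(x, n_fft, hop_length=n_fft // 4, window=win,
                       return_complex=True).abs()
        S_hat = torch.stft(x_hat, n_fft, hop_length=n_fft // 4, window=win,
                           return_complex=True).abs()
        lin = (S - S_hat).abs().mean()                                      # linear magnitudes
        log = (torch.log(S + 1e-7) - torch.log(S_hat + 1e-7)).abs().mean()  # log magnitudes
        loss = loss + lin + log
    return loss
```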
To ensure that the synthesized signal \(\mathbf{\hat{x}}\) does not diverge too much from the ground truth \(\mathbf{x}\), the model keeps minimizing the multiscale spectral distance, but also adds the feature matching loss \(\mathcal{L}_{FM}\) proposed in [3], which minimizes the \(L_{1}\) distance between the _discriminator feature maps_ of real and synthesized audio. Hence, the full objective loss for the _decoder_ becomes \[\mathcal{L}_{dec}=\mathcal{L}_{vae}+\mathcal{L}_{g}+\mathcal{L}_{FM}. \tag{1}\]

**Control.** Within the image field, one of the first approaches to control [9] adds the attribute information as a conditioning input to a VAE. GAN models have also incorporated attribute control, such as StyleGAN [15] which splits high-level features from stochastic variation, or AttGAN [16] which includes an attribute classification constraint on the generated image. Within the audio generation domain, DDSP [13] relies on explicit control signals, but only offers pitch and loudness modifications over the generation. Fader Networks [10] is a particularly interesting approach as it provides explicit control over any set of desired attributes. It is achieved by applying an adversarial discriminator in the latent space, forcing the learning of invariant representations with respect to the varying attributes. This discriminator is trained to predict the real attributes from the latent representation, while the encoder is forced to bypass all attribute information in order to prevent the discriminator from achieving its goal. This implies the adversarial losses \[\begin{split}\mathcal{L}_{dis}(\theta_{dis}|\theta_{enc})=-\mathbb{E}_{p_{\text{true}}(\mathbf{z}|\mathbf{x})}[\text{log}(p_{dis}(\mathbf{y}|\mathbf{z}))]\\ \mathcal{L}_{dis}(\theta_{enc}|\theta_{dis})=-\mathbb{E}_{p_{\text{true}}(\mathbf{z}|\mathbf{x})}[\text{log}(p_{dis}(1-\mathbf{y}|\mathbf{z}))],\end{split} \tag{2}\] where \(\theta_{enc}\) denotes the encoder parameters, \(\theta_{dis}\) those of the discriminator and \(p_{dis}(\mathbf{y}|\mathbf{z})\) represents the (discriminator) probability of an attribute vector \(\mathbf{y}\) given a latent representation \(\mathbf{z}\). The drawback of this method is that it only considers binary and static attributes, which are insufficient in the context of audio generation control. However, [11] relies on an extension of Faders to provide control over symbolic musical attributes. Although their method allows for real-valued parameters instead of binary attributes, they still do not address the problem of continuous temporal control. Hence, we propose to extend this method to provide both continuous high-level attribute control and raw audio generation.

Figure 1: Overall workflow of our proposed method combining RAVE, Fader network and joint prior training.

**Our approach.** The _representation learning_ stage of training fits perfectly with the first objective of Fader networks, which is to re-organize an encoded representation. Once the model converges, the _encoder_ and _fader discriminator_ are frozen and the second _adversarial fine-tuning_ stage begins in order to improve the quality of the generation. In order to introduce control (depicted in orange in Figure 1), we add an adversarial _discriminator_ to the latent space during the first stage, similarly to _Fader Networks_ [10]. The _encoder_ is thereby forced to learn a representation invariant to the chosen control attributes \(\mathbf{y}_{n}\in\mathbb{R}\), \(\forall n\in[1,N]\), where \(N\) is the number of considered attributes.
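To illustrate the two objectives in Eq. (2), here is a minimal PyTorch sketch for a single binary attribute. The actual latent discriminator in this work is convolutional, acts per attribute on quantized targets and preserves the temporal axis of the latent, so this only conveys the structure; all names are illustrative.

```python
import torch.nn.functional as F

def discriminator_loss(latent_disc, z, y):
    """L_dis(theta_dis | theta_enc): train the discriminator to recover y from z."""
    logits = latent_disc(z.detach())           # the encoder receives no gradient here
    return F.binary_cross_entropy_with_logits(logits, y)

def encoder_confusion_loss(latent_disc, z, y):
    """L_dis(theta_enc | theta_dis): push the encoder to fool the (frozen)
    discriminator by rewarding the flipped label 1 - y."""
    logits = latent_disc(z)                    # gradients flow back into the encoder
    return F.binary_cross_entropy_with_logits(logits, 1.0 - y)
```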
However, contrary to previous approaches, we perform on-the-fly attribute computation from the input \(\mathbf{x}\), such that \(\mathbf{a}_{n}=f_{a}(\mathbf{x})\), with \(f_{a}\) being the descriptor computation function. Then, the _decoder_ receives the audio descriptor \(\mathbf{a}_{n}\), which is resampled to match the temporal dimension, as additional information to the latent matrix \(\mathbf{z}\). In order to constrain the _decoder_ to use the attribute information to reconstruct the signal, the _latent discriminator_ gives feedback to the _encoder_ by optimizing the adversarial criterion, which forces the _encoder_ to remove any attribute information from the latent space. The attributes \(\mathbf{y}_{n}\) used for this training are obtained from the conversion of the continuous attributes \(\mathbf{a}_{n}\) into discrete categorical quantities by performing a quantization. This is achieved by applying an RMS threshold to remove silent parts of the sound and then sorting these values to obtain bins with equal density. The final training loss is defined by the combination of Eq. 1 and Eq. 2 \[\mathcal{L}=\mathcal{L}_{vae}+\lambda\cdot\mathcal{L}_{dis}(\theta_{enc}|\theta_{dis}), \tag{3}\] where the \(\lambda\) hyperparameter controls the trade-off between the ELBO and the invariance of the latent representation.

## 3 Experiments

**Audio descriptors.** Descriptors provide quantitative measures of the properties of a given sound [17]. To perform a direct _descriptor-based_ synthesis, we selected a subset of _local descriptors_ (providing a value for each frame of signal), which are also complex to differentiate, while being linked to perceptual properties. We only retain descriptors that are pairwise independent to successfully disentangle the selected attributes from the latent representation. In addition to spectral descriptors (_RMS_ perceptually similar to _loudness_, _Centroid_ to _brightness_, and _Bandwidth_ to _richness_), we rely on more advanced timbral features provided by _Audio Commons2_: _Sharpness_ and _Boominess_, which we adapt to be computed frame-wise rather than globally. Footnote 2: [http://github.com/AudioCommons/timbral_models](http://github.com/AudioCommons/timbral_models)

**Datasets.** We experiment on a variety of datasets divided between train (80%), validation (10%) and test (10%) splits. **Instrumental (NSynth [18])** is a collection of instruments with 4-second monophonic sounds sampled at 16kHz. We use only the _string_ family, which corresponds to 20594 samples. _Percussive (Darbouka)_ is an internal dataset composed of about 2 hours of raw recordings of darbouka sampled at 48kHz. _Speech (SC09)_ is part of the _Speech Commands_ [19] set containing the spoken digits from multiple speakers in various recording conditions sampled at 16kHz (23666 samples).

**Implementation details.** All experiments are conducted using the ADAM optimizer [20] with \(\beta_{1}=0.5\), \(\beta_{2}=0.9\), a batch size of 8 and a learning rate of \(0.0001\) as suggested in [6]. The weights are initialized from a Normal distribution \(\mathcal{N}(0,0.02)\) and activations are LeakyReLU with a leak of \(0.2\). We implemented the same architecture as the original RAVE paper for the _encoder_, _decoder_ and _discriminator_ with an \(8\)-band PQMF decomposition. For the _latent discriminator_ architecture, we implemented a series of three 1-d convolutional blocks per attribute along the latent dimensions \(d\), in order to preserve the temporal dimension \(m\) of the latent space.
Each block is composed of convolution, BatchNorm and LeakyReLU, with an additional softmax activation on the last convolution layer. We train the _representation learning_ stage with the _Fader_ discriminator for \(1000k\) steps. The \(\beta\) warmup from \(0\) to \(0.1\) and the fader \(\lambda\) warmup from \(0\) to \(0.5\) start at steps \(5k\) and \(15k\), respectively, and end at steps \(10k\) and \(30k\).

**Baselines.** To evaluate our method, we implement the _RAVE_ model as a baseline for reconstruction accuracy. Then we add to it a conditioning mechanism (concatenating to the latent vector the repeated temporal attributes as extra channels), leading to a conditional-RAVE (_C-RAVE_) model. Finally, we compare them to Fader-RAVE (_F-RAVE_) described in Sec. 2.

**Evaluation.** For the evaluation of the _reconstruction_, we use the multiscale-STFT (_mSTFT_). As this is also our training criterion, we rely on two independent spectral metrics: the \(L_{1}\) distance between Mel-spectrograms (_Mel_) and the _Just Noticeable Difference_ (_JND_) score [21]. Regarding the efficiency of _attribute control_, we compute the correlation between changed control attributes and resulting output attributes through the Spearman rank-order correlation coefficient. We also compute the \(L_{1}\) distance to evaluate quantitatively how well the generation reflects the control. Additionally, we perform a _cycle consistency_ distance to ensure the model is able to reproduce the original audio when reverting the attributes of a transformed signal to their original values.

## 4 Results

**Model quality and control.** We compare the quality of reconstruction for inputs from the test set for _RAVE_, _C-RAVE_ and _F-RAVE_. Then we evaluate the control behaviour by changing the attributes of the input sound to those of out-of-distribution examples (e.g. we switch the attributes of the violin for those of the darbouka). We compute this for the mono-attribute training (swapping only the _RMS_) and for the multi-attribute training (swapping all attributes) cases. We detail our results obtained on _NSynth_ [18] in Table 1. Interestingly, introducing conditioning (C-RAVE) seems to improve the overall reconstruction quality, in both the mono- (left) and multi- (right) cases. This could be explained by the fact that this extraneous information simplifies the work of the decoder, by providing structured information on the generation. Although the C-RAVE model is able to adequately control the single _RMS_ attribute case, it loses in control quality for the more complicated multi-attribute case (although it still improves reconstruction). By contrast, F-RAVE provides stronger correlation in mono-attribute RMS control, while maintaining equivalent audio quality. It also seems more resilient to the multi-attribute changes, providing the strongest correlation in control and the lowest reconstruction error. This implies that the model could be trained once for a whole set of descriptors, while still maintaining control quality. This is also reflected by the cycle consistency, which appears more coherent than for the C-RAVE model.

**Multiple control attributes.** We also analyze the control quality by training a separate model for each of the 4 descriptors, and a model for all descriptors at once (termed C-RAVE (m.) and F-RAVE (m.)). We analyze the correlation between target and output attributes when changing a single descriptor, or when changing random sets of 2, 3, or 4 attributes at once. The results obtained with _NSynth_ [18] are displayed in Table 2.
As we can see (up), models that are trained on single attributes provide a stronger control over that descriptor. In these, it seems that _RMS_, the most salient descriptor is the easiest to control, while more complex concepts (_Sharpness_, _Boominess_, _Centroid_) are harder to grasp for the control models. As can be expected, relying on a multi-attribute model rather than specifically-trained models impact the quality of control, especially for simple conditioning (C-RAVE). However, _F-RAVE_ model seems to be less affected by this property. Similarly, when increasingly changing a larger number of attributes (bottom), the _C-RAVE (m.)_ steadily degrades in terms of control quality, while the _F-RAVE (m.)_ model maintains a higher quality of control, even when switching all attributes of a sound. Comparing various datasets.We finally evaluate how our proposed F-RAVE can be used on a wide diversity of sounds in the multi-attribute setup. We display the reconstruction (_Rec._) and control (_Ctr._) results in Table 3. RAVE appears versatile enough to maintain a high quality on any type of sounds, and confirm that the reconstruction quality stands even under multiple attribute conditioning. Regarding other models, the trends from the previous results seem to be maintained across different datasets, with a slight advantage for the F-RAVE model. The percussive dataset appears to be the easiest to control, which could be explained by the larger variance in the descriptor values for this set. ## 5 Conclusion In this paper, we combined _Fader Networks_ and the recent _RAVE_ model in order to achieve continuous descriptor-based control on real-time deep audio synthesis. We showed that our method outperforms previous proposals in both quantitative and qualitative analyses. By orthogonalizing the continuous time-varying attributes from the latent representation, our approach provides independent priors which allows to separately perform both _timbre transfer_ and _attribute transfer_ enabling new creative prospects. Hence, the user can choose a set of descriptors to condition the generation process and create a large variety of sounds. Altogether, we hope this expressive control approach will benefit both experts and non-experts musicians and provide new means of exploring audio synthesis while promoting co-creative musical applications. \begin{table} \begin{tabular}{c|c c c|c c||c c c|c c|c} \hline & \multicolumn{3}{c|}{**Reconstruction**} & \multicolumn{2}{c||}{**Control**} & \multicolumn{2}{c||}{**Cycle**} & \multicolumn{3}{c|}{**Reconstruction**} & \multicolumn{2}{c|}{**Control**} & **Cycle** \\ \hline \hline & _JND_ & _Mel_ & _mSTFT_ & _Corr._ & _L1_ & _JND_ & _JND_ & _Mel_ & _mSTFT_ & _Corr._ & _L1_ & _JND_ \\ \hline RAVE & 0.264 & 15.097 & 7.754 & - & - & - & 0.264 & 15.097 & 7.754 & - & - & - \\ \hline C-RAVE & **0.225** & **12.567** & 5.842 & 0.890 & 0.135 & 0.603 & 0.208 & **12.236** & **4.584** & 0.425 & 0.361 & 0.752 \\ \hline F-RAVE & 0.238 & 13.876 & **5.394** & **0.917** & **0.112** & **0.581** & **0.187** & 13.050 & 4.681 & **0.445** & **0.357** & **0.776** \\ \hline \end{tabular} \end{table} Table 1: Comparison of various models on the mono-attribute _rms_ training (left), and the complete multi-attribute training (right). We compare these on their quality of reconstruction, ability to control attributes and cycle consistency. 
\begin{table} \begin{tabular}{c|c c|c c|c c} \hline & \multicolumn{2}{c|}{**NSynth**} & \multicolumn{2}{c|}{**Darbouka**} & \multicolumn{2}{c}{**SC09**} \\ \hline \hline & _Rec._ & _Ctr._ & _Rec._ & _Ctr._ & _Rec._ & _Ctr._ \\ \hline C-RAVE & **4.58** & 0.42 & 5.31 & 0.62 & 6.41 & **0.35** \\ \hline F-RAVE & 4.68 & **0.45** & **5.14** & **0.61** & **6.33** & 0.34 \\ \hline \end{tabular} \end{table} Table 3: Results on the _instrumental_ (NSynth), _percussive_ (Darbouka) and _speech_ (SC09) datasets.
2305.07590
Nature of M31 gamma-ray halo in relation to dark matter annihilation
The present work analyzes various aspects of M31 gamma-ray halo emission in its relation to annihilating dark matter (DM). The main aspect is a predicted asymmetry in the intensity of emission produced by inverse Compton scattering (ICS) of starlight photons by a possible population of relativistic electrons and positrons ($e^\pm$) in the galactic halo. This asymmetry is expected to exist around the major galactic axis, and arises due to anisotropy of the interstellar radiation field and the inclination of M31. ICS emission and its asymmetry were modeled by the GALPROP code for the trial case of $e^\pm$ generated by annihilating weakly interacting massive particles (WIMPs) with various properties. The asymmetry was found to appear at photon energies above $\sim$ 0.1 MeV. Morphological and spectral properties of the asymmetry were studied in detail. Potential observational detection of the asymmetry may allow the leptonic fraction in the emission generation mechanism to be inferred, providing valuable input for understanding the nature of M31 gamma-ray halo emission. Specific asymmetry predictions were made for the recently claimed DM interpretation of the outer halo emission. The paper also studied the role of secondary -- ICS and bremsstrahlung -- emissions due to DM annihilation for that interpretation. And, finally, the latter was shown to be somewhat restricted by the recently derived WIMP constraints from radio data on M31.
Andrei E. Egorov
2023-05-12T16:36:05Z
http://arxiv.org/abs/2305.07590v2
# On the nature of M31 gamma-ray halo in its relation to dark matter annihilation ###### Abstract The present work analyzes various aspects of M31 gamma-ray halo emission in its relation to annihilating dark matter (DM). The main aspect is the predicted effect of asymmetry of the intensity of emission due to inverse Compton scattering (ICS) of a possible population of relativistic electrons and positrons (\(e^{\pm}\)) in the galactic halo on starlight photons. This asymmetry is expected to exist around the major galactic axis, and arises due to anisotropy of the interstellar radiation field and the inclination of M31. ICS emission and its asymmetry were modeled by GALPROP code for the trial case of \(e^{\pm}\) generated by annihilating weakly interacting massive particles (WIMPs) with various properties. The asymmetry was obtained to appear at photon energies above \(\sim 0.1\) MeV. Morphological and spectral properties of the asymmetry were studied in detail. Potential observational detection of the asymmetry may allow to infer the leptonic fraction in the emission generation mechanism, thus providing valuable inferences for understanding the nature of M31 gamma-ray halo emission. Specific asymmetry predictions were made for the recently claimed DM interpretation of the outer halo emission. The paper also studied the role of secondary - ICS and bremsstrahlung - emissions due to DM annihilation for that interpretation. And, finally, the latter was shown to be in significant tension with the recently derived WIMP constraints by radio data on M31. ## I Introduction and motivation M31 (Andromeda galaxy) is the closest large spiral galaxy. Its proximity allows to study in detail a wide variety of astrophysical phenomena under a view and environment, which are alternative to our own Galaxy, Milky Way (MW). This paper concerns the results of gamma-ray band observations of M31 in their possible relation to annihilating dark matter (DM). M31 was detected in gamma rays for the first time by Fermi-LAT during the first years of its operation [1]. Later, more observational data was accumulating, which enabled certain detailization of both the emission spectrum and morphology. Thus, [2] reported that the gamma-ray emission source at M31 center is extended, has the angular radius \(\approx 0.4^{\circ}\) and presumably has a uniform brightness distribution. The emission spectrum was measured up to \(\approx 10\) GeV. Then [3] (among other works on the subject) confirmed the cited results of [2] and also reported a possible presence of another emission component, which extends up to \(\approx 1^{\circ}\). At about the same time, [4] also found out a tentative presence of very large outer gamma-ray halo, which extends up to at least \(\approx 9^{\circ}\). And very recently, [5] elaborated that the central source, which was previously thought to be extended with the radius \(0.4^{\circ}\), in fact represents two point sources: one is in the center and one is \(\approx 0.4^{\circ}\) away from the center. And it is not very clear whether the non-central source belongs to M31. Simultaneously with the observational progress briefly described above, extensive theoretical modeling of the possible gamma-ray emission mechanisms in M31 was developing. A wide variety of emission sources has been proposed: a population of millisecond pulsars (MSPs) [6], cosmic rays (CRs) [7; 8] and DM annihilation (e.g., [9]). 
It is very possible that more than one emission mechanisms work together, and different processes are responsible for the emission generation in different regions of the galaxy. Many works studied the possibility of presence of DM contribution in the gamma-ray emission [6; 9; 10; 11; 12] and derived the respective constraints. One brief conclusion from these studies is impossibility to explain all the emission by annihilating DM only. Thus, the fit of the inner halo (IH) region spectrum requires the mass of DM particle (traditional weakly interacting massive particles (WIMPs) are being considered) to be very small - \(m_{x}\approx(6-11)\) GeV according to [11], while the outer halo (OH) region spectrum can be fitted by heavier WIMP with \(m_{x}\approx(45-72)\) GeV [9]. But this is an absolutely natural situation: we would primarily expect for DM contribution to be minor, while the majority of emission is generated by usual astrophysical processes, like it is in our own Galaxy. And it is very interesting and promising to understand the emission nature in detail, which may eventually lead to a robust detection of DM signal among other emission components. Thus, M31 studies in the gamma-ray band represent a valuable direction in the field of DM indirect searches. One potential emission generation mechanism is leptonic, i.e. through the inverse Compton scattering (ICS) process between the photon field in the galaxy and relativistic electrons/positrons. This mechanism has a big relevance: [7; 8] showed that both the IH and OH emissions could be explained by CR interactions, when a significant contribution comes from ICS of CR \(e^{\pm}\). The emission due to WIMP annihilation (or decay) would also necessarily have the leptonic (secondary) component; since besides the prompt gammas, the annihilation produces relativistic \(e^{\pm}\) too (DM \(e^{\pm}\) hereafter). However, among all the works cited above, only [10] paid attention to the ICS emission component due to WIMPs. The main goal of this work is to conduct comprehensive theoretical modeling of the interesting effect of ICS emission intensity asymmetry between M31 hemispheres. This effect was pointed out for the first time in [13] (see Fig. 2 there): we view M31 under the certain inclination angle; hence, there must be some difference in the upscattered photon spectra from the regions above and below M31 major axis, since the starlight radiation field in any disk galaxy has a significant anisotropy. The physical nature of such asymmetry will be explained in more details below. The authors of [13] provided just simple analytical estimates of this effect, concluding that an average upscattered photon energy would differ by \(\sim 10\%\) between the hemispheres. Here I aim to study this asymmetry effect much more precisely and in detail, since the former may potentially provide a valuable observational test for the physical nature of the gamma-ray emission from the halo. GALPROP code [14] was employed for the modeling of ICS emission in M31. As the source of \(e^{\pm}\) population in the halo, WIMP annihilation was assumed in connection with the results of [9], where WIMPs were shown to be the probable source of the gamma-ray emission from the OH. The study here is majorly theoretical and does not aim so far to derive any observational constraints or implications, although some minor qualitative discussion of the observational prospects is given. Sec. II below is dedicated to the asymmetry effect and represents the main content of this paper. 
Then Secs. III and IV discuss somewhat different but important aspects of DM interpretation of the OH emission. The authors of [9] performed the fit of the OH by only the prompt gamma-ray emission from annihilating WIMPs and did not take into account the secondary (ICS and bremsstrahlung) emissions. Sec. III analyzes the role of secondary emission contributions due to WIMPs. Sec. IV debates the relation between the OH fit and WIMP radio constraints. And Sec. V summarizes the findings. In general, this work continues our series of papers [15; 16; 17] dedicated to WIMP searches/constraints in M31 and utilizes the methodology developed previously there. M31 model parameters assumed here are the same as in [16], if it is not explicitly stated otherwise. ## II ICS emission asymmetry An interstellar radiation field (ISRF), i.e. the photon field produced by stars, gas and dust, is anisotropic at some level in any disk galaxy due to its non-spherical shape. Let us imagine that a disk galaxy, particularly M31 in our case, possesses a population of relativistic \(e^{\pm}\) of any origin, they produce gamma rays through ICS at ISRF, and we observe the galaxy in the gamma-ray band from aside under the certain inclination angle - see Fig. 1. In our case \(E_{e}>E_{\gamma}\gg E_{\gamma 0}\), where \(E_{e}\) is the electron or positron energy before scattering, \(E_{\gamma 0}\) and \(E_{\gamma}\) are the photon energies before and after scattering. Such energy ratio implies, that the momentum of upscattered photon would have almost exactly the same direction as the initial \(e^{\pm}\) momentum. This fact is quite obvious from just a general intuition, but can also be obtained from the equation for the differential cross section of the ICS process derived in quantum electrodynamics (e.g., [[18], Eq. (86.6)]). Hence, we would see the gamma-ray photons produced by \(e^{\pm}\), which move toward the observer. Then we can deduce from Fig. 1, that the effective average angle between the initial momenta of photons and emitting \(e^{\pm}\) slightly differs for the lines of sight lying above and below the major axis of the galaxy. The energy of upscattered photons strongly depends on that angle through the following relation (taken from [[19], Eq. (16.54)]): \[E_{\gamma}=\frac{E_{\gamma 0}(1-\beta_{e}\cos\theta_{i})}{1-\beta_{e}\cos \theta_{f}+(1-\cos(\theta_{i}-\theta_{f}))E_{\gamma 0}/E_{e}}, \tag{1}\] where \(\beta_{e}\equiv V_{e}/c\), \(\theta_{i}\) is the angle between the initial momenta of \(e^{\pm}\) and photon, \(\theta_{f}\approx 0\) is the angle after the collision (which nearly vanishes according to the considerations above). Therefore, we can expect slightly different ICS emission spectra from regions of interest (ROIs), which are located above and below the major axis, and are symmetric with respect to it. This is the ICS emission asymmetry effect in its essence. Roughly speaking, one hemisphere is expected to be brighter than the other at the same photon energy. Such effect does not exist for our own Galaxy, since we are located at its plane and view the ISRF from both hemispheres symmetrically. However, for M31 the asymmetry may be substantial. And potential observational studies of this asymmetry may allow to derive or constrain the leptonic fraction in the total gamma-ray emission, facilitating the unraveling of the emission mechanism. 
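To make the angle dependence in Eq. (1) concrete, the following minimal sketch evaluates it for a single scattering; the chosen energies (a 10 GeV electron and a \(\sim 1\) eV starlight photon) are illustrative assumptions and are not taken from the model setup described below.

```python
import numpy as np

M_E = 0.511e6  # electron rest energy [eV]

def scattered_energy(E_e, E_g0, theta_i, theta_f=0.0):
    """Upscattered photon energy from Eq. (1); energies in eV, angles in radians."""
    gamma = E_e / M_E
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    num = E_g0 * (1.0 - beta * np.cos(theta_i))
    den = 1.0 - beta * np.cos(theta_f) + (1.0 - np.cos(theta_i - theta_f)) * E_g0 / E_e
    return num / den

E_e, E_g0 = 10e9, 1.0  # illustrative: 10 GeV electron, ~1 eV optical photon
for deg in (30, 90, 150, 180):
    E_g = scattered_energy(E_e, E_g0, np.radians(deg))
    print(f"theta_i = {deg:3d} deg -> E_gamma ~ {E_g / 1e9:.2f} GeV")
```

With these illustrative numbers, the head-on geometry (\(\theta_{i}\) near \(180^{\circ}\)) yields upscattered photons roughly an order of magnitude more energetic than the chasing geometry (\(\theta_{i}\) near \(30^{\circ}\)), which is the kinematic origin of the hemispheric intensity difference described above.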
The asymmetry effect is modeled here specifically for the case of \(e^{\pm}\) population produced by annihilating DM, whose properties fit the outer gamma-ray halo of M31 according to [9]. The primary motivation for such choice is to build precise and testable predictions for this specific model of the OH emission. At the same time, we may expect rather similar asymmetry properties for other cases of \(e^{\pm}\) population, since the properties of the latter primarily influence the ICS emission intensity amplitude and spectral shape rather than the asymmetry. The latter is primarily defined by the galaxy inclination and ISRF directionality, which are the same for any \(e^{\pm}\) population. Hence, in the first approximation, we may extrapolate the obtained asymmetry picture from our particular case of interest to other cases. Nevertheless, I computed the asymmetry properties for some alternative \(e^{\pm}\) populations too in order to study the parameter dependence to a certain extent. It is not easy to compute a large grid of various \(e^{\pm}\) model populations due to the computational heaviness of the task. ### Modeling the ICS emission due to WIMP annihilation This subsection describes the details of the ICS emission modeling procedure. WIMP mass was set to \(m_{x}=60\) GeV as the main value of interest. This is the approximate average value among those derived in [9] for the fit of OH. The primary annihilation products are assumed to be \(b\bar{b}\) quark pairs. The annihilation cross section is set to the thermal value \(\langle\sigma v\rangle(m_{x}=60\ \mathrm{GeV})=2.1\times 10^{-26}\ \mathrm{cm^{3}/s}\) according to [20]. Actually, [9] found very large uncertainty range for the cross section value, which fits the OH (Table 2 there): \(\langle\sigma v\rangle\sim(10^{-26}-10^{-23})\ \mathrm{cm^{3}/s}\) depending on a variety of their model parameters and assumptions. Only the thermal cross section is mainly considered here, because it represents the most natural and motivated value in cosmology and particle physics. But also the intensity of emission of any kind due to WIMP annihilation is linearly proportional to the cross section; hence, the intensity can be easily rescaled for an arbitrary cross section value. And the asymmetry, defined as the intensity ratio between the hemispheres, would not depend on the cross section at all. Regarding DM density profile, [9] utilized two profiles for the fit: Einasto profile from [21] and Navarro-Frenck-White (NFW) profile from [4]. The latter profile was noted to have two problems. In the inner region of M31 DM halo, NFW profile yields the density below the range of possible densities derived in [22] - see Fig. 12 there. Thus, if we would plot NFW profile from [4] on Fig. 12, it would lie significantly below the band of suitable profiles. Another problem is the total (virial) halo mass: NFW profile yields \(M_{\mathrm{vir}}\approx 2.7\times 10^{11}M_{\odot}\), while a consensus among various studies (e.g., [[21], Table 5]) provides \(M_{\mathrm{vir}}=(0.68-1.3)\times 10^{12}M_{\odot}\), i.e. larger by \(\approx\)(3-4) times. Based on these considerations, NFW profile was suspected to underestimate DM density significantly and was not included in the analysis here as a potentially unrealistic one.1 I utilized only Einasto profile, which meanwhile exactly corresponds to MAX DM density profile in [16] (its parameters can be seen in Table 3 there). 
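The virial-mass comparison invoked above against the NFW profile of [4] amounts to integrating a density profile out to the virial radius. A minimal sketch of such a check is given below with generic Einasto and NFW forms; the parameter values (and the nominal 200 kpc virial radius) are placeholders chosen only for illustration and are not the fitted M31 values from [4], [16] or [21].

```python
import numpy as np
from scipy.integrate import quad

def rho_einasto(r, rho_s, r_s, alpha):
    """Einasto profile: rho_s * exp(-(2/alpha) * ((r/r_s)**alpha - 1))."""
    return rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

def rho_nfw(r, rho_s, r_s):
    """NFW profile: rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def enclosed_mass(rho, r_max, **params):
    """M(<r_max) = integral of 4*pi*r^2*rho(r) dr, with rho in Msun/kpc^3 and r in kpc."""
    m, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r, **params), 0.0, r_max, limit=200)
    return m

R_VIR = 200.0  # kpc, a representative virial radius (placeholder)

# Placeholder profile parameters, NOT the published M31 fits:
print(f"Einasto: M(<{R_VIR:.0f} kpc) ~ {enclosed_mass(rho_einasto, R_VIR, rho_s=8e6, r_s=15.0, alpha=0.17):.2e} Msun")
print(f"NFW:     M(<{R_VIR:.0f} kpc) ~ {enclosed_mass(rho_nfw, R_VIR, rho_s=6e6, r_s=20.0):.2e} Msun")
```

Substituting a published parameter set for the placeholders and comparing the result with the consensus range \(M_{\rm vir}=(0.68-1.3)\times 10^{12}M_{\odot}\) gives the kind of consistency check described above.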
MIN-MED-MAX here have the traditional meaning of parameter configurations, which provide respectively the minimal, medium and maximal emission intensities due to DM. Another important aspect is DM annihilation rate boost due to substructures in DM halo. This aspect is highly uncertain. The boost factor has the biggest influence on the annihilation rate at the largest radii \(\sim 100\) kpc. The authors of [9] quantify the uncertainty in the boost factor by employing of four various models: the smooth density profile (i.e., no boost at all) and "low-mid-high" boost configurations (Table 2 there). A higher boost implies a lower annihilation cross section required for the outer gamma-ray halo fit. The boost factor radial profile adapted here is based on [23] and closely resembles the "high" case in [9]. In general, we do not expect a high influence of details of the boost factor model on the characteristics of the secondary emissions due to WIMPs, which we are interested in. This is because DM \(e^{\pm}\) live only in the diffusion zone, which is typically assumed to extend up to at most \(R\approx 20\) kpc (e.g., [24]). And the annihilation rate boost factor is not yet significant at such distances (see, e.g., [[23], Fig. 4]). Thus, although the boost factor is included in the calculations here, it does not play a big role for the final results. But speaking technically, the model setup here was made to be consistent with [9] in the sense, that the thermal annihilation cross section requires the "high" boost for the OH fit according to Table 2 there. Footnote 1: The author also accidentally noted the probable error in J-factor value for NFW profile (smooth without substructure) for M31 written in the last line and the sixth column of Table 2 in [9]: instead of 0.05 there, it must be actually 0.003. This would change respectively the cross section value in the last line and column from 851 to \(\approx 1.4\times 10^{4}\). The computation of all the emission maps was performed by GALPROP code v57 [25] paired with the addition [26] developed by the author. This addition enables a precise computation of all DM-related particles and emissions, and incorporates the energy spectra of stable annihilation products (at injection) from PPPC 4 DM ID resource [27; 28; 29]. GALPROP solves the transport equation for DM \(e^{\pm}\), computes their emissivity and integrates it along a line of sight, yielding emission intensity sky maps. GALPROP provides the powerful capability to compute the ICS emission precisely in the frame of full anisotropic formalism, which is described in [30]. Indeed, GALPROP had to be specifically adapted for modeling of M31. In this aspect, the experience, which was developed in the frame of work [16], was widely used. Our model is 2D, i.e. the axial symmetry of the galaxy is assumed. An important model element for the ICS emission computation is ISRF, i.e. the model of distribution of the Figure 1: The schematic central vertical section of M31 with four trial points of interest in regard to ICS process. The dashed line denotes the approximate boundary of the assumed diffusion zone; \(\vec{V_{e}}\) – the velocities of \(e^{\pm}\), which produce the upscattered photons visible for the observer; and the wavy lines – the purportive slightly predominant directions of ISRF photons at the corresponding points. intensity of field target photons over energy, spatial coordinates and direction. 
In general, GALPROP naturally includes three distinct components of ISRF: CMB, a far-infrared (IR) emission from dust and an optical emission from stars and gas. The standard MW 2D ISRF model created by GALPROP authors was utilized for M31 here with one modification: the energy densities of IR and optical components were globally rescaled according to the ratio of M31 and MW IR/optical total luminosities. GALPROP lacks the dedicated model of M31 ISRF. But MW and M31 are rather similar galaxies in their structure and size. Hence, it is a reasonable approximation to extrapolate MW ISRF to M31 for estimation purposes. And the ratio of M31 and MW luminosities serves as an estimator of the ratio of the photon field energy densities (again assuming the same size for both galaxies), since a radiation energy density is linearly proportional to an intensity. Table 1 provides the relevant luminosities of both galaxies. The corresponding ISRF energy density global rescaling factors are listed in Table 2 together with other important GALPROP model parameters. It is implicitly assumed here that there are no causes for the ICS emission asymmetry other than the galaxy inclination. It means that \(e^{\pm}\) population and ISRF are constructed in such a way, that they are both axisymmetric and symmetric with respect to the galactic plane. These are generally natural assumptions for a disk galaxy modeled in 2D. And all the used ISRF models were empirically checked to satisfy those symmetries by making test GALPROP runs with the observer placed at the galactic plane. These test runs yielded no ICS asymmetry, as anticipated. Another essential model ingredient for the computation of ICS emission due to DM \(e^{\pm}\) is their cooling and propagation. The cooling mechanisms include ICS, synchrotron, bremsstrahlung radiative energy losses; Coulomb scattering and ionization losses. The respective energy loss rates (as they are implemented in GALPROP) are written out in [[16], Eqs. (10)-(13)]. The ICS energy loss rate is defined by the energy density of ISRF, which is described above. The synchrotron loss rate is defined by the magnetic field (MF) strength. M31 MF model developed in [[16]] (Sec. IIIC there) was naturally adopted here. Specifically, MED (i.e., medium) MF configuration with the central field strength of 50 \(\mu\)G was used as the base scenario here. The energy losses of other kinds require to set the gas distributions in M31. The former is described in [[16], Eq. (14), Appendix]. Regarding \(e^{\pm}\) propagation (prop.) parameters, the experience gained in [[16]] was again utilized: MED propagation parameter configuration, described in Sec. IIID there, was adopted as the base scenario. Thus, \(e^{\pm}\) population is assumed to be residing and emitting inside the diffusion cylinder with the radius \(r_{\rm max}=20\) kpc and the half-height \(z_{\rm max}=2.7\) kpc. Indeed, such cylinder is a conventional modeling abstraction - there is no sharp physical boundary at the cylinder edges. In reality, MF vanishes smoothly, and, hence, the diffusion coefficient diverges gradually too near the boundaries of the diffusion zone. For this reason, our model calculations of the ICS emission intensity might be unreliable for the lines of sight passing near the edges of the diffusion zone. M31 was placed at the real distance in the model setup here, which differs from the model in [[16]], where a smaller than real distance was used in order to reduce computational efforts without a loss of precision. 
But it would be unreliable to alter the distance in the current task of the ICS emission asymmetry calculation, since such alteration may introduce unphysical asymmetry; geometrical proportions are more important here. The real distance required, in turn, very high angular resolution of sky maps, which were computed in the traditional HEALPix format [[36]]. It was found empirically, that \(\approx 1\) arcmin resolution is sufficient for mitigating the finite pixelization effects on relevant angular scales. HEALPix resolution parameter \(N_{\rm side}\) was set to \(2^{12}=4096\) in order to achieve the mentioned pixel size. The anisotropic ICS computation in GALPROP is heavy, and so high angular resolution required to employ the computing cluster with 64 CPU cores in order to compute the whole task in a few weeks. ### Obtained emission characteristics Putting together all the model ingredients described in the previous subsection, the ICS emission maps were computed at discrete photon energies over the wide range 1 keV - 10 GeV with the photon energy increment factor \begin{table} \begin{tabular}{l c} Parameter & Value \\ \hline Radius of the diffusion cylinder \(r_{\rm max}\) & 20 kpc \\ Spatial grid step size \(\Delta r=\Delta z\) & 0.05 kpc \\ \(e^{\pm}\) propagation energy range & \((5\times 10^{-4}-1)m_{x}c^{2}\) \\ Energy grid step increment & 1.1 \\ Optical ISRF energy density & \\ factor with respect to MW & 1.3 \\ IR ISRF energy density & \\ factor with respect to MW & 0.57 \\ HEALPix resolution of maps \(N_{\rm side}\) & \(2^{12}=4096\) \\ Conversion factor from CO intensity to H\({}_{2}\) column density \(X_{CO}\) & \({\rm mol\ cm^{-2}(K\ km/s)^{-1}}\) \\ He/H ratio in the galactic gas & 0.1 \\ \end{tabular} \end{table} Table 2: Values of various parameters used in GALPROP. \begin{table} \begin{tabular}{c c c} Parameter & MW & M31 \\ \hline Optical (V-band) & & \\ luminosity [\(L_{\odot}\)] & \(2.1\times 10^{10}\)[[31]] & \(2.7\times 10^{10}\)[[31]] \\ IR (dust) luminosity [\(L_{\odot}\)] & \(7.4\times 10^{9}\)[[32]] & \(4.3\times 10^{9}\)[[33]] \\ Gas mass [\(M_{\odot}\)] & \(7\times 10^{9}\)[[31]] & \(8\times 10^{9}\)[[34; 35]] \\ \end{tabular} \end{table} Table 1: The total luminosities and gas masses of MW and M31 with respective references. of 10. Let us start the analysis of obtained maps from the selection of relevant ROIs. Historically, as was described in Sec. I, two radial zones around M31 center were considered separately: inside and outside \(R\approx 5\) kpc \(\leftrightarrow 0.4^{\circ}\). They seem to have different mechanisms of the gamma-ray emission generation. Based on these considerations, as a natural and intuitive choice, the following ROIs were constructed for the analysis here - they are depicted in Fig. 2. The first ROI resembles the IH and is the disk with 5 kpc radius around the center. The second ROI is the annulus fragment restricted by \(R=5\) kpc and \(R\) = 10 kpc circles; and the lines, which are parallel to the major axis and are 5 kpc away from it. The major axis divides each of these ROIs into symmetric halves. One half is closer to the northwest, the other - to the southeast. Let us briefly call them the northern and southern parts. These parts are designed for their intensity comparison in the study of ICS emission asymmetry. The second ROI is intended to represent the OH. 
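As a rough illustration of this bookkeeping, the sketch below checks two of the numbers quoted above (the ISRF rescaling factors implied by Table 1 and the pixel scale implied by the chosen \(N_{\rm side}\)) and builds the four ROI masks in a flat-sky approximation. The M31 center coordinates and major-axis position angle are placeholders, the assumed distance of \(\approx 785\) kpc is a standard literature value not quoted in the text, and a reduced \(N_{\rm side}\) is used to keep the example cheap; none of this reproduces the actual GALPROP/HEALPix machinery behind the maps.

```python
import numpy as np
import healpy as hp

# Quick cross-checks of the numbers quoted above:
print(2.7e10 / 2.1e10, 4.3e9 / 7.4e9)      # ~1.29 and ~0.58, close to the 1.3 / 0.57 rescaling factors in Table 2
print(hp.nside2resol(4096, arcmin=True))   # ~0.86 arcmin, i.e. the ~1 arcmin pixel scale mentioned above

D_M31 = 785.0            # assumed distance to M31 [kpc] (not quoted in the text)
CENTER = (121.2, -21.6)  # Galactic lon/lat of M31 center [deg], placeholder
PA = 38.0                # major-axis position angle [deg], placeholder

def roi_masks(nside, center=CENTER, pa_deg=PA, dist=D_M31):
    """Flat-sky sketch of the northern/southern halves of the inner- and outer-halo ROIs."""
    lon, lat = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)), lonlat=True)
    dlon = (lon - center[0] + 180.0) % 360.0 - 180.0          # wrap longitude offsets to [-180, 180)
    dx = np.radians(dlon) * np.cos(np.radians(center[1])) * dist
    dy = np.radians(lat - center[1]) * dist
    pa = np.radians(pa_deg)
    x = np.cos(pa) * dx + np.sin(pa) * dy                      # projected coordinate along the major axis [kpc]
    y = -np.sin(pa) * dx + np.cos(pa) * dy                     # projected coordinate along the minor axis [kpc]
    r = np.hypot(x, y)
    ih = r < 5.0                                               # inner-halo disk, R < 5 kpc
    oh = (r > 5.0) & (r < 10.0) & (np.abs(y) < 5.0)            # annulus fragment within 5 kpc of the major axis
    north = y > 0.0                                            # the two halves split by the major axis
    return ih & north, ih & ~north, oh & north, oh & ~north

def hemisphere_asymmetry(intensity_map, north_mask, south_mask):
    """Relative north/south intensity difference (the metric defined in the next paragraph)."""
    return intensity_map[north_mask].mean() / intensity_map[south_mask].mean() - 1.0

nih, sih, noh, soh = roi_masks(nside=512)  # reduced resolution keeps the illustration cheap
```

Applying the last helper to an intensity map together with such masks yields exactly the north/south comparison analyzed in the remainder of this section.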
Both ROIs are constructed in a way, that they do not approach too close to the boundary of the sky projection of the diffusion cylinder, where the intensity values might be imprecise according to the explanation above. Thus, it would not make sense to study the OH by, for example, the full 5-10 kpc annulus, since it goes far beyond the diffusion cylinder projection at the upper and bottom areas. Another potentially useful feature of the chosen ROIs, if to consider separately their halves, is that they both have the characteristic size of about 5 kpc, which approximately matches the resolution of Fermi-LAT in the angular measure at the energies 1-10 GeV [37]. One detail about the OH ROI is that, generally speaking, there are two such ROIs - one on each side from the minor axis. However, it does not matter which of these two ROIs is considered, since the emission intensity map is symmetric about the minor axis. As was explained above, the asymmetry is anticipated only about the major (horizontal in Fig. 2) axis. And let us declare the exact definition of the asymmetry, which is used through this paper: this is the relative difference between the ICS emission intensities in the northern and southern halves of the designated ROIs, i.e. \(I_{N}/I_{S}-1\). An intensity inside any region means the value averaged over all map pixels in that region. As the next step, let us discuss an important test of accuracy of the obtained maps. As was mentioned above, our constructed model of M31 implies, that the ICS emission asymmetry is caused by the ISRF anisotropy and inclination _only_. Hence, the ICS emission maps must be perfectly symmetric about the major axis in the case of isotropic ISRF. And CMB ISRF component is very helpful in testing this implication, since the CMB emission is almost absolutely isotropic everywhere by its nature. CMB ICS emission component may have only a minor asymmetry of the second order due to the following peculiar mechanism. The primary asymmetry of the IR and optical ICS components implies the difference in ICS cooling rates for \(e^{\pm}\), which reside in different hemispheres. In other words, \(e^{\pm}\) from the brighter hemisphere radiate out their energy faster. And a faster cooling rate implies, in turn, a lower equilibrium concentration of emitting \(e^{\pm}\). Therefore, we may expect a minor second order anisotropy with the opposite sign due to such a difference in cooling rates. And it may be notable mainly for the CMB ICS component, which lacks the first order anisotropy. The asymmetry of CMB ICS emission component in all the computed parameter configurations at all photon energies in both ROIs does not exceed \(\approx 1\%\) by absolute value. I attribute this small residual asymmetry to a combination of two effects: the described second order asymmetry and the finite map pixelization. A nearly perfect symmetry of the obtained CMB ICS emission maps confirms the overall correctness of the whole calculation algorithm. And the cited \(\approx 1\%\) residual asymmetry represents and is accepted as a fair estimate of the uncertainty of all the asymmetry values calculated in the frame of our model in the selected ROIs. Indeed, this uncertainty does not include all systematical model uncertainties, like those related to a difference between the real M31 ISRF and the benchmark MW ISRF utilized here. Now we can look at the obtained ICS emission spectra in the prepared ROIs. The spectra are shown in Fig. 3 separately for IH and OH ROIs. 
For definiteness, the northern ROI halves were mainly used for Fig. 3 (shown by the solid lines). Besides the total ICS emission spectra, the spectra from all three main ISRF components are also shown individually. According to the basic Eq. (1) above, the energies of photon before and after the scattering act are linearly proportional to each other. This fact is clearly reflected in Fig. 3: the ICS spectrum from CMB photons peaks at 0.01-0.1 MeV in the chosen units, from IR photons - at 0.1-1 MeV and from optical photons (as well as the total) - at 10-100 MeV. Overall, the contribution from optical photons dominates at energies above 0.1-1 MeV. The IH ROI is brighter than the OH ROI by more than an order of magnitude. This reflects a very steep radial dependence of DM annihilation rate. The spectral shape does not differ much between the IH and OH ROIs. The magenta lines represent the prompt (primary) gammas due to annihilation of the same WIMPs. The prompt gamma-ray emission maps were computed by the absolutely same GALPROP framework with only one difference: the medium emissivity integration along the line of sight was done up to \(R\approx 200\) kpc, which approximately corresponds to the virial radius of M31 DM halo. The prompt emission component is expectedly brighter than the ICS component, although the latter peaks at significantly lower energies. The dashed lines mark the total ICS spectrum in the southern ROI halves. And in spite of the very large range over the vertical axis, we can clearly see the difference between the northern and southern ROI spectra, meaning that the predicted ICS emission asymmetry effect truly exists! And the asymmetry has different signs in the IH and OH ROIs. In order to have some connection with the real sky, Figure 3: The obtained ICS emission spectra in the IH ROI (_left panel_) and in the OH ROI (_right panel_) for our model \(e^{\pm}\) population produced by the thermal WIMPs with \(m_{x}=60\) GeV (\(\chi\chi\to b\overline{b}\)), which fit the OH gamma-ray emission according to [9]. ICS emission from various ISRF components are shown individually. ”North” and ”south” mean respective ROI halves. The estimated spectrum of the isotropic background was taken from [38], Fig. 3. The measured spectrum of the Galactic foreground at M31 location was taken from Fermi-LAT diffuse Galactic emission map [41]. More details are in Sec. II. Figure 2: The visual image of M31 (obtained from Aladin sky atlas) with the border lines of ROIs chosen for our analysis. The abbreviations have the following meanings: NIH – northern inner halo ROI (i.e. the northern half of the inner halo ROI), SIH – southern inner halo ROI, NOH – northern outer halo ROI and SOH – southern outer halo ROI. The square marked as ”Tile” represents the uppermost and rightmost tile in Fig. 4. The approximate angular size of the diffusion cylinder sky projection is \(3.0^{\circ}\times 1.2^{\circ}\) (major \(\times\) minor axes, base (MED) case). More details are in Sec. II. Fig. 3 also contains the approximate measured spectra of the isotropic background and Galactic foreground at M31 location. The former does not continue below 50 MeV due to a scarcity of data. Another relevant aspect is the actual observed intensity of M31. Concerning the IH ROI, according to [5] the point source (if to spread it over the whole IH) in M31 center produces the intensity, which is approximately the same as the Galactic foreground shown by the red line in Fig. 3. 
For the OH, a good estimate of M31 intensity would be the prompt DM emission spectrum shown by the magenta line, since DM model used here fits all the OH emission according to [9]. Then let us analyze the morphology of the ICS emission. In order to visualize the intensity and asymmetry map, I tiled the sky projection of the diffusion cylinder at the representative photon energy of 1 GeV by the squares with 2.5 kpc sides symmetrically about M31 center - see Fig. 4. The thick line there denotes the major axis, the thick dot - M31 center. Also, Fig. 2 illustrates the position of the uppermost rightmost tile on the sky. The specific size of 2.5 kpc was chosen due to the following reasons. It provides a convenient tiling of the diffusion cylinder projection without too close approach to the boundary and with the sufficient number of map pixels (\(\approx 170\)) inside every tile for mitigating pixel "noise". Also, 2.5 kpc corresponds to the angular resolution, which may be achievable in the near future with new gamma-ray telescopes (see, e.g., [[38], Fig. 19]). The tiles contain their ICS emission intensity value, and the asymmetry values of the total intensity and its component from CMB ISRF (in brackets). The asymmetry in Fig. 4 means the relative intensity difference between the tiles in the rows above the major axis and their symmetric counterparts below. Looking at Fig. 4 we may notice, first of all, that the intensity distribution is almost perfectly symmetric \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \hline \begin{tabular}{c} Parameter configuration \\ \end{tabular} & 0.1 MeV & 1 MeV & 10 MeV & 100 MeV & 1 GeV & 10 (100) GeV \\ \hline \multicolumn{1}{c}{Base: \(m_{x}\) = 60 GeV, \(\chi\chi\to b\bar{b}\),} & & & & & & & \\ \multicolumn{1}{c}{MAX DM density, MED MF/prop.} & \multicolumn{1}{c}{–7; 4} & –11; 8 & –13; 13 & –14; 15 & –16; 16 & –21; 10 \\ \hline Base with DM prompt gammas added. & -7; 4 & –11; 8 & –13; 13 & –10; 5 & –1; 0 & –1; 0 \\ \hline Base with misc parameter variations: & [–7, -2]; & [–11, -3]; & [–13, -5]; & [–14, -8]; & [–16, -10]; & [–21, -17]; \\ \(i=[70^{\circ},77^{\circ}]\), [MIN, MAX] MF/prop. & [3, 8] & [6, 17] & [10, 24] & [11, 26] & [12, 27] & [6, 22] \\ \hline Base with misc parameter variations & [–7, -2]; & [–11, -3]; & [–13, -5]; & [–10, -5]; & –1; & –1; \\ and DM prompt gammas added. & [3, 8] & [6, 17] & [10, 24] & [4, 8] & 0 & 0 \\ \hline Base except \(m_{x}\) = 600 GeV. & -4; 1 & -7; 3 & –10; 8 & –12; 14 & –14; 18 & –15; 16 (–17; 13) \\ \hline Base except \(\chi\chi\to\tau^{+}\tau^{-}\). & -3; 0 & –5; 2 & –9; 10 & –12; 18 & –14; 18 & –18; 15 \\ \hline Base except MIN DM density. & –3; 3 & –5; 6 & –7; 8 & –9; 7 & –12; 6 & –19; 0 \\ \hline \hline \end{tabular} \end{table} Table 3: The ICS emission asymmetry [%] dependence on the photon energy and model parameters configuration. The asymmetry values are provided for both IH ROI and OH ROI (separated by ”;”, ”[]” means a range of values). Figure 4: The ICS emission intensity and asymmetry map at \(E_{\gamma}\) = 1 GeV for the base parameter configuration by means of the square tiling. The thick line marks M31 major axis, the thick dot – the galactic center. 
Each tile contains the following values: the average intensity (multiplied by the photon energy) in the units [MeV cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\)] (with ”e” meaning \(\times 10^{\circ}\)), the total asymmetry defined here as the relative intensity difference in % between the upper row tiles and their symmetric lower counterparts, and the asymmetry of CMB ICS component (in brackets). The upward direction corresponds to northwest. about the minor galactic axis. This matches expectations from our model of M31. The intensity steeply decreases with the central distance. Considering the asymmetry distribution, we see a peculiar picture. Naively, we might expect the asymmetry to have the same sign over the whole hemisphere, i.e., when one hemisphere is uniformly brighter than the other one. However, the asymmetry is very non-uniform: it is negative close to the center in the bulge region and positive elsewhere. The asymmetry goes down to -18% near the center and rises up to 170% in the corners. The asymmetry of CMB component serves as an indicator of the correctness of the calculation algorithm. As was explained above, the CMB component must be nearly symmetric. It is indeed so in the second row from the top. However, the first row demonstrates some minor systematically negative CMB asymmetry at the level of few %, which slightly exceeds our uncertainty estimate of 1%. This asymmetry can be attributed to two possible natural reasons. One is the second order asymmetry due to the difference in cooling rates described above. Indeed, the primary asymmetry in the first row tiles is very large, therefore \(e^{\pm}\) cool down much faster there. And the second reason may be geometrical: the upper (northwest) part of M31 disk is further from us than the lower (southeast) part according to [39]. Then the difference in distances to the emitting regions corresponding to the tiles in the first and fourth rows is larger than that to the tiles in the second and third rows (see also Fig. 1). Therefore, the tiles in the first row may look fainter than their counterparts in the fourth row, but for the second row, this effect is still not pronounced. Thus, we may conclude again, that our model reasonably reproduces the symmetry of CMB component. Now let us try to deduce an intuitive physical interpretation of the obtained non-uniform asymmetry distribution. Fig. 1 serves for this purpose. In general, we may expect, that the biggest ICS emissivity is reached in the places with the highest ISRF density, i.e. inside M31 bulge and disk. Let us choose four trial points (1-4 at Fig. 1) there in order to study the geometry of ICS process. Let us assume that the near vicinity of these points produces a defining contribution to the emission intensity integrated along the whole line of sight inside the diffusion cylinder. The points 1 and 4 represent pairs of locations outside the bulge, the points 2 and 3 - inside. \(\vec{V_{e}}\) is the velocity of emitting \(e^{\pm}\) before scattering. And the photon symbols represent the supposed prevailing momentum direction of ISRF target photons before scattering. Thus, \(\theta_{i}\) are the angles between the initial momenta of photons and \(e^{\pm}\). According to Eq. (1) above, the energy of upscattered photon grows monotonically with \(\theta_{i}\). It means that the energy transfer between \(e^{\pm}\) and a photon happens most efficiently in head-on collisions and least efficiently, when \(e^{\pm}\) chases a photon moving in nearly the same direction. 
Therefore, the ICS emission intensity increases with \(\theta_{i}\), if to consider the latter as the effective average angle, which is defined by the prevailing direction of photon motion. I attempted to guess such directions at Fig. 1. Considering points 1 and 4, the ISRF intensity in the direction along the galactic plane should be much higher than across. And the intensity in the direction outward from the nucleus is probably slightly higher than toward, at least because ISRF density is generally expected to decrease with the radius. These intuitive considerations led to the marked prevailing photon momenta directions at points 1 and 4. \(\theta_{i1}>\theta_{i4}\), hence the upper (northern) hemisphere outside the bulge region must be brighter than the lower (southern) one, which reconciles with Fig. 4. But when our line of sight dives into the bulge, the situation is less obvious - see points 2 and 3. The bulge has a spheroidal shape, and ISRF is expected to be rather isotropic inside. Hence, the bulge by itself does not seem to provide any significantly prevailing direction of photon momenta. The former may be produced by the photons from the disk parts, which are closest to points 2 and 3, as shown in Fig. 1. Then \(\theta_{i2}<\theta_{i3}\), which explains the negative asymmetry in the bulge region according to Fig. 4. This is a qualitative interpretation of the peculiarities of obtained asymmetry distribution best to the author's imagination. However, other mechanisms may indeed work too. Thus, we have also keep in mind, that the medium ICS emissivity also depends on \(e^{\pm}\) concentration, which has a steep radial profile defined majorly by DM density profile. And the interpretation above considered, for simplicity, just the plane section of the system, although the volumic picture does not seem to introduce anything essentially else. Finally, let us analyze the asymmetry dependence on the photon energy and other model parameters. Table 3 serves for this purpose. It lists the asymmetry values starting from \(E_{\gamma}=0.1\) MeV for both IH and OH ROIs (the same ROIs as were used for Fig. 3). Below \(E_{\gamma}\sim 0.1\) MeV the asymmetry vanishes, because the CMB ISRF component dominates in the production of ICS emission and makes it symmetric at those energies, as can be seen in Fig. 3. We can see from the table, that the absolute value of asymmetry generally grows with energy for all the computed parameter configurations. At low energies this trend is caused again by a decrease of CMB ICS contribution with the energy increase. Considering the IH ROI, the growth of asymmetry is monotonous there over the whole energy range. In the OH ROI the asymmetry reaches its maximum typically around \(E_{\gamma}\sim 1\) GeV and then decreases. The first row of the table represents the base parameter configuration, which was described in Sec. II.1 and fits the OH emission according to [9]. We may note for this case, that the asymmetry is quite substantial: it reaches -21% in the IH ROI and 16% in the OH ROI. The second row reflects the same parameter configuration, but when the prompt gamma-ray emission due to WIMP annihilation is added on top of the ICS emission. The prompt emission spectrum is relatively narrow: as can be seen in Fig. 3, it is narrower than the ICS emission spectrum. 
The prompt emission reaches a significant contribution and begins to influence the asymmetry around \(E_{\gamma}\sim 100\) MeV, and then vanishes the asymmetry completely at \(E_{\gamma}\gtrsim 1\) GeV due to a full dominance over the ICS emission at those energies. The third and fourth rows of Table 3 illustrate the ranges of asymmetry values (around the base scenario) due to the uncertainties in M31 inclination and MF/prop. model parameters. The inclination angle (\(i\)) value has key importance for the ICS emission asymmetry effect. For the base scenario \(i=74^{\circ}\) was assumed as the average value among various determinations cited in [40]. Those determinations obtained values ranging from \(70^{\circ}\) to \(77^{\circ}\). And I computed the asymmetry values for these boundary inclination values too in order to study the respective uncertainties. MF/prop. parameters were also varied together with the inclination, although the former are expected to play a secondary role. The ranges of possible values of MF/prop. parameters were taken from [[16], Table 3]. As can be seen in Table 3, the resulting uncertainty ranges of the asymmetry are quite substantial, especially in the OH ROI. The IH ROI did not show a clear correlation between the asymmetry and the varied model parameter values. A low sensitivity to the inclination value in the IH ROI could be due to the fact, that the bulge and ISRF there have quasi-spherical geometry. The next rows of the table show the asymmetry for alternative DM models: heavier (by an order of magnitude) WIMPs with \(m_{x}=600\) GeV, another important annihilation channel \(\chi\chi\to\tau^{+}\tau^{-}\) and MIN DM density profile (from [[16], Sec. IIIB]). In each of these three cases, only the mentioned model element was changed, other parameters correspond to the base configuration. The listed DM models do not have direct relevance for the fit of outer gamma-ray halo. They were computed for a somewhat different purpose - to study the asymmetry dependence on the properties of \(e^{\pm}\) population. In a broader view, WIMP mass (i.e., the rest energy) can be considered as the upper limit for the energy of \(e^{\pm}\). WIMP annihilation channel defines the shape of \(e^{\pm}\) energy spectrum (at the injection). DM density profile defines the degree of concentration of \(e^{\pm}\) source. Hence, by varying these DM model parameters, we may explore to some extent an overall possible variation of the asymmetry due to a variation of the properties of \(e^{\pm}\) population. The thermal WIMPs much lighter than 60 GeV were not considered, because they are quite robustly excluded by various indirect searches and, hence, are not interesting. The fifth row of Table 3 reflects the case of heavier WIMPs. In this case we may note slightly smaller asymmetry at low photon energies and a confident propagation of the asymmetry to very high energies \(E_{\gamma}\sim 100\) GeV. This can be explained by an overall shift of \(e^{\pm}\) population toward higher energies. The noted persistence of the asymmetry at \(E_{\gamma}\sim 100\) GeV can be considered as an indication of possibility of the asymmetry existence at an arbitrarily high energy, as far as \(e^{\pm}\) exist with that energy. The sixth row shows the case of \(\chi\chi\to\tau^{+}\tau^{-}\): at low photon energies it shows appreciably lower asymmetry with respect to the base configuration. And the last row of the table reflects the case of very different DM density profile: MIN, i.e. 
the cored profile, which is opposite to the MAX cusped profile. As was discussed in [[16], Sec. IIIB], MIN and MAX profiles approximately enclose the range of possible DM densities in the central region (inner several kpc) of M31 DM halo. The MIN profile demonstrates significantly lower asymmetry values at all energies in both ROIs in comparison with the base case. Thus, the degree of concentration, i.e. the radial steepness of \(e^{\pm}\) source distribution, substantially influences the asymmetry. The zero value of the asymmetry in the OH ROI at \(E_{\gamma}=10\) GeV does not necessarily mean full symmetry: likely, it is just the result of superposition of negative and positive asymmetries in different parts of the OH ROI. Looking at Table 3 overall, we find that the negative asymmetry in the IH ROI reaches at most -21%, while the positive asymmetry in the OH ROI reaches at most +27%. Indeed, the computed trial cases do not provide full and smooth coverage of the whole parameter space. But they outline well the characteristic asymmetry values, which we may anticipate for an arbitrary \(e^{\pm}\) population. Thus, concluding broadly, the ICS emission from any reasonable \(e^{\pm}\) population is expected to possess an asymmetry at the level of 10%-20% (at least at some energies) in the selected ROIs.

### Observational prospects

Although this paper aims mainly at the theoretical modeling of the ICS emission asymmetry effect, this subsection discusses some general considerations of the effect's observability. Let us go back to Fig. 3, which reflects the base thermal WIMP scenario for the outer gamma-ray halo fit. Considering the ICS emission intensity averaged over the IH and OH ROIs, we see that, unfortunately, it is smaller than the background and foreground emission intensities by more than one order of magnitude at all photon energies in both ROIs. Hence, it would be very challenging to separate even just the ICS component itself from other emission components in the direction of M31. The asymmetry, i.e. the intensity difference between the ROI halves, is even smaller: it can be seen as the difference between the dark-green and dashed lines. In comparison with the backgrounds/foregrounds, this difference is tiny. However, one may consider other regions, where the asymmetry values are larger: according to Fig. 4, the asymmetry exceeds 100% in the locations far from M31 center, i.e., the intensity ratio exceeds a factor of 2 there. But the problem for such locations is a very small intensity amplitude, since the latter decreases steeply with the distance from the center. Hence, changing the ROIs does not seem to provide an obvious sensitivity gain. Thus, at first glance, the overall picture appears to be quite pessimistic, in the sense that the predicted asymmetry effect indeed exists, but mainly theoretically, since its observational detection requires virtually unlimited instrumental sensitivity. At the same time, there are some reasons for optimism. One reason is that, according to Fig. 3, the prompt and ICS emission spectra peak at significantly different energies. This circumstance and the relative narrowness of the prompt spectrum loosen the observational degeneracy between these two components: the bright prompt signal does not "interfere" much with the weaker ICS signal and its asymmetry at \(E_{\gamma}\lesssim 100\) MeV. 
The latter energy range can be optimal for a search of the asymmetry also due to another reason: photon fluxes are relatively high there, since they steeply decrease with energy. At higher gamma-ray energies \(E_{\gamma}\gtrsim 1\) GeV precise intensity measurements are very limited due to small photon counts and, hence, large statistical errors. But at lower energies photon counts are large (for the same detector area); which may allow, in principle, precise intensity measurements and, hence, separation of faint emission components. Another challenge is the angular resolution: that achievable now by Fermi-LAT at low energies is insufficient. However, in general, we may anticipate significant progress in both the sensitivity and angular resolution of gamma-ray telescopes in the future. The relevant future missions, which are expected to have a good sensitivity at low energies, include e-ASTROGAM [38], AMEGO [42], HERD [43] and GAMMA-400 [44; 45]. So far, the discussion has been tied solely to the specific thermal WIMP, which is needed for the fit of OH. However, according to [[9], Table 2], the fit may require higher than thermal cross sections even for the same (high) J-factor values (the exact definition of J-factor notion is given by Eq. (7) there). The required cross section value is highly uncertain due to the complicated systematic uncertainties described there. The ICS emission intensity is linearly proportional to the cross section. Hence, in the case of higher cross sections, the ICS signal would be proportionally brighter and, therefore, easier to detect over backgrounds. As was emphasized in Sec. I, the observed gamma-ray emission in M31, speaking generally, must not necessarily originate from DM. The emission may come from a population of CRs through both the leptonic and hadronic mechanisms. If the leptonic component, i.e. the ICS emission from CR \(e^{\pm}\), is non-zero, then the asymmetry of the total emission would indeed be non-zero too. The asymmetry of ICS emission from CR \(e^{\pm}\) is even easier to detect in comparison with DM \(e^{\pm}\) case; because in the latter case the ICS emission has a secondary role, and the asymmetry is present only inside a relatively narrow energy range. In the CR case there is no prompt gamma-ray component, i.e. the ICS component is primary, and only a potential hadronic component may "interfere" with the ICS emission of interest. Also in this case the ICS emission spectrum and its asymmetry do not have a steep cut-off near WIMP rest energy - at \(E_{\gamma}\sim 10\) GeV in our case. The spectrum from CR \(e^{\pm}\) would be much wider over the energy. The asymmetry is naturally expected to be the most prominent in the case, when all the observed emission comes from CR \(e^{\pm}\). Summarizing, detection of the asymmetry effect is a very challenging task, but potentially not hopeless for future highly sensitive gamma-ray telescopes. ## III Estimation of DM e\({}^{\pm}\) ICS and bremsstrahlung emission contributions This section is dedicated to a somewhat different aspect of DM interpretation of the outer gamma-ray halo. Best to the author's understanding, the work [9] took into account only the prompt emission component due to DM annihilation for the fit of OH emission and neglected by the secondary components - ICS and bremsstrahlung from DM \(e^{\pm}\). This section aims to check the validity of such approximation by estimating the emission fluxes from these secondary contributions and comparing them with the prompt emission flux. 
This estimation was done for the same thermal WIMP with \(m_{x}=60\) GeV annihilating to \(b\bar{b}\), which fits the OH. The secondary emission fluxes were calculated for the annular region on the sky, which was used in [9] for the fitting procedure and called "spherical halo (SH)" there. This annulus is centered at M31 center, and has angular radii \(0.4^{\circ}\) and \(8.5^{\circ}\). For definiteness, I consider here the case "I" according to the classification in [9]; i.e., when annihilating WIMPs in both MW and M31 contribute to the signal along the M31 line of sight, and there is no effect of absorption of the MW DM signal by the isotropic emission template. The secondary emission fluxes were computed here individually for MW and M31. The DM density profile is the same here as in the previous section. The profile parameters for MW were obtained according to the recipe in [9]. And the MED MF/prop. configuration was employed. Let us start the discussion from the ICS component of the secondary emissions. M31 ICS emission intensity maps were already calculated by GALPROP in the course of the modeling conducted in the previous section. Here we need the emission flux from the annular region cited above for a convenient comparison with the prompt fluxes in [9]. This region encloses the whole M31 diffusion zone sky projection except the central disk with \(0.4^{\circ}\) radius. MW ICS emission intensity maps were calculated in the same way, with DM density and MF/prop. parameters adapted for MW. MW ICS emission spreads over the whole sky, of course, since we look at the sky from inside the MW diffusion cylinder. The extracted ICS emission fluxes from the annular ROI at three representative energies are written out in Table 4. Another potentially relevant emission mechanism is bremsstrahlung from DM \(e^{\pm}\). Bremsstrahlung emission maps were computed by GALPROP too. This calculation requires knowledge of the interstellar gas distribution in both galaxies. GALPROP conveniently provides the relevant gas distributions for MW. This allows quite detailed all-sky bremsstrahlung intensity maps to be computed. For M31 such detailed gas maps do not exist, and MW gas maps were substituted. Such a rough approximation is acceptable for the following reasons. First, contrary to MW, we do not need a detailed intensity map of M31; we need only the total flux from almost the whole galaxy (without its central part). The total bremsstrahlung flux from the whole galaxy is presumably proportional to the total galactic gas mass. MW and M31 have very similar total gas mass values - they are provided in Table 1. Hence, MW gas distributions are expected to provide a reasonable estimate of the total bremsstrahlung flux from M31. Also, Table 2 displays the values of certain parameters used in the bremsstrahlung calculation procedure. The resulting bremsstrahlung fluxes are displayed in Table 4. Now we can compare the obtained secondary emission fluxes with the prompt ones. The latter were extracted from [[9], Fig. 2 (right), the whole SH] and are written out above the horizontal line in Table 4. In general, we can note that the prompt emission flux is much larger than the ICS one, and the latter in turn is much larger than the bremsstrahlung flux. Thus, the latter plays the smallest role among the three components and, incidentally, would not create much of a nuisance background for detection of the ICS emission asymmetry. All MW emissions are much brighter than M31 ones. 
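To make this comparison concrete before discussing its implications, the few lines below simply restate the Table 4 fluxes and print the relevant ratios; every number is copied from Table 4.

```python
# Fluxes from Table 4 [MeV cm^-2 s^-1] at 0.1, 1 and 10 GeV
energies   = (0.1, 1.0, 10.0)        # GeV
mw_prompt  = (3e-7, 5e-6, 3e-6)
m31_prompt = (4e-8, 7e-7, 4e-7)
mw_sec     = (3e-8, 1e-8, 8e-10)     # MW ICS + bremsstrahlung
m31_sec    = (2e-10, 9e-11, 7e-12)   # M31 ICS + bremsstrahlung

for E, pm, p31, sm, s31 in zip(energies, mw_prompt, m31_prompt, mw_sec, m31_sec):
    print(f"{E:5.1f} GeV: MW secondary / M31 prompt = {sm / p31:.3f}, "
          f"all secondary / all prompt = {(sm + s31) / (pm + p31):.4f}")
```

With the tabulated values, the MW secondary flux amounts to roughly three quarters of the M31 prompt flux at 0.1 GeV, but it drops to the percent level and below at 1-10 GeV, which is the basis of the discussion that follows.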
[9] used the photon energy range starting from 1 GeV for the fit. We see from Table 4 that all the secondary components are negligible in comparison with both the MW and M31 prompt components at \(E_{\gamma}\gtrsim 1\) GeV. Thus, it is valid to neglect the secondary emissions in the fit at those energies. However, at \(E_{\gamma}=0.1\) GeV we can see that the MW secondary flux becomes comparable to the M31 prompt flux. Hence, if one were to employ lower energies \(E_{\gamma}\lesssim 1\) GeV for the fit, it would be necessary to take into account at least the MW ICS component (if the scheme "I" is realized, when both galaxies are included).

There is another important caveat, which concerns the annihilation cross section value. So far we have discussed the case of the thermal cross section, which is the most natural and approximately corresponds to the parameter configuration described by the first row of Table 2 in [9]. However, other rows of this table represent alternative viable parameter configurations: they have less extreme substructure boosts, but require more exotic DM with high annihilation cross sections. Lowering the boost factor would not decrease the secondary emission fluxes significantly, since the boost factor works mainly at the largest radii \(R\sim 100\) kpc and does not affect the annihilation rate much inside the diffusion cylinder, i.e. at \(R\lesssim 20\) kpc. But increasing the cross section would linearly increase the secondary fluxes. Hence, for the lower boost factor cases, the secondary emission fluxes would be higher and contribute more to the total flux. Thus, one may need to take certain secondary components into account if cross sections higher than thermal are employed for the fit. Speaking very generally, there is yet another - the fourth - gamma-ray emission type due to annihilating WIMPs: the emission from decaying pions, which are produced by collisions of DM protons and antiprotons with the interstellar gas. However, hadronic emissions are beyond the scope of this work.

## IV Interrelation between DM radio constraints and fit of the outer halo

This section aims to discuss another aspect, which is very relevant for the DM explanation of the outer gamma-ray halo emission: how the WIMP parameter values needed for the fit relate to the constraints obtained from radio observational data on M31. DM \(e^{\pm}\) inevitably produce synchrotron emission in the galactic MF in the radio band, and radio observations provide a powerful tool for constraining the properties of any \(e^{\pm}\) population. This aspect was already discussed briefly in [[16], Sec. VI]; here more details are revealed. Similarly to the previous sections, only the Einasto DM density profile is considered here, because the alternative NFW profile used in [9] may be unrealistic due to the problems described in Sec. II.1. Fig. 5 shows the WIMP parameter plane - the annihilation cross section vs. mass plane - and the green rectangle there represents the region needed for the OH fit. The range of cross sections is taken from the last two columns of [[9], Table 2]. Both scenarios I and II are considered here. These cross section values are taken without the uncertainties related to those in the J-factor values (columns 7-9 there). The dot-dashed lines in Fig. 5 display the parameter limits derived in [16] from the radio non-thermal emission maps of the M31 bulge region. These exclusion lines were taken from Fig. 8 (left) there, preserving the original line style. The line color encodes the corresponding MF/prop. parameter configurations, and all three lines reflect the case of the MAX DM density profile according to the classification in [16].
\begin{table}
\begin{tabular}{l c c c}
Emission component & Flux at 0.1 GeV & Flux at 1 GeV & Flux at 10 GeV \\
\hline
Total observed =     & \(3\times 10^{-7}\)  & \(6\times 10^{-6}\)  & \(4\times 10^{-6}\) \\
MW prompt +          & \(3\times 10^{-7}\)  & \(5\times 10^{-6}\)  & \(3\times 10^{-6}\) \\
M31 prompt           & \(4\times 10^{-8}\)  & \(7\times 10^{-7}\)  & \(4\times 10^{-7}\) \\
\hline
MW ICS +             & \(2\times 10^{-8}\)  & \(1\times 10^{-8}\)  & \(7\times 10^{-10}\) \\
MW bremsstrahlung =  & \(6\times 10^{-9}\)  & \(4\times 10^{-9}\)  & \(2\times 10^{-10}\) \\
MW secondary         & \(3\times 10^{-8}\)  & \(1\times 10^{-8}\)  & \(8\times 10^{-10}\) \\
M31 ICS +            & \(2\times 10^{-10}\) & \(8\times 10^{-11}\) & \(6\times 10^{-12}\) \\
M31 bremsstrahlung = & \(7\times 10^{-12}\) & \(6\times 10^{-12}\) & \(3\times 10^{-13}\) \\
M31 secondary        & \(2\times 10^{-10}\) & \(9\times 10^{-11}\) & \(7\times 10^{-12}\) \\
\end{tabular}
\end{table}
Table 4: The emission fluxes, in units of [MeV cm\({}^{-2}\) s\({}^{-1}\)], of the various components due to DM annihilation in MW and M31 from the annular ROI centered at the M31 center with angular radii \(0.4^{\circ}\) and \(8.5^{\circ}\). The prompt emission fluxes above the horizontal line are taken from [9]. DM parameters correspond to the OH fit.

As was already noted, this MAX profile is the same as the Einasto profile in [9], except for its substructure boost factor radial dependence. Indeed, we must compare WIMP parameters in the radio and gamma-ray bands for similar DM density profiles in order to draw meaningful inferences. However, the differences in the substructure boost models do not imply a significant practical difference in our case. The reason is that the radio constraints in [16] were derived employing mainly the central region of M31 with \(R\lesssim 3\) kpc, where the role of substructures is very small due to their tidal disruption. Thus, it was calculated that the cross section radio limits differ by just 4%-6% between the cases with and without the substructure boost of the MAX DM density profile (the boost radial profile was taken from [23] for this case). Hence, the radio limits are quite universal with respect to the assumed boost factor model, and they can be compared directly to any of the cases outlined in [[9], Table 2] within \(\approx\) 10% precision. We see from Fig. 5 that only the MIN MF/prop. configuration fails to exclude a very small part of the green rectangle. But this part is so tiny in relative area that we can state a quite robust exclusion of the WIMP which fits the OH by the radio constraints for _any_ reasonable MF/prop. configuration! One possible caveat to this strong conclusion is the uncertainty in the fitting cross section values due to the uncertainties in the J-factor values for both MW and M31. These uncertainties are written out in [[9], Table 2, columns 7-9] and are related mainly to a possible asphericity of the DM halos of both galaxies. We can see from those columns that the cross section may deviate by up to 2-3 times from the average fitting value in the last two columns. This deviation may stretch the green rectangle in Fig. 5 in the vertical direction. The shift of the upper rectangle border does not matter, since the upper cross section values are firmly excluded. Only the bottom border is relevant. The bottom border is defined by the first row of [[9], Table 2].
This row shows that the J-factor uncertainties may provide a maximal cross section decrease with respect to the average value of \(1.52\times 1.32\times 1.38\approx 2.8\) times. However, at the same time, there is another systematic uncertainty of observational origin. [9] analyzed individually the whole OH (SH in their terminology) and its northern/southern halves (SHN/SHS), and obtained the fitting cross sections for all three cases. The cross section values for the whole halo and the southern half match quite well, but the northern half requires a cross section value \(\approx 2\) times larger. This discrepancy can be attributed to significant observational systematics due to the overall weakness of the purported OH emission. Here I would like to note that there is a mistake in the text of [9], where the cross section ratios for the different halo parts are described. This mistake was confirmed by the author of [9] in private communication. Thus, the text must state "SH/SHS \(\approx 1\)" and "SH/SHN \(\approx 0.5\)" instead of "SH/SHS = 1.8" and "SH/SHN = 1.0". Table 2 there provides the cross section values for the whole halo (SH), i.e. the minimal values with respect to this observational systematics, since SHN requires an \(\approx 2\) times higher cross section. Therefore, this observational systematics is able to counteract the cross section decrease by 2.8 times that may be provided by the J-factor uncertainties described above. Summarizing, we would need extremely fine-tuned circumstances in order to push the bottom border of the green rectangle into the (at least partially) allowed zone in Fig. 5. These circumstances include an extreme boost factor configuration, a high and tuned asphericity of the DM halos of both galaxies, favorably tuned observational systematics, and others. A random coincidence of all these factors is unrealistic. Thus, the caveat described in this paragraph does not seem to be able to change our conclusion about the incompatibility with the radio constraints.

Now let us discuss another, more significant caveat. [9] conducted the fit avoiding the IH region, i.e., the fit was done over the radial distances \(R\gtrsim 5\) kpc \(\leftrightarrow\) 0.4\({}^{\circ}\). However, the radio constraints were derived, on the contrary, inside the IH region (outside, these constraints are weak). Thus, the radio and gamma-ray data are being compared in essentially non-overlapping regions, and the conclusion about the incompatibility between those data rests crucially on the extrapolation of the same DM density profile from the region outside the IH to the region inside. However, the DM density is poorly known inside the inner several kpc, as was extensively described in [21], [[16], Sec. IIIB]. The DM density may, in principle, follow the necessary Einasto (MAX) profile outside the IH and go much below MAX inside, still preserving the fit; i.e., the DM density need not follow the same radial dependence over all distances. Various sophisticated effects may wash out the central DM density cusp - e.g., [46]. The minimal possible density in the inner few kpc was estimated in [22]. This density represents the MIN profile, which can be seen in [[16], Fig. 1]. It lies much below the MAX profile in the inner kpc, which would imply a much lower DM annihilation rate and, therefore, much weaker radio constraints (see [[16], Figs. 5,8]). Such an unusual mixed (MAX/non-MAX) profile would not spoil the total DM halo mass, since the inner several kpc contribute very little to it.
Thus, in principle, we _can_ construct a DM density profile which would satisfy the outer gamma-ray halo fit by WIMP annihilation, survive the strong WIMP radio constraints and not violate any basic requirements from stellar kinematics, etc., although this is admittedly a somewhat fine-tuned construction. Summarizing this section, we can state a significant tension between the radio constraints and the outer gamma-ray halo fit. However, a certain fine-tuning of the DM density profile seems to be able to alleviate this tension, hence allowing a DM interpretation of the OH. Admittedly, the discussion here was mainly qualitative.

## V Conclusions and discussion

The main goal of this work was to model theoretically the interesting effect of the asymmetry of ICS emission from a population of relativistic \(e^{\pm}\) in the M31 halo. This effect was pointed out for the first time in [13]. The asymmetry arises because the ISRF of a disk galaxy is anisotropic, and we view M31 under a certain inclination angle. The asymmetry appears as the difference in ICS emission intensity between points which are located symmetrically with respect to the major galactic axis. ICS emission intensity maps were computed by the GALPROP code (v57). As the source of the \(e^{\pm}\) population, thermal WIMPs with \(m_{x}=60\) GeV annihilating to \(b\bar{b}\) were assumed. This WIMP may be responsible for the gamma-ray emission from the M31 OH according to [9]. This \(e^{\pm}\) population was chosen as a trial example to study the effect in general and, at the same time, in order to build specific observational predictions for the DM interpretation of the OH emission. Some alternative \(e^{\pm}\) populations from DM were studied too. The computation confirmed the presence of the asymmetry effect at a significant level. Table 3 displays the main results of the calculations: the asymmetry dependence on the energy and model parameters in the IH and OH ROIs. We can emphasize the following key properties of the asymmetry: 1) it appears at energies \(E_{\gamma}\gtrsim 0.1\) MeV; 2) it has the opposite sign in the IH and OH ROIs; 3) by absolute value it is comparable in the IH and OH ROIs; 4) it has a moderate sensitivity to the galactic and \(e^{\pm}\) population model parameters; 5) it ranges at most from 0 to \(\approx 30\%\) by absolute value in the IH and OH ROIs; 6) the addition of the prompt emission from DM annihilation washes out the asymmetry at \(E_{\gamma}\gtrsim 1\) GeV.

A potential observational detection of the asymmetry would provide valuable inferences for understanding the emission mechanism. We can outline the following semi-qualitative classification in this regard (see also the code sketch after this list):

1. If a diffuse gamma-ray emission is absolutely symmetric at all energies with respect to the major axis of M31, this likely implies a purely hadronic emission mechanism.
2. A mild emission asymmetry at the level of a few \(\%\) likely implies a mixed leptonic/hadronic emission scenario.
3. A significant asymmetry at the level \(\gtrsim 10\%\) implies a dominance of the leptonic sources, which may include \(e^{\pm}\) from CRs, DM and even, possibly, MSPs.
4. The asymmetry in the case of DM \(e^{\pm}\) appears over a significantly narrower energy range than in the case of CR \(e^{\pm}\).

This scheme does not necessarily need to be applied globally to the whole galaxy. Different parts of the galactic halo may have different emission mechanisms, and the latter can be tested through asymmetry measurements regionally.
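To make the scheme above concrete, a toy encoding of it as a diagnostic could look as follows; the 2% and 10% thresholds are illustrative stand-ins for "a few %" and "\(\gtrsim 10\%\)", not values derived in this work.

```python
def likely_mechanism(asymmetry_percent, narrow_energy_range=False):
    """Toy diagnostic mapping a measured asymmetry (in %) to a likely mechanism."""
    a = abs(asymmetry_percent)
    if a < 2:       # item 1: essentially symmetric emission
        return "likely purely hadronic"
    if a < 10:      # item 2: mild asymmetry
        return "likely mixed leptonic/hadronic"
    # Items 3-4: significant asymmetry means leptonic dominance; a narrow energy
    # range of the asymmetry points towards DM e+-, a broad one towards CR e+-.
    return ("leptonic-dominated, DM-like (narrow energy range)"
            if narrow_energy_range
            else "leptonic-dominated, CR-like (broad energy range)")

print(likely_mechanism(12, narrow_energy_range=True))
```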
Overall, the logic here applies only to a large-scale diffuse emission and does not apply to an emission from discrete sources. And, of course, the above scheme is largely idealistic in the sense that it ignores possible nuisance asymmetries due to, e.g., inhomogeneities of the galactic medium. Considering specifically the DM interpretation of the outer gamma-ray halo, this emission component is expected to manifest the following distinct feature: an \(\approx 10\%\) asymmetry in the energy range \(\sim\)(1-100) MeV in both the IH and OH ROIs (see the third row of Table 3). Thus, potential asymmetry measurements may provide a valuable observational tool for diagnosing the emission generation mechanism. In particular, such a tool may confirm the DM origin of the emission and, thus, eventually unravel the DM nature. However, from an observational point of view, such measurements are very challenging and require very high sensitivity, since M31 is a rather faint object on the gamma-ray sky. It is generally hard to separate the various emission components inside M31. And it would be especially difficult to detect the asymmetry of the potential DM signal, since its ICS component has an intensity much lower than both the Galactic foreground and the isotropic background, as illustrated in Fig. 3. Hadronic emissions inside M31 may create an additional nuisance background too.

Figure 5: WIMP parameter plane. The green rectangle shows the parameter region which fits the outer gamma-ray halo according to [[9], Table 2] (both I and II scenarios, Einasto profile). The dot-dashed lines represent the parameter constraints from radio observations at 95% confidence level from [16] for the same density profile. The line color encodes the different MF/prop. parameter configurations. The black dashed line reflects the thermal cross section (taken from [20; 47]).

Summarizing, the revealed ICS emission asymmetry effect is rather theoretical so far, since its observational detection may require virtually unlimited sensitivity. However, if, for example, the brightest (central) region of M31 is shining through ICS, the asymmetry detection there may be achievable in the near future. In general, it is a very sophisticated task to model the quantitative observational predictions for any specific gamma-ray telescope; this is beyond the scope of this work. Meanwhile, as was already mentioned in Sec. IV, the authors of [9] reported an observed intensity difference of \(\approx 2\) times between the northern and southern halves of the OH. One may naturally ask whether this observed asymmetry can be explained by the ICS emission asymmetry being discussed. Considering DM as the source of \(e^{\pm}\), the answer is absolutely no, since, as was noted above, the bright prompt DM emission component washes out the asymmetry in the energy range \(E_{\gamma}\geqslant 1\) GeV employed in [9]. The ICS emission asymmetry effect would, in principle, exist in any disk galaxy which is viewed under an inclination angle different from \(0^{\circ}\) or \(90^{\circ}\). Examples of such nearby galaxies include the Large Magellanic Cloud (LMC), M81, M33 and others. However, all of them except the LMC are farther away than M31. Hence, it would be even more difficult to study the details of their gamma-ray emission. A potential direction for further development of this topic is to conduct the same asymmetry modeling for the case of a CR \(e^{\pm}\) population and compare the results with the DM case.
The results may differ due to the differences in both the \(e^{\pm}\) source spatial distribution and the energy spectrum at injection. Other sections of the paper discussed other aspects which are directly relevant for the DM interpretation of the outer gamma-ray halo. One aspect is the role of the secondary emissions, i.e. ICS and bremsstrahlung, due to DM annihilation in both MW and M31. It was shown that these secondary components can be neglected in the fit of the OH in the relevant energy range \(E_{\gamma}\gtrsim 1\) GeV for the case of a nearly thermal annihilation cross section. However, at lower energies and/or in the case of higher cross sections, one may need to take some of the secondary components into account. The second aspect is the relation between the WIMP radio constraints derived recently in [16] and the outer gamma-ray halo fit. These constraints significantly restrict the possibility of the fit (see Fig. 5). The only (rather peculiar) way to preserve the WIMP explanation of the OH and bypass the radio constraints seems to be the construction of a finely-tuned DM density profile, which follows the Einasto profile required for the fit in the OH and goes below Einasto in the IH. But looking more broadly, the DM explanation of the OH does not appear probable, even if the radio constraints are somehow evaded. The main reason is that this explanation requires an extremely high substructure boost factor for the case of reasonable and motivated WIMP annihilation cross sections. Also, WIMP constraints from observations of other objects, particularly dwarf MW satellites, may provide other independent restrictions. Thus, it is probably premature to consider the M31 outer gamma-ray halo phenomenon as a DM manifestation. More observational data on M31 are needed in order to unravel the large systematic uncertainties and understand the emission mechanisms better. It is admittedly difficult to provide full coverage of such a complicated topic within the frame of a single paper. The modeling here was tied mainly to quite standard thermal WIMPs and the diffusion of their annihilation products. However, more exotic scenarios may be relevant, e.g., the so-called leptophilic DM [48], which produces many \(e^{\pm}\), which in turn would produce a bright ICS emission. Regarding the diffusion model, one may consider a larger size of the diffusion zone (e.g., [8]).

###### Acknowledgements.
I dedicate this paper to the kind memory of **Nikolay Topchiev** and **Arkadii Galper**, who recently passed away. They had been working in the field of experimental high-energy astrophysics for nearly 50 and 70 years, respectively. They will always be remembered as enthusiastic scientists and great supervisors, fellows and friends. I greatly appreciate the following software, which is very useful: Wolfram Mathematica; "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France [49]; NumPy [50]; Healpy [51]; WebPlotDigitizer [52]. I am very grateful to Alexei Alexeev for providing access to a high-performance computing cluster, to Mikhail Zavertyaev for providing another server, and also to Mikhail Razumeyko.
2306.00977
AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation
During interactive segmentation, a model and a user work together to delineate objects of interest in a 3D point cloud. In an iterative process, the model assigns each data point to an object (or the background), while the user corrects errors in the resulting segmentation and feeds them back into the model. The current best practice formulates the problem as binary classification and segments objects one at a time. The model expects the user to provide positive clicks to indicate regions wrongly assigned to the background and negative clicks on regions wrongly assigned to the object. Sequentially visiting objects is wasteful since it disregards synergies between objects: a positive click for a given object can, by definition, serve as a negative click for nearby objects. Moreover, a direct competition between adjacent objects can speed up the identification of their common boundary. We introduce AGILE3D, an efficient, attention-based model that (1) supports simultaneous segmentation of multiple 3D objects, (2) yields more accurate segmentation masks with fewer user clicks, and (3) offers faster inference. Our core idea is to encode user clicks as spatial-temporal queries and enable explicit interactions between click queries as well as between them and the 3D scene through a click attention module. Every time new clicks are added, we only need to run a lightweight decoder that produces updated segmentation masks. In experiments with four different 3D point cloud datasets, AGILE3D sets a new state-of-the-art. Moreover, we also verify its practicality in real-world setups with real user studies.
Yuanwen Yue, Sabarinath Mahadevan, Jonas Schult, Francis Engelmann, Bastian Leibe, Konrad Schindler, Theodora Kontogianni
2023-06-01T17:59:10Z
http://arxiv.org/abs/2306.00977v4
# AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation ###### Abstract During interactive segmentation, a model and a user work together to delineate objects of interest in a 3D point cloud. In an iterative process, the model assigns each data point to an object (or the background), while the user corrects errors in the resulting segmentation and feeds them back into the model. From a machine learning perspective the goal is to design the model and the feedback mechanism in a way that minimizes the required user input. The current best practice segments objects one at a time, and asks the user to provide _positive clicks_ to indicate regions wrongly assigned to the background and _negative clicks_ to indicate regions wrongly assigned to the object (foreground). Sequentially visiting objects is wasteful, since it disregards synergies between objects: a positive click for a given object can, by definition, serve as a negative click for nearby objects, moreover a direct competition between adjacent objects can speed up the identification of their common boundary. We introduce AGILE3D, an efficient, attention-based model that (1) supports simultaneous segmentation of multiple 3D objects, (2) yields more accurate segmentation masks with fewer user clicks, and (3) offers faster inference. We encode the point cloud into a latent feature representation, and view user clicks as queries and employ cross-attention to represent contextual relations between different click locations as well as between clicks and the 3D point cloud features. Every time new clicks are added, we only need to run a lightweight decoder that produces updated segmentation masks. In experiments with four different point cloud datasets, AGILE3D sets a new state of the art, moreover, we also verify its practicality in real-world setups with a real user study. ## 1 Introduction Accurate 3D instance segmentation is a crucial task for a variety of applications in computer vision and robotics, where the goal is to predict a segmentation mask for every object in a 3D scene. Existing 3D segmentation methods [4; 11; 14; 17; 22; 25; 36; 37; 38; 41; 42] use deep learning with full mask supervision, and hence need extensively human-labeled training data. Manually annotating ground-truth segmentation masks on 3D scenes is expensive, and fully-automatic 3D instance segmentation approaches do not scale well to unseen object categories of an open-world setting [20], as also demonstrated later. Interactive segmentation techniques have been commonly adopted for large-scale 2D image segmentation [3], where a user interacts with a segmentation model by providing assistive clicks or scribbles iteratively. While interactive image segmentation has been the subject of extensive research for 2D images [5; 16; 21; 23; 26; 31; 40], there are only a few approaches that support users in generating 3D segmentation masks [20; 28; 34; 30] but they allow user interaction mainly on the 2D domain [30; 43]. This requires images with associated camera poses w.r.t. the 3D scene: this is not suitable for non-camera sensors (_e.g._, LiDAR), or for dynamically changing viewpoints like in AR/VR settings. Even with available images, providing feedback for the same object from multiple viewpoints can be tedious. Here, we focus on interactive segmentation directly in point clouds. Recently [20] have proposed a single-stage method for interactive 3D segmentation that operates directly on the point cloud and achieves state-of-the-art performance. 
The task is formulated as a binary foreground/background segmentation for one object at a time (Fig. 1, _left_), and follows the mainstream approach to interactive 2D image segmentation [5, 16, 21, 23, 26, 31, 40]: user input takes the form of clicks that indicate missed portions of the target object ("positive clicks") or background regions within the segmentation mask ("negative clicks"). The two sets of click coordinates are turned into binary masks and concatenated with the 3D point coordinates to form the input to a deep learning model. A subtle, but important limitation of that approach is that objects are processed sequentially, one at a time: obviously, object outlines within a scene are boundaries between different objects (rather than between an isolated object and a monolithic background): Positive clicks for one object can, by definition, serve as negative clicks for other, nearby objects. In most cases also the opposite is true, as negative clicks will often also lie on other objects of interest. Handling multiple objects jointly rather than sequentially also brings a computational advantage, because every round of user input requires a forward pass. In this work, we rethink interactive segmentation of 3D point clouds from the perspective of _information retrieval_[2] by modeling the user clicks as _queries_ that retrieve semantic information from the input 3D point cloud. During the iterative refinement process, these queries are updated based on the corrective user clicks which are subsequently used to obtain the most relevant information. The scope of each of these queries is gauged by employing attention mechanisms. To that end, the clicks are first encoded into high-dimensional feature vectors, and they interact with each other and also with the 3D point cloud via cross-attention, with the aim to describe an object or object part. To exploit not only the locations of user clicks, but also their temporal order in the iterative annotation process, they are fed through a spatio-temporal positional encoding. Unlike schemes built on top of binary foreground/background segmentation (_e.g._, [20]) our click-as-query strategy imposes no constraint on the number of objects and seamlessly models clicks on multiple objects, including their contextual relations. As click guidance for segmentation may leave room for ambiguities, multiple iterations of queries are typically required to reach a satisfactory result - a property our problem shares with other information retrieval tasks. To aggregate the roles of all click queries and obtain a single, holistic segmentation mask, we fuse the predictions for each query to region-specific masks and train the network in a way that regions compete for space. In terms of network architecture, we employ a standard 3D sparse convolutional backbone to extract per-point features from the input point cloud, and then enable interactions between click queries and point cloud features through a mask decoder (Fig. 1, _right_). Disentangling the encoding of the point cloud from the processing of the clicks makes it possible to pre-compute the backbone features, such that during iterative user feedback one must only run the lightweight mask decoder, thus significantly reducing the computation burden (Fig. 1). Interactive segmentation systems should be trainable with limited data, and able to annotate unseen datasets in an open-world setting. 
We train on a single dataset, ScanNetV2-Train [9], and then evaluate on ScanNetV2-Val [9], S3DIS [1], KITTI-360 [24]. It is too expensive and impractical to collect real user click sequences for training. To mimic the testing conditions, we design an iterative training strategy to generate click sequences with a more similar distribution to that during test. Our iterative training strategy applies to both single-object and multi-object training and it can improve the segmentation significantly. We start by comparing to the state-of-the-art in interactive single-object segmentation. To validate the benefits of interactive multi-object segmentation, we set up the benchmark, propose the evaluation metrics, and an automatic evaluation strategy. Our method outperforms the baseline in both single-object and multi-object tasks. We also develop a user interface and perform a real user study to verify the effectiveness of our model and the proposed evaluation strategy in real annotation tasks.

Figure 1: **Architecture comparison.**_Left:_ InterObject3D [20]. _Right:_ our AGILE3D. Given the same set of 10 user clicks, AGILE3D can effectively segment three objects while [20] can only segment one. Since we only run a lightweight mask decoder per iteration, not a full forward pass through the entire network, [20] takes 0.5s for one object while we need 0.25s to segment all three.

In summary, our main contributions are:

1. We introduce AGILE3D, a novel method for interactive 3D segmentation that can segment multiple objects concurrently by leveraging information from each click for the segmentation masks of multiple objects.
2. We show that our AGILE3D achieves state-of-the-art performance for multi-object as well as single-object interactive segmentation in multiple datasets requiring fewer user clicks while providing better segmentation masks.
3. We provide the setup, evaluation and analysis for interactive multi-object segmentation on 3D scenes along with a novel iterative training strategy that better mimics user behavior during training and is applicable to the multi-object scenario.

## 2 Related Work

**3D instance segmentation.** Fully-supervised 3D instance segmentation is dominated by two schemes: proposal-based and grouping-based. Proposal-based methods [11; 14; 41; 42] consider a top-down strategy that first generates object proposals and then predicts instance segmentation masks inside each proposal. Grouping-based methods [4; 17; 22; 25; 36; 37; 38] employ a bottom-up pipeline that learns per-point features (_e.g._, geometric shifts or latent features) and groups points into instances. Several recent approaches [12; 13; 32; 39] aim to avoid relying on proposals or heuristic clustering algorithms and propose dynamic kernels or queries to predict instance masks directly. To relieve the labeling cost of dense annotation, [7; 15] explore weakly-supervised 3D instance segmentation by learning from weak annotations such as sparse points or bounding boxes. While weakly-supervised methods require less annotation, they typically exhibit a notable performance gap compared to fully-supervised methods. Interactive segmentation differs from both fully and weakly-supervised methods in several aspects. Firstly, the latter methods are typically trained for each dataset and are unable to generalize to classes that are not part of the training set, whereas interactive segmentation aims to train a single model that can segment new objects in an open-world setting.
Additionally, fully and weakly-supervised methods cannot incorporate additional information (_e.g._, user input) to further refine any inaccurate predictions, whereas interactive segmentation aims to produce high-quality masks using the model's ability to interact with humans. Interactive 3D segmentation.Although interactive image segmentation has been the subject of extensive research [5; 16; 21; 23; 26; 31; 40], there are only a few approaches that support users in generating 3D segmentation masks [20; 28; 30; 34; 43]. Previous works [30; 43] focused on semantic labeling of a 3D scene during online 3D reconstruction [34], rather than acquiring high-quality instance masks in 3D. Furthermore, they rely on hand-crafted feature extraction and label propagation, a strategy that quickly reaches its limits as the target scene becomes more complex. More recent works shift the user interaction to the 2D domain [34; 43]. As this requires images along with their associated camera poses _w.r.t._ the 3D scene, it is not suitable for non-camera sensors such as LiDAR, or for dynamically changing viewpoints like in augmented reality settings. Even if images with known poses are available, providing feedback for the same object in multiple viewpoints is fairly cumbersome and inefficient. The closest work to ours is the approach by Kontogianni _et al._[20], which also aims for efficient 3D annotation that directly operates on point clouds. But [20] concatenates encoded clicks maps with the point cloud and passes the combined representation through a deep network. Instead, we encode clicks as queries, enabling interactive multi-object segmentation and require only a forward pass through a lightweight mask decoder for each refinement. Interactive 2D image segmentation.There are several works on interactive 2D image segmentation [5; 16; 21; 23; 26; 31; 40] but they all model the interactive object segmentation task as binary foreground/background segmentation of one object at a time. There are some concurrent unpublished works [19; 27; 45] in the 2D image domain which encode user corrections as learnable prompts but they specialize in the 2D domain and are constrained by its specific characteristics. Our work, on the other hand, designs an interactive object segmentation pipeline specifically for 3D scenes, proposes to encode user corrections as queries and exploits them for multi-object segmentation. ## 3 Method Consider a 3D scene \(P\in\mathbb{R}^{N\times C}\), where \(N\) is the number of 3D points in the scene and \(C\) is the feature dimension associated with each point. \(C\) is normally set as 3 for locations \(xyz\), otherwise 6 if colors \(rgb\) are also available as input. **Interactive single-object segmentation.** Given such a scene, in interactive single-object segmentation [20], the user provides a sequence of clicks, where positive clicks are considered on the desired object and negative clicks are on the background. The segmentation mask is obtained through an iterative process: the model provides the user with a segmentation mask, then the user provides feedback to the model via positive/negative clicks. The model provides an updated mask given the user corrections. The process repeats until the user is satisfied with the result. **Interactive multi-object segmentation.** We extend the above formulation to incorporate interactive multi-object scenarios. Let us assume a user wants to segment \(M\) target objects in a scene. 
We denote user clicks as \(S=\left(c_{1},c_{2},...,c_{k}\right)_{k=1}^{K}\), where \(k\) is the click index. \(k\) also doubles as a timestamp indicator since the clicks come as a sequence in time. Each click \(c_{k}\) is represented by two attributes \(c_{k}=\left\{p_{k},o_{k}\right\}\), where \(p_{k}\in\mathbb{R}^{3}\) are the 3D coordinates and \(o_{k}\in\left\{0,1,...,M\right\}\) is the region index, indicating whether the click comes from the background (when \(o_{k}=0\)) or associated with object \(o_{k}\) (when \(o_{k}\geqslant 1\)). \(S\) is initialized as an empty sequence and iteratively extended when the user gives more clicks. Given the 3D scene \(P\) and click sequence \(S\), our goal is to predict a single, holistic segmentation mask \(\mathcal{M}\in\left\{0,1,...,M\right\}^{N}\), which indicates the interest region each point belongs to. Please note we aim to ensure that each point belongs to a single segmentation, which is different from the interactive single-object segmentation task, where several passes of the same scene with different objects of interest might result in points assigned to several segmentation masks. When \(M=1\), the above formulation matches the interactive single-object segmentation setting as in [20]. ### Agile3d Our overall architecture is summarized in Fig. 2. It consists of (a) a feature backbone that extracts per-point features, (b) a click-to-query module that converts user clicks to semantically-aware spatial-temporal query vectors, (c) a mask decoder that iteratively refines the click queries and point features, and finally, (d) a mask prediction module that predicts a segmentation for all desired objects. **Feature backbone**. Our feature backbone is a sparse convolutional U-Net, based on the Minkowski Engine [8] as in [20]. It takes as input a 3D scene and produces a feature map \(F\in\mathbb{R}^{N^{\prime}\times D}\), where \(D\) is the feature dimension. Unlike [20] which sends the concatenation of \(P\) and encoded click maps to the backbone, we only feed \(P\) and deal with user clicks separately in our click-to-query module. **Click-to-query module** converts each user click \(c_{k}\) to query \(q_{k}\). We rethink interactive segmentation as information retrieval. In this sense, a click is a user query indicating a desired region. From a machine learning perspective, the click query should properly encode prior knowledge of an object so that the system can correctly classify all relevant points. On the other hand, clicks are a sequence of 3D points that are inherently spatial and temporal. Motivated by this, we encode the click query as three parts \(q_{k}=\left\{\mathbf{c}_{k},\mathbf{s}_{k},\mathbf{t}_{k}\right\}\), each of which models its _content_, _spatial_, and _temporal_ properties. Figure 2: **Model of AGILE3D.** Given a 3D scene and a user click sequence, (a) the feature backbone extracts per-point features and (b) the click-to-query module converts user clicks to high-dimensional query vectors. (c) The mask decoder refines the point features and click queries through multiple attention mechanisms. (d) The mask prediction module first fuses the per-click mask logits to region-specific mask logits and then produces a final mask through a softmax. With \(\rightarrow\) we denote the user click information and with \(\rightarrow\) the scene information. The _content_ part \(\mathbf{c}_{k}\in\mathbb{R}^{D}\) is initialized from the point feature map \(F\) of the backbone. 
For each click, we find the nearest voxel position in \(F\) and use the feature of that voxel as \(\mathbf{c}_{k}\). The _spatial_ part \(\mathbf{s}_{k}\in\mathbb{R}^{D}\) is created by mapping the click coordinates \(p_{k}\) to the same feature space as \(\mathbf{c}_{k}\) using Fourier positional encodings [33]. Similarly, we transform the timestamp \(k\) to a _temporal_ embedding \(\mathbf{t}_{k}\in\mathbb{R}^{D}\) using a sin/cos positional encoding of [35]. We consolidate the spatial and temporal parts to a _positional_ part by summing \(\mathbf{s}_{k}\) and \(\mathbf{t}_{k}\). After initialization, the content part of each query \(\mathbf{c}_{k}\) along with per-point features \(F\) will be iteratively refined through the decoder attention mechanism. **Mask decoder** is designed to enable interaction between the click queries themselves and between them and the point features. Each decoder layer consists of a click-to-scene attention module (C2S), a click-to-click attention module (C2C), a feed-forward network, and finally a scene-to-click attention module (S2C). All queries are represented by the positional and the content part. We denote the positional part of all click queries as \(Q_{p}\) and the content part at layer \(l\in\{0,1,...,L\}\) as \(Q_{c}^{l}\). We use the same representation for the scene points. We denote the point features at layer \(l\) as \(F_{c}^{l}\), where \(F_{c}^{0}=F\), which represents the content part of the 3D points. The positional part of 3D points \(F_{p}\) is encoded via Fourier positional encodings [33] based on voxel positions to ensure access to point cloud geometric information to the decoder. The C2S performs cross-attention from click queries to point features, which enables click queries to extract information from relevant regions in the point cloud. In the C2C, we let each click query self-attend to each other to realize inter-query communications. The C2C is followed by an FFN that further updates each query. All three steps only update click queries while the point features are static. To make the point features click-aware, we add the S2C that performs cross-attention from point features to click queries. More details on the mask decoder can be found in the Supp. material. **Mask prediction**. We apply a shared mask head (MLP) \(f_{mask}(\cdot)\) to convert the per-click content embeddings \(Q_{c}^{L}\) to \(K\) mask embeddings and then compute the dot product between these mask embeddings and the refined point features, which reproduces \(K\) mask logits maps \(\mathcal{C}_{\zeta}\in\mathbb{R}^{N^{\prime}\times K}\), _i.e._, \(\mathcal{C}_{\zeta}=F_{c}^{L}\cdot f_{mask}(Q_{c}^{L})^{T}\). Since there can be multiple queries representing the same region (either an object or background), we apply a per-point max operation on \(\mathcal{C}_{\zeta}\) that shares the same region. This step can be achieved through the association between each click and its region index \(o_{k}\), and gives us \(M+1\) region-specific mask logits map \(\mathcal{R}_{\zeta}\in\mathbb{R}^{N^{\prime}\times(M+1)}\). We obtain the final segmentation mask \(\mathcal{M}\in\{0,1,...,M\}^{N^{\prime}}\) through a softmax, which indicates the region each point belongs to. ### User Simulation and Training To address the challenge of collecting real user clicks for training interactive models, simulated clicks are commonly used in the interactive community [21; 26; 31]. 
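Summarizing the mask prediction step of Sec. 3.1 in code form, a minimal PyTorch-style sketch (with hypothetical names, not the authors' released implementation) could look as follows: per-click mask logits are reduced to region-specific logits by a per-point max over clicks that share a region index, and a softmax/argmax then yields a single, non-overlapping label map.

```python
import torch

def fuse_click_masks(per_click_logits, click_region_idx, num_objects):
    """Fuse per-click mask logits into one holistic multi-object segmentation.

    per_click_logits : (N, K) tensor with one mask-logit column per click query.
    click_region_idx : (K,) long tensor; o_k = 0 for background, 1..M for objects.
    Returns a per-point label map with values in {0, ..., M}.
    """
    N, _ = per_click_logits.shape
    # Region-specific logits: per-point max over all clicks that share a region.
    region_logits = per_click_logits.new_full((N, num_objects + 1), float("-inf"))
    for region in range(num_objects + 1):
        cols = (click_region_idx == region).nonzero(as_tuple=True)[0]
        if cols.numel() > 0:  # regions without any click keep -inf and never win
            region_logits[:, region] = per_click_logits[:, cols].max(dim=1).values
    # Softmax makes neighbouring regions compete for space; argmax assigns each
    # point to exactly one region, yielding a non-overlapping mask.
    return region_logits.softmax(dim=1).argmax(dim=1)

# Toy usage: 5 clicks in a scene with 2 target objects
# (clicks 0 and 3 on the background, clicks 1 and 2 on object 1, click 4 on object 2).
per_click_logits = torch.randn(1000, 5)
labels = fuse_click_masks(per_click_logits, torch.tensor([0, 1, 1, 0, 2]), num_objects=2)
```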
Similarly, evaluation protocols simulate real user behavior to ensure unbiased results and reproducible scores. In [20] positive and negative clicks are uniformly sampled on the object from the object neighborhood respectively. At test time they imitate a user who always clicks at the center of the largest error region. However, the training and test strategies are different since randomly sampled clicks are independent of network errors and lack a specific order. We propose an iterative strategy that approximates real user behavior even during training. Although iterative training has been explored in interactive image segmentation [21; 26; 31], we tackle the more complex scenario of simultaneous user interaction with multiple 3D objects. **Multi-object iterative training.** Our iterative strategy is shown in Fig. 3. We simulate user clicks for each batch separately in an iterative way with \(n\) number of iterations sampled uniformly from 1 to \(N_{iter}\). \(S^{i}\) are the clicks sampled in the \(i\) th iteration. The training starts from the initial clicks (\(S^{0}\)) collected from each target object's center. Full iterative training, like in testing, is costly, requiring an iteration after each sampled click. Therefore, when sampling clicks for the next iteration, instead of only sampling one click from the largest error region, we sample \(N_{i}\) clicks from the top \(N_{i}\) error regions (one click per region), where \(N_{i}=\text{min}\left(M,|\mathcal{E}_{i}|\right)\) and \(\mathcal{E}_{i}\) is a set of error clusters in the \(i\) th iteration. This strategy can generate training samples that contain a large number of clicks in a small number of iterations, keeping the training complexity reasonable. We freeze the model when sampling clicks in iterations \(1\) to \(N_{iter}-1\) and only allow backpropagation in the last iteration. Figure 3: **Iterative training.** **Multi-object user simulation during test.** Interactive single-object segmentation [5; 16; 20; 21; 23; 26; 31; 40] evaluation strategies imitate a user who clicks at the largest error region. In our multi-object scenario, we share the spirit and enable users to focus first on the largest errors in the whole scene, rather than the ones on one given object. Our user simulation strategy starts with one click at the center of each foreground object to get an initial prediction. We then compute error clusters (comparing prediction to ground truth): error clusters contain all points of objects that were assigned to the same wrong label. The next click is sampled from the center of the largest cluster. **Loss.** We supervise our network with the cross-entropy and the Dice loss [10] for multi-class segmentation since we want neighboring masks to compete for space and ensure that each point is assigned to a single label. The number of classes varies per scene and is \(M+1\), where \(M\) is the number of objects the user wants to segment. \[\mathcal{L}=\frac{1}{N}\sum\nolimits_{p\in P}w_{p}(\lambda_{\text{CE}} \mathcal{L}_{\text{CE}}(p)+\lambda_{\text{Dice}}\mathcal{L}_{\text{Dice}}(p)) \tag{1}\] where \(\lambda_{\text{CE}}\) and \(\lambda_{\text{Dice}}\) are the balancing loss weights and \(w_{p}\) the distance of the points to the user click. Additional implementation details are in the supplementary. ## 4 Experiments **Tasks.** AGILE3D is a versatile model, able to perform both interactive single- and multi-object segmentation in 3D scenes. 
Since there is no existing benchmark for interactive multi-object segmentation, we propose this new task including a new evaluation protocol and metrics. We evaluate our method in both scenarios.

**Datasets.** A key aspect of interactive segmentation systems is their ability to work on datasets that exhibit significant variations in data distribution compared to the training data. To this end, we train the system on ScanNetV2-Train [9], an indoor dataset, and subsequently, we evaluate on datasets that follow distinct distributions, including ScanNetV2-Val [9] (same distribution), S3DIS [1] (different sensor), and even KITTI-360 [24] (outdoor LiDAR point clouds). We also train two models on ScanNet-train: one with 40 classes and another with only the subset of 20 benchmark classes. Note that our method does not rely on semantic class information.

**Evaluation metrics.** For _single-object_ evaluation we follow the evaluation protocol of [20]. We compare the methods on (1) NoC@q% \(\downarrow\), the average number of clicks needed to reach q% IoU, and (2) IoU@k \(\uparrow\), the average IoU for k clicks per object (capped at 20). We extend the evaluation protocol for _multi-object_. We no longer enforce a per-object budget but allow a total budget of \(M\times 20\) clicks for a user who wants to segment \(M\) objects in a scene. We propose the \(\overline{\text{IoU@k}}\) \(\uparrow\) metric, which represents the average IoU of all target objects after an average of \(\overline{\text{k}}\) clicks allocated to each object; \(\overline{\text{k}}\) is the total number of clicks averaged over the \(M\) objects, and each object does not necessarily receive exactly \(\overline{\text{k}}\) clicks. Similarly, we report \(\overline{\text{NoC@q\%}}\) \(\downarrow\), which represents the average number of clicks needed to reach an average q% IoU over all target objects in the scene.

**Baseline.** We use InterObject3D [20] as our baseline in interactive single-object segmentation. However, [20] is designed to segment objects sequentially and cannot directly be evaluated by our interactive multi-object segmentation protocol. In our protocol, we aim to obtain a non-overlapping segmentation mask for all \(M\) objects and enable the users to focus on the largest error across all \(M\) objects in each iteration instead of a single object. To this end, we use [20] to process each object sequentially with one click per object, obtain a probability mask for each object, fuse the final segmentation using argmax over those probability masks, and then use our multi-object user simulation to sample the next click. We additionally created a strong re-implementation of [20], an **enhanced** baseline (InterObject3D++) that already achieves state-of-the-art by incorporating our proposed iterative training. Outperforming this even stronger baseline shows that there is more merit in leveraging the clicks of multiple objects together than handling objects in isolation.

### Evaluation on Single-object Segmentation.

**Comparison with state-of-the-art.** Results are summarized in Tables 1, 2 and Figs. 4, 5. We compare our method with [20] in scenarios of increasing difficulty and distribution shift: _ScanNet-train_[9]\(\rightarrow\)_ScanNet-val[9]_: We evaluate our method on the ScanNet-val dataset, considering small distribution shifts. In both the ScanNet20 and ScanNet40 setups, our method surpasses the baseline of [20] and the enhanced baseline, as shown in Tab. 1 (In.1,2).
We test the generalization of our method to novel classes by evaluating the trained model on ScanNet 20 classes with the additional remaining 20 unseen classes. Tab. 1 clearly demonstrates that our AGILE3D outperforms the current state-of-the-art in all metrics. _ScanNet-train[9] \(\rightarrow\) S3DIS[1]_: We assess the effectiveness of our model's generalization by employing it on diverse datasets: S3DIS, an indoor dataset with different characteristics than ScanNet, and KITTI-360, an outdoor dataset captured with a LiDAR sensor and containing distinct object classes. Our model, trained on ScanNet 40 classes, outperforms the state-of-the-art baseline in both cases. For example, with only 5 clicks, our AGILE3D achieves an impressive IoU of 83.5 on S3DIS, surpassing the baseline's performance of 72.4. _ScanNet-train[9] \(\rightarrow\) KITTI-360[24]_: Our method excels in the challenging domain shift of training on ScanNet and testing on KITTI-360, surpassing the state-of-the-art baseline by a factor of 4 and outperforming the enhanced baseline by a factor of 2 on IoU@5. Our method's performance is particularly impressive in the low click regime (\(\leq\) 3 clicks). With just a single click, our method produces masks with an IoU of \(\approx 60\) on both ScanNet and the unseen dataset of S3DIS (Tab. 2). This is a significant improvement over the baseline, which achieves on average \(\approx 40\) IoU with one click. With three clicks, our method achieves even higher IoU scores of \(75.4\) and \(77.4\) on ScanNet and S3DIS, respectively. **Comparison with fully-supervised methods.** Interactive object segmentation aims to generalize to data distributions beyond those seen in training. Fully supervised methods for 3D instance segmentation achieve remarkable results on tasks and data distributions similar to those encountered during training. However, we demonstrate that even with minimal human feedback, we can surpass the performance of fully supervised methods, particularly in classes that were not seen during training. Our method achieves precision four times higher than the strong state-of-the-art method Mask3D for unseen classes with just one click (Tab. 3). As shown in Fig. 4, our AGILE3D obtains high-quality masks of novel objects (_e.g._, statue, phone) with a few user clicks. 
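The single-object numbers in these tables follow directly from per-object IoU-vs-clicks curves. Below is a minimal sketch of how NoC@q and IoU@k can be computed from such curves (the curves in the example are made up for illustration; the multi-object variants additionally average the click budget over the \(M\) objects of a scene).

```python
import numpy as np

def noc_at_q(iou_per_click, q, max_clicks=20):
    """Number of clicks needed to first reach IoU >= q for one object.

    iou_per_click[i] is the IoU after i+1 clicks; returns max_clicks (the click
    budget) if the target IoU is never reached.
    """
    reached = np.nonzero(np.asarray(iou_per_click) >= q)[0]
    return int(reached[0]) + 1 if reached.size else max_clicks

def iou_at_k(iou_per_click, k):
    """IoU after k clicks (or after the last recorded click, if fewer were given)."""
    return float(iou_per_click[min(k, len(iou_per_click)) - 1])

# Made-up per-click IoU curves for three objects of one scene.
curves = [
    [0.55, 0.72, 0.81, 0.86, 0.91],
    [0.40, 0.65, 0.78, 0.83, 0.88],
    [0.62, 0.80, 0.85, 0.90, 0.93],
]
print("NoC@80:", np.mean([noc_at_q(c, 0.80) for c in curves]))  # avg clicks to 80% IoU
print("IoU@3 :", np.mean([iou_at_k(c, 3) for c in curves]))     # avg IoU after 3 clicks
```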
\begin{table} \begin{tabular}{l l|c c c c} \hline \hline **Method** & **Train \(\rightarrow\) Eval** & **IoU@5 \(\uparrow\)** & **IoU@5 \(\uparrow\)** & **NoC@5 \(\uparrow\)** & **NoC@5 \(\uparrow\)** & **NoC@9 \(\uparrow\)** \\ \hline InterObject3D [20] & & 67.6 & 77.6 & 81.2 & 9.6 & 11.8 & 14.6 \\ InterObject3D+ & ScanNet40 \(\rightarrow\) ScanNet40 (\(\approx\) antesm) & 76.4 & 82.2 & 83.8 & 7.8 & 10.2 & 13.4 \\ AGILE3D (Ours) & **78.5** & **82.9** & **84.5** & **7.4** & **9.8** & **13.1** \\ InterObject3D [20] & & 72.4 & 79.9 & 82.4 & 8.9 & 11.2 & 14.2 \\ InterObject3D+ & ScanNet40 \(\rightarrow\) ScanNet40 & 78.0 & 82.9 & 84.2 & 7.7 & 10.0 & 13.2 \\ AGILE3D (Ours) & **79.9** & **83.7** & **85.0** & **7.1** & **9.6** & **12.9** \\ InterObject3D [20] & & 72.4 & 83.6 & 88.3 & 6.8 & 8.4 & 11.0 \\ InterObject3D+ & ScanNet40 \(\rightarrow\) S3DIS-A5 & 80.8 & **89.2** & **91.5** & 5.2 & 6.7 & **9.3** \\ AGILE3D (Ours) & **80.5** & 88.2 & 89.5 & **4.8** & **6.4** & 9.5 \\ InterObject3D [20] & & 14.3 & 26.3 & 35.0 & 19.1 & 19.4 & 19.7 \\ InterObject3D+ & ScanNet40 \(\rightarrow\) KITTI-360 & 19.9 & 40.6 & **85.1** & 17.0 & 17.7 & 18.4 \\ AGILE3D (Ours) & **44.4** & **49.6** & 54.9 & **14.2** & **15.5** & **16.8** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results on interactive single-object segmentation.** We compare our method with the current state-of-the-art [20] (and our enhanced [20]) in several scenarios. Our method offers 3D masks of higher quality with fewer user clicks and generalizes better to new classes and datasets. \begin{table} \begin{tabular}{l l c c c c} \hline \hline & **Method** & **\#clicks** & **AP** & **AP\%** & **AP\%** \\ \hline \multirow{4}{*}{**S3DIS**} & Mask3D [29] & – & 51.5 & 77.0 & 90.2 \\ & & 1 & 53.5 & 75.6 & 91.3 \\ & & 2 & 64.0 & 86.4 & 96.0 \\ **AGILE3D** & 3 & 70.3 & 91.4 & 98.1 \\ **(Ours)** & 10 & 83.2 & 98.3 & 99.8 \\ & & 20 & 86.8 & 99.2 & 100.0 \\ \hline \multirow{4}{*}{**S3DIS**} & Mask3D [29] & – & 5.3 & 13.1 & 24.7 \\ & & 1 & 24.8 & 45.7 & 72.4 \\ & & 2 & 36.9 & 63.5 & 85.8 \\ **AGILE3D** & 3 & 45.5 & 74.4 & 92.2 \\ **(Ours)** & 10 & 67.8 & 94.8 & 99.7 \\ & & 20 & 74.5 & 97.6 & 99.9 \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparison with fully-supervised on ScanNet.** We compare our method with the state-of-art fully supervised instance segmentation method. Both methods have been trained on ScanNet20-seen and evaluated on the ScanNet20-seen and evaluated on the ScanNet20-seen. Figure 4: **Open-world segmentation** from ScanNet20. AGILE3D can segment new objects like statue and phone. ### Evaluation on Multi-object Segmentation. Results are summarized in Tab. 4. We adapt [20] to be evaluated on our protocol for a fair comparison. We do not enforce a per-object click budget for the baselines but allow them to sample the next click from the biggest error region across all target objects. Nevertheless, the baselines are still limited to segmenting one object in each forward pass. Our AGILE3D outperforms InterObject3D [20] and our enhanced baseline InterObject3D++, requiring significantly fewer clicks for the same quality of segmentation masks, _e.g._, AGILE3D requires 4 clicks less than InterObject3D and 2 clicks less than InterObject3D++ to achieve on average 80% IoU on ScanNet40. The benefits of handling clicks on all objects together in AGILE3D can also be validated qualitatively in Fig. 6. 
In this scene, after a total of 8 clicks (both methods), AGILE3D achieves an average IoU of 83.4 _vs._ 76.0 of InterObject3D++. These results suggest the benefits of interactive multi-object segmentation: (1) _Click sharing_: in AGILE3D, clicks on one object are naturally utilized to segment other objects, _e.g._, the positive click on Chair 1 (Fig. 6 (g)) naturally serves as a negative click for Chair 2 () and improves the segmentation for both objects (compare with Fig. 6 (f)). By contrast, in the baselines, clicks are given individually and only have an effect on that one object. (2) _Holistic reasoning_: since we segment all the objects together, AGILE3D can capture their contextual relationships, enabling holistic reasoning. For example, clicks on the armrest of one chair will also help correct the armrests of other chairs (Fig. 6 (i)(j)). (3) _Globally-consistent mask_: AGILE3D encourages different regions to directly compete for space in the whole scene so that each point is assigned exactly one label. By contrast, in single-object segmentation, the segmentation mask for each object is obtained separately. Post-processing is required to generate a non-overlapping mask. (4) _Faster inference_: AGILE3D can pre-compute backbone features _once_ per scene (\(\sim\)0.05s) and run a light-weight mask decoder per iteration (\(\sim\)0.02s). By contrast, InterObject3D++ must go through the entire network per iteration (\(\sim\)0.05s). In this scene, after 8 clicks, AGILE3D takes an inference time of 0.15s _vs._ 0.4s of InterObject3D++. ### User Study To go beyond simulated user clicks and assess performance with clicks from real, human user behavior, we perform a user study. The right table shows that real users achieve comparable results to the simulator. The slight drop in \(\overline{\text{IoU@3}}\) is expected as different users have varying preferences for mask quality. More details are in the Supp. material. \begin{table} \begin{tabular}{l c|c c c|c c c} \hline \hline **Method** & **Train \(\rightarrow\) Eval** & **IoU@3 \(\uparrow\)** & **IoU@1 \(\uparrow\)** & **IoU@1 \(\uparrow\)** & **NoC@3 \(\uparrow\)** & **NoC@5 \(\downarrow\)** & **NoC@5 \(\downarrow\)** \\ \hline InterObject3D & & 75.1 & 80.3 & 81.6 & 10.2 & 13.5 & 16.6 \\ InterObject3D++ & ScanNet40 \(\rightarrow\) ScanNet40 & 79.2 & 82.6 & 83.3 & 8.6 & 12.4 & 15.7 \\ AGILE3D (Ours) & & **82.3** & **85.0** & **86.0** & **6.3** & **10.0** & **14.4** \\ \hline InterObject3D & & 76.9 & 85.0 & 87.3 & 6.8 & 8.8 & 13.5 \\ InterObject3D++ & ScanNet40 \(\rightarrow\) SDDIS-A5 & 81.9 & 88.3 & 89.3 & 5.7 & 7.6 & 11.6 \\ AGILE3D (Ours) & & **86.3** & 88.3 & **90.3** & **3.4** & 5.7 & **9.6** \\ \hline InterObject3D & & 10.5 & 22.1 & 31.0 & 19.8 & 19.8 & 19.9 \\ InterObject3D++ & ScanNet40 \(\rightarrow\) KITTI-360 & 16.7 & 37.1 & **52.2** & 18.3 & 18.9 & 19.3 \\ **AGILE3D (Ours)** & & **40.5** & **44.3** & 48.2 & **17.4** & **18.3** & **18.8** \\ \hline \hline \end{tabular} \end{table} Table 4: **Quantitative results on interactive multi-object segmentation.** We compare our multi-object method with the state-of-the-art in interactive single-object segmentation to show the importance of multi-object interaction. We adapt the state-of-the-art method in interactive single-object segmentation to our multi-object protocol for a fair comparison (_Baseline_ paragraph of Sec. 4). Figure 5: **Qualitative results on interactive single-object segmentation.** Given only a few clicks, our AGILE3D can offer better results. 
Models are trained on ScanNet40 and evaluated on ScanNet40. ### Ablation Studies and Discussion We ablate several aspects of our architecture and training in Tab. 5. All the experiments are conducted on the ScanNet40 dataset. More ablations are available in the Supplementary material. **Iterative training.** We validate the effectiveness of our iterative training strategy (Sec. 3.2) by training a model with randomly sampled clicks as in [20]. Tab. 5 (1) shows that the random sampling strategy performs noticeably worse than our iterative training (82.0 _vs._ 84.4 \(\overline{\text{IoU}}@\overline{10}\)). **Attention design.** We employ multiple attention mechanisms to enable explicit interaction between the click queries themselves and between them and the point features. Tab. 5 shows that the absence of any type of attention mechanism harms the model's performance. In particular, the cross-attention between click queries and point features, _i.e._, (2) C2S attn and (4) S2C attn, has a more pronounced effect than the self-attention between click queries, (3) C2C attn. **Spatial-temporal encodings for click queries.** We regard user clicks as an ordered sequence of 3D coordinates and supplement each click query with a spatial-temporal encoding. Tab. 5 (6) demonstrates that both the spatial and the temporal encoding contribute to improved performance. Interestingly, the temporal encoding exhibits a greater impact than the spatial one. We hypothesize that this is because we initialize the content part of click queries with backbone features, which already contain geometric information. **What does the backbone learn?** As we exclusively input the 3D scene to the backbone, the backbone learns general features at object level, which we visualize in Fig. 7 (_left_). These features are beneficial for click queries for extracting target object features in the click-to-scene attention module. **What does each click query attend to?** We encode clicks as queries, which undergo iterative updates by cross-attending to point features and self-attending to one another, resulting in a meaningful representation of the target object. In Fig. 7 (_right_), we visualize the attention maps of each click query, which reveal that each query attends to specific regions, _e.g._, click \(c_{1}\) attends to the entire chair with emphasis on legs, \(c_{2}\) captures the general shape of the table while \(c_{3}\) focuses on the nearby leg, aligning well with the user's intention to use click \(c_{3}\) for refining the segmentation of the table leg. ## Conclusion We have introduced AGILE3D, a novel method for interactive 3D segmentation that simultaneously segments multiple objects in context. Drawing inspiration from information retrieval, we encode user clicks as queries and employ attention mechanisms to enable interactions among clicks and between clicks and the 3D scene. While offering fast inference for real-time applications, AGILE3D further achieves state-of-the-art results on both interactive single- and multi-object benchmarks, in particular in the most challenging low-click regime. We hope AGILE3D inspires further research in the new direction of interactive multi-object 3D segmentation. ## Acknowledgments and Disclosure of Funding We sincerely thank all volunteers who participated in our user study. Francis Engelmann and Theodora Kontogianni are postdoctoral research fellows at the ETH AI Center.
This project is partially funded by the ETH Career Seed Award - Towards Open-World 3D Scene Understanding, NeuroSys-D (03ZU1106DA) and BMBF projects 6GEM (16KISK036K).
2302.03751
Understanding Why ViT Trains Badly on Small Datasets: An Intuitive Perspective
Vision transformer (ViT) is an attention neural network architecture that is shown to be effective for computer vision tasks. However, compared to ResNet-18 with a similar number of parameters, ViT has a significantly lower evaluation accuracy when trained on small datasets. To facilitate studies in related fields, we provide a visual intuition to help understand why it is the case. We first compare the performance of the two models and confirm that ViT has less accuracy than ResNet-18 when trained on small datasets. We then interpret the results by showing attention map visualization for ViT and feature map visualization for ResNet-18. The difference is further analyzed through a representation similarity perspective. We conclude that the representation of ViT trained on small datasets is hugely different from ViT trained on large datasets, which may be the reason why the performance drops a lot on small datasets.
Haoran Zhu, Boyuan Chen, Carter Yang
2023-02-07T20:56:21Z
http://arxiv.org/abs/2302.03751v1
# Understanding Why ViT Trains Badly on Small Datasets: An Intuitive Perspective ###### Abstract Vision transformer (ViT) is an attention neural network architecture that is shown to be effective for computer vision tasks. However, compared to ResNet-18 with a similar number of parameters, ViT has a significantly lower evaluation accuracy when trained on small datasets. To facilitate studies in related fields, we provide a visual intuition to help understand why it is the case. We first compare the performance of the two models and confirm that ViT has less accuracy than ResNet-18 when trained on small datasets. We then interpret the results by showing attention map visualization for ViT and feature map visualization for ResNet-18. The difference is further analyzed through a representation similarity perspective. We conclude that the representation of ViT trained on small datasets is hugely different from ViT trained on large datasets, which may be the reason why the performance drops a lot on small datasets. Our code and documentation are publically available at: [https://github.com/BoyuanJackChen/MiniProjec](https://github.com/BoyuanJackChen/MiniProjec) t2_VisTrans. ## 1 Introduction Attention mechanism has become the most effective tool in natural language processing tasks. In recent years, it has been proven to perform well on computer vision tasks, such as image detection, image classification and video processing. With the advent of Visual Transformer (ViT)[1], the pure attention network began to rival the accuracy of convolutional neural networks (CNN) in image classification tasks. Further research on vision transformers will not only improve the capability of machine learning in vision tasks, but also improve our understanding on transformers and their relationship with CNN. Nonetheless, one major drawback of vision transformer is its bad performance on small-scale datasets [2]. Traditional CNN can be trained to make high accuracy predictions on the test dataset, and their accuracy increases as we increase the number of parameters and layers. On the other hand, vision transformers usually have poor performance when trained on small datasets. Existing Methods such as Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) are proven to improve the transformers' accuracy on small datasets[3], yet their accuracy is still lower than CNN's. We explore the reason why vision transformers perform worse than CNN on smaller datasets. We provide visual evidence, such as attention visualization and forward propagation, and representation similarity analysis. We expect our results may contribute to the understanding of the attention mechanism on image data, as well as to inspire a new solution to improve vision transformer networks. The contributions of our work could be summarized as follows: * By comparing the performance of ViT and ResNet on CIFAR-10, CIFAR-100, and SVHN datasets, we confirm that ViT does not perform well on small datasets compared with CNN. * We conduct attention visualization on ViT and feature map visualization on ResNet to visualize the weights of each layer in each model. * We empirically measure the representation similarity between ViT and ResNet on small datasets and compare the difference on large datasets in [4]. Unlike [4], which mainly compares the representation similarity on large datasets, we focus on analyzing differences and explore reasons for performance drop of ViT on small datasets. 
## 2 Model and Data We initialize a ViT model based on [1] and train it on three image datasets: CIFAR-10, CIFAR-100, and Street View House Numbers (SVHN) [5]. This is done with two primary objectives: to reproduce existing literature results on ViT for small datasets [3; 6], and to understand which kind of small dataset ViT learns less well, by comparing the evaluation accuracy. We compare the performance of our ViT model, which has 9.6M parameters, to the performance of a standard ResNet18 model, which has 11.5M parameters. We choose the latter for comparison because it is widely used to assess model efficiency, and its parameter count is relatively close to that of ViT, when compared to other ResNet architectures. Overall, ViT performs comparably to ResNet on SVHN, but significantly worse on CIFAR-10 and CIFAR-100. We will discuss their respective accuracy and provide evidence to support this finding in the following sections. ### Dataset and Augmentation We train models for image classification using the CIFAR-10, CIFAR-100, and SVHN datasets. The CIFAR-10 data contains 50k training images and 10k testing images with 10 classes, each class having the same number of images for both training and testing sets. The CIFAR-100 data has the same image size, and the same volumes of training and testing data. The only difference from CIFAR-10 is that CIFAR-100 has 100 classes, evenly assigned to images in both training and testing sets. Therefore, the number of samples in each class is only 1/10 of that in CIFAR-10, making the training harder. The SVHN dataset contains 600k images of digits of house numbers, and each label is the digit that image shows. All three datasets have images of 32\(\times\)32 pixels in three channels of color. The unification of this factor eliminates the possible difference in outcome based on image size. We introduce data augmentation methods for image classification tasks including flipping and cropping. For the training set, we crop the input image at a random location to 32 \(\times\) 32 pixels with a padding of 4, and randomly flip the image horizontally with probability 0.5; this is applied to all three datasets. For both ResNet and ViT, we did not apply normalization to pixel values. ### Model Architecture For the ResNet, we implement the standard ResNet-18 architecture, as it is widely used for comparison in many works on image classification. Each residual block has 2 convolutional layers, with three expansions at a rate of 4 every two residual blocks. For the Vision Transformer (ViT), we divide the image into patches of size 4. Each attention layer has 8 heads, each having a dimension of 64. The transformer encoder has a depth of 6 and a drop-out rate of 0.1. Finally, the MLP layer has a dimension of 512 and a drop-out rate of 0.1. ### Training Details We train two models, ResNet-18 and Vision Transformer, on 3 different datasets: CIFAR-10, CIFAR-100, and SVHN. To make the comparison fair, all of the hyper-parameters are kept the same, such as learning rate = 1e-4, batch size = 100, and the Adam optimizer. We run each experiment for 500 epochs and use the wandb (Weights and Biases) [7] library to track and visualize the results. The built-in visualization features in wandb provide multiple plots of metrics, mainly train/test loss and accuracy, allowing us to compare across different models with the same dataset. ## 3 Model Accuracy Table 1 shows the performance of ViT compared with ResNet18 on the CIFAR-10, CIFAR-100 and SVHN datasets after training for 500 epochs.
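For reference, the data pipeline and optimization settings described in Secs. 2.1-2.3 amount to roughly the following PyTorch sketch. The torchvision ResNet-18 here is only a convenient stand-in for the exact models described above, the ViT is omitted, and evaluation and wandb logging are left out; treat it as an illustrative configuration rather than the authors' actual training script.

```python
import torch
import torchvision
import torchvision.transforms as T

# Augmentation from Sec. 2.1: random 32x32 crop with padding 4 and
# horizontal flips with probability 0.5; no pixel normalization.
train_tf = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                          download=True, transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=100, shuffle=True)

# Stand-in model; the paper trains both a ResNet-18 and a small ViT this way.
model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Sec. 2.3 settings
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(500):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```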
Figure 3(a)-3(c) show the test accuracy curves during training. We can see that ViT performs significantly worse on CIFAR-10 and CIFAR-100 compared to ResNet18. The error rate of the former is twice that of the latter. Nonetheless, ViT performs on par with ResNet18 on SVHN, a color digit-recognition dataset, though its convergence is slower, as seen in Figure 3(c). Figure 1: ResNet-18 architecture. Figure 2: ViT architecture, image from [1]. In our experiments, the transformer encoder has L=6 layers. This result confirms the assumption that ViT performs worse on small datasets. ViT achieves a similar result on SVHN because of the simplicity of the dataset. In general, ViT also performs well on MNIST, which is a one-channel version of digit recognition. If the model can fit well on one channel, then it is likely to also fit well on three channels. ## 4 Visualization of Layers In this section, we gain intuition about what each layer of a model does by using attention-weight extraction for the ViT model and feature mapping for ResNet. Unfortunately, the two visualizations are not directly comparable, as the models use fundamentally different learning strategies. However, the visualization tools can still help us understand the logic behind the black box of parameters. ### Attention Weights for ViT Attention weight visualization is proposed in the original ViT paper [1] to demonstrate how the model processes image data. As the method instructs, we first extract the attention layer with shape (\(n_{heads}\), \(n_{patch}+1\), \(n_{patch}+1\)) in each transformer block. Then we average the weights across the heads and add an identity matrix to account for the residual connection. We then normalize and reshape the matrix to form a weight mask. To facilitate visualization, each weight mask is scaled by a common factor so that the max weight of all masks is equal to 1. Finally, each weight mask is overlaid on top of the original image. The bright areas receive more attention, and the darker areas receive less attention. While the original work only shows the weight mask for the first attention layer, here we provide the visualization of all 6 layers' attention weights in Figure 4. The first layer exhibits concentrated attention on a small area, while later layers expand their attention to the whole image. Our experiments show that the first layer tends to put more weight on areas with higher contrast in the neighborhood, and subsequent layers expand their attention from the previous layer. Based on the visualizations, we speculate that the ViT model is trained to put more attention on regions with higher local contrast. While this strategy works well on most pictures in the CIFAR-10 dataset, there are also images where this strategy does not perform well. Such examples can be found in the Appendix. ### Feature Map Visualization for ResNet A feature map is the output of a convolution layer in a CNN network. The values across channels are averaged, so we can directly visualize the output in grayscale. For a ResNet18 network, we can generate 17 feature maps. For dimensions, layers 1-5 are of size \((32,32)\); layers 6-9 are of size \((16,16)\); layers 10-13 are of size \((8,8)\); layers 14-17 are of size \((4,4)\). Figure 5 shows the feature maps of the model forwarding the same images as we used for ViT.
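The per-layer mask computation described in Sec. 4.1 can be sketched in a few lines of numpy. In this sketch, `att_layers` (one array of shape (n_heads, n_patch+1, n_patch+1) per transformer block) and `grid` (the side length of the patch grid) are hypothetical inputs, and using the class-token row as the per-patch weights is an assumption consistent with the description above rather than a detail stated in the text.

```python
import numpy as np

def attention_masks(att_layers, grid):
    """One 2-D weight mask per block: average heads, add the identity for the
    residual connection, row-normalize, take CLS-to-patch weights, reshape."""
    masks = []
    for att in att_layers:
        a = att.mean(axis=0)                        # average over heads
        a = a + np.eye(a.shape[0])                  # account for the residual connection
        a = a / a.sum(axis=-1, keepdims=True)       # normalize rows
        masks.append(a[0, 1:].reshape(grid, grid))  # CLS -> patch weights as an image
    peak = max(m.max() for m in masks)
    return [m / peak for m in masks]                # common scale: max weight equals 1
```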
## 5 Representation Similarity Analysis After confirming our hypothesis that ViT performs less well compared to ResNet on small image datasets, we next try to provide an intuitive explanation of ViT's behavior when trained on a small dataset. We use **CKA (Centered Kernel Alignment)[4]** to analyze representation similarity between ViT and ResNet: \[\mathrm{CKA}(\mathbf{K},\mathbf{L})=\frac{\mathrm{HSIC}(\mathbf{K},\mathbf{L})}{\sqrt{\mathrm{HSIC}(\mathbf{K},\mathbf{K})\,\mathrm{HSIC}(\mathbf{L},\mathbf{L})}}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline & CIFAR-10 & CIFAR-100 & SVHN \\ \hline ViT & 81.36 & 54.31 & 95.17 \\ \hline ResNet18 & **92.8** & **70.7** & **95.78** \\ \hline \end{tabular} \end{table} Table 1: Top-1 accuracy(%) of ViT and ResNet18, trained from scratch on different small datasets (500 epochs). where \(\mathbf{X}\in\mathbb{R}^{m\times p_{1}}\) and \(\mathbf{Y}\in\mathbb{R}^{m\times p_{2}}\) are representations of two layers with \(p_{1}\) and \(p_{2}\) features, \(\mathbf{K}=\mathbf{X}\mathbf{X}^{\top}\) and \(\mathbf{L}=\mathbf{Y}\mathbf{Y}^{\top}\) denote the Gram matrices of the two layers, and HSIC is the Hilbert-Schmidt independence criterion [8]. In general, when the CKA value between two layers is high, the representations of those two layers are very similar. Using this metric, we can analyze the representation difference when ViT faces small datasets. Unlike [4] (see Figure 6), we focus on comparing the differences and giving interpretations on small datasets, which is novel (see Figure 6(a), Figure 6(b) and Figure 6(c)). Figure 3: Accuracy over 500 epochs. Note that when computing the representation similarity, we not only compare the convolution layers and attention layers, we also compare all the normalization and activation layers. By comparing Figure 6 (from [4], computed on a large dataset) with Figures 6(a), 6(b) and 6(c) (computed on small datasets by us), we observe a huge change in the representations of ViT when going from large to small training datasets. From Figure 6, when trained with large datasets, roughly the lower half of the ResNet layers has a representation similar to the lowest quarter of the ViT layers; the upper half of the ResNet layers is similar to the next quarter of the ViT layers; and the highest quarter of the ViT layers is dissimilar to all ResNet layers. However, when trained on small datasets, the patterns of representation similarity change a lot. By comparing the representation similarity and observing the visualizations in the previous section, we observe: Figure 4: Attention weight visualization from ViT. The top two rows show the model trained on CIFAR-10, forwarding an image of an airplane; the lower two show the same model trained on SVHN, forwarding an image of the number 2. The input is the 10th image from CIFAR-10, with the label of an airplane. The original image is shown on the top-left. The rest shows the original image overlapped with the weight mask of each attention layer. Bright regions represent higher attention weights (close to 1); darker regions represent lower attention weights (close to 0). Figure 6(a) and Figure 6(b) change a lot: we completely lose the pattern seen on large datasets in Figure 6. This means that the representations learned on CIFAR-10 and CIFAR-100 are completely different from those learned on large datasets, which explains the huge drop in final performance.
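Before turning to the remaining panels, a minimal numpy sketch of the linear-kernel version of this CKA computation is given below; the arrays `X` and `Y` are random placeholders for layer activations, and the constant factors in the empirical HSIC cancel in the CKA ratio. (The actual analysis uses an existing PyTorch CKA implementation, credited in the acknowledgements.)

```python
import numpy as np

def centered_gram(X):
    """Gram matrix K = X X^T over examples, double-centered as H K H."""
    K = X @ X.T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def linear_cka(X, Y):
    """CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) HSIC(L, L)) with linear kernels."""
    Kc, Lc = centered_gram(X), centered_gram(Y)
    hsic_kl = np.sum(Kc * Lc)                 # proportional to tr(K H L H)
    return hsic_kl / np.sqrt(np.sum(Kc * Kc) * np.sum(Lc * Lc))

# representations of m = 100 examples from two layers with p1 / p2 features
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))
Y = rng.standard_normal((100, 128))
print(linear_cka(X, Y))
```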
Figure 6(c) is most similar to Figure 6: we can still observe that the lower layers of ResNet have representations similar to the lower layers of ViT, the higher layers of ResNet have representations similar to the middle-to-higher layers of ViT, and the highest layers of ViT are dissimilar to all layers of ResNet. However, in this case ViT needs more layers to reach the same representation as ResNet, compared with fewer layers before. Figure 5: Feature map visualization from lower layers to higher layers in ResNet18. The top three rows show the model trained on CIFAR-10 forwarding an airplane image; the lower three rows show the same model trained on SVHN forwarding an image of the number 2. The top-left is the original image in grayscale; from left to right, top to bottom, we exhibit feature maps of convolution layer 1 to convolution layer 17. From Figure 4, the lower layers of ViT focus more on local areas on the SVHN dataset, which means that ViT can learn more locality on SVHN compared with CIFAR-10 and CIFAR-100. The reason might be that SVHN is a simpler dataset, thus ViT can catch the inductive bias of locality on this simple dataset, and that can explain why the performance of ViT is similar to ResNet on SVHN while it loses a lot on CIFAR-10 and CIFAR-100 in Table 1. ## 6 Conclusion In this project, we explore the reason why the vision transformer does not perform well on small datasets. We first conduct extensive experiments to confirm the phenomenon of the performance drop of ViT on small datasets. We then interpret the results by showing attention visualization and feature map visualization. Next, we conduct representation similarity analysis to further investigate the results. Finally, by comparing the attention map visualizations with the representation similarity analysis, we can speculate on the reasons for the performance drop of ViT on small datasets as follows: Figure 6: Figure from [4]: representation similarity between ViT and ResNet on large datasets (JFT-300M). Figure 7: Representation similarity between ViT and ResNet on different datasets. * When trained with small datasets, the representation of ViT is hugely different from ViT trained with large datasets, and this strongly affects the performance. * The huge change of representation may be due to a lack of the inductive bias of locality for ViT. Lower layers of ViT cannot learn the local relations well with a small amount of data on complicated small datasets, e.g., CIFAR-10 and CIFAR-100. For simpler datasets, e.g., SVHN, ViT can learn locality relatively well, as reflected in the feature map visualization, which might be the reason that ViT can achieve worse but similar performance on the SVHN dataset. ## 7 Acknowledgements Our code for visual transformer model construction and training is adapted from: * [https://github.com/kentaroy47/vision-transformers-cifar10](https://github.com/kentaroy47/vision-transformers-cifar10) CKA Similarity Analysis is adapted from: * [https://github.com/AntixK/PyTorch-Model-Compare](https://github.com/AntixK/PyTorch-Model-Compare) ViT attention weights extraction and visualization are adapted from: * [https://github.com/jeonsworld/ViT-pytorch/blob/main/visualize_attention_map.ipynb](https://github.com/jeonsworld/ViT-pytorch/blob/main/visualize_attention_map.ipynb) ResNet feature map visualization is adapted from: * [https://ravivaishnav20.medium.com/visualizing-feature-maps](https://ravivaishnav20.medium.com/visualizing-feature-maps) -using-pytorch-12a48cdle573
2302.12884
Moving Target Detection via Multi-IRS-Aided OFDM Radar
An intelligent reflecting surface (IRS) consists of passive reflective elements capable of altering impinging waveforms. The IRS-aided radar systems have recently been shown to improve detection and estimation performance by exploiting the target information collected via non-line-of-sight paths. However, the waveform design problem for an IRS-aided radar has remained relatively unexplored. In this paper, we consider a multi-IRS-aided orthogonal frequency-division multiplexing (OFDM) radar and study the theoretically achievable accuracy of target detection. In addition, we jointly design the OFDM signal and IRS phase-shifts to optimize the target detection performance via an alternating optimization approach. To this end, we formulate the IRS phase-shift design problem as a unimodular bi-quadratic program which is tackled by a computationally cost-effective approach based on power-method-like iterations. Numerical experiments illustrate that our proposed joint design of IRS phase-shifts and the OFDM code improves the detection performance in comparison with conventional OFDM radar.
Zahra Esmaeilbeig, Arian Eamaz, Kumar Vijay Mishra, Mojtaba Soltanalian
2023-02-24T20:39:26Z
http://arxiv.org/abs/2302.12884v2
# Moving Target Detection via Multi-IRS-Aided OFDM Radar ###### Abstract An intelligent reflecting surface (IRS) consists of passive reflective elements capable of altering impinging waveforms. The IRS-aided radar systems have recently been shown to improve detection and estimation performance by exploiting the target information collected via non-line-of-sight paths. However, the waveform design problem for an IRS-aided radar has remained relatively unexplored. In this paper, we consider a multi-IRS-aided orthogonal frequency-division multiplexing (OFDM) radar and study the theoretically achievable accuracy of target detection. In addition, we jointly design the OFDM signal and IRS phase-shifts to optimize the target detection performance via an alternating optimization approach. To this end, we formulate the IRS phase-shift design problem as a unimodular _bi-quadratic_ program which is tackled by a computationally cost-effective approach based on power-method-like iterations. Numerical experiments illustrate that our proposed joint design of IRS phase-shifts and the OFDM code improves the detection performance in comparison with conventional OFDM radar. Intelligent reflecting surfaces, non-line-of-sight sensing, OFDM, unimodular bi-quadratic programming, waveform design. ## I Introduction Intelligent reflective surface (IRS) is an emerging technological advancement for next-generation wireless systems. An IRS comprises meta-material units that enable smart and programmable radio environments by introducing predetermined phase-shifts to the impinging signal [1]. The IRS-aided wireless communications are shown to provide range extension to users with obstructed direct links [2], enhance physical layer security [3, 4], facilitate unmanned air vehicle (UAV) communications [5], and shaping the wireless channel through multi-beam design [6]. Recent works have also introduced IRS to integrated communications and sensing systems [7, 3, 8, 9]. Recently, following the advances in [7, 10], IRS-aided sensing for non-line-of-sight (NLoS) target estimation has been investigated in [10, 11, 12]. In [13], the phase-shift matrix of the IRS was optimized for collected MIMO radar to improve the estimation and detection performance. Target detection was also considered in cases where the radar is aided by a single IRS [14] or multiple IRS platforms [15, 16]. The deployment of multiple IRS platforms is necessary to overcome line-of-sight (LoS) blockage or obstruction in cases where the NLoS path formed by a single IRS is unable to provide the desired coverage. To this end, [17] jointly designed the radar transmitter and IRS beamformers for a multi-IRS-aided radar. Similar to a conventional radar [18], a judicious design of transmit waveforms improves the performance of IRS-aided radar. In general, radar waveform design is a well-investigated problem [18, 19, 20]. However, it is relatively unexamined for IRS-aided scenarios. Among prior works, [21] designed a transmit radar code with constant-modulus for a narrowband IRS-aided radar. However, wideband signaling compensates for signal fading resulting from multipath propagation [22]. Therefore, very recent works [23, 24, 8] investigate wideband waveforms such as orthogonal frequency-division multiplexing (OFDM) signaling to improve detection with IRS-aided radar. In this paper, we focus on designing a wideband radar waveform for multi-IRS-aided radar jointly with the IRS phase-shifts. 
In particular, we formulate the detection problem as a hypothesis test to decide the presence of a target in a particular range cell. Then, we jointly design the OFDM signal and the IRS phase-shifts to enhance the receiver operating characteristics (RoC) associated with moving target detection. We adopt noncentrality parameter of the asymptotic distribution of the generalized likelihood ratio test (GLRT) statistic [22] as the performance metric for target detection. We demonstrate that maximizing the noncentrality parameter with respect to the system parameters such as the transmit waveform and phase-shifts of IRS, yields improvement in the probability of detection. Contrary to prior works, wherein only IRS phase-shifts were optimized in an IRS-aided radar [10, 15, 16], we show that jointly optimal waveform and phase-shifts increase the probability of detection. Further, our IRS-aided radar outperforms the multipath OFDM radar [22] with specular reflection in the exactly identical paths between the target and radar. The remainder of this paper is organized as follows. In the next section, we describe the signal model for the multi-IRS-aided OFDM radar. The moving target detector based on GLRT is introduced in Section III. We present our joint waveform and IRS phase-shift design in Section IV. We validate our model and methods via numerical experiments in Section V and conclude in Section VI. Throughout this paper, we use bold lowercase and bold uppercase letters for vectors and matrices, respectively. The \((m,n)\)-th element of the matrix \(\mathbf{B}\) is \([\mathbf{B}]_{mn}\). The sets of complex and real numbers are \(\mathbb{C}\) and \(\mathbb{R}\), respectively; \((\cdot)^{\top}\), \((\cdot)^{\star}\) and \((\cdot)^{\mathrm{H}}\) are the vector/matrix transpose, conjugate, and Hermitian transpose, respectively; the trace of a matrix is \(\mathrm{Tr}(\cdot)\); the function diag\(\left(.\right)\) returns the diagonal elements of the input matrix; and \(\mathrm{Diag}(.)\) produces a diagonal/block-diagonal matrix with the same diagonal entries/blocks as its vector/matrices argument. The Hadamard (element-wise) and Kronecker products are \(\odot\) and \(\otimes\), respectively. The vectorized form of a matrix \(\mathbf{B}\) is written as \(\mathrm{vec}(\mathbf{B})\) and the block diagonal vectorization [25] is denoted by \(\mathrm{vec}(\mathbf{B})\). The \(s\)-dimensional all-ones vector and the identity matrix of \(z\ast s\ast s\) and \(\mathbf{I}_{s}\), and respectively. The minimum eigenvalue of \(\mathbf{B}\) is denoted by \(\lambda_{min}(\mathbf{B})\). The real, imaginary, and angle/phase components of a complex number are \(\mathrm{Re}\left(\cdot\right)\), \(\mathrm{Im}\left(\cdot\right)\), and \(\mathrm{arg}\left(\cdot\right)\), respectively. \(\mathrm{vec}_{-L_{L}}^{-1}\left(\mathbf{B}\right)\) reshapes the input vector \(\mathbf{b}\in\mathbb{C}^{KL}\) into a matrix \(\mathbf{B}\in\mathbb{C}^{K\times K}\) such that \(\mathrm{vec}\left(\mathbf{B}\right)=\mathbf{b}\). Also, \(\mathbf{0}_{N}\) is the all-zero vector of size \(N\). The generalized inversion of a matrix \(\mathbf{B}\) is \(\left(\mathbf{B}\right)^{-}\). We use \(\mathrm{Pr}\left(.\right)\) to denote the probability. ## II System Model Consider a multi-IRS-aided radar with transmitter and receiver located at \(\mathfrak{p}_{{}_{r}}=[0,0]^{T}\) in the two-dimensional (2-D) Cartesian coordinate system. 
The radar transmits an OFDM signal with the bandwidth \(B\) Hz consisting of \(L\) subcarriers as \[s_{\text{OFDM}}(t)=\sum_{l=0}^{L-1}a_{l}e^{\mathrm{i}2\pi f_{l}t},\quad 0\leq t \leq T, \tag{1}\] where \(a_{l}\) is the waveform code, \(f_{l}=f_{c}+l\Delta_{f}\) denotes the \(l\)-th subcarrier frequency and the subcarrier spacing is chosen as \(\Delta_{f}=B/(L+1)=1/T\) to guarantee the orthogonality of the subcarriers. The vector \(\mathbf{a}=[a_{1},\dots,a_{L}]^{\top}\) collects the OFDM coefficients of all subcarriers for which we have \(\|\mathbf{a}\|_{2}^{2}=1\). The pulses in (1) are transmitted with the pulse repetition interval (PRI) \(T_{\text{PRI}}\). The \(M\) IRS platforms denoted as IRS\({}_{1}\), IRS\({}_{2}\),..., IRS\({}_{M}\) are installed at stationary known locations (Fig. 1). Each IRS is a uniform linear array (ULA) with \(N_{m}\) reflecting elements, and with an inter-element spacing of \(d\). The first element of IRS\({}_{m}\) is located at a known coordinate \(\rho_{i}^{(m)}=[x_{i}^{(m)},y_{i}^{(m)}]^{T}\). The space-frequency steering vector of the \(m\)-th IRS is \(\mathbf{b}_{m}(\mathbf{\theta},f_{l})=\left[1,e^{\mathrm{i}\frac{2\pi f_{l}}{c} dsin\theta},\dots,e^{\mathrm{i}\frac{2\pi f_{l}}{c}d(N_{m-1})sin\theta}\right]^{\top}\), where \(c\) is the speed of light, \(f_{l}\) is the subcarrier frequency, \(d\) is the half-wavelength Nyquist spacing and \(\theta\) is the direction of the impinging wavefront at the ULA. Each reflecting element of IRS\({}_{m}\) reflects the incident signal with a phase-shift change that is configured via a smart controller [1]. We denote the phase-shift vector of IRS\({}_{m}\) by \(\mathbf{v}_{m}=[e^{\mathrm{i}\phi_{m,1}},\dots,e^{\mathrm{i}\phi_{m,N_{m}}}]^{ \top}\in\mathbb{C}^{N_{m}}\), where \(\phi_{m,k}\in[0,2\pi)\) is the phase-shift associated with the \(k\)-th passive element of IRS\({}_{m}\). Clearly, \(\mathbf{v}_{m}\) is a unimodular vector chosen from the set \(\Omega^{N_{m}}\), where \(\Omega=\left\{s\in\mathbb{C}|\mathbf{s}=e^{\mathrm{i}\omega_{i}},\omega_{i}\in [0,2\pi)\right\}\). Assume a target at \(\rho_{i}=[x_{t},y_{t}]^{\top}\) moving with velocity \(\mathbf{v}_{t}=[\mathbf{v}_{x},\mathbf{v}_{y}]^{\top}\). In the LoS path, the target is characterized by its Doppler shift and time delay given by, respectively, \[\nu_{0}=\frac{1}{c}\frac{2\mathbf{v}_{t}^{\top}(\rho_{i}-\rho_{i})}{d_{tr}}, \tag{2}\] and \[\tau_{0}=\frac{2d_{tr}}{c}, \tag{3}\] where \(d_{tr}=\|\rho_{i}-\rho_{r}\|_{2}\) is the distance between the radar and target. The IRS deployment also yields \(M\) non-line-of-sight (NLoS) paths from the target to the radar. The Doppler shift and time delay in the radar-IRS\({}_{m}\)-target-IRS\({}_{m}\)-radar path are, respectively, \[\nu_{m}=\frac{1}{c}\frac{2\mathbf{v}_{t}^{\top}(\rho_{t}-\rho_{i}^{(m)})}{d_{ tt}^{(m)}}, \tag{4}\] and \[\tau_{m}=2\frac{d_{ri}^{(m)}+d_{it}^{(m)}}{c}, \tag{5}\] for \(m=1,\dots,M\), where \(d_{ri}^{(m)}=\|\rho_{i}^{(m)}-\rho_{r}\|_{2}\) and \(d_{it}^{(m)}=\|\rho_{t}-\rho_{i}^{(m)}\|_{2}\) are the radar-IRS\({}_{m}\) and target-IRS\({}_{m}\) distances, respectively. We make the following assumptions about the IRS-aided OFDM radar and target parameters: * "Bandwidth-invariant Doppler": The bandwidth of OFDM signal is much smaller than the carrier frequency, i.e., \(B\ll f_{c}\). Hence, the phase-shifts arising from the Doppler effect are identical over all subcarriers. * "Slow Target": The Doppler frequency of the target does not change during one coherent processing interval (CPI) i.e. 
\(\nu_{m}<<\frac{1}{N_{\text{PRI}}B}\). Therefore, the following piecewise-constant approximation holds \(\nu_{m}t\approx\nu_{m}nT_{\text{PRI}}\), for \(t\in[nT_{\text{PRI}},(n+1)T_{\text{PRI}}]\). * "Narrow surveillance area": The radar is deployed in a region, where the range of the target is much greater than the width or cross-range extent of the surveillance area. The relative time gaps between any two signals received from NLoS paths are very small in comparison to the LoS round trip delays, i.e., \(\tau_{m}\approx\tau_{0}=\frac{2d_{tr}}{c}\) for \(m\in\{1,\dots,M\}\). * "Frequency-invariant IRS phase-shift": The IRS platforms impose the same phaseshifts over all subcarrier frequencies and therefore the IRS phase-shift matrix is not indexed over different frequencies, i.e., \(\mathbf{\Phi}_{m}(f_{l})=\mathbf{\Phi}_{m}\), for \(l\in\{0,\dots,L-1\}\) and \(m\in\{1,\dots,M\}\). * "Inter-IRS interference": The mutual interference between various IRS platforms is negligible. In other words, the interference caused by reflections in the radar-IRS\({}_{m}\)-target-IRS\({}_{m^{\prime}}\)-radar path for \(m\neq m^{\prime}\) is insignificant because IRS is a passive reflector and the reflections in non-beamformed directions are weaker. Define the NLoS channel along the \(l\)-th subcarrier and \(m\)-th path as \[h_{l_{m}}=\mathbf{b}(\theta_{ir,m},f_{l})^{\top}\mathbf{\Phi}_{m}\mathbf{b}(\theta_{ ii,m},f_{l})\mathbf{b}(\theta_{it,m},f_{l})^{\top}\mathbf{\Phi}_{m}\mathbf{b}(\theta_{ ii,m},f_{l}), \tag{6}\] for \(m>1\) and \(h_{l0}\) is the LoS channel [22]. We define \(\mathbf{\Phi}_{m}=\mathrm{Diag}\left(\mathbf{v}_{m}\right)\) as diagonalization of the 1-D phase-shift vector of IRS\({}_{m}\). Assume the LoS path between radar and target is obstructed, i.e., \(h_{l0}\approx 0\). The signal reflected from a Swering-0 [26] target with \(\alpha_{lm}\) as the complex reflectivity/amplitude along the \(l\)-th subchannel and \(m\)-th path is a delayed, modulated and scaled version of the transmit signal in (1) as \[y_{l}(t) =\sum_{m=0}^{M}a_{l}h_{l_{m}}\alpha_{lm}e^{\mathrm{i}2\pi l\Delta _{f}(1+\nu_{m})(t-\tau_{m})}\] \[\times e^{-\mathrm{i}2\pi f_{c}(1+\nu_{m})\tau_{m}}e^{\mathrm{i}2 \pi f_{c}\nu_{m}t}e^{\mathrm{i}2\pi f_{c}t}+w_{l}(t), \tag{7}\] where the signal independent interference (noise) for the \(l\)-th subcarrier is denoted by \(w_{l}(t)\). We collect \(N\) samples from the signal, at \(t=\tau_{0}+nT_{\text{PRI}}\), \(n=0,\dots,N-1\). By applying \(\nu_{m}t\approx\nu_{m}nT_{\text{PRI}}\) (**A2**) and \(\tau_{m}\approx\tau_{0}\) (**A3**) to (7), the discrete-time received signal corresponding to the range-cell of interest is \[y_{l}[n]=\sum_{m=0}^{M}a_{l}h_{l_{m}}\alpha_{lm}p_{l}(n,\nu_{m})+w_{l}[n], \tag{8}\] where, \(p_{l}(n,\nu_{m})=e^{-\mathrm{i}2\pi f_{l}\tau_{0}}e^{\mathrm{i}2\pi f_{l}\nu_{m} nT_{\text{PRI}}}\) contains the unknown target delay and Doppler. We stack measurements from all L subchannels to obtain the \(L\times 1\) vector \[\mathbf{y}[n]=\mathrm{Diag}\left(\mathbf{a}\right)\mathbf{X}\mathbf{p}(n,\mathbf{\nu})+ \mathbf{w}[n], \tag{9}\] Fig. 1: A simplified illustration of various NLoS or virtual LoS links provided by multiple IRS platforms mounted on urban infrastructure between the radar and the hidden moving target. 
where the Doppler steering vector is \(\mathbf{p}(n,\mathbf{\nu})=[\mathbf{p}_{0}(n,\mathbf{\nu})^{\top},\dots,\mathbf{p}_{L-1}(n, \mathbf{\nu})^{\top}]^{\top},\) with \(\mathbf{p}_{i}(n,\mathbf{\nu})=[p_{i}(n,\nu_{0}),\dots,p_{i}(n,\nu_{L})]^{\top}\), and the \(L\times 1\) noise vectors is \(\mathbf{w}[n]=[w_{0}[n],\dots,w_{L-1}[n]]^{\top}\). Stacking all \(N\) temporal measurements, the \(L\times N\) OFDM received signal matrix is \[\mathbf{Y}_{\text{OFDM}}=\mathbf{A}\mathbf{X}\mathbf{P}(\mathbf{\nu})+\mathbf{N}, \tag{10}\] where \(\mathbf{A}=\operatorname{Diag}\left(\mathbf{a}\right)\), \(\mathbf{N}=[\mathbf{w}[0],\dots,\mathbf{w}[N-1]]^{\top}\) and the Doppler information of the target is collected in \[\mathbf{P}(\mathbf{\nu})=[\mathbf{p}(0,\mathbf{\nu}),\dots,\mathbf{p}(N-1,\mathbf{\nu})], \tag{11}\] and \[\mathbf{X} =\mathbf{D}\odot\mathbf{H}, \tag{12}\] \[\mathbf{D} =\operatorname{Diag}\left(\mathbf{\alpha}_{0}^{\top},\dots,\mathbf{ \alpha}_{L-1}^{\top}\right),\] (13) \[\mathbf{\alpha}_{i} =[\alpha_{i1},\dots,\alpha_{iM}]^{\top},\] (14) \[\mathbf{H} =\operatorname{Diag}\left(\mathbf{h}_{0}^{\top},\dots,\mathbf{h} _{L-1}^{\top}\right),\] (15) \[\mathbf{h}_{l} =[h_{l_{1}},\dots,h_{LM}]^{\top}, \tag{16}\] We assume the noise is from complex zero-mean Gaussian distribution and correlated with a positive-definite covariance \(\mathbf{\Sigma}\). The columns of \(\mathbf{N}\) are assumed to be (independent and identically distributed) i.i.d. Then, OFDM measurements are distributed as \[\mathbf{Y}_{\text{OFDM}}\sim\mathcal{CN}(\mathbf{A}\mathbf{X}\mathbf{P}(\mathbf{ \nu}),\mathbf{I}_{N}\otimes\mathbf{\Sigma}) \tag{17}\] where \(\mathbf{I}_{N}\otimes\mathbf{\Sigma}\) is the covariance of the temporally white noise. Our goal is to design a waveform that maximizes the detection of a moving target located at a given range. ## III Target Detection In order to decide whether a target is present in a particular known range-cell, we perform binary hypothesis testing between \(\mathcal{H}_{0}\) (target-free hypothesis) and \(\mathcal{H}_{1}\) (target-present hypothesis), that is \[\mathcal{H}_{0}:\quad\mathbf{Y}_{\text{OFDM}} =\mathbf{N}, \tag{18}\] \[\mathcal{H}_{1}:\quad\mathbf{Y}_{\text{OFDM}} =\mathbf{A}\mathbf{X}\mathbf{P}(\mathbf{\nu})+\mathbf{N}. \tag{19}\] The likelihood ratio is [22, 27] \[\mathcal{L}\left(\mathbf{Y}_{\text{OFDM}};\mathbf{\nu}\right)=\frac{f_{\mathcal{ H}_{1}}\left(\mathbf{Y}_{\text{OFDM}};\mathbf{\nu},\mathbf{X},\mathbf{\Sigma}_{1}\right)}{f_{ \mathcal{H}_{0}}\left(\mathbf{Y}_{\text{OFDM}};\mathbf{\Sigma}_{0}\right)}, \tag{20}\] where \(f_{\mathcal{H}_{0}}\) and \(f_{\mathcal{H}_{1}}\) are the likelihood functions under \(\mathcal{H}_{0}\) and \(\mathcal{H}_{1}\), respectively and \(\mathbf{\nu}\) is the Doppler frequency under test. 
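For concreteness, one realization of the measurement model (17)-(19) can be simulated with the short numpy sketch below. All dimensions, the Doppler/delay values, the noise covariance, and the random stand-in for \(\mathbf{X}=\mathbf{D}\odot\mathbf{H}\) are arbitrary illustrative choices, not parameters taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, M = 8, 16, 2                    # subcarriers, pulses, IRS platforms (illustrative)
f_c, d_f, T_pri, tau0 = 3e9, 1e5, 1e-3, 1e-5

a = rng.standard_normal(L) + 1j * rng.standard_normal(L)
a /= np.linalg.norm(a)                # OFDM code with unit l2 norm
A = np.diag(a)

# random stand-in for X = D (Hadamard) H, size L x LM
X = (rng.standard_normal((L, L * M)) + 1j * rng.standard_normal((L, L * M))) / np.sqrt(L * M)

# Doppler steering matrix P: entry for path (l, m) and pulse n is
# p_l(n, nu_m) = exp(-j 2 pi f_l tau0) * exp(j 2 pi f_l nu_m n T_PRI)
f_l = f_c + d_f * np.arange(L)
nu_m = 1e-7 * (1 + np.arange(M))      # per-path Doppler shifts (illustrative)
f_rep, nu_rep = np.repeat(f_l, M), np.tile(nu_m, L)   # row ordering: l outer, m inner
n_idx = np.arange(N)
P = np.exp(-2j * np.pi * f_rep[:, None] * tau0) \
  * np.exp(2j * np.pi * (f_rep * nu_rep)[:, None] * n_idx[None, :] * T_pri)

Sigma = 0.1 * np.eye(L)               # noise covariance across subcarriers (illustrative)
C = np.linalg.cholesky(Sigma)
W = C @ (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

Y_h0 = W                              # target-free hypothesis (18)
Y_h1 = A @ X @ P + W                  # target-present hypothesis (19)
```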
Since the \(\mathbf{\Sigma}\) and target parameters are unknown, we employ a generalized likelihood ratio test (GLRT) by replacing the unknowns with their maximum likelihood estimates (MLEs) in \(\mathcal{L}\left(\mathbf{Y}_{\text{OFDM}};\mathbf{\nu}\right)\) to obtain the GLRT for our detection problem (18) as \[\mathcal{T}_{\text{GLRT}}=\frac{f_{\mathcal{H}_{1}}\left(\mathbf{Y}_{\text{OFDM}};\mathbf{\nu},\widehat{\mathbf{X}},\widehat{\mathbf{\Sigma}}_{1}\right)}{f_{\mathcal{H}_{0}}\left(\mathbf{Y}_{\text{OFDM}};\widehat{\mathbf{\Sigma}}_{0}\right)}\;\underset{\mathcal{H}_{0}}{\overset{\mathcal{H}_{1}}{\gtrless}}\;\gamma,\] where \(\gamma\) is the detection threshold. Following [22], we characterize the detection performance through the noncentrality parameter of the asymptotic distribution of the GLRT statistic under \(\mathcal{H}_{1}\). This noncentrality parameter plays the role of an SNR metric, denoted \(f(\mathbf{a},\mathbf{v})\) in (32), and maximizing it improves the probability of detection. ## IV Joint Waveform and IRS Phase-Shift Design We maximize the SNR metric alternately over the OFDM code \(\mathbf{a}\) (problem \(\mathcal{P}_{1}\), whose update is given in (31)) and over the IRS phase-shifts \(\mathbf{v}\) (problem \(\mathcal{P}_{2}\)). With respect to the phase-shifts, the resulting subproblems take the form of unimodular quadratic programs (UQP), i.e., maximization of \(\mathbf{s}^{\mathrm{H}}\mathbf{G}\mathbf{s}\) over unimodular vectors \(\mathbf{s}\in\Omega^{n}\), which we tackle with the power-method-like iterations (PMLI) of [19]. The sequence of unimodular vectors at the \(t\)-th PMLI iteration, \[\mathbf{s}^{(t+1)}=e^{\mathrm{j}\arg\left(\mathbf{G}\mathbf{s}^{(t)}\right)}, \tag{34}\] leads to a monotonically increasing objective value for the UQP when \(\mathbf{G}\) is a positive semidefinite matrix [19, 33]. The following Lemma 1 states the required transformations of (32) to facilitate the application of the PMLI approach. **Lemma 1**.: _Let \(\mathbf{h}=[\mathbf{h}_{1}^{\top},\dots,\mathbf{h}_{L}^{\top}]^{\top}\) collect the NLoS channels and define \(\mathbf{C}=\left[\begin{array}{c|c|c}\mathbf{\Upsilon}_{1}&\cdots&\mathbf{\Upsilon}_{L}\end{array}\right]\), where the blocks \(\mathbf{\Upsilon}_{l}\) are the zero-one placement matrices determined by the block-diagonal structure of \(\mathbf{H}\) in (15). Then \(\mathrm{vec}\left(\mathbf{H}\right)=\mathbf{C}\mathbf{h}\)._ Subsequently, we will use Lemma 1 to propose a quadratic form with respect to \(\mathbf{h}\) for the SNR metric.
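The PMLI recursion (34) itself takes only a few lines to implement. In the numpy sketch below, the matrix `G` is an arbitrary Hermitian positive semidefinite example standing in for the design matrices constructed in the remainder of this section; the monotone increase of the objective follows from [19].

```python
import numpy as np

def pmli(G, s0, iters=100):
    """Power-method-like iterations s_{t+1} = exp(j arg(G s_t)) for the UQP
    max over unimodular s of s^H G s; G is assumed positive semidefinite."""
    s = s0
    for _ in range(iters):
        s = np.exp(1j * np.angle(G @ s))
    return s

rng = np.random.default_rng(1)
n = 32
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
G = B @ B.conj().T                           # Hermitian PSD example matrix
s0 = np.exp(1j * 2 * np.pi * rng.random(n))  # random unimodular initialization
s = pmli(G, s0)
print((s.conj() @ G @ s).real)               # objective value after the iterations
```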
**Proposition 1**.: _Denote \(\mathbf{h}=[\mathbf{h}_{1}^{\top},\ldots,\mathbf{h}_{L}^{\top}]^{\top}\), \(\mathbf{W}=\mathbf{C}^{\mathrm{H}}\mathbf{U}^{\mathrm{H}}\mathcal{A}\mathbf{U}\mathbf{C}\) and \(\mathbf{U}=\mathrm{Diag}\left(\mathrm{vec}\left(\mathbf{A}\mathbf{D}\right)\right)\). Then, the SNR metric becomes_ \[f(\boldsymbol{a},\mathbf{v})=\mathbf{h}^{\mathrm{H}}\mathbf{W}\mathbf{h}. \tag{37}\] Proof.: Assume \(\mathbf{B}=\mathbf{A}\mathbf{X}\). We recast the objective in (32) as \[f(\boldsymbol{a},\mathbf{v}) =\mathrm{Tr}\left(\mathbf{P}^{\mathrm{H}}(\boldsymbol{\nu}) \mathbf{B}^{\mathrm{H}}\mathbf{\Sigma}^{-1}\mathbf{B}\mathbf{P}(\boldsymbol{ \nu})\right)\] \[=\mathrm{Tr}\left(\mathbf{B}^{\mathrm{H}}\mathbf{\Sigma}^{-1} \mathbf{B}\mathbf{P}(\boldsymbol{\nu})\mathbf{P}^{\mathrm{H}}(\boldsymbol{ \nu})\right)\] \[=\mathrm{vec}\left(\mathbf{B}^{\mathrm{*}}\right)^{\top}\mathrm{ vec}\left(\mathbf{\Sigma}^{-1}\mathbf{B}\mathbf{P}(\boldsymbol{\nu})\mathbf{P}^{ \mathrm{H}}(\boldsymbol{\nu})\right)\] \[=\mathrm{vec}\left(\mathbf{B}\right)^{\mathrm{H}}\left(\left( \mathbf{P}(\boldsymbol{\nu})\mathbf{P}^{\mathrm{H}}(\boldsymbol{\nu})\right)^ {\top}\otimes\mathbf{\Sigma}^{-1}\right)\mathrm{vec}\left(\mathbf{B}\right)\] \[=\mathrm{vec}\left(\mathbf{B}^{\mathrm{H}}\mathcal{A}\mathrm{vec} \left(\mathbf{B}\right),\right. \tag{38}\] where \(\mathcal{A}=\left(\mathbf{P}(\boldsymbol{\nu})\mathbf{P}^{\mathrm{H}}( \boldsymbol{\nu})\right)^{\top}\otimes\mathbf{\Sigma}^{-1}\). We have \(\mathrm{vec}\left(\mathbf{B}\right)=\mathrm{vec}\left(\mathbf{A}\mathbf{X} \right)=\mathrm{vec}\left(\mathrm{Diag}\left(\boldsymbol{a}\right)\left( \mathbf{D}\odot\mathbf{H}\right)\right)=\mathrm{vec}\left(\mathbf{H}\right) \odot\mathrm{vec}\left(\mathbf{A}\mathbf{D}\right)=\mathrm{Ch}\mathbf{\odot vec }\left(\mathbf{A}\mathbf{D}\right)\) and using Lemma. 1, \(\mathrm{vec}\left(\mathbf{B}\right)=\mathbf{U}\mathbf{C}\mathbf{h}\). Substituting this in (38) completes the proof. We now reformulate the SNR metric as a quartic function in the optimization parameter \(\mathbf{v}\). 
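Before doing so, the vectorization step used in (38) can be verified numerically: the snippet below checks, on random matrices of arbitrary illustrative size, that the trace form of the SNR metric equals the Kronecker-product quadratic form in \(\operatorname{vec}(\mathbf{B})\).

```python
import numpy as np

rng = np.random.default_rng(2)
L, N, M = 4, 6, 2
B = rng.standard_normal((L, L * M)) + 1j * rng.standard_normal((L, L * M))
P = rng.standard_normal((L * M, N)) + 1j * rng.standard_normal((L * M, N))
S = rng.standard_normal((L, L))
Sigma = S @ S.T + L * np.eye(L)                  # SPD noise covariance
Sigma_inv = np.linalg.inv(Sigma)

lhs = np.trace(P.conj().T @ B.conj().T @ Sigma_inv @ B @ P).real
vecB = B.reshape(-1, order="F")                  # column-stacking vec(.)
A_cal = np.kron((P @ P.conj().T).T, Sigma_inv)   # (P P^H)^T kron Sigma^{-1}
rhs = (vecB.conj() @ A_cal @ vecB).real
assert np.allclose(lhs, rhs)                     # the identity in (38)
```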
**Proposition 2**.: _The SNR metric is quartic in the phase-shifts, i.e._ \[f(\mathbf{a},\mathbf{v}) =\mathbf{v}^{\mathrm{H}}\mathbf{Q}_{1}(\mathbf{v})^{\mathrm{H}}\mathbf{W}\mathbf{Q}_{1}(\mathbf{v})\mathbf{v},\] \[=\mathbf{v}^{\mathrm{H}}\mathbf{Q}_{2}(\mathbf{v})^{\mathrm{H}}\mathbf{W}\mathbf{Q}_{2}(\mathbf{v})\mathbf{v}, \tag{39}\] _where_ \[\mathbf{Q}_{1}(\mathbf{v}) =\left[\left(\mathbf{S}_{1}\mathbf{Q}(\mathbf{v})\right)^{\top}\,\big{|}\,\cdots\,\big{|}\,\left(\mathbf{S}_{L}\mathbf{Q}(\mathbf{v})\right)^{\top}\right]^{\top},\] \[\mathbf{Q}_{2}(\mathbf{v}) =\left[\left(\mathbf{S}_{1}\mathbf{Q}^{\prime}(\mathbf{v})\right)^{\top}\,\big{|}\,\cdots\,\big{|}\,\left(\mathbf{S}_{L}\mathbf{Q}^{\prime}(\mathbf{v})\right)^{\top}\right]^{\top},\] \[\mathbf{Q}(\mathbf{v}) =\mathrm{Diag}\left(\mathbf{v}_{1}\otimes\mathbf{I}_{N_{m}},\ldots,\mathbf{v}_{M}\otimes\mathbf{I}_{N_{m}}\right),\] \[\mathbf{Q}^{\prime}(\mathbf{v}) =\mathrm{Diag}\left(\mathbf{I}_{N_{m}}\otimes\mathbf{v}_{1},\ldots,\mathbf{I}_{N_{m}}\otimes\mathbf{v}_{M}\right),\] \[\mathbf{S}_{l} =\mathrm{Diag}\left(\mathrm{vec}\left(\mathbf{S}_{l1}\right)^{\top},\ldots,\mathrm{vec}\left(\mathbf{S}_{lM}\right)^{\top}\right),\] \[\mathbf{S}_{lm} =\left(\mathbf{b}(\theta_{ir,m},f_{l})\odot\mathbf{b}(\theta_{ii,m},f_{l})\right)\left(\mathbf{b}(\theta_{it,m},f_{l})\odot\mathbf{b}(\theta_{ii,m},f_{l})\right)^{\top}. \tag{40}\] Proof.: Given \(\mathbf{\Phi}_{m}=\mathrm{Diag}\left(\mathbf{v}_{m}\right)\), it is straightforward to verify from (6) that \(h_{l_{m}}=\mathbf{v}_{m}^{\top}\mathbf{S}_{l_{m}}\mathbf{v}_{m}\), and therefore \[\mathbf{h}_{l} =\mathbf{S}_{l}\left[\mathrm{vec}\left(\mathbf{v}_{1}\mathbf{v}_{1}^{\top}\right),\ldots,\mathrm{vec}\left(\mathbf{v}_{M}\mathbf{v}_{M}^{\top}\right)\right]^{\top} \tag{41}\] \[=\mathrm{Diag}\left(\mathrm{vec}\left(\mathbf{S}_{l1}\right)^{\top},\ldots,\mathrm{vec}\left(\mathbf{S}_{lM}\right)^{\top}\right)\left[\mathrm{vec}\left(\mathbf{v}_{1}\mathbf{v}_{1}^{\top}\right),\ldots,\mathrm{vec}\left(\mathbf{v}_{M}\mathbf{v}_{M}^{\top}\right)\right]^{\top}. \tag{42}\] Applying the identity [32] \[\mathrm{vec}\left(\mathbf{v}_{m}\mathbf{v}_{m}^{\top}\right)=\left(\mathbf{I}_{N_{m}}\otimes\mathbf{v}_{m}\right)\mathbf{v}_{m}=\left(\mathbf{v}_{m}\otimes\mathbf{I}_{N_{m}}\right)\mathbf{v}_{m}, \tag{43}\] to (41) produces \[\mathbf{h}_{l}=\mathbf{S}_{l}\mathbf{Q}(\mathbf{v})\mathbf{v}=\mathbf{S}_{l}\mathbf{Q}^{\prime}(\mathbf{v})\mathbf{v}, \tag{44}\] where \(\mathbf{S}_{l}\), \(\mathbf{Q}(\mathbf{v})\) and \(\mathbf{Q}^{\prime}(\mathbf{v})\) are given in (40). Concatenating the vectors in (44) over \(l\), we obtain \[\mathbf{h}=\mathbf{Q}_{1}(\mathbf{v})\mathbf{v}=\mathbf{Q}_{2}(\mathbf{v})\mathbf{v}, \tag{45}\] where \(\mathbf{Q}_{1}(\mathbf{v})\) and \(\mathbf{Q}_{2}(\mathbf{v})\) are given in (40).
In the following proposition, we recast the SNR metric as a bi-quadratic function of \(\mathbf{v}_{(1)}\) and \(\mathbf{v}_{(2)}\). **Proposition 3**.: _The function_ \[g(\mathbf{v}_{(1)},\mathbf{v}_{(2)})=\mathbf{v}_{(2)}^{\mathrm{H}}\mathbf{E}\left(\mathbf{v}_{(1)}\right)\mathbf{v}_{(2)},\qquad\mathbf{E}\left(\mathbf{v}_{(1)}\right)=\mathbf{Q}_{1}(\mathbf{v}_{(1)})^{\mathrm{H}}\mathbf{W}\mathbf{Q}_{2}(\mathbf{v}_{(1)}),\] _is bi-quadratic in \(\left(\mathbf{v}_{(1)},\mathbf{v}_{(2)}\right)\) and satisfies \(g(\mathbf{v},\mathbf{v})=f(\mathbf{a},\mathbf{v})\)._ Proof.: By substituting \(\mathbf{v}_{(1)}=\mathbf{v}_{(2)}=\mathbf{v}\) in the definition of \(g\) and comparing it with (39), one can verify that \(f(\mathbf{a},\mathbf{v})=g(\mathbf{v},\mathbf{v})\). Since \(f(\mathbf{a},\mathbf{v})=g(\mathbf{v},\mathbf{v})\), we propose to maximize \(f(\mathbf{a},\mathbf{v})\) by alternately fixing \(\mathbf{v}_{(1)}\) or \(\mathbf{v}_{(2)}\) and maximizing \(g\left(\mathbf{v}_{(1)},\mathbf{v}_{(2)}\right)\) with respect to the other variable while enforcing \(\mathbf{v}_{(1)}=\mathbf{v}_{(2)}\) as a constraint. From Proposition 3, fixing either \(\mathbf{v}_{(1)}\) or \(\mathbf{v}_{(2)}\) and maximizing \(g\left(\mathbf{v}_{(1)},\mathbf{v}_{(2)}\right)\) with respect to the other variable requires solving the following \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{maximize}}\quad\mathbf{v}_{(k)}^{\mathrm{H}}\mathbf{E}\left(\mathbf{v}_{(i)}\right)\mathbf{v}_{(k)},\quad i\neq k\in\left\{1,2\right\}, \tag{48}\] **Remark 1**.: _In UQP, the diagonal loading technique is used to ensure the positive semidefiniteness of the matrix, without changing the optimal solution [12]. In (48), the diagonal loading \(\widetilde{\mathbf{E}}(\mathbf{v})\leftarrow\lambda_{m}\mathbf{I}-\mathbf{E}(\mathbf{v})\), with \(\lambda_{m}\) being the maximum eigenvalue of \(\mathbf{E}(\mathbf{v})\), results in an equivalent problem._ _Note that diagonal loading has no effect on the solution of (48) because \(\mathbf{v}^{\mathrm{H}}\widetilde{\mathbf{E}}(\mathbf{v})\mathbf{v}=\lambda_{m}MN_{m}-\mathbf{v}^{\mathrm{H}}\mathbf{E}(\mathbf{v})\mathbf{v}\). The equivalent problem to (48) is_ \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{minimize}}\quad\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}}\left(\mathbf{v}_{(i)}\right)\mathbf{v}_{(k)},\quad i\neq k\in\left\{1,2\right\}. \tag{49}\] The following theorem demonstrates that the IRS beamforming design problem \(\mathcal{P}_{2}\) is equivalent to a unimodular bi-quadratic program (UBQP) that we solve using the PMLI approach in (34). **Theorem 1**.: _The SNR maximization problem \(\mathcal{P}_{2}\) with respect to the phase-shifts is equivalent to the following UBQP_ \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{maximize}}\quad\begin{bmatrix}\mathbf{v}_{(k)}\\ 1\end{bmatrix}^{\mathrm{H}}\underbrace{\begin{bmatrix}\widehat{\lambda}_{m}\mathbf{I}-\widetilde{\mathbf{E}}(\mathbf{v}_{(i)})&\eta\mathbf{v}_{(i)}\\ \eta\mathbf{v}_{(i)}^{\mathrm{H}}&\widehat{\lambda}_{m}-2\eta MN_{m}\end{bmatrix}}_{=\widehat{\lambda}_{m}\mathbf{I}-\mathcal{E}\left(\mathbf{v}_{(i)}\right)}\begin{bmatrix}\mathbf{v}_{(k)}\\ 1\end{bmatrix}, \tag{50}\] _where \(i\neq k\in\left\{1,2\right\}\), and \(\widehat{\lambda}_{m}\) is the maximum eigenvalue of \(\mathcal{E}(\mathbf{v}_{(i)})\) as defined in (54)._ Proof.: From Proposition 3 and Remark 1, we know that \(\mathcal{P}_{2}\) is equivalent to the following problems, \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{minimize}} \quad\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}}(\mathbf{v}_{(i)})\mathbf{v}_{(k)},\quad i\neq k\in\left\{1,2\right\},\] \[\text{subject to}\quad\mathbf{v}_{(i)}=\mathbf{v}_{(k)}.
\tag{51}\] We add the \(\ell_{2}\)-norm penalty term between \(\mathbf{v}_{{}_{(1)}}\) and \(\mathbf{v}_{{}_{(2)}}\) as a _penalty_ function to (51), which yields \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{minimize}} \quad\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}}(\mathbf{v}_{(i)}) \mathbf{v}_{(k)}+\eta\|\mathbf{v}_{(i)}-\mathbf{v}_{(k)}\|_{2}^{2},\quad i \neq k\in\left\{1,2\right\}, \tag{52}\] where \(\eta\) is a Lagrangian multiplier. The regularizer as well as the main objective are quadratic in \(\mathbf{v}_{{}_{(1)}}\) and \(\mathbf{v}_{{}_{(2)}}\). Consequently, we recast the objective of (52) as \[\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}}(\mathbf{v}_{(i) })\mathbf{v}_{(k)}+\eta\|\mathbf{v}_{(i)}-\mathbf{v}_{(k)}\|_{2}^{2},\] \[=\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}}(\mathbf{v}_{( i)})\mathbf{v}_{(k)}-2\eta\operatorname{Re}\left(\mathbf{v}_{(k)}^{\mathrm{H}} \mathbf{v}_{(i)}\right)+2\eta MN_{m},\] \[=\begin{bmatrix}\mathbf{v}_{(k)}^{\mathrm{H}}\widetilde{\mathbf{E}} (\mathbf{v}_{(i)})&\eta\mathbf{v}_{(i)}\\ 1\end{bmatrix}, \tag{53}\] where in the first equality, we used \(\|\mathbf{v}_{(k)}\|_{2}^{2}=MN_{m}\), due to unimodularity of \(\mathbf{v}_{(k)}\). Substituting (53) in (52) yields \[\underset{\mathbf{v}_{(k)}\in\Omega^{MN_{m}}}{\text{minimize}} \quad\begin{bmatrix}\mathbf{v}_{(k)}\\ 1\end{bmatrix}^{\mathrm{H}}\underbrace{\begin{bmatrix}\widetilde{\mathbf{E}}( \mathbf{v}_{(j)})&\eta\mathbf{v}_{(i)}\\ \eta\mathbf{v}_{(i)}^{\mathrm{H}}&2\eta MN_{m}\end{bmatrix}}_{=\mathcal{E}( \mathbf{v}_{(i)})}^{\mathrm{H}}\begin{bmatrix}\mathbf{v}_{(k)}\\ 1\end{bmatrix},\] \[i\neq k\in\left\{1,2\right\}. \tag{54}\] We use diagonal loading as introduced in Remark 1 to obtain (50). Based on Theorem 1, the IRS phase assignment problem can be formulated as a UBQP and therefore it may be tackled in an alternating manner over \(\bar{\mathbf{v}}_{{}_{(1)}}=[\mathbf{v}_{(1)}^{\top}]^{\top}\) and \(\bar{\mathbf{v}}_{{}_{(2)}}=[\mathbf{v}_{(2)}^{\top}]^{\top}\), by the PMLI iterations in (34). The PMLI has been shown to be convergent in terms of both the optimization objective and variable [19, 20]. Algorithm 1 summarizes the steps for joint waveform and IRS phase-shift design. ``` Input: Initialization values \(\mathbf{v}_{1}^{(0)}\),\(\mathbf{v}_{2}^{(0)}\), \(\mathbf{a}^{(0)}\), the Lagrangian multiplier \(\eta\), total number of iterations \(\Gamma_{1}\) (\(\Gamma_{2}\)) for the problem \(\mathcal{P}_{1}\) (\(\mathcal{P}_{2}\)). Output: Optimized phase-shifts \(\mathbf{v}^{*}\) and OFDM signal coefficients \(\mathbf{a}^{*}\). 1:for\(s=0:\Gamma_{1}-1\)do\(\triangleright\)\(\mathbf{v}_{(1)}^{(t)}\) and \(\mathbf{v}_{(2)}^{(t)}\) are the solutions at the \(t\)-th iteration. for\(t=0:\Gamma_{2}-1\)do 2:\(\mathbf{v}_{(1)}^{(t+1)}\gets e^{\mathrm{j}\arg\left(\left[\mathbf{I}_{MN_{m}} \mathbf{0}_{MN_{m}}\right]\left(\widehat{\lambda}_{m}\mathbf{I}-\mathcal{E}( \mathbf{v}_{(2)}^{(t+1)},\mathbf{a}^{(t)})\right)\mathbf{e}_{(1)}^{(t)}\right)}\). 3:\(\mathbf{v}_{(2)}^{(t+1)}\gets e^{\mathrm{j}\arg\left(\left[\mathbf{I}_{MN_{m}} \mathbf{0}_{MN_{m}}\right]\left(\widehat{\lambda}_{m}\mathbf{I}-\mathcal{E}( \mathbf{v}_{(1)}^{(t)},\mathbf{a}^{(t)})\right)\mathbf{e}_{(2)}^{(t)}\right)}\), 4:\(\mathbf{v}_{(3)}^{(s)}\leftarrow\mathbf{v}_{(1)}^{(\Gamma_{2})}\) or \(\mathbf{v}_{(2)}^{(\Gamma_{2})}\). 5:\(\mathbf{h}^{(s)}\leftarrow\mathbf{Q}_{1}(\mathbf{v}^{(s)})\mathbf{v}^{(s)}\). 6: Update \(\mathbf{X}^{(s)}\) according to (12)-(15). 
7: Update \(\mathbf{a}^{(s)}\) according to (31). 8:return\(\{\mathbf{a}^{*},\mathbf{v}^{*}\}\leftarrow\left\{\mathbf{a}^{(\Gamma_{1})},\mathbf{v}^{(\Gamma_{1})}\right\}\). ``` **Algorithm 1** Joint design of the OFDM code and the IRS phase-shifts. ## V Numerical Experiments We compare the proposed joint design with the multipath OFDM radar proposed in [22] with specular reflection \(\{h_{l_{m}}\}=1\) in the exactly identical two paths between the target and radar. Deploying multiple IRS platforms in comparison with single IRS and non-IRS scenarios provides additional degrees of freedom (DoFs) and improves performance. Also, multi-IRS-aided radar outperforms single IRS because an optimal deployment of more IRSs provides more NLoS paths and, hence, enhanced detection of NLoS targets, especially those that may not be accessible via only one IRS. ## VI Summary We investigated the moving target detection problem using a multi-IRS-aided OFDM radar. The IRS phase-shifts along with the OFDM transmit signal coefficients were designed by taking advantage of an alternating optimization of the non-centrality parameter of the GLRT. We showed that maximizing the non-centrality parameter improved the probability of detection. By means of numerical investigations, we demonstrated that our proposed method enhances the \(P_{\mathrm{D}}\) over the non-IRS OFDM radar systems.
2310.17535
The Atacama Cosmology Telescope: Extragalactic Point Sources in the Southern Surveys at 150, 220 and 280 GHz observed between 2008-2010
We present a multi-frequency, multi-epoch catalog of extragalactic sources. The catalog is based on 150, 220, and 280 GHz observations carried out in 2008, 2009, and 2010 using the Millimeter Bolometric Array Camera on the Atacama Cosmology Telescope. We also present and release 280 GHz maps from 2008 and 2010. The catalog contains 483 sources found in a sky area of ${\sim}600$ square degrees. It was obtained by cross-matching sources found in 11 sub-catalogs, one for each season and frequency band. We also include co-added data from ${\sim}150$ and ${\sim}160$ square degrees using two and three years of overlapping observations. We divide the sources into two populations, synchrotron and dusty emitters, based on their spectral behavior in the 150-280 GHz frequency range. We find 284 synchrotron sources and 183 dusty source candidates. Our cross-matching with catalogs from radio to X-ray results in 251 synchrotron sources (88%) and 92 dusty sources (51%) with counterparts and suggests that 91 dusty candidates are not in existing catalogs. We study the variability and number counts of each population. In the case of synchrotron sources, we find year-to-year variability, with a mean value around 35%. As expected, we find no evidence of dusty source variability. Our number counts generally agree with previous measurements and models, except for dusty sources at 280 GHz, where some models overestimate our results.
Cristian Vargas, Carlos H. López-Caraballo, Elia S. Battistelli, Rolando Dunner, Gerrit Farren, Megan Gralla, Kirsten R. Hall, Carlos Hervías-Caimapo, Matt Hilton, Adam D. Hincks, Kevin Huffenberger, Tobias Marriage, Tony Mroczkowski, Michael D. Niemack, Lyman Page, Bruce Partridge, Felipe Rojas, Francesca Rizzo, Cristóbal Sifón, Suzanne Staggs, Edward J. Wollack
2023-10-26T16:27:44Z
http://arxiv.org/abs/2310.17535v2
# The Atacama Cosmology Telescope: Extragalactic Point Sources in the Southern Surveys at 150, 220 and 280 GHz observed between 2008-2010

###### Abstract

We present a multi-frequency, multi-epoch catalog of extragalactic sources. The catalog is based on 150, 220 and 280 GHz observations carried out in 2008, 2009 and 2010 using the Millimeter Bolometric Array Camera on the Atacama Cosmology Telescope. We also present and release 280 GHz maps from 2008 and 2010. The catalog contains 695 sources, found in a sky area of \(\sim\)600 square degrees. It is obtained by cross-matching sources found in 11 sub-catalogs, one for each season and frequency band. Also included are co-added data from \(\sim\)150 and \(\sim\)160 square degrees using 2 and 3 years of overlapping observations. We divide the sources into two populations, synchrotron and dusty emitters, based on their spectral behavior in the 150-220 GHz frequency range. We find 374 synchrotron sources and 321 dusty source candidates. Cross-matching with catalogs from radio to X-ray results in 264 synchrotron sources (71%) and 89 dusty sources (28%) with counterparts, suggesting that 232 dusty candidates are not in existing catalogs. We study the variability and number counts of each population. In the case of synchrotron sources, we find year-to-year variability up to 60%, with a mean value around 35%. As expected, we find no evidence of dusty source variability. Our number counts generally agree with previous measurements and models, except for dusty sources at 280 GHz where some models overestimate our results. We also characterize the spectral energy distribution of a dusty star-forming galaxy, ACTS J065207-551605, using our data and higher frequency observations.

Key Words.: catalogs - surveys - galaxies: active - galaxies: high-redshift - galaxies: starburst - submillimeter: galaxies
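The synchrotron/dusty separation above relies on the spectral behavior of each source between 150 and 220 GHz. A common way to implement such a separation (stated here as an illustrative assumption rather than the paper's exact criterion) is to estimate the spectral index \(\alpha\) in \(S_{\nu}\propto\nu^{\alpha}\) from the two flux densities and label falling spectra as synchrotron and rising spectra as dusty; the band centers and the threshold in the sketch below are assumptions.

```python
import numpy as np

# Approximate effective band centers in GHz; assumed for illustration only.
NU_150, NU_220 = 148.0, 218.0

def spectral_index(s150, s220, nu1=NU_150, nu2=NU_220):
    """Spectral index alpha in S_nu ~ nu**alpha, estimated from flux densities
    (in the same units) measured at the two band centers."""
    return np.log(s220 / s150) / np.log(nu2 / nu1)

def classify(s150, s220, alpha_cut=0.0):
    """Toy classifier: a falling spectrum (alpha < alpha_cut) is labeled
    'synchrotron', a rising one 'dusty'.  The cut value is an assumption."""
    return "synchrotron" if spectral_index(s150, s220) < alpha_cut else "dusty"

print(classify(s150=25.0, s220=18.0))  # falling spectrum -> synchrotron
print(classify(s150=4.0, s220=7.5))    # rising spectrum  -> dusty
```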
2308.15168
Ontologies in Digital Twins: A Systematic Literature Review
Digital Twins (DT) facilitate monitoring and reasoning processes in cyber-physical systems. They have progressively gained popularity over the past years because of intense research activity and industrial advancements. Cognitive Twins is a novel concept, recently coined to refer to the involvement of Semantic Web technology in DTs. Recent studies address the relevance of ontologies and knowledge graphs in the context of DTs, in terms of knowledge representation, interoperability and automatic reasoning. However, there is no comprehensive analysis of how semantic technologies, and specifically ontologies, are utilized within DTs. This Systematic Literature Review (SLR) is based on the analysis of 82 research articles, that either propose or benefit from ontologies with respect to DT. The paper uses different analysis perspectives, including a structural analysis based on a reference DT architecture, and an application-specific analysis to specifically address the different domains, such as Manufacturing and Infrastructure. The review also identifies open issues and possible research directions on the usage of ontologies and knowledge graphs in DTs.
Erkan Karabulut, Salvatore F. Pileggi, Paul Groth, Victoria Degeler
2023-08-29T09:52:21Z
http://arxiv.org/abs/2308.15168v1
# Ontologies in Digital Twins: A Systematic Literature Review ###### Abstract Digital Twins (DT) facilitate monitoring and reasoning processes in cyber-physical systems. They have progressively gained popularity over the past years because of intense research activity and industrial advancements. Cognitive Twins is a novel concept, recently coined to refer to the involvement of Semantic Web technology in DTs. Recent studies address the relevance of ontologies and knowledge graphs in the context of DTs, in terms of knowledge representation, interoperability and automatic reasoning. However, there is no comprehensive analysis of how semantic technologies, and specifically ontologies, are utilized within DTs. This Systematic Literature Review (SLR) is based on the analysis of 82 research articles, that either propose or benefit from ontologies with respect to DT. The paper uses different analysis perspectives, including a structural analysis based on a reference DT architecture, and an application-specific analysis to specifically address the different domains, such as Manufacturing and Infrastructure. The review also identifies open issues and possible research directions on the usage of ontologies and knowledge graphs in DTs. keywords: Digital Twin, Ontology, Knowledge Graphs, Semantic Web, Internet of Things, Cyber-physical systems ## 1 Introduction Cyber-physical systems of the last decade have transitioned from using traditional system models to using Digital Twins (DT) [1]. One of the most relevant and distinguishing features of Digital Twins is the real-time connection between the physical and the virtual system. It enables a more sophisticated digital model, which recreates and can update a physical environment faithfully and on the fly, rather than at later stages after simulation or analysis. Digital Twins are applied to various vertical industries, the most common being manufacturing, agriculture, and construction [2]. Given the variety of application areas, there is no single commonly-accepted definition of a Digital Twin. Additionally, the concept is constantly evolving to reflect advances in the field [3]. In line with the broad characteristics of Digital Twins, there is also a variety of approaches to design and develop such systems as and there is, currently, no consensus on specific engineering processes and related architectures for them. Nevertheless, certain architectural patterns, which are discussed later on in this paper, have begun to emerge. Moreover, considerable gaps can be found in the current structured understanding of data and flow representation and reasoning layers of Digital Twins. One of the most well-known and promising standards for knowledge representation and reasoning is semantic technologies and, in particular, ontologies. An ontology provides a formal machine-processable conceptualization of a given domain [4], including entities, their types and relationships, normally implemented in standard languages. These languages take advantage of the Web infrastructure to enable interoperability at a global level and to support automatic reasoning. As shown further in this paper, there is a growing popularity of employing ontologies in the DT systems among researchers and engineers. Ontologically-enriched Digital Twins are often called Cognitive Twins [5]. Given this popularity, a number of questions arise: how exactly are DTs employing ontologies being used? In which parts of the DT architecture are ontologies the most beneficial? 
How can one ensure the biggest gains from using these systems? What are the common barriers and limitations to overcome in utilizing ontologies in DTs? At present, there are no best practices that have been established that answer the above questions. Additionally, a deeper analysis that provides an overview of the current application trends and of the relationships with the different architectural patterns is missing. In order to address these gaps, we conducted a Systematic Literature Review (SLR) of recent research in digital twins utilizing ontologies. We screened 460 papers and extracted 82 directly relevant papers. These papers have been discussed against a reference architecture that reflects the most common patterns in Digital Twins. Our analysis aims to exhaustively address ontologies in the different architectural layers, and includes an inter-layer analysis to provide a more comprehensive framework. A significant number of reviewed articles include an implementation of a knowledge graph[6] in DTs using ontologies, hence, an additional brief analysis of such knowledge graphs implementations was carried out. We also considered an application perspective, looking at the DT domains in which ontologies are most commonly used. More holistically, we identified a number of discussion points and possible future research directions based on the review. _Structure of the paper._ Section 2 provides an overview of the background concepts, while Section 3 presents the related work, focusing on SLRs on DTs. Section 4 addresses methodological aspects. The core part of the paper is composed of 3 sections that deal respectively with the reference architecture (Section 5), the performed analysis (Section 6) and its discussion (Section 7). Finally, Section 8 concludes the paper by summarising the major findings. ## 2 Background Concepts This review aims to identify and discuss the body of knowledge associated with the application of ontologies in Digital Twins. This section addresses these two main concepts in order to provide a self-contained concise overview of the relevant background for understanding the subsequent review. ### Digital Twins Digital Twin (DT) is a term that has become popular especially over the past 5 years. The term is used in multiple disciplines and contexts other than Computer Science, including, among others, several different sub-disciplines in engineering, business and healthcare (see Section 6.4). Different definitions have been used over the past 2 decades. D'Amico et al. [3] in their SLR have identified 11 different definitions. The first definition, without explicitly mentioning the term DT is given by Michael Grieves in 2002 [7], as the conceptual ideal for Product Lifecycle Management (PLM): _"PLM is an integrated, information driven approach to all aspects of a product's life from its design inception, through its manufacture, deployment and maintenance, and culminating in its removal from service and final disposal."_. In the presentation, separation of virtual and real space and the bi-directional communication in between is emphasized as the one of the main characteristic of a DT. DTs can be used to achieve different goals, such as physical space monitoring, optimizing decisions made by a physical system/asset, and predictive maintenance. Although the definition of DT slightly evolved over the time, the bi-directional communication in between the physical and virtual space remained as one of the distinctive features of a DT. 
A more recent definition given by Grieves and Vickers in 2017 in [8] is: _"Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin"_. Besides production in manufacturing, DTs are now also created for countries [9], plants [10], and construction [11], just to name a few. Figure 1, taken from [1], shows a more comprehensive, domain-agnostic definition for DTs. The main difference in such a view is the interpretation part, where data from the physical space is explicitly converted into a format that is processable as part of the virtual space.

Figure 1: A figure from [1] illustrating the DT technology more comprehensively.

In spite of the fact that all of the DT definitions given in [3] contain a physical asset, DTs are also created for abstracted entities. As an example, Parmar et al. [12] discuss building a DT for an organization, which also includes abstracted concepts related to companies. As the use of DTs becomes more widespread, the definition of DT is likely to keep evolving accordingly. Another concept frequently associated with DT is Digital Shadow (DS). DS simply refers to a DT without any communication from the digital environment back to the physical environment. Finally, a third term which is especially relevant in the context of this review is Cognitive Digital Twin (CDT) [5]. It was defined by Ahmed El Adl in 2016: CDT _"is a digital representation, augmentation and intelligent companion of its physical Twin as a whole including its subsystems and across all of its life cycles and evolution phases"_. CDTs are also referred to as Cognitive Twins (CT) [13] or Cyber Twins [14]. Later definitions of CDTs (e.g., [13]) explicitly include Semantic Web technology, such as ontologies and knowledge graphs, as part of the CT technology. Although a Cyber Twin is not synonymous with a CT, it still incorporates semantic technologies in a DT, while also considering Industry 4.0 [15] specific data management issues.

### Knowledge Representation and Ontology

The term "ontology" has its origin in philosophy, where, in the context of metaphysics, it is used to deal with the "nature of being". Although maintaining its original philosophical meaning, ontology has progressively evolved towards a more generic interpretation, as it is largely used to express a formal conceptualization of a given domain. More recently, ontology has also been adopted in computer science to create and work with formal machine-processable specifications of a given domain [4], often referred to as "semantics". Ontologies became a key notion in the field of Knowledge Engineering [16]. Their popularity in Computer Science consistently increased with the growth of the Semantic Web [17], which adopts the Web infrastructure to establish global identifiers. Indeed, unique identifiers allow a more sophisticated approach to interoperability, which can be established at a semantic level (Semantic Interoperability [18]), as well as to data management and re-use within rich knowledge spaces [19]. The effective application of ontology within modern systems has been further fostered by the availability of specialised languages [20] (e.g. RDF and OWL), most of which have been standardised by W3C1.
Such languages provide capability in terms of inference and automatic reasoning [21][22], and allow the establishment of semantically enriched data ecosystems, such as Linked Data [23] and Open Data [24]. Footnote 1: [https://www.w3.org](https://www.w3.org) Ontologies normally work in the background of final systems and their value becomes even more relevant in distributed environments, where they typically contribute in the support of machine-to-machine interaction. However, ontologies may be considered a valuable asset also to support functionalities and representations in a generic context of Human-computer Interaction (HCI) [25]. The popularity of ontologies has progressively increased in the past two decades. The intense research activity within the community has resulted in a relatively consolidated technology that is being applied in a broad range of disciplines and application domains to solve real world problems. Typical examples of a successful application are in biology [26], medicine [27], system engineering [28] and manufacturing [29]. ## 3 Related Work This section aims to provide a concise overview of related reviews that also focus on DTs and ontologies. A search on Scopus2 in titles, abstracts and keywords with a composed query resulting from the combination of "digital twin", "ontology" and "review" returned only 5 results. 20 more SLRs identified as a result of applying the methodology described in section 4. Only 3 out of 25 of them have any analysis of ontologies when used in DTs. Those contributions are summarized in this section together. Footnote 2: [https://www.scopus.com/home.uri](https://www.scopus.com/home.uri) D'Amico et al. [3] performed an SLR that includes 59 articles on CDT (CT as per authors' statement) in the maintenance context. The analysis of Digital Twins assumes 5 different categories: purpose, communication, knowledge representation, computation and microservices. Knowledge Representation is relevant and directly connected to this work. Authors report that 28 of the selected articles adopt ontologies explicitly, with 5 of them referred to be Top-level Ontology (TLO). In order to improve interoperability, the reviewed articles benefit from standardized architectures, ontologies such as Semantic Sensor Network (SSN)[30], or international standards, such as ISO3. Another relevant aspect that is analyzed under the knowledge representation section is the type of database employed. The authors state that 3 types of databases have been commonly used (relational, non-relational and graph databases). RDF4 and OWL5 have been found as the most common data formats to store data as a knowledge graph. Footnote 3: [https://www.iso.org/home.html](https://www.iso.org/home.html) Footnote 4: [https://www.w3.org/RDF/](https://www.w3.org/RDF/) Correia et al. [2] carried out an SLR focusing on data management aspect in DTs. Results related to interoperability and data integration are especially relevant in the context of this work. The authors analyzed interoperability in DTs under 3 categories: data interoperability, semantic interoperability and interoperability in the communication level. On the semantic level, domain ontologies are used to provide semantic interoperability as well as for the communication in between different DTs in the same domain. As one of the data integration solutions, the authors found that modeling domain knowledge with an ontological layer in the architecture is also a common approach. 
Another part of the analysis was to understand for which domains the DT solutions were proposed in the reviewed articles. Industry 4.0, Smart Cities and Healthcare domains are found to be the most common application areas of DTs. In terms of application domains, the mentioned review presents results in line with our findings (Section 6.4). Shishehgarkhaneh et al. [31] conducted an SLR specific to construction. The goal of the SLR is to understand how Building Information Modeling (BIM), DT and Internet of Things (IoT) technologies are adopted in the construction industry. Although ontologies are not an explicit topic of review, authors identified the concept of "ontology" as one of the most prominent in the reviewed articles. Authors state that ontologies have not yet been developed to address diverse and multi-context construction workflows. Our review has pointed out multiple ontologies being used for different aspects in the construction industry. However, we were also unable to identify concrete application of ontologies to address construction workflows. Although it is not an SLR, we have also found D'Amico et al.'s earlier work [32] highly relevant as they are also using the same search query as in our review, "digital twin" and "ontology". Authors briefly review existing scientific papers that use ontologies in the scope of a digital twin and found out that by the time the paper is written, a limited number of articles mention using an ontology and only a few mentioned using a TLO-based approach. Finally, a TLO-based DT conceptual model is proposed for maintenance operations. To the best of our knowledge at the time of writing this review, there are no SLRs that exhaustively deal with the adoption of ontologies in DTs. Our SLR differs from the existing work by solely focusing on how existing DT solutions benefit from ontologies. Based on the reference DT architecture given in Section 5, this review investigates in which layers of a DT architecture ontologies are used, and the role of ontologies for each layer is identified (See section 6.2). It has been found that a single ontology might include concepts that belong to the multiple layers of a DT architecture. Therefore an inter-layer analysis is carried out especially to discover how ontologies play the role of a semantic interface in between layers (Section 6.3). Some of the articles reviewed also construct a knowledge graph that is mostly based on a domain ontology. Although it was not among the initial analysis goals, a brief analysis of knowledge graph implementations in DTs is also performed (See sections 6.5 and 7.3). Lastly, a domain-level analysis is performed to understand in which domains the ontologies are used most commonly in the scope of DTs. ## 4 Methodology and Approach In order to better understand how ontologies are used in the scope of DTs, we performed a systematic literature review. This section explains the methodology of the review including how the articles are selected and analysed. The guideline published by Kitchenham et al. for performing a systematic literature review in Software Engineering [33] has been taken as a reference for the review process. Both intermediary and end results of applying the methodology described in this section is made publicly available [34]. 
### Identification and Initial Selection of Research

_Data sources and search query._ 5 different relevant research databases (ACM Digital Library, arXiv, IEEEExplore, ScienceDirect, SpringerLink) and 3 search tools (Google Scholar, Semantic Scholar, Zeta Alpha - AI Research Navigator) have been considered. Relevant papers have been retrieved by the following queries: i) "digital twin ontology", ii) "digital twin" AND "ontology". Data was collected in the period of March/April 2023.

_Initial retrieval._ Figure 2 shows the PRISMA workflow [35] which summarizes the study identification and the screening process. As the initial number of results is considerably high (832,884), only the first 60 studies as ranked by the considered portal are retrieved. This is in line with empirical observations that show the relevancy to be negligible after a certain threshold [36]. In this specific case, the threshold (60) has been decided by skimming. However, not all data sources returned more than 60 results and, finally, 833 papers overall were retrieved. After removing 373 duplicates, 460 papers have been selected as an outcome of the initial screening process.

_Exclusion/inclusion criterion and relevancy check._ Only research articles that explicitly propose a use of an ontology in the scope of a DT are included in this survey. Relevancy of the papers is decided in two analysis rounds. The first round focuses on the scope of the paper by considering title, abstract and keywords; additionally, skimming was performed to further assess the consistency in scope. Only articles written in English were considered; 8 papers only had an abstract in English. 7 of the results were blog posts and 20 of them were surveys, which were considered not relevant as this SLR focuses on research articles only. 287 of the papers were either discussing Digital Twins or Ontologies, but not their nexus, or were not actually dealing with ontologies in the scope of a DT. Finally, 138 papers were assessed in detail in the second round. 10 of them did not provide enough detail about the ontology used or the benefit provided by the adopted ontology. 46 of the papers were discussing the use and benefits of ontologies in DTs with a holistic short view, rather than actually utilizing ontologies or proposing a way to utilize ontologies in DTs. This resulted in 82 papers in total included in this survey.

### Overview of the selected research and analysis

Figure 3 shows relevant publications per year. The number of publications increased almost 3 times over the period. This trend is not limited to ontologies only, but also applies to semantic technologies in general and knowledge graphs (see Section 6). Our analysis has been structured according to a reference architecture composed of different logical layers. Section 5 describes such a reference architecture in detail. In the analysis process, we have identified conceptual and functional patterns of ontologies to be matched with the logical layers of the architecture.

Figure 2: PRISMA Workflow showing the identification and screening processes for the SLR.

Figure 3: Number of relevant publications per year.

As an example, if a concept in an ontology is used to describe a physical entity and that concept is specific to a domain, for instance floors of a building in building management, then the concept "floor" leads to a physical layer in the reference architecture in a specific domain (building management).
As explained later on in the paper, a single ontology is often addressing concepts in the scope of more than one layer. Indeed, ontology often acts as an interface in between layers. Those considerations led to the inter-layer analysis (section 6.3). Additionally, the domain of each contribution is identified and a domain-based analysis is carried out accordingly. Lastly, even though it was not explicitly among the objectives, we have discussed also Knowledge Graphs because of their relevance and popularity in the reviewed articles. Tamasauskaite et al. [37] has showed that utilizing ontologies is one of the common steps while constructing a knowledge graph. As it will be described in section 6.5, our review also shows that knowledge graphs are utilized in many of the reviewed research articles together with ontologies. ## 5 Reference Architecture As far as we know, there is not a commonly accepted reference architecture for DT since authors tend to propose their own view of a DT architecture as part of their work. However, given the increasing popularity of DT, common architectural patterns are progressively emerging, although we cannot yet see a proper convergence of the different architectures. A common architecture is often perceived as a need within the community [38]. Most architectures are structured in layers and are normally designed to reflect a seamless coexistence of a physical and a virtual space. That is the case of the architecture proposed by Ashtari et al. [39] that assumes two main layers (physical and cyber layer), while most architectures are structured in a more detailed way. For instance, the architecture proposed by Souza et al. [40] extends the previous concept by adding a gateway between the physical and the digital layer. Similarly, in the work by Fan et al. [41] the authors integrate cyber-physical components with a human layer to address human-cyber-physical systems. The architecture by Minerva et al. [42] is structured in 4 layers, including data, integration, service and business, while Steindl et al. [43] assumes a service-oriented architecture based on physical and virtual entities to support a given business logic. A full service-oriented approach structured in 5 different layers (Physical, Communication, Digital, Cyber and Application) is proposed by Aheleroff et al. [44]. Schroeder et al. [45] propose five relatively classic layers (device, user interface, Web service, query, and data) integrated with a specific layer for augmented reality. A 5-layer architecture - i.e. Smart-Connection, Data-to-Information, Cyber, Cognition and Configuration - is proposed by Lee et al. [46]. The six-layer architecture described by Redelinghuys et al. [47] includes a double layer for physical twins (devices and data), local data repositories, an IoT Gateway, Cloud-based Information repositories and, finally, a layer for Emulation and Simulation. An explicit cloud-based approach is adopted by Alam and Saddik [48], with a duality between physical and cloud cyber things, and by Gehrmann and Gunnarsson [49], which puts emphasis on Security. In most mentioned architectures, data is implicitly addressed at different layers, without a specific data view. The proposed literature review has been conducted looking at the reference architecture in Figure 4: * _Organizational Context_. Our analysis has been performed at a generic level without assuming any specific domain or context. 
We assume this virtual layer to reflect, represent or specify such specific aspects in a given context. While the main focus is on business and organizations, it may also include elements of system engineering at different levels.
* The core layers - i.e. _Physical_ and _Communication_ - aim to reflect the physical reality by addressing physical elements and their interactions, respectively. This logical block is intuitively complemented by the _Digital Layer_, which provides a digital representation of the physical world. Intuitively, the most abstracted layer (_Application_) addresses application-specific aspects and components.
* _Knowledge View_. We are assuming a fluid model for knowledge representation which assumes 3 different kinds of support: (i) _local_, when the representation is in the specific boundaries of one single logical layer, (ii) _interlayer_ to interface two contiguous layers and (iii) _multilayer_ if involving two or more non-contiguous layers.

Figure 4: Reference Architecture.

## 6 Ontologies in Digital Twins

This section presents the results of the analysis conducted. Table 2 shows the list of reviewed papers with associated details, including ontology name, application domain, related architecture layer, and whether a given solution utilizes KGs or not. The section is structured in 5 different parts that deal respectively with (i) the value provided by ontologies in DTs, (ii) structural analysis by layer, (iii) inter-layer analysis, (iv) domain analysis and (v) Knowledge Graphs.

### Objectives

Four different inter-related objectives have been observed in the reviewed articles: system/data modeling, semantic interoperability, (implicit) semantic relation extraction, and automated reasoning support. These objectives are either explicitly mentioned by the authors as the reason to employ ontologies or, when not mentioned explicitly, we found the purpose of employing ontologies to fit one or more of them. Those objectives may be considered to be layer-agnostic, meaning they normally affect a system as a whole.

_System/Data modeling._ Based on our review, one of the common reasons for incorporating ontologies into DTs is to model the DT system and to integrate heterogeneous data from the various components of a DT [34]. Domain ontologies help model the parts of a DT, the data structures to be stored, and the data packages to be sent to other internal/external parts of a system. Since a domain ontology includes all the concepts that belong to a domain, if comprehensive enough, such an abstracted model can effectively drive developments. As an example, Zhang et al. [50] create a DT model for workshops utilizing a proposed domain ontology that consists of 3 main classes: _ResourceInformation_, _TaskInformation_ and _ProcessInformation_. A further development assumes the refinement of the main classes to include sub-classes and properties.

_Semantic interoperability._ This refers to understanding what a piece of data means when sent to a different sub-component in a DT. DTs can also co-exist and even cooperate, e.g., to share learned parameters for a common task that is performed in multiple DTs [51]. Ontologies can provide this semantic understanding of the data across sub-systems or DTs. While domain ontologies are usually enough to establish semantic interoperability for the entities in the same domain and context [52], top-level ontologies can also play a role when used across domains [53].
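To make the modeling and interoperability roles described above more concrete, the following is a minimal sketch (in Python, using the rdflib library) of how the three top-level classes reported for the workshop ontology of [50] could be declared and instantiated; the namespace IRI, the sub-class and the property names are invented for illustration and are not taken from the reviewed paper.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

# Hypothetical namespace for the example; not the IRI used in the reviewed work.
WS = Namespace("http://example.org/workshop-dt#")

g = Graph()
g.bind("ws", WS)

# The three top-level classes reported for the workshop ontology of [50].
for cls in (WS.ResourceInformation, WS.TaskInformation, WS.ProcessInformation):
    g.add((cls, RDF.type, OWL.Class))

# An illustrative refinement: a CNC machine is a kind of resource.
g.add((WS.CNCMachine, RDF.type, OWL.Class))
g.add((WS.CNCMachine, RDFS.subClassOf, WS.ResourceInformation))

# One physical asset described as an instance, as a DT sub-system could publish it.
g.add((WS.machine_01, RDF.type, WS.CNCMachine))
g.add((WS.machine_01, WS.hasStatus, Literal("idle")))
g.add((WS.machine_01, WS.spindleSpeedRPM, Literal(8000, datatype=XSD.integer)))

# Any sub-system (or co-operating DT) that knows this shared vocabulary can
# interpret the exchanged triples without custom parsing logic.
print(g.serialize(format="turtle"))
```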
_Semantic relation extraction._ Ontologies, especially when used to build knowledge graphs and supported with sensor data, can help extract implicit semantic relations in DTs. Knowledge graphs are composed of instances of classes that are described in ontologies. Sensors are used to track the latest state of these instances, and rule extraction algorithms can be altered to work with knowledge graphs and sensor data to extract implicit semantic relations [54].

_Reasoning facilitation._ A more generic reason to use ontologies is to facilitate automatic reasoning in the system. Most automated reasoners require data from diverse sources in a DT or across DTs, as well as semantic information about this data to be able to process it. The output of the reasoning is then propagated to the respective components in accordance with the used ontology. Hoebert et al. [55] use an ontology-based model of industrial robots and run reasoning algorithms to plan a set of actions to reach a certain goal of the system.

### Structural analysis

Figure 6 shows the number of publications that use an ontology in the scope of a DT per layer of the reference DT architecture (inside the circles). The same figure also shows a pairwise analysis of ontologies that address more than one layer simultaneously. 63 out of 82 publications use ontologies to describe concepts that belong to the physical layer. 49 of the papers focus on the DT layer, 15 on the organization layer, 15 on the application layer and 5 on the communication layer. In most cases, ontologies have a multi-layer focus. Figure 5 shows the number of papers where ontologies include concepts from 1 or more layers. Ontologies in 30 out of 82 papers include concepts that belong to 1 layer only, where 23 of them are matched to the physical layer. These ontologies can be considered as domain-specific, while the rest of the 52 ontologies found in the reviewed articles are more task- and/or application-oriented. 35 of the ontologies include concepts that belong to 2 layers, where the majority of them are matched to the physical-digital layers. 16 of the reviewed articles include ontologies where concepts belong to 3 layers, and only 1 article was found where the ontology includes concepts from 4 layers. The following subsections explain how ontologies are specifically used in the different layers of the reference DT architecture. A summary of the results is given in Table 1, where the column "Architecture Layer" indicates which layers the concepts used in the mentioned ontology (or ontologies) correspond to.

\begin{table} \begin{tabular}{p{142.3pt} p{284.5pt}} \hline Layer & Usage \\ \hline Physical & physical entities, actions and processes \\ Communication & protocols, access parameters \\ Digital & generic DT concepts, real or abstract/derived digital terms, assets and operations \\ Application & ranges from task-specific terms (e.g., CNC (Computer Numerical Control) cutting machine optimization app) to domain-independent application terms (e.g., top-level requirements validation app) \\ Organizational context & production lines, facilities, client and order info, project management, bridging DT and non-DT parts \\ \hline \end{tabular} \end{table} Table 1: Usage of ontologies in the different layers as per reference architecture.

Figure 5: Number of articles utilizing ontologies in a DT with concepts belonging to n (1-4) layers.

#### 6.2.1 Physical layer

Two different usages of ontologies are found that describe concepts in the physical layer.
The first one is to utilize ontologies to describe physical components, their physical attributes, states in a system and the relation in between them. Skobelev et al. [56] use an ontology to describe physical parts of a plant, such as root, stem or leaf, and the ontology is then used to extract rules for decision making. Another example in manufacturing is to represent industrial machines or machine parts, personnel, or environmental conditions such as temperature and humidity using an ontology. Liu et al. [52] developed a CNC machine tool ontology that includes concepts such as Material, Personnel, Device and Environment. The ontology is used to aggregate data from diverse sources. Secondly, ontologies are also used to represent physical actions or processes. Tuli et al. [57] used CORA ontology [58] to represent movements of an industrial robot. Nguyen et al. [59] proposed an ontology model for tactile sensing devices that has concepts describing tactile events such as position, velocity and type of a touch event. #### 6.2.2 Communication layer On the communication layer, ontologies are used to represent communication protocols in between far-edge, edge, and more centralized units such as cloud stores, or different parts of a machining system, production line. Chevallier et al. [60] proposed a reference DT architecture for smart buildings and utilizes many ontologies including Sensor, Observation, Sample and Actuator (SOSA) [61] ontology. Authors utilized _Procedure_ subclass of SOSA to specify communication protocol used and its attributes such as IP address. Maryasin [62] developed a home automation system ontology that contains communication network-level classes such as _NetworkProtocol_ class. #### 6.2.3 Digital layer (DT) Both generic DT ontologies and ontologies that are used to represent digital entities are included in this category. 3 different usages of ontologies have been identified on the digital layer. The first one relates with representing concepts that generically used in a DT. Duan et al. [63] propose a domain-independent DT ontology that consists of 4 categories of concepts: entity-related, DT-entity related, DT system and application framework dimensions. None of the proposed concepts are domain-specific and can be used in any DT implementation. These ontologies can also be used as a top-level ontology for DTs. The authors use the ontology while creating a reference DT architecture. The second way of using ontologies is to represent digital assets and operations such as settings of a machine, input that goes to a machine or a software module, operating systems etc. Khan et al. [11] created a construction DT ontology named ConDT ontology. It includes _Data Resources_ as a part of a construction DT. According to the ConDT, a data resource has a data source, data format, input method, database and an owner. A third way of using ontologies on the DT layer is to represent abstract (usually domain-specific) concepts in terms of digital data. Amar et al. [64] created an ontology for fault management and validates in a power plant scenario. The ontology has a _Component_ class which can have a _Fault_, in a power plant. _Component_ has _Sensors_ which generates _sensor_stream_data_. _DataRules_ defined on the _sensor_stream_data_ to detect _RootCaus_es of faults. In this case, a fault is an abstract term to specify a data rule violation in a sensor of a component due to a root cause. 
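As an illustration of the fault-management pattern just described for [64] (components, sensors, sensor stream data and data rules), the sketch below builds a tiny rdflib graph, applies a threshold-style data rule to a sensor value, asserts the resulting fault, and retrieves it with a SPARQL query. All IRIs, property names and the specific rule are hypothetical and only mirror the structure of the reported ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary following the Component/Sensor/Fault/DataRule pattern
# described above for [64]; the IRI, names and the threshold rule are invented.
FM = Namespace("http://example.org/fault-management#")

g = Graph()
g.bind("fm", FM)

g.add((FM.pump_07, RDF.type, FM.Component))
g.add((FM.pump_07, FM.hasSensor, FM.temp_sensor_07))
g.add((FM.temp_sensor_07, FM.latestValue, Literal(96.5, datatype=XSD.double)))

# A simple data rule on the sensor stream: temperatures above 90 degrees are
# interpreted as an overheating fault on the owning component.
THRESHOLD = 90.0
for sensor, value in g.subject_objects(FM.latestValue):
    if float(value) > THRESHOLD:
        component = g.value(predicate=FM.hasSensor, object=sensor)
        g.add((component, FM.hasFault, FM.overheating))

# Query the graph for components currently flagged with a fault.
query = "SELECT ?component ?fault WHERE { ?component fm:hasFault ?fault . }"
for component, fault in g.query(query, initNs={"fm": FM}):
    print(component, fault)
```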
#### 6.2.4 Application layer The ontologies used in the application layer ranges between representing concepts that are specific to a certain task in a certain domain to more generic domain-independent application terms of a DT. Zheng et al. [65] introduce the requirements ontology for aircraft assembly systems and benefit from it while designing assembly processes. A set of ontologies are created to be used in construction domain in the scope of COGITO6 project. Katsigarakis et al. [66] developed the COGITO ontology with 4 new modules which are then used to create a knowledge graph for construction projects: facility, process, resource and quality modules. Poudel et al. [67] developed a more generic ontology for manufacturing to represent manufacturing resources (e.g., machines), capabilities of the resources, and manufacturing processes. The ontology is used to automatically match resource capabilities to manufacturing processes. Footnote 6: [https://cogito-project.eu/](https://cogito-project.eu/) #### 6.2.5 Organizational context DTs are also created for either entire organizations or parts of an organization. In harmony with this, ontologies created for these DTs either include broader concepts to cover operations and assets in an organization, or concepts that relate with a certain entity which the organization creates a DT for. Three different usages of ontologies in a DT are identified in an organizational context. The first one applies to representing production lines, facilities, received orders or client information of an organization. Rozanec et al. [68] proposed the term _Actionable Cognitive Twin (ACT)_ which is very similar to Cognitive Twins introduced in Section 2.1, however with more concrete PT interaction definitions. In a later work [69], the authors proposed a manufacturing ontology based on Basic Formal Ontology (BFO) [70] to be used in ACT. The ontology focuses on manufacturing concepts that are related to production planning and demand forecasting such as _manufacturing process_, _stock order_, _production line_, _production plant_ and _organization_. This is a good example of using ontologies in an organizational context in manufacturing. Another type of utilization of ontologies in an organizational context is to track ongoing, long-lasting projects of organization(s). Munker et al. [71] proposed Internet of Construction On-Site Ontology (IoC-OSO) that re-uses concepts from 5 other ontologies, including Domain Ontology for Construction Knowledge (DOCK 1.0) [72]. DOCK includes semantic concepts that can be used to refer projects, their stages, states and life-cycle. IoC-OSO is used for resource allocation to construction processes. Third, Ariansyah et al. [73] focuses on the problem of connecting DTs with other software systems used in an organization such Computerized Maintenance Management Systems (CMMS) or Enterprise Resource Planning (ERP) systems. The authors propose an ontology to establish semantic interoperability between DT and non-DT software. ### Inter-layer analysis As reported in Table 2, 30 of the publications use ontologies to describe concepts from 1 single layer only, while the majority of the papers, 52, use ontologies to describe concepts from multiple layers. Figure 6 shows a pairwise analysis of ontologies used in the scope of DTs, that includes concepts from different layers. Some ontologies include concepts from 3 or more layers. 31 of the ontologies include concepts that belong to both physical and digital layers. 
This shows that the physical and digital layers are semantically the most connected layers. On the other side, there is no ontology that describes concepts from the communication and the organization layer at the same time; therefore these 2 layers are not connected at all. The physical layer is the one that has the most connections to other layers, while the communication layer has the least connections. The application and communication layers are most connected to the physical layer, while the organization layer is most connected to the digital layer.

### Domains

This section presents an analysis from an application domain perspective. Table 2 includes the domain of each of the papers in the _Domain_ column. An aggregated view of the papers by domain is given in Figure 7. The number given as _Generic_ is the sum of the papers with _digital twin, IoT, smart home, materials science, IT, smart city, IT security_ domains (see Table 2). _Agriculture_ refers to _smart farming_ and _smart fisheries_. Lastly, _Infrastructure_ refers to the _building management, construction, public infrastructure_ and _cultural heritage_ domains. D'Amico et al. [3], in their SLR on cognitive DTs in the maintenance context, also reviewed papers based on application domain. Similar to their findings, ontologies in DTs are also mostly used in the Manufacturing domain, which is followed by the Generic and Infrastructure domains. Only 1 paper was found for each of the _Governance_, _Medicine_ and _Business_ domains. Another SLR on DTs was performed by Correia et al. [2] from a data management perspective. That SLR includes both a domain and a subdomain classification for the reviewed papers, and the authors found that there are more papers in the _Smart Manufacturing_ subdomain.

### Knowledge Graphs

As defined by Hogan et al. [6], a knowledge graph is _"a graph of data intended to accumulate and convey knowledge of the real world, whose nodes represent entities of interest and whose edges represent potentially different relations between these entities"_. Tamasauskaite et al. [37] defined the steps to construct a knowledge graph, and included ontology construction as one of the steps. 17 of the reviewed articles build knowledge graphs using ontologies (see Table 2, where the column _KG_ refers to whether a paper includes a knowledge graph implementation or not). Therefore, even though knowledge graphs are not the focus of this SLR (and the search query is not inclusive for knowledge graphs), this section is dedicated to a short analysis of how knowledge graphs are used in the reviewed articles. 3 ways of utilizing knowledge graphs in the scope of a DT are identified. One common way of using knowledge graphs in the reviewed articles is to benefit from the graph structure and run queries on the node and edge properties using ontological terms to extract information, where each node includes metadata about DT components. Banerjee et al. [54] created knowledge graphs for industrial production lines and used the Path Ranking algorithm to extract semantic relations which do not explicitly exist in the knowledge graph.

Figure 6: Pairwise analysis of ontologies used for concepts in different layers of a DT.

Figure 7: Number of papers by domain.

The state of each component in a DT is frequently associated and stored together with the knowledge graph. Chukkapalli et al. [74] create a knowledge graph from fused sensor data (as opposed to metadata about the system components) in the DT of a smart farming use case.
In this way, the latest state of the DT is always kept within the knowledge graph. Later, the knowledge graph is used to detect anomalies in the sensor data. Lastly, another way of utilizing knowledge graphs in DTs is for integrating heterogeneous data from multiple data sources. Proper et al. [75] developed an ontology-based DT for IT infrastructures of organizations. The authors mentioned that there is diverse data streams coming from IT Governance Processes, IT Management Processes and Organizational IT Assets. An ontology named Governed IT Management (GITM) ontology is described and a knowledge graph-based approach is built to handle unify the heterogeneous data. ## 7 Discussion We now discuss the main outcomes of our review. ### Ontologies in Different Layers of a DT A DT by definition (Section 2.1) includes intelligent operations based on the application domain, which distinguishes it from a DS, and this requires a semantic understanding of the data. Ontologies can provide this semantic understanding. In most of the cases, ontologies used in DTs include concepts that belong to multiple layers (Sections 6.2 and 6.3) based on our reference architecture (Section 5). In this way, ontologies also act as a semantic interface in between different layers. There are more articles that use an ontology to represent concepts in the physical layer than others (see Figure 6). One possible reason could be that the physical layer is more tied to respective domain. As an example, an application that checks if certain requirements are satisfied could be required in any domain listed in Figure 7. Therefore many of the terms in this application would be similar across domains, hence less work is needed to formalize the concepts. However, components of the physical layer tend to be more domain-specific. This could also explain the high number of ontologies (31) that describe concepts from both physical and digital layer. DTs can be created for single entities (an industrial machine), a set of entities in the same context (machines in a production line), or even entities that are completely in different domains (e.g., Akroyd et al. [9] creates a universal DT for UK). In the last two cases, DTs will have heterogeneous data from multiple sources. To solve this issue using ontologies, a single domain-specific ontology would suffice to unify the data for the second case. In the third case, the data might be representing concepts that belong to different layers in a DT architecture. Matching each domain ontology to a top-level ontology [32] is among one of the popular solutions that can be used in the third case. ### Application Domain Similar to results of other recent SLRs on DTs [3; 2], there are more papers published in the manufacturing domain that use an ontology in the scope of a DT than in the other domains (see Figure 7). A simple query of _"ontology" AND "digital twin"_ on scopus gives 141 results in _Engineering_ (the one that is most related with manufacturing and infrastructure among the subject areas on Scopus search results), 16 in _Energy_, 15 in _Business, Management and Accounting_ and 3 in _Medicine_. When compared with the mentioned SLRs on different aspects of DTs, usages of ontologies in DTs across domains are proportionate to the number of articles published on DTs in general. ### Knowledge Graphs 17 out of 82 papers utilized knowledge graphs. 
DTs, by nature, are closed systems with a limited number of components, where each component is somehow in interaction with other, mostly neighbouring, components. Knowledge graphs can reflect these interactions semantically. As presented in Section 6.5, knowledge graphs are actively used as a data store for both metadata about the DT parts and the current state of each part based on sensor data. Knowledge graphs are then queried to extract system parts with certain patterns or simply to get the latest system or component state. Therefore, knowledge graphs are also frequently used together with other decision support and reasoning algorithms. However, knowledge graphs have so far been used only as a metadata or state store, rather than as part of a reasoning process, e.g., guiding a reasoning algorithm based on the extracted knowledge. We expect knowledge graphs to be more involved in future DT implementations, not only as data storage but also actively as a part of reasoning algorithms.

### Multiple ontologies used in a single layer

In Section 6.3, we showed that some ontologies include concepts that belong to multiple layers in the DT architecture. However, in some cases multiple ontologies are also used in a single layer, especially in the case of relatively bigger DT systems. An example would be the smart city use case, where multiple ontologies are used together to semantically represent an entire city or even a country [9]. In these cases, ontologies are integrated either by matching some of the common terms (or creating common terms) and using namespacing, or by utilizing a comprehensive enough top-level ontology [76; 53; 77]. Besides the semantic aspect of integrating ontologies, there is also the technical aspect of how and where to store and retrieve ontologies or knowledge graphs built using ontologies. Apache Jena7 is one of the tools that is used in linked data applications, and also in some of the papers reviewed [78; 50], to parse ontologies and also to store them in RDF stores such as TDB8, an RDF storage. Besides triple stores, property graphs such as Neo4j9 are also among the popular choices to store ontologies and knowledge graphs [55].

Footnote 7: [https://jena.apache.org/](https://jena.apache.org/) Footnote 8: [https://jena.apache.org/documentation/ddb/index.html](https://jena.apache.org/documentation/ddb/index.html) Footnote 9: [https://neo4j.com/](https://neo4j.com/)

### Distributed Digital Twins

A research topic that has only recently started to be studied is having multiple DTs co-exist in the same context. Poudel et al. [67] created a framework with a pool of DTs for various manufacturing devices, where a decision maker unit tries to optimize the configurations of the DTs. Although we did not encounter it while performing this SLR, a federated learning approach also seems promising for distributed digital twins. An example in manufacturing would be to have multiple of the same or similar machines that perform a similar task, each optimizing its own configuration while running. Each machine then shares the learned parameters with the DTs of other machines. In this way, there would be no need for a central decision maker unit; instead, each DT has its own decision maker which can evaluate the learned parameters from other DTs. Semantic technologies such as ontologies and knowledge graphs have also not yet been studied in the case of distributed digital twins.
A possible research direction would be to investigate the technical and semantic integration of ontologies and knowledge graphs in the case of multiple co-existing DTs. What are the pros and cons of having a single knowledge graph or many knowledge graphs for the same type of PT or for different types of PTs?

### Knowledge engineering and ontology re-use

64 out of 82 of the reviewed articles proposed a new ontology. Only 19 of the 64 articles either re-used some concepts from existing ontologies or matched the newly proposed ontology to top-level ontologies. More than half of the articles did not mention creating a source file for the ontology and sharing it openly. This shows that the common problem of re-using ontologies in the Semantic Web also exists for ontologies in DTs. One reason that we think could be DT-specific is that many ontologies are created for, or based on, a specific task or a specific aspect to be improved or optimized (e.g., an ontology for the energy usage of a particular industrial machine) and therefore cannot be generalized to a domain. Matching newly proposed ontologies to top-level ontologies and sharing the source files openly in open-access ontology databases can help alleviate the issue.

## 8 Conclusions

Digital Twins are becoming increasingly popular across many domains, as research shows clear benefits in monitoring, decision support and reasoning tasks, among others. Semantic technologies are also being incorporated into DTs for better knowledge representation and to facilitate reasoning. Such digital twins are often called Cognitive Twins (CT). This SLR includes an analysis of 82 scientific papers that use an ontology in the scope of a DT. Its key findings are:

* Ontologies are mostly used to represent concepts in the physical layer, which is interpreted as the physical layer being more tied to the respective domain.
* 30 out of 82 reviewed articles have "domain-specific" ontologies, which describe concepts from 1 layer only, while 52 articles have "application/task oriented" ontologies where concepts stem from multiple layers.
* Both DT and CT implementations and advancements are led by, and often limited to, the Manufacturing and Infrastructure domains.
* Ontology re-usability issues in the Semantic Web persist for DTs, as more than half of the reviewed articles did not re-use an existing ontology or match their proposed ontology to a top-level ontology.
* Knowledge graphs are becoming increasingly popular in DTs, due to their expressiveness of semantic relations and fast query capabilities.

It has been only a couple of years since semantic technologies started to be used in the scope of DTs. We believe that the capabilities offered by ontologies and knowledge graphs have yet to be fully leveraged by DTs. Below are some of the promising future research directions based on this systematic literature review:

* _Integration of ontologies into DTs._ This SLR does not cover in detail the manner in which ontologies are integrated into DT knowledge bases, both semantically and technically. Analysing the DT-specific requirements of the integration process will help researchers and practitioners to employ ontologies faster.
* _Widespread adoption of ontologies in DTs across domains._ This SLR showed that cognitive twins are so far adopted mainly in the Manufacturing and Infrastructure domains. However, we believe that cognitive twins can bring enormous value to other domains where twinning technology is applied.
* _Knowledge graph as a state graph._ Knowledge graphs are mostly used for storing metadata about DT components. However they can also be used as a state graph when combined with aggregated sensor data. This can help reducing further data processing time and can facilitate reasoning process. * _Knowledge graphs as part of reasoning process._ Besides being used as a data store, we believe that knowledge graphs in DTs can also bring great value to reasoning processes. They have the potential to guide reasoning algorithms, e.g., to decide where in the system should the reasoning be performed. We hope that this SLR can help researchers and practitioners to understand how ontologies are currently being used in digital twins and what are some of the future research directions. ## Acknowledgments This work has received support from The Dutch Research Council (NWO), in the scope of Digital Twin for Evolutionary Changes in water networks (DiTEC) project, file number 19454. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline Reference & Year & Proposed ontology & Re-used ontology & Domain & Architecture Layer & KG \\ \hline \hline [50] & 2017 & Ontology of workshop manufacturing system & - & Manufacturing & DT & No \\ \hline [54] & 2017 & Ontology for manufacturing & - & Manufacturing & Physical, DT, Organizational & Yes \\ \hline [79] & 2018 & Extension to IoT-Lite ontology & - & IoT & Physical, DT & No \\ \hline [55] & 2019 & - & Rosetta[80], OntoBREP CAD & Manufacturing & Physical & No \\ [81] ontologies & & & & & \\ \hline [82] & 2019 & OPC UA Ontology & OntoBREP Ontology[81] & CAD & Manufacturing & Physical & No \\ \hline [83] & 2019 & Manufacturing, Learning and Pedagogy Ontology & - & Manufacturing & Physical, Application & No \\ \hline [62] & 2019 & Home automation ontology & - & Smart Home & Physical, Communication, DT & No \\ \hline [84] & 2020 & Machine parts ontology for manufacturing & - & Manufacturing & Physical & No \\ \hline [85] & 2020 & Digital twin ontology & - & Digital Twin & DT, Organizational & No \\ \hline [86] & 2020 & Mechanical testing ontology & - & Materials Science & Physical, Application & No \\ \hline [52] & 2020 & CNC Machine ontology & - & Manufacturing & Physical & No \\ \hline [53] & 2020 & Upper level city ontology for DT modeling & - & Smart City & Physical, DT & No \\ \hline [14] & 2020 & A DT ontology & SSN and SOSA[30] & Digital Twin & Physical, DT, Application & No \\ \hline [10] & 2020 & Plant DT ontology & - & Smart Farming & Physical, DT, Application & No \\ \hline [60] & 2020 & - & ifcOWL (OWL for Industry Foundation Classes)[87], SSN, SOSA, BOT (Building Ontology Topology)[88] & Building Management & Physical, Communication, Application & No \\ \hline [56] & 2020 & Ontology of plant DT & - & Smart Farming & Physical & No \\ \hline [89] & 2020 & Manufacturing product ontology & - & Manufacturing & Physical & No \\ \hline [90] & 2020 & Manufacturing ontology & Re-uses concepts from SSN[30] & Manufacturing & Physical, Application & No \\ \hline [63] & 2020 & DT ontology & - & Digital Twin & DT & No \\ \hline [91] & 2021 & An ontology for DT data management & - & Digital Twin & DT & No \\ \hline [92] & 2021 & Geometric information ontology & STEP-NC machine tool monolayer[93] & Manufacturing & Physical, Communication & No \\ \hline [94] & 2021 & Assembly workshop ontology & - & Manufacturing & Physical, Organizational & No \\ \hline [95] & 2021 & - & MASON[96], Brick[97] and BOT[88] & Manufacturing & Physical, DT & No \\ \hline \hline \end{tabular} 
\end{table} Table 2: List of reviewed papers with proposed/used ontology name, domain, corresponding architecture layer and KG utilization. \begin{table} \begin{tabular}{l l l l l l l} \hline Reference & Year & Proposed ontology & Re-used ontology & Domain & Architecture Layer & KG \\ \hline \hline [75] & 2021 & Governed IT Management & - & Governance / & Organizational & Yes \\ & & (GITM) Ontology & - & & Management & \\ \hline [98] & 2021 & Offsite Manufacturing Production Workflow Ontology & - & Manufacturing & Physical, Application, Organizational & No \\ \hline [99] & 2021 & Mechanical products ontology & - & Manufacturing & Physical & No \\ \hline [100] & 2021 & A Plant DT Ontology & - & Smart Farming & Physical, DT, Organizational & No \\ \hline [57] & 2021 & - & CORA[58], SSN[30] ontologies & Manufacturing & Physical & No \\ \hline [9] & 2021 & - & 10+ various domain ontologies & Digital Twin & Physical & Yes \\ \hline [101] & 2021 & A domain and a DT ontology for the energy domain & - & Energy & Physical, DT & No \\ \hline [102] & 2021 & Extension to COBie[103] and OntoProg ontologies & COBie and OntoProg ontologies & Public Infrastructure & Physical, DT, Organizational & No \\ \hline [105] & 2021 & A DT ontology for predicting lifecycle cost estimation in manufacturing & - & Manufacturing & DT, Application & No \\ \hline [106] & 2021 & An ontology for solar power plants to be used in DTs & - & Energy & Physical & No \\ \hline [107] & 2021 & Smart home DT ontology & - & Smart Home & Physical & No \\ \hline [78] & 2021 & CNC Machining Ontology & - & Manufacturing & Physical, DT & No \\ \hline [51] & 2021 & A DT ontology for complexity management & - & Digital Twin & DT, Organizational & Yes \\ \hline [108] & 2021 & Mechanical products ontology & - & Manufacturing & Physical, DT & No \\ \hline [109] & 2021 & Tech infrastructure management ontology & - & IT & DT, Organizational & No \\ \hline [110] & 2021 & A DT ontology extending author’s earlier work[14], SOSA and SSN[30] ontologies & Author’s previous ontology, SOSA and SSN ontologies & Manufacturing & Physical, Communication, DT & No \\ \hline [74] & 2021 & - & Smart farming ontology[111] & Smart Farming & DT & Yes \\ \hline [112] & 2021 & - & Uses BFO[113] as the top-level ontology and then 5 more ontologies that are specific to the use case described, see Table 2 in the paper. [ENDFOOTNOTE] & Digital Twin & Physical, DT & No \\ \hline [112] & 2021 & - & Uses BFO[113] as the top-level ontology and then 5 more ontologies that are specific to the use case described, see Table 2 in the paper. [ENDFOOTNOTE] & Digital Twin & Physical, DT & No \\ \hline [114] & 2021 & Inflammatory bowel disease ontology & Based on 3 existing medical ontologies[115][116][117] & Medicine & Physical, Application & Yes \\ \hline [118] & 2021 & An ontology for fisheries & Platys ontology[119] & Smart Fish- & Physical & No \\ \hline \end{tabular} \end{table} Table 2: List of reviewed papers with proposed/used ontology name, domain, corresponding architecture layer and KG utilization. 
(Continued) \begin{table} \begin{tabular}{l l l l l l l} \hline Reference & Year & Proposed ontology & Re-used ontology & Domain & Architecture Layer & KG \\ \hline \hline [120] & 2021 & - & Brick[97], PROPS[121] and & BOT[88], BEO[122] & Building Management & Physical, DT & Yes \\ & & & ontologies & & Management & \\ \hline [123] & 2021 & Industrial robot control ontology & - & Manufacturing & Physical, DT, Appli- & No \\ \hline [69] & 2021 & Production planning and demand forecasting ontology & BFO[113] & Manufacturing & DT, Application, Organizational & Yes \\ \hline [124] & 2021 & An ontology for co-simulation of complex engineered systems & - & Digital Twin & Physical, DT & No \\ \hline [32] & 2022 & Top-level DT ontology & BFO[113] & Digital Twin & Communication, DT & No \\ \hline [77] & 2022 & OntoLandUse, OntoCropMapGML and OntoCropEnergy & - & Digital Twin & Physical & Yes \\ \hline [125] & 2022 & Top-level ontology of mechanic systems & - & Manufacturing & DT & No \\ \hline [126] & 2022 & Cultural heritage ontology & - & Cultural & Physical & No \\ \hline [71] & 2022 & Internet of Construction On-Site Ontology (IoC-OSO) & DOCK, MASON[96], MaRCO[127], MSO(no citation found), and [128] & Construction & Physical, Organizational & No \\ \hline [129] & 2022 & - & Uses an existing materials ontology[130] & Materials Science & Physical, DT & Yes \\ \hline [131] & 2022 & Digital Twin Manufacturing Ontology (DTM-Onto) & - & Manufacturing & Physical, DT, Organizational & No \\ \hline [132] & 2022 & Railway DT Ontology & - & Public Infrastructure & Physical & Yes \\ \hline [133] & 2022 & Construction Programme \& Production Control Ontology & - & Construction & DT, Organizational & No \\ \hline [73] & 2022 & DT and non-DT system inter-operability ontology & - & Business & DT, Organizational & No \\ \hline [59] & 2022 & Tactile internet ontology for tactile devices & - & IT & Physical & No \\ \hline [134] & 2022 & An extended version of RealEstateCore ontology[135] & RealEstateCore & Building Management & Physical & No \\ \hline [136] & 2022 & SDTP crop ontology & Author’s previous work [100], 100] & Smart Farming & Physical & No \\ \hline [137] & 2022 & - & BIM, GIS and IoT ontologies (no specific ontology is cited) & Building Management & Physical, DT & No \\ \hline [67] & 2022 & A top-level manufacturing ontology & - & Manufacturing & Physical, DT, Application & No \\ \hline [138] & 2022 & An IoT device ontology & - & IoT & Communication, DT & No \\ \hline [139] & 2022 & - & Brick[97] & Building & Physical & No \\ \hline [140] & 2022 & - & MarCO[127] & Manufacturing & Organizational & No \\ \hline \end{tabular} \end{table} Table 2: List of reviewed papers with proposed/used ontology name, domain, corresponding architecture layer and KG utilization. (Continued) \begin{table} \begin{tabular}{l l l l l l l} \hline Reference & Year & Proposed ontology & Re-used ontology & Domain & Architecture Layer & KG \\ \hline \hline [141] & 2022 & - & \begin{tabular}{l} BFO[113], Common Core Ontology and IoT-Core[142] \\ \end{tabular} & Manufacturing & DT & Yes \\ \hline [143] & 2022 & \begin{tabular}{l} Clamping system ontology \\ \end{tabular} & - & Manufacturing & Physical, DT & No \\ \hline [65] & 2022 & Aircraft assembly system, manufacturing requirements and architecture model ontologies & IoF-Core[142], BFO[113]
2307.12082
Software Code Quality Measurement: Implications from Metric Distributions
Software code quality is a construct with three dimensions: maintainability, reliability, and functionality. Although many firms have incorporated code quality metrics in their operations, evaluating these metrics still lacks consistent standards. We categorized distinct metrics into two types: 1) monotonic metrics that consistently influence code quality; and 2) non-monotonic metrics that lack a consistent relationship with code quality. To consistently evaluate them, we proposed a distribution-based method to get metric scores. Our empirical analysis includes 36,460 high-quality open-source software (OSS) repositories and their raw metrics from SonarQube and CK. The evaluated scores demonstrate great explainability on software adoption. Our work contributes to the multi-dimensional construct of code quality and its metric measurements, which provides practical implications for consistent measurements on both monotonic and non-monotonic metrics.
Siyuan Jin, Mianmian Zhang, Yekai Guo, Yuejiang He, Ziyuan Li, Bichao Chen, Bing Zhu, Yong Xia
2023-07-22T13:55:42Z
http://arxiv.org/abs/2307.12082v4
# A Quantitative Analysis of Open Source Software Code Quality: Insights from Metric Distributions ###### Abstract Code quality is a construct in open-source software (OSS) with three dimensions: maintainability, reliability, and functionality. We identify 20 distinct metrics and categorize them into two types: 1) monotonic metrics that consistently influence code quality; and 2) non-monotonic metrics that lack a consistent relationship for evaluation. We propose a distribution-based method to evaluate both types, which demonstrates strong explanatory power for OSS adoption. Our empirical analysis includes more than 36,460 OSS repositories and their raw metrics from SonarQube1 and CK2. Our work contributes to the multi-dimensional construct of code quality and its metric measurements. Footnote 1: [https://www.sonarsource.com](https://www.sonarsource.com) Footnote 2: [https://github.com/mauricioaniche/ck](https://github.com/mauricioaniche/ck) Open source software, Code quality, Construct dimensions, Measurements ## 1 Introduction Code quality positively influences software adoption [1]. Precise code quality measurement can improve software products, increase user satisfaction, and save costs of IT systems [2], which influences the success and adoption of software [3]. Figure 1 shows that code quality is a multi-dimensional construct with three dimensions [4]: maintainability, reliability, and functionality. We identified 20 distinct metrics from previous literature to measure these dimensions. However, a gap remains in the literature on methodologies for evaluating them. We propose a distribution-based method that provides a score for each metric and shows strong explanatory power for OSS adoption. **RQ. 1: How to evaluate code quality metrics?** We divide metrics into monotonic metrics and non-monotonic metrics. Monotonic metrics consistently impact code quality, while non-monotonic metrics lack a consistent monotonic relationship for evaluation (Figure 2). We evaluate their scores by analyzing their probability distributions among high-star OSS. For monotonic metrics, we fit an exponential distribution and use the distance from 0 in their cumulative distribution functions (CDFs) as their scores. For non-monotonic metrics, we fit an asymmetric Gaussian distribution and use the distance away from the central point in their CDFs as their scores. Our evaluation method provides scores within the range of \(0\sim 100\) for each metric. Our empirical analysis covers 36,460 high-star repositories. The selection of high-star repositories provides a more critical evaluation because such repositories generally have fewer bugs, resulting in sharper distributions. **RQ. 2: How to use the evaluated scores to explain OSS adoption?** We investigate how well our code quality metric scores explain OSS GitHub stars. The number of GitHub stars reflects OSS quality and adoption [5]. With standard machine learning approaches, we use R2 and accuracy to assess their explanatory power. The results show that our code quality scores can explain the number of OSS stars well. Our methodology can be applied to different target variables, providing a flexible strategy for evaluating code quality in various contexts. This work has the following contributions. Prior literature has discussed diverse code quality metrics [6, 7, 8, 9]. We extend this work by dividing code quality metrics into two types and evaluating them with a novel distribution-based method. We evaluate code quality metric scores for over 36,460 GitHub OSS repositories. 
We then use our scores to explain the GitHub stars [10], generating insights into how code quality may influence the widespread adoption of OSS. Our study advances the understanding of code quality with two different types of code quality metrics and contributes to better quality control standards and practices. ## 2 Literature Review OSS enhance code quality [11, 12] and innovation [13, 14, 15]. Many firms utilize and contribute to OSS [16] and developers reuse OSS to lower their search cost [17], which requires high code quality. Many research studied performance evaluations for software. Inappropriate performance measurements have been identified as a major cause of IT systems failing [2, 18]. As new technologies and techniques emerge [19], more precise measurements of software quality and fit are needed. Our approach sets itself apart from past studies by taking into account the distribution of high-quality software and delivering accurate scores for each software. Code quality has dimensions [20]. The IEEE standard defines code quality as the collective features and characteristics of software that meet given needs [21]. Later on, user-friendliness and useful functionalities are included in the definition of code quality [10], echoing the three dimensions in the ISO/IEC 25010 standard [22]: maintainability, reliability, and functionality [23]. Similarly, other studies have similar dimensions: maintainability [24], readability [25], and functionality [26]. We define the construct and dimensions in Table 1. \begin{table} \begin{tabular}{l l l l} \hline \hline **Construct** & **Definition** & **Dimensions** & **Definition** \\ \hline \multirow{3}{*}{Code Quality} & How well-written the code is, & Maintainability & The code is easy to understand, enhance, or correct. [27] \\ & including maintainability, & Reliability & The code is user-friendly and stable. [10] \\ & reliability, and functionality. [10] & Functionality & The code has useful functions. [10] \\ \hline \hline \end{tabular} \end{table} Table 1: Construct Definition Figure 1: Multi-Dimensional Construct Code quality has metric measurements [9], including the size of components [7], code complexity [6, 8], and so on. However, most existing metric identifications have focused on monotonic areas rather than non-monotonic metrics, simplifying the analysis. Our paper considers both and proposes a uniform solution for them. Some measurements can reflect the level of code quality, such as the number of stars [5]. Although adoption and OSS activities are determined by many factors, such as commitment [28], transparency [29], and leader resources [30], OSS adoption can partially reflect OSS code quality. Lee et al. [10] highlight the impact of code quality on user satisfaction and adoption. Therefore, we use GitHub stars as a reflective measure of code quality. ## 3 Methodologies To quantitatively evaluate code quality metrics, we first map out the distribution of each metric in high-star OSS repositories and then score them according to their corresponding metric CDFs. Table 3 presents two different types of metrics distributions: monotonic and non-monotonic metrics. We fit exponential distributions to monotonic metrics, the probability distribution function (PDF) of which reads as: \[f_{1}(x;c,\lambda)=\begin{cases}0&\text{if }x\leq c\\ \lambda\exp\left[-\lambda(x-c)\right]&\text{if }x>c\end{cases} \tag{1}\] where \(\lambda\) and \(c\) are the fitting parameters. The corresponding score function based on the CDF of Eq. 
(1) reads as \[M_{1}(x;c,\lambda)=100\times\begin{cases}1&\text{if }x\leq c\\ \exp\left[-\lambda(x-c)\right]&\text{if }x>c\end{cases} \tag{2}\] Figure 2: Examples of Non-Monotonic Metric Distribution and Monotonic Metric Distribution The score falls into the range of \(0\sim 100\); it is maximal for \(x\leq c\) and decays exponentially for \(x>c\). The non-monotonic metrics follow an asymmetric Gaussian distribution (see the left of Fig. 2), the PDF of which reads as \[f_{2}(x;\mu,\sigma_{1},\sigma_{2})= \tag{3}\] \[\begin{cases}\frac{1}{\sqrt{2\pi}}\frac{2}{\sigma_{1}+\sigma_{2}} \exp\left(-\frac{(x-\mu)^{2}}{2\sigma_{1}^{2}}\right)&\text{if }0\leq x<\mu\\ \frac{1}{\sqrt{2\pi}}\frac{2}{\sigma_{1}+\sigma_{2}}\exp\left(-\frac{(x-\mu)^ {2}}{2\sigma_{2}^{2}}\right)&\text{if }x\geq\mu\end{cases}\] where \(\mu\), \(\sigma_{1}\), and \(\sigma_{2}\) are fitting parameters representing the peak position and the peak widths on each side of the peak, respectively. The corresponding score function is \[M_{2}(x;\mu,\sigma_{1},\sigma_{2})= \tag{4}\] \[100\times\begin{cases}1-\operatorname{erf}\left(\frac{\mu-x}{ \sigma_{1}\sqrt{2}}\right)&\text{if }0\leq x<\mu\\ 1-\operatorname{erf}\left(\frac{x-\mu}{\sigma_{2}\sqrt{2}}\right)&\text{if }x \geq\mu\end{cases}\] where the score falls into the range of \(0\sim 100\), peaks at \(\mu\), and decays according to the Z-score of the Gaussian function on each side. To obtain an overall score, we assign weights to the individual scores. The overall score for a given repository, denoted by \(k\), can be computed as follows: \[Q_{k}^{overall}=\sum_{i}\omega_{i}\cdot Q_{i,k}^{metric}\,,\text{ subject to:}\sum_{i}\omega_{i}=1. \tag{5}\] The weights \(\omega_{i}\) are derived from the importance values that supervised learning models assign to the metric scores when predicting a target variable such as repository stars. ## 4 Empirical Analysis ### Data Sources GitHub is the largest OSS management platform, hosting more than 39 million public repositories (as of June 2023). We selected a subset of repositories with Java, Python, JavaScript, and TypeScript as the main programming languages and sorted them by the number of stars. \begin{table} \begin{tabular}{c c c c} \hline \hline **Programming Language** & **Max Number of Stars** & **Min Number of Stars** & **Number of Filtered Repositories** \\ \hline Java & 50k & 100 & 1,645 \\ Python & 228k & 260 & 16,096 \\ JavaScript & 107k & 270 & 7,722 \\ TypeScript & 202k & 60 & 10,997 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistical Summary Figure 3: Workflow for Code Quality Scoring with GitHub Stars as the Target Variable We collected code from the top \(\sim 20,000\) repositories for each programming language. The number of GitHub stars is a measure of OSS adoption [5]. We removed non-engineering repositories by pattern matching, such as a guide for Java interviews in JavaScript. We used code scanners to obtain metrics. Scripting language repositories (Python, JavaScript, TypeScript) can be directly imported, while non-scripting Java repositories need to be compiled first. Compiling Java repositories is challenging due to their different JDK, Maven, or Gradle versions. Therefore, we only chose repositories with GitHub releases for compilation, which led to 36,460 repositories and over 600 million lines of code. Table 2 reports the statistics of the cloned repositories. ### Metrics Overview We used SonarQube and CK to extract metrics from OSS repositories. 
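Each extracted raw metric value is later mapped to a \(0\sim 100\) score via the score functions in Eqs. (2) and (4). As a purely illustrative reference (not the authors' implementation), these functions could be written in Python as follows; the parameter values in the example are placeholders, not the fitted values reported in Tables 4 and 5.

```python
import math

def score_monotonic(x, c, lam):
    """Eq. (2): full score up to the threshold c, exponential decay beyond it."""
    if x <= c:
        return 100.0
    return 100.0 * math.exp(-lam * (x - c))

def score_non_monotonic(x, mu, sigma1, sigma2):
    """Eq. (4): full score at the peak mu, Gaussian-tail decay on each side."""
    sigma = sigma1 if x < mu else sigma2
    return 100.0 * (1.0 - math.erf(abs(x - mu) / (sigma * math.sqrt(2.0))))

# Example with placeholder parameters (not the fitted values from the paper):
code_smells_norm = 0.05
cyclomatic_norm = 180.0
print(score_monotonic(code_smells_norm, c=0.02, lam=40.0))                        # approx. 30
print(score_non_monotonic(cyclomatic_norm, mu=160.0, sigma1=50.0, sigma2=50.0))   # approx. 69
```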
For Java repositories, we generated over 100 metrics and selected 20 based on the ISO/IEC 25010 international standard [22]. For scripting language repositories, we only extracted 12 metrics. Table 3 shows the 20 metrics with their corresponding ISO/IEC 25010 characteristics. We normalized metrics to ensure score fairness. Cyclomatic complexity, cognitive complexity, code smells, line to cover, and violations-related metrics are normalized by non-comment lines of code, duplicated lines are normalized by lines of code, and comment lines are normalized by the sum of non-comment lines and comment lines, to account for repository size. File complexity and duplicated files are normalized by the number of files, and duplicated blocks are normalized by the number of statements to adjust for differences across repositories. This normalization process results in a more unbiased score for the metrics across different OSS. \begin{table} \begin{tabular}{l l l} \hline \hline **Dimension** & **Metric** & **Definition** \\ \hline \multirow{8}{*}{Maintainability} & Cyclomatic Complexitya & Number of independent paths through code. \\ & File Complexityb & Cyclomatic complexity averaged by files. \\ & Cognitive Complexitya & Combination of cyclomatic complexity and human assessment. \\ & Code Smellsa & Number of code smell issues. \\ & Coupling Between Objects & Number of classes that are coupled to a particular class. \\ & Fan-in & Number of input dependencies a class has. \\ & Fan-out & Number of output dependencies a class has. \\ & Depth Inheritance Tree & Number of ”fathers” a class has. \\ & Number of Children & Number of immediate subclasses that a particular class has. \\ & Lack of Cohesion of Methods & Degree to which class methods are coupled. \\ & Tight Class Cohesion & Ratio of the number of pairs of directly related methods in a class to the maximum number of possible methods in the class. \\ & Loose Class Cohesion & Ratio of the number of directly or indirectly related method pairs in a class to the maximum number of possible method pairs. \\ \hline \multirow{4}{*}{Reliability} & Total Violationsa & Number of issues including all severity levels. \\ & Critical Violationsa & Number of issues of the critical severity. \\ & Info Violationsa & Number of issues of the info severity. \\ \hline \multirow{4}{*}{Functionality} & Line to Covera & Lines to be covered by unit tests. \\ & Comment Linesa & Number of comment lines. \\ & Duplicated Blocksc & Number of duplicated blocks of line. \\ & Duplicated Filesb & Number of files involved in duplicated blocks. \\ & Duplicated Linesa & Number of lines involved in duplicated blocks. \\ \hline \multicolumn{3}{l}{a Normalized by Non-comment Line of Codes.} \\ \multicolumn{3}{l}{b Normalized by Number of Functions.} \\ \multicolumn{3}{l}{c Normalized by Number of Statements.} \\ \end{tabular} \end{table} Table 3: Definition of 20 Metrics for Code Quality Evaluation ### Importance Weights We use standard machine-learning approaches to derive weights for different metric scores and calculate a repository's overall code quality score. Our model can explain OSS adoption (Github stars) using evaluated scores. Figure 3 illustrates the entire process from data collection to final scores. We use custom data filters to ensure genuine engineering repositories are retained. 
We extract code quality metrics using a metric scanner and generate metric scores using the distribution-based method in Section 3, with each programming language having its distribution for each metric. We implement a Gradient Boosting Classifier (GBC) model with 0-1 labels as dependent variables based on the number of GitHub stars. We label the top and bottom quintiles (20%) of the OSS repository stars as 1 and 0, respectively. The model generates importance values as weights for each metric. Finally, we obtain a weighted average code quality score according to Eq. (5). ``` Input: Training dataset \(\mathcal{D}=\{(\mathbf{m}_{i},c_{i})\}_{i=1}^{N}\), number of iterations \(T\) Output: Ensemble model \(F(\mathbf{m})\) Initialize model \(F_{0}(\mathbf{m})=0\); for\(t=1\) to \(T\)do Compute the negative gradient: \(r_{it}=-\frac{\partial L(c_{i},F(\mathbf{m}_{i}))}{\partial F(\mathbf{m}_{i})} \bigg{|}_{F(\mathbf{m})=F_{t-1}(\mathbf{m})}\); Fit a base learner \(h_{t}(\mathbf{m})\) to the negative gradient: \(h_{t}(\mathbf{m})=\arg\min_{h}\sum_{i=1}^{N}L(c_{i},F_{t-1}(\mathbf{m}_{i})+h (\mathbf{m}_{i}))\); Update the ensemble model: \(F_{t}(\mathbf{m})=F_{t-1}(\mathbf{m})+\eta h_{t}(\mathbf{m})\), where \(\eta\) is the learning rate; end for ``` **Algorithm 1**GBC in Our Context The GBC algorithm is presented in Algorithm 1, where each data point contains a metric score \(\mathbf{m}_{i}\) and its corresponding classification \(c_{i}\) according to its GitHub star. We divide the whole dataset into a training (\(\mathcal{D}\)) and a validation set by a ratio of 4:1. The GBC algorithm works with an ensemble model \(F_{0}(\mathbf{m})\) and we fine-tune it by fitting base learners \(h_{t}(\mathbf{m})\) to the loss function's negative gradient. The learning rate \(\eta\) determines the base learners' contribution, resulting in the final ensemble model \(F(\mathbf{m})\) providing the aggregate prediction. ## 5 Results ### Metric Distributions Table 4 and Table 5 present the fitted parameters for the asymmetric Gaussian [Eq. (3)] and Exponential [Eq. (1)] distributions, respectively. Java repositories have 8 more maintainability metrics describing cohesion and coupling in the codes, which are absent for other programming languages due to a lack of proper metric scanners. Monotonic metrics, such as 'Code Smells', exhibit an exponential distribution pattern, as represented in Fig. 2 and Table 5. 
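Parameters of the kind reported in Tables 4 and 5 can, in principle, be obtained by maximum-likelihood fits of the two distribution families to the observed metric values. The SciPy-based sketch below shows one way to do this; it is illustrative only and is not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.stats import expon
from scipy.optimize import minimize

def fit_exponential(values):
    """Fit the shifted exponential of Eq. (1); returns (c, lam)."""
    loc, scale = expon.fit(values)      # maximum likelihood: loc = c, scale = 1/lambda
    return loc, 1.0 / scale

def fit_asymmetric_gaussian(values, x0):
    """Fit the asymmetric Gaussian of Eq. (3) by minimizing the negative log-likelihood."""
    values = np.asarray(values, dtype=float)

    def nll(params):
        mu, s1, s2 = params
        if s1 <= 0 or s2 <= 0:
            return np.inf
        sigma = np.where(values < mu, s1, s2)
        log_pdf = (np.log(2.0) - 0.5 * np.log(2.0 * np.pi) - np.log(s1 + s2)
                   - (values - mu) ** 2 / (2.0 * sigma ** 2))
        return -np.sum(log_pdf)

    res = minimize(nll, x0=np.asarray(x0, dtype=float), method="Nelder-Mead")
    return res.x  # (mu, sigma1, sigma2)

# Example on synthetic data standing in for one metric observed across many repositories.
rng = np.random.default_rng(0)
synthetic = np.abs(rng.normal(loc=150.0, scale=50.0, size=5000))
print(fit_asymmetric_gaussian(synthetic, x0=(150.0, 40.0, 40.0)))
```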
This distribution aligns with our understanding that superior code quality is associated with fewer bugs, verifying the effectiveness of our method. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Metric** & Java(\(\boldsymbol{\mu}\),\(\boldsymbol{\sigma_{1}}\),\(\boldsymbol{\sigma_{2}}\)) & JavaScript(\(\boldsymbol{\mu}\),\(\boldsymbol{\sigma_{1}}\),\(\boldsymbol{\sigma_{2}}\)) & Python(\(\boldsymbol{\mu}\),\(\boldsymbol{\sigma_{1}}\),\(\boldsymbol{\sigma_{2}}\)) & TypeScript(\(\boldsymbol{\mu}\),\(\boldsymbol{\sigma_{1}}\),\(\boldsymbol{\sigma_{2}}\)) \\ \hline Cyclomatic Complexity & (155.228,50.947,40.902) & (166.692,84.415,78.289) & (162.321,53.497,52.789) & (127.273,51.616,66.733) \\ Cognitive Complexity & (50.870,40.120,75.664) & (33.238,32.586,121.541) & (170.042,33.546,000) & (29.619,22.964,81.617) \\ Comment Lines & (15.841,11.451,137.269) & (0.007,6.575,96.312) & (91.730,64.805,148.192) & (0.002,9.300,72.443) \\ Fan-in & (1.101,0.463,1.217) & / & / & / \\ Fan-out & (5.181,2.043,4.639) & / & / & / \\ Loose Class Cohesion & (0.329,0.149,0.176) & / & / & / \\ Tight Class Cohesion & (0.228,0.100,0.128) & / & / & / \\ Coupling Between Objects & (7.055,2.580,5.086) & / & / & / \\ \hline \hline \end{tabular} \end{table} Table 4: Parameters of the Fitted Asymmetric Gaussian Distributions (\(\boldsymbol{\mu}\), \(\boldsymbol{\sigma_{1}}\), \(\boldsymbol{\sigma_{2}}\)) Furthermore, the threshold parameter \(c\) reflects the tolerance value for full scores. In the probability density function (Eq. (1)), \(c\) is close to 0 for most metrics, except for 'Code Smells', 'Depth Inheritance Tree', and 'Total Violations', where \(c\) approximates 1. The fitted exponential decay parameter, \(\lambda\), reflects the sensitivity of the score to the metric value. Particularly, a \(\lambda\lesssim 1\) is observed for metrics such as 'File Complexity', 'Depth Inheritance Tree', 'Number of Children', 'Duplicated Blocks', and 'Duplicated Files', which implies a low sensitivity to metric variations of the order of 1. Conversely, the \(\lambda\) value for 'Total Violations' is high, which reflects the high sensitivity to the number of violations. Non-monotonic metrics, such as 'Cyclomatic Complexity', follow an asymmetric Gaussian distribution. According to Eq. (4), repositories with metric values close to the Gaussian center get higher scores, since they fall into the range where high-quality OSS are mostly located. In Table 4, the Gaussian centers \(\mu\) are large (\(\gg 1\)) for the 'Cyclomatic Complexity', 'Cognitive Complexity', and 'Comment Lines' metrics in most cases, except for 'Comment Lines' in the JavaScript and TypeScript languages. The latter two distributions are almost monotonic (\(\mu\approx 0\)), potentially because these two languages are generally easy to understand and do not require as many comment lines. The fitted widths \(\sigma_{1,2}\) are large and have asymmetric sensitivity; i.e., relatively long tails are observed on the right of the asymmetric Gaussian distributions. For 'Comment Lines' in Python, increasing the number of comment lines before the center point has high sensitivity, while the score becomes less sensitive after the center point. ### Importance Weights Table 6 shows the feature importance values from the GBC model in Section 4, which we use as metric weights in Eq. (5) within the three dimensions: maintainability, reliability, and functionality. The relative importance values are listed in Table 6. We normalized the importance values for each dimension to get relative weights within dimensions. 
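A schematic sketch of this weighting step is shown below. It assumes scikit-learn's GradientBoostingClassifier as the GBC implementation (the paper does not name a specific library) and reuses its normalized feature importances as the weights \(\omega_{i}\) of Eq. (5), with labels taken from the top and bottom star quintiles as described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def derive_weights_and_overall(scores, stars):
    """scores: (n_repos, n_metrics) matrix of 0-100 metric scores; stars: GitHub star counts."""
    lo, hi = np.quantile(stars, [0.2, 0.8])
    mask = (stars <= lo) | (stars >= hi)      # keep only the top and bottom quintiles
    X, y = scores[mask], (stars[mask] >= hi).astype(int)

    gbc = GradientBoostingClassifier(random_state=0).fit(X, y)
    weights = gbc.feature_importances_
    weights = weights / weights.sum()          # enforce sum(w_i) = 1 as in Eq. (5)

    overall = scores @ weights                 # Q_k^overall for every repository
    return weights, overall

# Toy example with random data standing in for real metric scores and star counts.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 100, size=(500, 12))
stars = rng.integers(60, 50_000, size=500)
w, overall = derive_weights_and_overall(scores, stars)
print(w.round(3), overall[:5].round(1))
```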
In the maintainability dimension, 'File Complexity' has the largest weight across four programming languages, followed by 'Cognitive Complexity' 'Cyclomatic Complexity', and 'Code Smells'. These metrics contribute more to the maintainability scores. For Java repositories, all the coupling and cohesion metrics show similar contributions \(\lesssim 0.1\), reflecting their weak contribution to OSS adoption. In the reliability dimension, 'Total Violations' contributes mostly to Java, while 'Critical Violations' contributes mostly to the other three languages, which suggests varying priorities of solving violations for different languages. In the functionality dimension, the 'Comment Lines' metric contributes more to Java, potentially because Java is less intuitive to understand, which requires code comments for better understanding. The 'Comment Lines' metric also contributes significantly to the other three scripting languages. We note that zero 'Line to Cover' metric values were \begin{table} \begin{tabular}{c c c c c} \hline \hline **Metric** & \(\text{Java}(\mathbf{c},\mathbf{\lambda})\) & \(\text{JavaScript}(\mathbf{c},\mathbf{\lambda})\) & \(\text{Python}(\mathbf{c},\mathbf{\lambda})\) & \(\text{TypeScript}(\mathbf{c},\mathbf{\lambda})\) \\ \hline File Complexity & (0,0.485) & (0,0.884) & (0,0.917) & (0,0.492) \\ Code Smells & (1.123,50.731) & (0.036,60.260) & (0.004,37.177) & (0.017,16.530) \\ Depth Inheritance Tree & (1.003,0.502) & / & / & / \\ Number of Children & (0.002,0.137) & / & / & / \\ Lack of Cohesion of Methods & (0.053,80.004) & / & / & / \\ Total Violations & (1.160,54.376) & (0.054,63.313) & (0.004,387.551177) & (0.021,18.168) \\ Critical Violations & (0.019,9.872) & (0.020,48.811) & (0.007,9.443) & (0.005,5.497) \\ Info Violations & (0.019,19.34) & (0.001,1.436) & (0.002,1.401) & (0.003,1.535) \\ Line to Cover & (0,0,000) & (0,0,000) & (0,0.000) & (0,0.000) \\ Duplicated Blocks & (0,0.015) & (0.001,0.021) & (0,0.010) & (0,0.021) \\ Duplicated Files & (0.003,0.135) & (0.001,0.203) & (0,0.222) & (0,0.116) \\ Duplicated Lines & (0.439,63.284) & (0.145,163.258) & (0.081,124.342) & (0.085, 102.796) \\ \hline \hline \end{tabular} \end{table} Table 5: Parameters of the Fitted Exponential Distributions (\(\mathbf{c},\mathbf{\lambda}\)) obtained in our raw data, either caused by problems in obtaining this metric or because codes in OSS repositories are rarely tested. This gap can be closed when applying our methodology in specific companies where values of 'Line to Cover' are obtained for their close-source repositories. ### Scores After obtaining metric distributions, we score those metrics of each OSS repository based on their respective distribution locations. We present the overall scores of included OSS repositories in Fig. 4 and assess the explanatory power of our metric scores on the OSS stars using Table 7. We observe that Java code metric scores show higher explanatory power for the OSS repository's stars compared to the other languages, which suggests that code quality can better determine the success of Java-based OSS repositories in terms of stars received, which may be attributed to the greater availability of metrics for Java or the nature of repositories developed using Java for large-scale platforms and systems. 
In contrast, JavaScript, Python, and TypeScript exhibit relatively lower explanatory power of metric scores, indicating their code quality might be less critical in determining their OSS adoption, possibly because of their primary use in data analytics or other domains where their adoption is less influenced by code quality. \begin{table} \begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Dimension**} & \multirow{2}{*}{**Metric**} & \multicolumn{4}{c}{**Importance**} \\ & & **Java** & **JavaScript** & **Python** & **TypeScript** \\ \hline \multirow{8}{*}{Maintainability} & Cyclomatic Complexity & 0.110 (0.083) & 0.190 (0.082) & 0.250 (0.120) & 0.223 (0.081) \\ & File Complexity & 0.220 (0.165) & 0.396 (0.171) & 0.449 (0.215) & 0.402 (0.146) \\ & Cognitive Complexity & 0.086 (0.065) & 0.289 (0.125) & 0.119 (0.057) & 0.215 (0.078) \\ & Code Smells & 0.066 (0.049) & 0.125 (0.054) & 0.182 (0.087) & 0.160 (0.058) \\ & Coupling Between Objects & 0.096 (0.072) & / & / & / \\ & Fan-in & 0.108 (0.081) & / & / & / \\ & Fan-out & 0.057 (0.043) & / & / & / \\ & Depth Inheritance Tree & 0.075 (0.057) & / & / & / \\ & Number of Children & 0.026 (0.020) & / & / & / \\ & Lack of Cohesion of Methods & 0.078 (0.058) & / & / & / \\ & Tight Class Cohesion & 0.010 (0.008) & / & / & / \\ & Loose Class Cohesion & 0.068 (0.051) & / & / & / \\ & **Sum** & 1 (0.752) & 1 (0.432) & 1 (0.479) & 1 (0.363) \\ \hline \multirow{4}{*}{Reliability} & Total Violations & 0.474 (0.056) & 0.288 (0.070) & 0.293 (0.068) & 0.228 (0.065) \\ & Critical Violations & 0.272 (0.032) & 0.420 (0.102) & 0.410 (0.095) & 0.414(0.118) \\ & Info Violations & 0.254 (0.030) & 0.292 (0.071) & 0.297 (0.069) & 0.358 (0.102) \\ & **Sum** & 1 (0.118) & 1 (0.243) & 1 (0.232) & 1 (0.285) \\ \hline \multirow{8}{*}{Functionality} & Line to Cover & 0.000 (0.000) & 0.000 (0.000) & 0.000 (0.000) & 0.000 (0.000) \\ & Comment Lines & 0.454 (0.059) & 0.317 (0.103) & 0.370 (0.107) & 0.318 (0.112) \\ \cline{1-1} & Duplicated Blocks & 0.162 (0.021) & 0.286 (0.093) & 0.197 (0.057) & 0.148 (0.052) \\ \cline{1-1} & Duplicated Files & 0.190 (0.025) & 0.120 (0.039) & 0.166 (0.048) & 0.179 (0.063) \\ \cline{1-1} & Duplicated Lines & 0.194 (0.025) & 0.277 (0.090) & 0.267 (0.077) & 0.355 (0.125) \\ \cline{1-1} & **Sum** & 1 (0.130) & 1 (0.325) & 1 (0.289) & 1 (0.352) \\ \hline \hline \end{tabular} The parenthesis values are original importance values, while the values outside parenthesis are normalized in the dimension level. \end{table} Table 6: Importance Values for Metric Scores ## 6 Conclusion Our research focuses on code quality with three dimensions: maintainability, reliability, and functionality. We evaluate metrics based on their distributions. Our study advances the understanding of code quality and contributes to better quality control standards and practices, ultimately supporting the success and sustainability of software. Although our study provides valuable insights, it has some limitations that need to be acknowledged. We have not yet systematically validated the effectiveness of the method. Moving forward, it would be beneficial to incorporate validation techniques, such as sensitivity tests, to ensure the accuracy and reliability of the distribution fitting. Additionally, the parameters of the fitted distribution are sensitive to data distribution, making it necessary to incorporate more data for determining them. ## Acknowledgment Y. Xia is partly supported by the "Pioneering Innovator" award from the Guangzhou Tianhe District government. Z. 
Li is partly supported by the Guangdong Basic and Applied Basic Research Foundation (2021A1515012039). We would like to acknowledge useful discussions and support from Mianmian Zhang and other colleagues at the HSBC Lab. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Language** & **Java** & **JavaScript** & **Python** & **TypeScript** \\ \hline Accuracy & 0.947 & 0.826 & 0.808 & 0.817 \\ Precision & 0.971 & 0.838 & 0.831 & 0.834 \\ Recall & 0.917 & 0.803 & 0.771 & 0.784 \\ F1 & 0.943 & 0.820 & 0.800 & 0.808 \\ AUC\_ROC & 0.946 & 0.826 & 0.815 & 0.817 \\ R2 & 0.787 & 0.274 & 0.186 & 0.247 \\ \hline \hline \end{tabular} \end{table} Table 7: Metric Scores Explanatory Power Figure 4: Overall Scores for Four Languages
2303.00171
DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction
Personal Digital Assistants (PDAs) - such as Siri, Alexa and Google Assistant, to name a few - play an increasingly important role to access information and complete tasks spanning multiple domains, and by diverse groups of users. A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner, and play a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, inclusive TTS is important to recognize and pronounce correctly text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multi-lingual setting still has a large room for improvement. Existing approaches to correct named entity (NE) mispronunciations, like retraining Grapheme-to-Phoneme (G2P) models, or maintaining a TTS pronunciation dictionary, require expensive annotation of the ground truth pronunciation, which is also time consuming. In this work, we present a highly-precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. In addition, we also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW) with triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset, and a corpus of NE pronunciations of an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows our proposed approach improves pronunciation accuracy on average by ~6% compared to strong phoneme-based and audio-based baselines.
Raviteja Anantha, Kriti Bhasin, Daniela de la Parra Aguilar, Prabal Vashisht, Becci Williamson, Srinivas Chappidi
2023-03-01T01:53:11Z
http://arxiv.org/abs/2303.00171v1
# DTW-SiameseNet: Dynamic Time Warped Siamese Network for ###### Abstract Personal Digital Assistants (PDAs) -- such as Siri, Alexa and Google Assistant, to name a few -- play an increasingly important role to access information and complete tasks spanning multiple domains, and by diverse groups of users. A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner, and play a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, inclusive TTS is important to recognize and pronounce correctly text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multi-lingual setting still has a large room for improvement. Existing approaches to correct named entity (NE) mispronunciations, like retraining Grapheme-to-Phoneme (G2P) models, or maintaining a TTS pronunciation dictionary, require expensive annotation of the ground truth pronunciation, which is also time consuming. In this work, we present a highly-precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. In addition, we also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW) with triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset, and a corpus of NE pronunciations of an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows our proposed approach improves pronunciation accuracy on average by \(\approx 6\%\) compared to strong phoneme-based and audio-based baselines. Raviteja Anantha, Kriti Bhasin\({}^{*}\), Daniela de la Parra Aguilar\({}^{*}\), Prabal Vashisht, Becci Williamson, Srinivas Chappidi Apple * Equal contribution. **Index Terms**: Personal Digital Assistants, Text-to-Speech, Metric Learning, Mispronunciation Detection ## 1 Introduction TTS is an important component in Personal Digital Assistants (PDAs). With the rapid adoption of smart eco-systems and an increase in voice-based applications, PDAs are becoming more common helping users complete tasks. The role of TTS is critical when the interactions involve people with visual impairment or other disabilities. With recent advances in speech synthesis [1, 2, 3], current TTS systems can produce expressive and natural sounding voice close to human speech. However, there is significant room for improvement on the multilingual, inclusiveness and personalization aspects. In the digital ecosystems, where it is common to have diverse group of users, multilingual TTS is critical to make users feel acknowledged; and the named entity (NE) pronunciations are particularly important. In this work we address TTS entity mispronunciations, which can occur because of either: * The NE being a homograph, e.g _David_, which can be pronounced \(/\text{det}\,\text{vid}/\) for English NEs, or \(/\text{da}\,\text{.}\,\text{bid}/\) for Spanish NEs. 
* The NE has a pronunciation that is difficult to derive from the orthography, and it must still be learned by the TTS system, e.g the Italian name _Palatucci_ which is pronounced \(/\text{pa.la.}^{\text{ta}}\text{tuf}\,\text{i}/\), but can easily be mispronounced by TTS as \(/\text{pa.la.}^{\text{tuk.si}}/\) if, e.g using a G2P model predominantly trained on Spanish data. TTS personalization can address the former problem, whereas global TTS pronunciation correction is preferable to address the latter. Prior works, which address multilingual and user-specific intonation aspects [4, 5], require locale-specific models and incur high maintenance cost, especially when working with multiple locales. We present a locale-agnostic, PDA-compatible, two-stage framework for TTS mispronunciation detection and correction. In the first stage, TTS mispronunciations are detected using a two-step process. First the pronunciation dissimilarity between the user's pronunciation and the TTS pronunciation is computed; second, the dissimilarity score is checked against a threshold to determine if a mispronunciation occurred. The threshold is derived from human labeling to meet the desired precision and recall metrics. In the second stage, the mispronunciation will be qualified for correction (personalization or global learning) using user-engagement signals, such as task completion to ensure precise entity selection, in a privacy-preserving manner. Although we address the problem of TTS mispronunciation, it should be trivial to employ the same framework for correcting ASR NE misrecognitions. Our contributions can be summarized as: * We propose a highly-precise, locale-agnostic framework for TTS mispronunciation detection and correction by using the correlation between a TTS mispronunciation and the pronunciation dissimilarity of user and TTS pronunciations. * We present an empirical comparison of phoneme-based algorithms and models along with acoustic models using both intrinsic and extrinsic metrics. * And finally, we introduce a novel mispronunciation detection model called DTW-SiameseNet, which is trained using a metric learning paradigm and learns the distance function via triplet loss to perform Dynamic Time Warping (DTW). ## 2 Related Work Our work is an intersection of three areas: phoneme representation, pronunciation learning, and metric learning. ### Phoneme Representation The task of learning phoneme representations to capture pronunciation similarities is well studied for various downstream applications. A few works have explored the use of phoneme embeddings to perform phonological analogies [6], while others have investigated using embeddings for grapheme-to-phoneme conversion [7]. Improvements in contextual end-to-end Automatic Speech Recognition (ASR) were also realized by using phoneme representations [8]. In this work we apply phoneme embeddings for the task of mispronunciation detection. Recent works show using ASR phoneme embeddings improves mispronunciation detection accuracy [9, 10]. However, these works use a single phoneme representation (e.g. IPA -- International Phonetic Alphabet), whereas in practice PDAs may use component/task-specific phoneme notation. In our setting, we use two separate phonetic representations, one for ASR and one for TTS. Our goal is to learn dense phoneme representations which capture phonetic similarity within the same phoneme space as well as the relationship between the two different phoneme spaces. 
### Pronunciation Learning To learn a correct pronunciation, the first step is to detect a mispronunciation. Prior works [11, 12] on mispronunciation detection require a canonical transcription and employ Goodness of Pronunciation (GOP) [13], or classifier based methods. A phonological feature-based active-learning method for mispronunciation detection, which estimates phoneme state probabilities using hidden markov models (HMMs) was shown to outperform GOP based methods [14], but this still requires locale-specific training and is not feasible for a multilingual setting. A comparison-based approach [15] for mispronunciation detection was explored, where two speaker Dynamic Time Warping (DTW) is carried out between student (non-native speaker) and teacher (native speaker) utterances. Unlike this approach where a database of teacher utterances are required and a static distance measure (DTW) is employed, we use a metric-based learning framework, where user and TTS pronunciations are compared using a learned distance function. ### Metric Learning Metric Learning aims to establish similarity (or dissimilarity) between samples while using an optimal distance metric for learning tasks. Most of the existing metric learning methods rely on learning a Mahalanobis distance [16]. The use of a learned distance function in DTW to compare multivariate time series is shown to improve both precision and robustness [17]. In this work, we adopt a similar strategy to learn a Mahalanobis distance function for audio comparison using DTW. Metric learning uses a linear projection, which limits its ability to learn non-linear characterisitics of the data, so we first apply a non-linear projection using a Siamese architecture and then apply metric learning. To the best of our knowledge, we are the first to use metric learning for mispronunciation detection and correction. ## 3 Methods We introduce a new framework for the task of TTS mispronunciation detection and correction. We propose using the correlation between TTS mispronuncing a NE, and the pronunciation dissimilarity of the user and TTS pronunciations for the same NE exceeding a set threshold. This framework requires us to define a distance function that computes the pronunciation dissimilarity. Once a distance function is obtained, the threshold that correlates with mispronunciation detection with desired precision and recall can be empirically chosen through human labeling. Once a mispronunciation is detected, the TTS entity (e.g. contact name) pronunciation is updated for that specific user, not all users, using the user's pronunciation. An overview of the proposed mispronunciation detection and correction framework is shown in Figure 1. Our experiments for mispronunciation detection can be broadly classified as phoneme-based and audio-based approaches. Pronunciation correction is carried out post mispronunciation detection by using user engagement signals in a privacy-preserving manner. We describe our mispronunciation detection and correction methods below. ### Phoneme-based Mispronunciation Detection In this section, we elaborate on various methods we evaluated on the TTS mispronunciation detection task where phonemes are used as input. #### 3.1.1 Proposed Baseline: P2P Comparison Algorithm We present a simple, yet strong baseline called the P2P (Phoneme-to-Phoneme) Comparison algorithm. In this algorithm, we use: * The user interactions on the device to extract the ASR phonemes. 
* The text of the NE as an input to the TTS model to generate the default TTS phonemes. * The edit distance between the ASR phonemes and TTS phonemes using the Levenshtein distance metric. * Human-labeled data to empirically determine the edit distance threshold based on the desired precision and recall. If the edit distance is greater than the threshold, the algorithm determines that there is a TTS mispronunciation. Once a mispronunciation is detected, we use engagement signals to determine if the pronunciation can be updated with high confidence. #### 3.1.2 Phoneme Embeddings In our setting, ASR and TTS use separate phonesets. As a result, it is not possible to directly compare an ASR phoneme sequence (the representation of the user's pronunciation) with a TTS phoneme sequence (the representation of the TTS pronunciation). In addition, these phonesets are locale-specific, which further increases the number of phonesets. One simple approach is to use one-hot or multi-hot embeddings, but the resulting representations would be sparse, as they do not capture phonetic similarity. Our goals for phoneme embeddings are to: (1) obtain dense representations; (2) capture phonetic similarity within the same phoneme space; and (3) capture the relationship between the two different phoneme spaces. To accomplish these goals, we train a multi-phoneme sequence-to-sequence (seq2seq) model with multi-head attention [18] applied to both the encoder and the decoder. A unidirectional LSTM cell with an output dimension of 100 is used in both the encoder and the decoder. The encoder/decoder attention establishes the corresponding inter-relationship between the input phonemes and the target phonemes, whereas self-attention pays more attention to the intra-relationship of the phoneme pairs within a phoneme sequence. #### 3.1.3 GBDT We train a Gradient Boosted Decision Tree (GBDT) [19] classifier using phoneme embeddings as input. For the given user and TTS pronunciations, the phoneme embedding sequences are concatenated and used as input to train a GBDT model using XGBoost [20] with logistic loss. The annotations are binary labels, where 0 indicates that both pronunciations are the same, and 1 otherwise. #### 3.1.4 MobileBERT We evaluate the MobileBERT [21] architecture, a compressed and optimized version of BERT for resource-limited settings, such as running on mobile devices, with phoneme embeddings as input. MobileBERT is a bidirectional Transformer based on the BERT model. We use the HuggingFace pretrained MobileBERT1 and conduct knowledge transfer using the multi-head attention from the multi-phoneme seq2seq model described in Section 3.1.2. Footnote 1: [https://huggingface.co/docs/transformers/model_doc/mobilebert](https://huggingface.co/docs/transformers/model_doc/mobilebert) ### Audio-based Mispronunciation Detection #### 3.2.1 Dynamic Time Warping Dynamic Time Warping (DTW) is an algorithm which can measure the divergence between two time series, in our case audio waveforms, with different phases and lengths. The idea is to compute an optimal warp path between two given waveforms. We use a specific implementation of DTW called FastDTW [22] as a baseline for audio input. #### 3.2.2 Siamese Network using Mel Spectrograms The Mel-frequency spectrogram is a low-level acoustic representation which is easily computed from time-domain waveforms. Mel spectrograms are also smoother than raw audio waveforms, which makes them easier to use as features to train a model with a variety of loss functions. 
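As an illustration of this feature-extraction step, the snippet below computes a log-Mel spectrogram from a waveform with librosa. The paper describes an STFT followed by a Mel-scale transform but does not name a specific library, and the parameter values here are common defaults rather than the authors' settings.

```python
import numpy as np
import librosa

def log_mel_spectrogram(waveform, sr=16000, n_fft=400, hop_length=160, n_mels=80):
    """STFT -> Mel filterbank -> log compression, returned as an (n_mels, frames) array."""
    mel = librosa.feature.melspectrogram(
        y=waveform, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels, power=2.0
    )
    return librosa.power_to_db(mel, ref=np.max)

# Example on a synthetic 1-second waveform standing in for an entity sub-waveform.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
waveform = 0.5 * np.sin(2 * np.pi * 220.0 * t).astype(np.float32)
spec = log_mel_spectrogram(waveform, sr=sr)
print(spec.shape)   # (80, number_of_frames)
```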
The sub-waveforms corresponding to the entity pronunciation are first extracted using ASR time-spans. We obtain Mel spectrograms of both the user and TTS entity pronunciations by applying a short-time Fourier transform (STFT) followed by a nonlinear transform of the frequency axis of the STFT. This representation on the Mel frequency scale emphasizes details in lower frequencies, which are critical to speech intelligibility. We use a Siamese neural network [23], which consists of twin networks whose parameters are tied. We use convolutional layers in the twin networks, which accept two Mel spectrograms as inputs and determine whether they are similar. We use 3 channels with filters of varying size and a fixed stride length of 1. We use ReLU as the activation function, together with max pooling. The outputs of the convolutional layers are flattened and concatenated before being passed to a sigmoid activation function. We use the Adam optimizer and cross-entropy loss to learn binary classification. #### 3.2.3 Proposed Method: DTW-SiameseNet We propose a novel mispronunciation detection model that employs metric learning with a Siamese architecture for DTW with a triplet loss. We use the Mahalanobis distance as our metric; non-Mahalanobis metric learning has also been proposed, but it suffers from non-convexity or computational complexity [16]. Given two _d_-dimensional vectors \(x\) and \(y\), the squared Mahalanobis distance between the two vectors, parametrized by a symmetric Positive Definite (PD) matrix \(A\), is defined as: \[D_{A}(x,y)=(x-y)^{T}A(x-y). \tag{1}\] The positive definiteness of the matrix \(A\) guarantees that the distance function will return a positive distance. The Mahalanobis matrix \(A\) can be decomposed as: \[A=G^{T}G. \tag{2}\] This can be interpreted as \(G\) being distributed over the (_x_ - _y_) terms, i.e., as a linear transformation applied to the input. Our goal is to learn the PD matrix \(A\) based on some constraints on the distance function. We apply two constraints: * If two vectors are similar, then the distance metric D(.) is smaller than an upper bound \(u_{bound}\); and * If two vectors are dissimilar, then the distance metric D(.) is greater than a lower bound \(l_{bound}\). We combine the two constraints into a triplet constraint. Given three _d_-dimensional vectors \(x\), \(y\) and \(z\), where \(x\), \(y\) are similar and \(x\), \(z\) are dissimilar, we express the constraint as: \[D_{A}(x,y)-D_{A}(x,z)<-\rho, \tag{3}\] where \(0<\rho<l_{bound}-u_{bound}\). We apply a non-linear projection using the Siamese architecture to the inputs before the linear projection through \(A\). We use a unidirectional LSTM with an attention mechanism, \(f_{W}(.)\), for the twin networks, where the parameters \(W\) are tied. Figure 1: _An overview of TTS Mispronunciation Detection and Correction Framework on the Client_ For given inputs \(x\), \(y\) with a randomly drawn \(z\), the objective function is defined as \[l(A,W)=\rho+D_{A}(f_{W}(x),f_{W}(y))-D_{A}(f_{W}(x),f_{W}(z)). \tag{4}\] The overall loss is given as: \[L(A,W)=\sum_{t}l(A,W). \tag{5}\] We use SGD to update the parameters \(W\) and learn the Mahalanobis matrix \(A\), which together constitute the distance function. Since we need the updates to \(A\) to be gradual and stable, we add a regularization term. The LogDet divergence [24] has been shown to be optimal for regularizing the metric learning process and is invariant to linear group transformations. 
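Before turning to the regularized update, the following schematic PyTorch sketch illustrates the unregularized triplet objective of Eqs. (1)-(5). It is illustrative only: a small feed-forward encoder stands in for the LSTM-with-attention \(f_{W}\), positive (semi-)definiteness of \(A\) is kept implicitly through the factorization \(A=G^{T}G\) of Eq. (2), and the loss is a hinged (clamped-at-zero) variant of Eq. (4).

```python
import torch
import torch.nn as nn

class TripletMahalanobis(nn.Module):
    """Schematic version of the learned distance D_A(f_W(x), f_W(y))."""

    def __init__(self, in_dim, hid_dim=32, rho=1.0):
        super().__init__()
        # Stand-in for the LSTM-with-attention encoder f_W (parameters shared/"tied").
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, hid_dim))
        # G parameterizes A = G^T G, so A stays positive semi-definite by construction.
        self.G = nn.Parameter(torch.eye(hid_dim))
        self.rho = rho

    def distance(self, x, y):
        diff = self.encoder(x) - self.encoder(y)   # f_W(x) - f_W(y)
        proj = diff @ self.G.T                     # G (f_W(x) - f_W(y))
        return (proj ** 2).sum(dim=-1)             # Eq. (1) with A = G^T G

    def triplet_loss(self, anchor, positive, negative):
        # Hinge on Eq. (4): rho + D(anchor, positive) - D(anchor, negative).
        l = self.rho + self.distance(anchor, positive) - self.distance(anchor, negative)
        return torch.clamp(l, min=0.0).mean()

# One SGD step on random placeholder features (e.g. frame-level audio features).
model = TripletMahalanobis(in_dim=40)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
a, p, n = (torch.randn(8, 40) for _ in range(3))
loss = model.triplet_loss(a, p, n)
loss.backward()
opt.step()
print(float(loss))
```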
The LogDet divergence for \(A\) and \(A_{t}\) (\(A\) at time-step \(t\)) is given as: \[D_{ld}(A,A_{t})=\mathrm{tr}(AA_{t}^{-1})-\log(\det(AA_{t}^{-1}))-d. \tag{6}\] Applying the LogDet divergence, the metric learning update for \(A\) becomes \[A_{t+1}=\arg\min_{A>0}D_{ld}(A,A_{t})+\eta_{t}l(A,W), \tag{7}\] where \(\eta_{t}>0\) is a regularization parameter that balances the LogDet regularization term \(D_{ld}(A,A_{t})\) against the loss \(l(A,W)\). Once the distance function is learned, we use it to compute the distance between the inputs using the traditional DTW algorithm as shown below, where we use a moving window of dimension \(d\) to choose input sub-sequences: \[D_{A}(i,j)=D_{A}(x^{i},y^{j})+\min\begin{cases}D_{A}(i-1,j-1)\\ D_{A}(i-1,j)\\ D_{A}(i,j-1).\end{cases} \tag{8}\] The main difference between the traditional DTW algorithm and DTW-SiameseNet lies in the fact that we learn the distance function \(D(\cdot)\), parameterized by \(A\) and \(W\), for the inputs \(x\) and \(y\). ### Pronunciation Correction Once the pronunciation dissimilarity score is computed, and if it meets the chosen threshold, we deem the TTS pronunciation a mispronunciation. We employ user engagement signals, such as task completion, to avoid incorrectly updating the pronunciation of an entity. For example, if the task was to call a person, prior to updating the contact pronunciation, we check whether the call was successful and the call duration was greater than a predetermined number of seconds. ## 4 Training Data We use two datasets: one real-world (phoneme-based) dataset and one human-generated (audio) NE pronunciation dataset, comprising data from 10 locales, to train and evaluate the phoneme-based and audio-based methods. ### Phoneme-based Dataset We curated a real-world dataset comprised of 50K randomized and anonymized user requests from 10 different locales, where each request contains a reference to an entity. This dataset is used to train and test the phoneme-based approaches described in Section 3.1. Each locale has 5K data points with ASR and TTS phoneme representations for the entity pronunciation, but no user audio. On average, 30% of entity names in each locale are non-native names and \(>\)20% are homographs. This dataset has mispronunciations in the range of 15% to 28%. ### Audio Dataset We created an anonymized audio dataset comprised of 30K audio requests using human annotators. Each locale has 1K unique entities with person, location and business names. Human participants are provided with prompts, such as "Directions to X" or "Call X", which are used to record the audio. Each entity gets audio from 3 different participants to capture variance across genders and age groups. Since we did not use locale-specific participants, this dataset contains 40% to 50% human mispronunciations. On average, 22% of the names are homographs, with 17% being non-native names. ## 5 Results Below we present both intrinsic and extrinsic metrics. Unless specified otherwise, metrics for the methods described in Sections 3.1 and 3.2 are computed using the data described in Sections 4.1 and 4.2, respectively. We compute pronunciation accuracy using both a percentage and a 3-point Likert scale, where in the latter 1 indicates that the correct entity pronunciation and the TTS pronunciation are different, 2 indicates partial similarity, and 3 indicates full similarity. We use a TTS system with an average pronunciation accuracy of 88%. ## 6 Conclusion In this paper, we presented a locale-agnostic framework for TTS mispronunciation detection and correction, which is compatible with PDAs.
In addition, we described a novel metric learning model for audio comparison called DTW-SiameseNet. We investigated and presented empirical comparison of various phoneme and audio based methods. \begin{table} \begin{tabular}{l l l l} \hline \hline **Data Type** & **Method** & **Precision** & **Recall** \\ \hline \multirow{3}{*}{Phoneme-based} & P2P & \(95.29(\pm 0.01)\) & \(72.87(\pm 0.01)\) \\ & GBDT & \(\textbf{95.78}(\pm 0.04)\) & \(\textbf{94.91}(\pm 0.02)\) \\ & MobileBERT & \(94.22(\pm 0.15)\) & \(92.36(\pm 0.19)\) \\ \hline \multirow{3}{*}{Audio-based} & DTW & \(62.5(\pm 0.01)\) & \(30.64(\pm 0.01)\) \\ & SiameseNet & \(94.77(\pm 0.27)\) & \(91.12(\pm 0.18)\) \\ \cline{1-1} & DTW-SiameseNet & \(\textbf{95.17}(\pm 0.12)\) & \(\textbf{93.87}(\pm 0.08)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Intrinsic Metrics average across the 10 locales: en-US, en-CA, en-GB, en-AU, en-IN, fr-FR, es-ES, es-MX, es-US, ja-JP. All the differences among methods are statistically significant. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Scale** & **en-US/CA/GB/AU** & **fr-FR** & **es-ES/MX/US** & **ja-JP** \\ \hline Percent & \(94.34\) & \(92.89\) & \(93.25\) & \(90.17\) \\ Likert (1-3) & \(2.83\) & \(2.79\) & \(2.80\) & \(2.61\) \\ \hline \hline \end{tabular} \end{table} Table 2: Pronunciation accuracy (extrinsic metric) across the 10 locales on audio-based dataset using DTW-SiameseNet.
2305.04099
Symbolic Regression on FPGAs for Fast Machine Learning Inference
The high-energy physics community is investigating the potential of deploying machine-learning-based solutions on Field-Programmable Gate Arrays (FPGAs) to enhance physics sensitivity while still meeting data processing time constraints. In this contribution, we introduce a novel end-to-end procedure that utilizes a machine learning technique called symbolic regression (SR). It searches the equation space to discover algebraic relations approximating a dataset. We use PySR (a software to uncover these expressions based on an evolutionary algorithm) and extend the functionality of hls4ml (a package for machine learning inference in FPGAs) to support PySR-generated expressions for resource-constrained production environments. Deep learning models often optimize the top metric by pinning the network size because the vast hyperparameter space prevents an extensive search for neural architecture. Conversely, SR selects a set of models on the Pareto front, which allows for optimizing the performance-resource trade-off directly. By embedding symbolic forms, our implementation can dramatically reduce the computational resources needed to perform critical tasks. We validate our method on a physics benchmark: the multiclass classification of jets produced in simulated proton-proton collisions at the CERN Large Hadron Collider. We show that our approach can approximate a 3-layer neural network using an inference model that achieves up to a 13-fold decrease in execution time, down to 5 ns, while still preserving more than 90% approximation accuracy.
Ho Fung Tsoi, Adrian Alan Pol, Vladimir Loncar, Ekaterina Govorkova, Miles Cranmer, Sridhara Dasu, Peter Elmer, Philip Harris, Isobel Ojalvo, Maurizio Pierini
2023-05-06T17:04:02Z
http://arxiv.org/abs/2305.04099v2
# Symbolic Regression on FPGAs for Fast Machine Learning Inference ###### Abstract The high-energy physics community is investigating the feasibility of deploying machine-learning-based solutions on Field-Programmable Gate Arrays (FPGAs) to improve physics sensitivity while meeting data processing latency limitations. In this contribution, we introduce a novel end-to-end procedure that utilizes a machine learning technique called symbolic regression (SR). It searches equation space to discover algebraic relations approximating a dataset. We use PySR (software for uncovering these expressions based on evolutionary algorithm) and extend the functionality of hls4ml (a package for machine learning inference in FPGAs) to support PySR -generated expressions for resource-constrained production environments. Deep learning models often optimise the top metric by pinning the network size because vast hyperparameter space prevents extensive neural architecture search. Conversely, SR selects a set of models on the Pareto front, which allows for optimising the performance-resource tradeoff directly. By embedding symbolic forms, our implementation can dramatically reduce the computational resources needed to perform critical tasks. We validate our procedure on a physics benchmark: multiclass classification of jets produced in simulated proton-proton collisions at the CERN Large Hadron Collider, and show that we approximate a 3-layer neural network with an inference model that has as low as 5 ns execution time (a reduction by a factor of 13) and over 90% approximation accuracy. ## 1 Introduction Symbolic regression (SR) is a machine learning technique that seeks to discover mathematical expressions that best fit a dataset. The outcome of SR is an analytic equation that captures the underlying patterns and relationships within the data. As the equations are interpretable, SR can provide valuable insights into natural sciences, including high-energy physics (HEP). Furthermore, by allowing the selection of models on the Pareto front, SR enables the optimization of the performance-resource trade-off, making it a promising alternative to other machine learning methods, especially deep learning models. This is a crucial feature in the context of the Large Hadron Collider (LHC) experiments which must process proton-proton collisions at a 40 MHz rate and tens of terabytes of raw data per second. This extreme data rate and the current size of buffering system impose a maximum latency of \(\mathcal{O}(1)\)\(\mu\)s for the real-time data classification and filtering on the edge (or the _trigger system_) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. In these conditions, lightweight algorithms running on custom hardware such as Field-Programmable Gate Arrays (FPGAs) for ultra low-latency inference are desired. In this paper, we extend the functionality of the hls4ml1[5; 6] (High-Level Synthesis for Machine Learning) framework to provide parsing capabilities for the equation chosen by SR and High-Level Synthesis (HLS) support for the mathematical functions. Our implementation is validated on a physics benchmark, demonstrating the effectiveness and potential of this approach to address the challenges faced by the HEP community. For generating the expressions, we have chosen to utilize PySR2[7], an open-source software tool for SR that employs an evolutionary algorithm. 
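As a rough illustration of how such a search is set up, a minimal PySR sketch is shown below; the toy data, operator set, iteration count, and complexity cap are placeholders and not the exact configuration used in this work.

```python
import numpy as np
from pysr import PySRRegressor

# Toy stand-in for the 16 standardized jet-substructure inputs and one per-class target.
X = np.random.randn(1000, 16)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

model = PySRRegressor(
    niterations=40,                      # number of evolutionary iterations (illustrative)
    binary_operators=["+", "-", "*"],    # arithmetic operators allowed in the expression trees
    unary_operators=["sin"],             # e.g. the "trigonometric" model class discussed later
    maxsize=20,                          # cap on expression complexity (c_max analogue)
    model_selection="best",              # pick the best candidate on the Pareto front
)
model.fit(X, y)
print(model.sympy())  # best symbolic expression found
```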
PySR offers a comprehensive implementation of SR and is built on Julia but interfaced from Python, making it easily accessible and usable for practitioners in a wide range of fields, including HEP. The rest of the paper is structured as follows. Section 2 introduces the dataset and baseline model. Section 3 presents our implementations and results. Lastly, Section 4 summarizes the work and suggests future directions. Footnote 1: [https://github.com/fastmachinelearning/hls4ml](https://github.com/fastmachinelearning/hls4ml) Footnote 2: [https://github.com/MilesCranmer/PySR](https://github.com/MilesCranmer/PySR) Footnote 3: [https://github.com/google/qkeras](https://github.com/google/qkeras) ## 2 Benchmark and Baseline To showcase the application of SR, we choose jet identification problem from the HEP field. A jet refers to a narrow cone of outgoing particles, and the process of identifying the original particle that initiated this collimated shower of particles with adjacent trajectories is called _jet tagging_. Jets are central to many physics data analyses at the LHC experiments. The data for this case study is generated from simulated jets that result from the decay and hadronization of quarks and gluons produced in high-energy collisions at the LHC. The task is to tag a given jet as originating from either a quark (\(q\)), gluon (\(g\)), W boson (\(W\)), Z boson (\(Z\)), or top quark (\(t\)). The dataset is publicly accessible from Zenodo [8]. A variety of jet recombination algorithms and substructure tools are implemented to build a list of 16 physics-motivated expert features: \((\sum z\log z\), \(C_{1}^{\beta=0,1,2}\), \(C_{2}^{\beta=1,2}\), \(D_{2}^{\beta=1,2}\), \(D_{2}^{(\alpha,\beta)=(1,1),(1,2)}\), \(M_{2}^{\beta=1,2}\), \(N_{2}^{\beta=1,2}\), \(m_{\text{mMDT}}\), Multiplicity), where the description of each of these variables is presented in Ref. [9]. The anti-\(k_{\text{T}}\) algorithm [10] with a distance parameter of \(R=0.8\) is used to cluster all jets. A cut on the reconstructed jet \(p_{\text{T}}\) is applied to remove extreme events from the analysis [6]. More detailed descriptions of the dataset can be found in Refs. [6; 9; 11]. The architecture of the baseline model is adopted from Ref. [6], which is a fully-connected neural network (NN) consisting of three hidden layers of 64, 32, and 32 nodes, respectively and ReLU activation functions. The input layer takes the 16 high-level features as inputs and the output layer consists of five nodes with a softmax activation function, yielding the probability of a jet originating from each of the five classes. This architecture was chosen to provide a reasonable performance (75% overall accuracy, and 90% per-class accuracy) while keeping the model lightweight [6; 12; 13; 14]. The model is trained with QKeras3[12], where the kernel weights, biases, and activation functions are quantized to fixed precision and constrained during weight optimization, referred to as _quantization-aware training_ (QAT). This is necessary since post-training quantization (no fine-tuning) results in reduced accu racy [12]. The baseline models presented in Section 3 are fine-tuned for each considered precision. For evaluation, the model is converted to HLS firmware using hls4ml. ## 3 Implementations and Results To deploy discovered expressions on FPGAs, we use the hls4ml library. We extended hls4ml with support for expressions via the Xilinx HLS math library. 
To further optimize the resource utilization and reduce latency, we added functionality to enable approximation of mathematical functions with lookup tables (LUTs). The comparison of LUT-based functions with HLS math library functions is illustrated in Fig. 1. We use \(\langle\text{B},\text{I}\rangle\) to denote fixed-point precision, where B is the total number of bits allocated, or bit width, and I is the number of integer bits. Figure 1: The sine (left) and tangent (right) functions evaluated with and without the use of LUTs, implemented in HLS with precision \(\langle 12,6\rangle\), i.e. a 12-bit variable with 6 integer bits. The LUT notation reads: [range start, range end; table size] for table definition. The lower panel shows the function deviation from the truth. In the following experiments, we apply SR to fit the LHC jet dataset and demonstrate its resource efficiency in the context of FPGA deployment. We consider models of five independent algebraic expressions as functions of the 16 high-level input features, \(\mathbf{\hat{y}}=s(\mathbf{x})\) with \(s:\mathbb{R}^{16}\rightarrow\mathbb{R}^{5}\), where the inputs are standardized and the outputs \(\mathbf{\hat{y}}\) correspond to the scores for the five jet classes. A jet is identified as the class whose tagger yields the highest score. The search for expressions is performed using the PySR package. It uses an evolutionary algorithm to construct symbolic expressions, by growing the tree structure using combinations of constants, variables, and operators (+, -, \(\times\), /, (\(\cdot\))\({}^{2}\), \(\mathtt{sin}(\cdot)\), etc.). The search starts from a random combination without requiring _a priori_ knowledge of the underlying functional form; expressions are evaluated by a specified metric, and the best ones can evolve to the next generation, where mutation (i.e., selecting one node to change) and crossbreeding (i.e., swapping the sub-trees from two solutions) can take place to explore more combinations. There is a measure in PySR called complexity, \(c\), which by default is set to 1 for every constant, variable, and operator. The complexity of an expression is the sum of the complexity of all its components. We set the model selection strategy such that the candidate model with the lowest loss will be selected regardless of complexity, as long as it does not exceed the maximum value, \(c_{\text{max}}\), of our choice. In this setting, the algorithm attempts to solve the following optimization problem for the dataset \(\{(\mathbf{x}^{i},\mathbf{y}^{i})\}\) with each input \(\mathbf{x}^{i}\in\mathbb{R}^{16}\) and label \(\mathbf{y}^{i}\in\mathbb{R}^{5}\): \[h_{f}^{*}=\operatorname*{arg\,min}_{h\in\mathcal{S}_{c_{\text{max}}}}\sum_{i}\ell(h(\mathbf{x}^{i}),y_{f}^{i}),\text{ for each jet class }f\in\{g,q,t,W,Z\}, \tag{1}\] where \(\mathcal{S}_{c_{\text{max}}}\) is the space of equations (i.e., \(h:\mathbb{R}^{16}\rightarrow\mathbb{R}\)) with complexity ranging from 1 to \(c_{\text{max}}\) satisfying all constraints specified in the configuration (choice of operators, function nesting, etc.). We use the L2 margin loss \(\ell\), given by \[\ell(\hat{y}_{f},y_{f})=(1-\hat{y}_{f}y_{f})^{2},\text{ with label }y_{f}=\begin{cases}+1,&\text{if $f$ matched to true jet class}\\ -1,&\text{otherwise}\end{cases}. \tag{2}\] The reason for the choice of this loss is that its domain is \(\mathbb{R}^{2}\), which is suitable for our setting where the model outputs are not restricted to any fixed range.
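Returning to the LUT-based approximation introduced at the start of this section (and illustrated in Figure 1), the idea is simply to precompute a function over a fixed range and replace each evaluation by a table lookup. A small Python sketch with an illustrative range and table size is given below; the actual firmware implementation operates on fixed-point types in HLS rather than NumPy floats.

```python
import numpy as np

def make_lut(fn, range_start, range_end, table_size):
    """Precompute fn on a uniform grid: the [range start, range end; table size] notation of Fig. 1."""
    grid = np.linspace(range_start, range_end, table_size)
    return grid, fn(grid)

def lut_eval(x, grid, table):
    """Evaluate by nearest-entry lookup; out-of-range inputs are clamped to the table edges."""
    idx = np.round((x - grid[0]) / (grid[1] - grid[0])).astype(int)
    return table[np.clip(idx, 0, len(table) - 1)]

grid, sin_table = make_lut(np.sin, -np.pi, np.pi, 1024)   # illustrative range and table size
x = np.linspace(-np.pi, np.pi, 10000)
max_err = np.max(np.abs(lut_eval(x, grid, sin_table) - np.sin(x)))
print(f"max LUT deviation from np.sin: {max_err:.2e}")
```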
The downside of SR is that it is a complex combinatorial problem that does not leverage the advantage of gradient-based optimization, so it is less efficient when applied to high-dimensional datasets. To alleviate this challenge, PySR employs a random forest regressor to evaluate the relative importance of each input feature. We ask PySR to select 6 out of the 16 available inputs for model training in the following experiments. For resource estimation, each model is converted to FPGA firmware using hls4ml, which is then synthesized with Vivado HLS (2020.1) [15], targeting a Xilinx Virtex UltraScale+ VU9P FPGA with part number 'xcvu9p-flga2577-2-e'. All results are derived after the HLS compilation step. The clock frequency is set to 200 MHz (or clock period of 5 ns) which is typical for the LHC real-time trigger environment [1, 2, 3, 4]. The initiation interval is set to 1. In the following studies, we monitor the accuracy, latency and resource usage (digital signal processors, or DSPs, and LUTs) to compare the models. ### Plain implementation We first study models with a single class of mathematical function: polynomial, trigonometric, exponential, and logarithmic. For the polynomial model, only arithmetic operators are considered: +, -, and \(\times\). For other models, an additional operator is added respectively: \(\mathtt{sin}(\cdot)\) for trigonometric, \(\mathtt{Gauss}(\cdot)=\mathtt{exp}(-(\cdot)^{2})\) for exponential, and \(\mathtt{log}(\mathtt{abs}(\cdot))\) for logarithmic. For simplicity, function nesting (e.g., \(\mathtt{sin}(\mathtt{sin}(\cdot))\)) is not allowed. Every operator has a complexity of 1 by default. Searches are repeated for \(c_{\text{max}}=20\), 40, and 80, to observe how model accuracy and resource usage change with model size. Table 1 shows per-class expressions for the trigonometric model with \(c_{\text{max}}=20\). Table 2 shows expressions for the \(t\) tagger in all models with \(c_{\text{max}}=40\). Accuracy is shown in Fig. 2. FPGA resource usage and latency are shown in Fig. 3. ### Function approximation with LUTs Based on the models from Section 3.1 (except for the polynomial), we approximate all mathematical functions with LUTs and redo the analysis. In Fig. 2 and 3, these models correspond to the dashed lines. Compared to the baseline, the resource usage is dramatically reduced for all SR models, especially for those applying function approximation, sometimes with several orders of magnitude improvements. Besides, the SR models require significantly shorter inference time than the baseline, while having minimal drop in accuracy. In particular, the inference time is reduced to as low as 1 clock cycle (5 ns) in some scenarios in the exponential and the logarithmic models with LUT-based functions implemented, amounting to a reduction by a factor of 13 when compared to the baseline which has a latency of 13 clock cycles (65 ns), while the relative accuracy is above 90%. The ROC curves of baseline and trigonometric models are compared in Fig. 4. ### Latency-aware training Alternatively, one can improve resource usage by guiding PySR to search in a latency-aware manner. By default, PySR assigns complexity for every operator to 1 so that they are all equally penalized when being added to expression trees. However, it is not ideal for FPGA deployment since, for example, an operator tan(\(\cdot\)) typically takes several times more clock cycles than a sin(\(\cdot\)) to evaluate on an FPGA. 
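One way to encode this cost in the search, elaborated in the next paragraph, is to assign each operator a complexity equal to its clock-cycle count. The sketch below shows how such costs could be passed to PySR through its `complexity_of_operators` option (assuming a PySR version that exposes it); the cycle counts and size budget here are illustrative.

```python
from pysr import PySRRegressor

# Operator cost expressed in (approximate) FPGA clock cycles, so that "complexity"
# tracks latency rather than expression size. The numbers below are illustrative.
clock_cycle_complexity = {"sin": 8, "tan": 48, "exp": 3, "cosh": 8, "sinh": 9}

model = PySRRegressor(
    binary_operators=["+", "-", "*"],
    unary_operators=["sin", "tan", "exp", "cosh", "sinh"],
    complexity_of_operators=clock_cycle_complexity,  # latency-aware penalty per operator
    maxsize=40,                                      # overall complexity budget (c_max analogue)
)
```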
This time cost can be incorporated in expression searches by setting operator complexity to the corresponding number of clock cycles needed on FPGAs. Note that this strategy is not valid in the context of function approximation with LUTs since every indexing operation requires only one clock cycle. We demonstrate this latency-aware training (LAT) for two precisions, \(\langle 16,6\rangle\) and \(\langle 18,8\rangle\), with \(c_{\max}\) ranging from 20 to 80. We consider the following operators: +(1), -(1), \(\times\)(1), log(abs(\(\cdot\)))(4), sin(\(\cdot\))(8), tan(\(\cdot\))(48), cosh(\(\cdot\))(8), sinh(\(\cdot\))(9), and exp(\(\cdot\))(3), where the numbers in parentheses correspond to the operator complexity. For simplicity, function nesting is not allowed again. We also constrain the sub-tree total complexity. Such way we are forcing the model to explore solutions in a different part of the Pareto front. The final expressions are shown in Table 3. Model accuracy, resource usage and latency are shown in Fig. 5. SR models obtained from LAT use systematically fewer resources and have smaller latency compared to those obtained from plain implementation while having comparable accuracy. Implementa \begin{table} \begin{tabular}{l|l|l} \hline \hline \multicolumn{1}{c|}{Tagger} & Expression for the trigonometric model with \(c_{\max}=20\) & AUC \\ \hline \(g\) & sin\((-2C_{1}^{\mathrm{GT}}+0.31C_{1}^{\mathrm{GT}^{2}}+n_{\mathrm{max}}+ \mathrm{Multiplicity}-0.098\mathrm{Multiplicity}^{2}-0.79)\) & 0.897 \\ \hline \(q\) & \(-0.33(\mathrm{sin}(n_{\mathrm{MAP}})-1.54)\mathrm{sin}(-C_{1}^{\mathrm{GT}}+C _{2}^{\mathrm{III}}+\mathrm{Multiplicity})-0.81\mathrm{sin}(n_{\mathrm{MAP}})-0.81\) & 0.853 \\ \hline \(t\) & sin\((c_{1}^{\mathrm{GT}}+C_{1}^{\mathrm{III}}-n_{\mathrm{MAP}}+0.22(C_{1}^{ \mathrm{III}}-0.29)(-C_{1}^{\mathrm{III}}+C_{2}^{\mathrm{III}}-0.65)\) & 0.920 \\ \hline \(W\) & \(-0.31(\mathrm{Multiplicity})+(2.09-\mathrm{Multiplicity})(\mathrm{ sinh}(0.80C_{1}^{\mathrm{III}}+0.98))-0.5\) & 0.877 \\ \hline \(Z\) & \((\mathrm{sin}(4.84\mathrm{m}_{\mathrm{MAP}})+0.59)\mathrm{sin}(n_{\mathrm{MAP }}+1.14)\mathrm{sin}(C_{1}^{\mathrm{III}}+4.84\mathrm{m}_{\mathrm{MAP}})-0.94\) & 0.866 \\ \hline \hline \end{tabular} \end{table} Table 1: Expressions generated by PySR for the trigonometric model with \(c_{\max}=20\). Operator complexity is set to 1 by default. Constants are rounded to two decimal places for readability. Area under the receiver operating characteristic (ROC) curve, or AUC, is reported. 
\begin{table} \begin{tabular}{l|l|l} \hline \hline Model & Expression for the \(t\) target with \(c_{\max}=40\) & AUC \\ \hline \hline Polynomial & \(C_{1}^{\mathrm{GT}^{2}}+0.098\mathrm{m}_{\mathrm{MAP}}(2C_{1}^{\mathrm{III}}+M_{ 2}^{\mathrm{III}}-n_{\mathrm{MAP}})-\mathrm{Multiplicity}-(1.82C_{1}^{\mathrm{III}}-M_{ 2}^{\mathrm{III}}-M_{1}^{\mathrm{III}}N(C_{1}^{\mathrm{III}}-0.49\mathrm{m}_{ \mathrm{MAP}})-3.22)-0.53\) & 0.914 \\ \hline Trigonometric & sin\((0.06\mathrm{Z}_{1}^{\mathrm{III}}\!\!\leq\!\mathrm{log})M_{2}^{\mathrm{III}}-0.25C_{1} ^{\mathrm{III}}-(C_{1}^{\mathrm{III}}+2C_{1}^{\mathrm{III}}-M_{2}^{\mathrm{III}} +\mathrm{Multiplicity}-8.86)-n_{\mathrm{MAP}}+0.06\mathrm{Multiplicity}-0.4\) & 0.925 \\ \hline Exponential & \(0.23C_{1}^{\mathrm{III}}\!\!=\!(n_{\mathrm{MAP}}+\mathrm{Gauss}(0.63\mathrm{Multiplicity})+1)-\mathrm{Gauss}(C_{1}^{\mathrm{III}})+0.45C_{1}^{\mathrm{III}}-0.23\mathrm{min}_{ \mathrm{MAP}}+0.23\mathrm{min}_{\mathrm{MAP}}+0.23\mathrm{min}_{\mathrm{MAP}}+0.2 3\mathrm{min}_{\mathrm{MAP}}+0.15\) & 0.920 \\ \hline Logarithmic & \(C_{1}^{\mathrm{III}}\!\!=\!0.1\mathrm{m}_{\mathrm{MAP}}(\mathrm{Multiplicity})\times\mathrm{ log}(\mathrm{abs}(\mathrm{Multiplicity})+2)-0.02\mathrm{log}(\mathrm{abs}(\mathrm{Multiplicity}))\) & 0.923 \\ \(-0.16(C_{1}^{\mathrm{III}})C_{1}^{\mathrm{III}}-1.6M_{2}^{\mathrm{III}}+n_{ \mathrm{MAP}}+1.28)-n_{\mathrm{MAP}}+0.48\mathrm{log}(\mathrm{abs}(C_{1}^{ \mathrm{III}}))-0.42\) & 0.923 \\ \hline \hline \end{tabular} \end{table} Table 2: Expressions generated by PySR for the \(t\) tagger in different models with \(c_{\max}=40\). Operator complexity is set to 1 by default. Constants are rounded to two decimal places for readability. tion of a maximum latency constraint is also possible. We added a script to generate operator complexity for praticioners8. Footnote 8: [https://github.com/AdrianAlan/hls4sr-configs](https://github.com/AdrianAlan/hls4sr-configs) \begin{table} \begin{tabular}{l|l|l} \hline \hline Operator complexity & Expression for the \(t\) tagger with \(c_{\text{max}}=40\) & AUC \\ \hline \hline All 1’s (PySR default) & \(0.11(C_{1}^{min}+C_{1}^{min}+1\text{log(abs}(C_{1}^{min}))-0.48\text{min}_{ \text{MAPT}}-0.05\text{Multiplicity(Multiplicity+log(abs}(m_{\text{mMDT}})))\) & 0.930 \\ & \(-\text{sin}(-C_{1}^{min}+0.14C_{1}^{min}+m_{\text{mMDT}})+0.11\text{sinh}(C_{1} ^{min})-0.24\) & \\ \hline No. of clock cycles & \(0.04((\sum_{i}\log{2})+C_{1}^{min}+C_{1}^{min}+C_{1}^{min}-m_{\text{MAPT}}\) (Multiplicity-0.2)(Multiplicity+log(abs}(C_{1}^{min})))\) & 0.924 \\ at (16, 6) & \(-\text{sin}(-C_{1}^{min}-C_{1}^{min}+1.23\text{ln}_{\text{MAPT}}+0.58)\) & \\ \hline No. of clock cycles & \(0.04\text{Multiplicity(C_{1}^{min})}+(C_{1}^{min}-m_{\text{MAPT}})-\text{ Multiplicity}-\text{log(abs}(C_{1}^{min}(\sum\,\leq\,\log{2})+0.23)))\) & 0.926 \\ at (18, 8) & \(-\text{sin}(-C_{1}^{min}-C_{1}^{min}+1.19\text{ln}_{\text{MAPT}}+0.61)\) & \\ \hline \hline \end{tabular} \end{table} Table 3: Expressions generated by PySR for the \(t\) tagger with \(c_{\text{max}}=40\), implemented with and without LAT. Constants are rounded to two decimal places for readability. Figure 2: Relative accuracy as a function of bit width, for polynomial (top left), trigonometric (top right), exponential (bottom left), and logarithmic (bottom right) models. The relative accuracy is evaluated with respect to the baseline QAT NN trained and implemented at corresponding precision. 
The number of integer bits is fixed at \(I=12\) for the exponential model and at \(I=6\) for other models. Figure 3: DSPs usage (left), LUTs usage (middle), and latency (right) as a function of bit width. From top to bottom: polynomial, trigonometric, exponential, and logarithmic models. The baseline QAT NN trained and implemented at corresponding precision is shown for comparison. Resource usage and latency are obtained from C-synthesis on a Xilinx VU9P FPGA with part number ‘xcvu9p-flga2577-2-e’. Figure 4: ROC curves for the trigonometric models with \(c_{\rm max}=80\) implemented with precision \(\langle 16,6\rangle\), as compared to the baseline QAT NN. Numbers in parentheses correspond to the AUC per class. Figure 5: Relative accuracy (top), DSPs usage (bottom left), LUTs usage (bottom middle) and latency (bottom right) as a function of \(c_{\rm max}\) ranging from 20 to 80, comparing models obtained from plain implementation (solid) and LAT (dashed). Two precision settings are implemented: \(\langle 16,6\rangle\) and \(\langle 18,8\rangle\). The relative accuracy is evaluated with respect to the baseline model. Resource usage and latency are obtained from C-synthesis on a Xilinx VU9P FPGA with part number ‘xcvu9p-flga2577-2-e’. ## 4 Summary and Outlook In this paper, we presented a novel approach for utilizing symbolic regression (SR) in the context of FPGAs for fast machine learning inference. We extended the functionality of the hls4ml package to support the symbolic expressions generated by PySR. We demonstrated the effectiveness of our approach on a physics benchmark (jet tagging at the LHC) and showed that our implementation of SR on FPGAs provides a way to dramatically reduce the computational resources needed to perform critical tasks, making it a promising alternative to deep learning models. The utilization of SR in HEP provides a valuable solution to meet the sensitivity and latency demands of modern physics experiments. The results of this study open up new avenues for future work, including further optimization of the performance-resource trade-off and the exploration of other application domains for SR on FPGAs. ## 5 Acknowledgements We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators. This community was important for the development of this project. H.F.T. and S.D. are supported by the U.S. Department of Energy (Award No. DE-SC0017647). A.A.P. is supported by the Eric and Wendy Schmidt Transformative Technology Fund. M.P. is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 772369).
2306.14902
Molecule Design by Latent Space Energy-Based Modeling and Gradual Distribution Shifting
Generation of molecules with desired chemical and biological properties such as high drug-likeness, high binding affinity to target proteins, is critical for drug discovery. In this paper, we propose a probabilistic generative model to capture the joint distribution of molecules and their properties. Our model assumes an energy-based model (EBM) in the latent space. Conditional on the latent vector, the molecule and its properties are modeled by a molecule generation model and a property regression model respectively. To search for molecules with desired properties, we propose a sampling with gradual distribution shifting (SGDS) algorithm, so that after learning the model initially on the training data of existing molecules and their properties, the proposed algorithm gradually shifts the model distribution towards the region supported by molecules with desired values of properties. Our experiments show that our method achieves very strong performances on various molecule design tasks.
Deqian Kong, Bo Pang, Tian Han, Ying Nian Wu
2023-06-09T03:04:21Z
http://arxiv.org/abs/2306.14902v1
# Molecule Design by Latent Space Energy-Based Modeling and Gradual Distribution Shifting ###### Abstract Generation of molecules with desired chemical and biological properties such as high drug-likeness, high binding affinity to target proteins, is critical for drug discovery. In this paper, we propose a probabilistic generative model to capture the joint distribution of molecules and their properties. Our model assumes an energy-based model (EBM) in the latent space. Conditional on the latent vector, the molecule and its properties are modeled by a molecule generation model and a property regression model respectively. To search for molecules with desired properties, we propose a sampling with gradual distribution shifting (SGDS) algorithm, so that after learning the model initially on the training data of existing molecules and their properties, the proposed algorithm gradually shifts the model distribution towards the region supported by molecules with desired values of properties. Our experiments show that our method achieves very strong performances on various molecule design tasks. The code and checkpoints are available at [https://github.com/deqiankong/SGDS](https://github.com/deqiankong/SGDS). ## 1 Introduction In drug discovery, it is of vital importance to find or design molecules with desired pharmacologic or chemical properties such as high drug-likeness and binding affinity to a target protein. It is challenging to directly optimize or search over the drug-like molecule space since it is discrete and enormous, with an estimated size on the order of \(10^{33}\)(Polishchuk et al., 2013). Recently, a large body of work attempts to tackle this problem. The first line of work leverages deep generative models to map the discrete molecule space to a continuous latent space, and optimizes molecular properties in the latent space with methods such as Bayesian optimization (Gomez-Bombarelli et al., 2018; Kusner et al., 2017; Jin et al., 2018). The second line of work recruits reinforcement learning algorithms to optimize properties in the molecular graph space directly (You et al., 2018; De Cao and Kipf, 2018; Zhou et al., 2019; Shi et al., 2020; Luo et al., 2021). A number of other methods have been proposed to optimize molecular properties with genetic algorithms (Nigam et al., 2020), particle-swarm algorithms (Winter et al., 2019), and specialized MCMC methods (Xie et al., 2021). In this work, we propose a method along the first line mentioned above, by learning a probabilistic latent space generative model of molecules and optimizing molecular properties in the latent space. Given the central role of latent variables in this approach, we emphasize that it is critical to learn a latent space model that captures the data regularities of the molecules. Thus, instead of assuming a simple Gaussian distribution in the latent space as in prior work (Gomez-Bombarelli et al., 2018; Jin et al., 2018), we assume a flexible and expressive energy-based model (EBM) (LeCun et al., 2006; Ngiam et al., 2011; Kim and Bengio, 2016; Xie et al., 2016; Kumar et al., 2019; Nijkamp et al., 2019; Du and Mordatch, 2019; Grathwohl et al., 2019; Finn et al., 2016) in latent space. This leads to a _latent space energy-based model_ (LEBM) as studied in Pang et al. (2020), Nie et al. (2021), where LSEBM has been shown to model the distributions of natural images and text well. Going beyond existing latent space energy-based models Pang et al. (2020); Nie et al. 
(2021), our work makes two innovations: First, given our goal of property optimization, we learn a joint distribution of molecules and their properties. Our model consists of (1) an energy-based model (EBM) in a low-dimensional continuous latent space, (2) a molecule generation model that generates molecule given the latent vector, and (3) a property regression model that predicts the value of the property given the latent vector. See Figure 0(a) for an illustration of the model. We first learn the initial model on the training data that consist of existing molecules and their properties. All three components in our model are learned jointly by an approximate maximum likelihood algorithm. Second, and more importantly, we propose a _sampling with gradual distribution shifting_ (SGDS) method for molecule design. We first sample molecules and their property values from the initial model learned on the training data mentioned above. Then we gradually shift the joint distribution towards the region supported by molecules with high property values. Specifically, our method iterates the following steps. (1) Shift the sampled property values by a small constant towards the desired target value. (2) Generate molecules given the shifted property values. (3) Obtain the ground-truth property values of the generated molecules by querying the software. (4) Update the model parameters by learning from the generated molecules and their ground-truth property values. Because of the flexibility of the latent space energy-based model, the model can be updated to account for the change of the joint distribution of the generated molecules and their ground-truth property values in the gradual shifting process. Figure 0(c) illustrates the shifting of the distribution of the property values of the generated molecules. In drug discovery, most often we need to consider multiple properties simultaneously. Our model can be extended to this setting straightforwardly. With our method, we only need to add a regression model for each property, while the learning and sampling methods remain the same (see Figure 0(c)). We can then simultaneously shift the values of the multiple properties for multi-objective optimization. We evaluate our method in various settings including single-objective and multi-objective optimization. Our method outperforms prior methods by significant margins. In summary, our contributions are as follows: * We propose to learn a latent space energy-based model for the joint distribution of molecules and their properties. * We develop a sampling with gradual distribution shifting method, which enables us to extrapolate the data distribution and sample from the region supported by molecules with high property values. * Our methods are versatile enough to be extended to optimizing multiple properties simultaneously. * Our model achieves state-of-the-art performances on a range of molecule optimization tasks. **Caveat.** As in most existing work on molecule design, we assume that the value of a property of interest of a given molecule can be obtained by querying an existing software. There are two research problems in this endeavor. (1) Developing software that can output biologically or chemically accurate value of the property for an input molecule. (2) Developing method that can optimize the property values output by a given software. While problem (1) is critically important, our work is exclusively about problem (2). We duly acknowledge that existing software may need much improvements. 
Meanwhile, our method can be readily applied to the improved versions of software. ## 2 Related Work **Optimization with Generative Models.** Deep generative models approximate the distribution of molecules with desired biological or non-biological properties. Existing approaches for generating molecules include applying variational autoencoders (VAE) [13], generative adversarial networks (GAN) [11], etc., to molecule data, and the learned latent space is then explored with different methods. [14] proposes to optimize by simulating design-synthesis-test cycles. [11, 15, 16] propose to learn a surrogate function to predict properties, and then use Bayesian optimization to optimize the latent vectors.
However, the performance of this latent optimization is not satisfactory due to three major issues. First, it is difficult to train an accurate surrogate predictor especially for those novel molecules with high properties along the design trajectories. Second, as the learned latent space tries to cover the fixed data space, its ability to explore the targets out of the distribution is limited [1, 14]. Third, those methods are heavily dependent on the quality of learned latent space, which requires non-trivial efforts to design encoders when dealing with multiple properties. To address the above issues, [10] use VAE to learn the latent space and train predictors separately using generated molecules, and then leverage latent inceptionism, which involves the decoder solely, to optimize the latent vector with multiple predictors. In this paper, we propose an encoder-free model in both training and optimization to learn the joint distribution of molecules and properties. We then design an efficient algorithm to shift the learned distribution gradually. **Optimization with Reinforcement Learning and Evolutionary Algorithms.** Reinforcement learning (RL) based methods directly optimize and generate molecules in an explicit data space [13, 14, 15, 16]. By formulating the property design as a discrete optimization task, they can modify the molecular substructures guided by an oracle reward function. However, the training of those RL-based methods can be viewed as rejection sampling which is difficult and inefficient due to the random-walk search behavior in the discrete space. Evolutionary algorithms (EA) also formulate the optimization in a discrete manner [17, 14, 15, 16]. By leveraging carefully-crafted combinatorial search algorithms, they can search the molecule graph space in a flexible and efficient way. However, the design of those algorithms is non-trivial and domain specific. ## 3 Methods ### Problem Setup and Overview We use the SELFIES representation for molecules [11]. It encodes each molecule as a string of characters and ensures validity of all SELFIES strings. Let \(x=(x^{(1)},...,x^{(t)},...,x^{(T)})\) be a molecule string encoded in SELFIES, where \(x^{(t)}\in\mathcal{V}\) is the \(t\)-th character and \(\mathcal{V}\) is the vocabulary. Suppose \(y\in\mathbb{R}\) represents a molecular property of interest. Then the problem we attempt to tackle is to optimize \(x\) such that its property \(y=y^{*}\) where \(y^{*}\) is some desirable value for \(y\). We take a probabilistic approach and treat the optimization problem as a sampling problem, that is, \[x^{*}\sim p(x|y=y^{*}). \tag{1}\] This is a _single-objective optimization_ problem since only one property is targeted. In real-world drug design settings, we are more likely to optimize multiple properties simultaneously, that is, _multi-objective optimization_. Suppose we optimize for \(\{y_{j}\in\mathbb{R}\}_{j=1}^{m}\), then our task is to sample \[x^{*}\sim p(x|y_{1}=y_{1}^{*},...,y_{m}=y_{m}^{*}). \tag{2}\] To address these problems, we propose a solution within a unified probabilistic framework. As a first step, we need to model or approximate the data distribution of molecules and their properties, \(p_{\mathrm{data}}(x,y)\). To this end, we recruit latent space energy-based model (LSEBM) [15, 16] to model the molecule and properties. LSEBM assumes that a latent vector \(z\in\mathbb{R}^{d}\) in a low dimensional latent space follows an energy-based prior model \(p(z)\). 
Conditional on \(z\), the molecule \(x\) and the property \(y\) are independent, so that the joint distribution \(p(x,y,z)\) can be factorized as \(p(z)p(x|z)p(y|z)\), which leads to \(p(x,y)=\int p(z)p(x|z)p(y|z)dz\) as an approximation to \(p_{\mathrm{data}}(x,y)\). The latent space energy-based prior model \(p(z)\), the molecule generation model \(p(x|z)\), and the property regression model \(p(y|z)\) can be jointly learned by an approximate maximum likelihood algorithm (see SS3.3 and Algorithm 1). LSEBM within the context of molecule data is presented in SS3.2. For the purpose of property optimization, we are required to generate molecules \(x\) with some desirable property \(y^{*}\). Rather than direct optimization in the molecule space, we choose to optimize \(z\) in the latent space. We first consider the single-objective optimization problem (Equation (1)). With the learned model, we propose to optimize \(x\) given \(y=y^{*}\) by ancestral sampling, \[z^{*}\sim p(z|y=y^{*}),\quad x^{*}\sim p(x|z=z^{*}). \tag{3}\] However, if \(y^{*}\) deviates from the observed data distribution of \(y\), this naive solution involves sampling in an extrapolated regime (or out of distribution regime) where \(y^{*}\) is not in the effective support of the learned distribution. To address this problem, we propose a _Sampling with Gradual Distribution Shifting_ (SGDS) approach where we gradually shift the learned distribution to a region where it is supported by high property values (see SS3.4 and Algorithm 2). Our model is designed to be versatile such that it admits straightforward extension to multi-objective optimization. To optimize \(x\) given \(\{y_{j}=y_{j}^{*}\}_{j=1}^{m}\), we can simply augment the joint distribution with more regression models, i.e., \(p(x,z,y_{1},...,y_{m})=p(z)p(x|z)\prod_{j=1}^{m}p(y_{j}|z)\). The optimization procedure follows the same SGDS approach. See SS3.5 for more details on multi-objective optimization. ### Joint Distribution of Molecule and Molecular Property Suppose \(x=(x^{(1)},...,x^{(t)},...,x^{(T)})\) is a molecule string in SELFIES, \(y\in\mathbb{R}\) is the target property of interest, and \(z\in\mathbb{R}^{d}\) is the latent vector. Consider the following model, \[z\sim p_{\alpha}(z),\quad[x\mid z]\sim p_{\beta}(x|z),\quad[y\mid z ]\sim p_{\gamma}(y|z), \tag{4}\] where \(p_{\alpha}(z)\) is a prior model with parameters \(\alpha\), \(p_{\beta}(x|z)\) is a molecule generation model with parameters \(\beta\), and \(p_{\gamma}(y|z)\) is a property regression model with parameter \(\gamma\). In VAE (Kingma and Welling, 2014), the prior is simply assumed to be an isotropic Gaussian distribution. In our model, \(p_{\alpha}(z)\) is formulated as a learnable energy-based model, \[p_{\alpha}(z)=\frac{1}{Z(\alpha)}\exp(f_{\alpha}(z))p_{0}(z), \tag{5}\] where \(p_{0}(z)\) is a reference distribution, assumed to be isotropic Gaussian as in VAE. \(f_{\alpha}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a scalar-valued negative energy function and is parameterized by a small multi-layer perceptron (MLP) with parameters \(\alpha\). \(Z(\alpha)=\int\exp(f_{\alpha}(z))p_{0}(z)dz=\mathbb{E}_{p_{0}}[\exp(f_{\alpha }(z))]\) is the normalizing constant or partition function. The molecule generation model \(p_{\beta}(x|z)\) is a conditional autoregressive model, \[p_{\beta}(x|z)=\prod_{t=1}^{T}p_{\beta}(x^{(t)}|x^{(1)},...,x^{(t- 1)},z) \tag{6}\] which is parameterized by a one-layer LSTM Hochreiter and Schmidhuber (1997) with parameters \(\beta\). 
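A minimal PyTorch sketch of such a latent-conditioned autoregressive model is given below. The 100-dimensional latent and the single 1024-unit LSTM layer follow the training details reported later in the paper, while the embedding size is a placeholder; this is an illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LatentConditionedLSTM(nn.Module):
    """Sketch of Eq. (6): p_beta(x | z) = prod_t p_beta(x^(t) | x^(<t), z).

    The latent vector z is concatenated to the token embedding at every step,
    so it conditions each factor of the autoregressive model.
    """
    def __init__(self, vocab_size, z_dim=100, emb_dim=128, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + z_dim, hidden_dim, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, z):
        # tokens: (batch, T) integer-coded SELFIES symbols; z: (batch, z_dim)
        emb = self.embed(tokens)                               # (batch, T, emb_dim)
        z_rep = z.unsqueeze(1).expand(-1, tokens.size(1), -1)  # broadcast z to every step
        h, _ = self.lstm(torch.cat([emb, z_rep], dim=-1))
        return self.head(h)                                    # next-symbol logits at each step
```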
Note that the latent vector \(z\) controls every step of the autoregressive model. It is worth pointing out the simplicity of the molecule generation model of our method considering that those in prior work involve complicated graph search algorithm or alternating generation of atoms and bonds with multiple networks. Given a molecule \(x\), suppose \(y\) is the chemical property of interest, such as drug likeliness or protein binding affinity. The ground-truth property value can be computed for an input \(x\) via open-sourced software such as RDKit (Landrum et al., 2013) and AutoDock-GPU (Santos-Martins et al., 2021). We assume that given \(z\), \(x\) and \(y\) are conditionally independent, so that \[p_{\theta}(x,y,z)=p_{\alpha}(z)p_{\beta}(x|z)p_{\gamma}(y|z), \tag{7}\] where \(\theta=(\alpha,\beta,\gamma)\). We use the model \(p_{\theta}(x,y)=\int p_{\theta}(x,y,z)dz\) to approximate the data distribution \(p_{\text{data}}(x,y)\). See Supplement for a detailed discussion. The property regression model can be written as \[p_{\gamma}(y|z)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac {1}{2\sigma^{2}}(y-s_{\gamma}(z))^{2}\right), \tag{8}\] where \(s_{\gamma}(z)\) is a small MLP, with parameters \(\gamma\), predicting \(y\) based on the latent \(z\). The variance \(\sigma^{2}\) is set as a constant or a hyperparameter in our work. ### Learning Joint Distribution Suppose we observe training examples \(\{(x_{i},y_{i}),i=1,...,n\}\). The log-likelihood function is \(L(\theta)=\sum_{i=1}^{n}\log p_{\theta}(x_{i},y_{i})\). The learning gradient can be calculated according to \[\nabla_{\theta}\log p_{\theta}(x,y)=\mathbb{E}_{p_{\theta}(z|x,y )}\left[\nabla_{\theta}\log p_{\theta}(x,y,z)\right]\] \[=\mathbb{E}_{p_{\theta}(z|x,y)}\left[\nabla_{\theta}(\log p_{ \alpha}(z)+\log p_{\beta}(x|z)+\log p_{\gamma}(y|z))\right]. \tag{9}\] For the prior model, \[\nabla_{\alpha}\log p_{\alpha}(z)=\nabla_{\alpha}f_{\alpha}(z)- \mathbb{E}_{p_{\alpha}(z)}[\nabla_{\alpha}f_{\alpha}(z)]. \tag{10}\] The learning gradient given an example \((x,y)\) is \[\delta_{\alpha}(x,y)=\nabla_{\alpha}\log p_{\theta}(x,y)\] \[=\mathbb{E}_{p_{\theta}(z|x,y)}[\nabla_{\alpha}f_{\alpha}(z)]- \mathbb{E}_{p_{\alpha}(z)}[\nabla_{\alpha}f_{\alpha}(z)]. \tag{11}\] Thus \(\alpha\) is updated based on the difference between \(z\) inferred from empirical observation \((x,y)\), and \(z\) sampled from the current prior. For the molecule generation model, \[\delta_{\beta}(x,y)=\nabla_{\beta}\log p_{\theta}(x,y)=\mathbb{E }_{p_{\theta}(z|x,y)}[\nabla_{\beta}\log p_{\beta}(x|z)]. \tag{12}\] Similarly, for the property regression model, \[\delta_{\gamma}(x,y)=\nabla_{\gamma}\log p_{\theta}(x,y)=\mathbb{E }_{p_{\theta}(z|x,y)}[\nabla_{\gamma}\log p_{\gamma}(y|z)]. \tag{13}\] Estimating expectations in Equations 11, 12, and 13 requires MCMC sampling of the prior model \(p_{\alpha}(z)\) and the posterior distribution \(p_{\theta}(z|x,y)\). We recruit Langevin dynamics (Neal, 2011; Han et al., 2017). For a target distribution \(\pi(z)\), the dynamics iterates \[z_{\tau+1}=z_{\tau}+s\nabla_{z}\log\pi(z_{\tau})+\sqrt{2s}\epsilon_{\tau}, \tag{14}\] where \(\tau\) indexes the time step of the Langevin dynamics, \(s\) is step size, and \(\epsilon_{\tau}\sim\mathcal{N}(0,I_{d})\) is the Gaussian white noise. \(\pi(z)\) can be either the prior \(p_{\alpha}(z)\) or the posterior \(p_{\theta}(z|x,y)\). In either case, \(\nabla_{z}\log\pi(z)\) can be efficiently computed by back-propagation. 
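A short PyTorch sketch of the Langevin update in Equation (14) is shown below, here targeting the latent prior. The energy network, step size, and number of steps are illustrative placeholders, and the same loop samples the posterior once the generation and regression log-likelihood terms are added to the target log-density.

```python
import torch

def langevin_sample(log_density, z0, n_steps=20, step_size=0.1):
    """Eq. (14): z_{tau+1} = z_tau + s * grad_z log pi(z_tau) + sqrt(2 s) * noise."""
    z = z0.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(log_density(z).sum(), z)[0]
        z = (z + step_size * grad
             + (2.0 * step_size) ** 0.5 * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

# Illustrative EBM prior: log p_alpha(z) = f_alpha(z) - 0.5 * ||z||^2, up to an additive constant.
f_alpha = torch.nn.Sequential(torch.nn.Linear(100, 200), torch.nn.GELU(), torch.nn.Linear(200, 1))
log_prior = lambda z: f_alpha(z).squeeze(-1) - 0.5 * (z ** 2).sum(dim=-1)

z0 = torch.randn(64, 100)          # initialize from the Gaussian reference p_0(z)
z_samples = langevin_sample(log_prior, z0)
```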
We initialize \(z_{0}\sim\mathcal{N}(0,I_{d})\), and we run \(\Gamma\) steps of Langevin dynamics (e.g. \(\Gamma=20\)) to approximately sample from the prior and the posterior distributions. The resulting learning algorithm is an approximate maximum likelihood learning algorithm. See (Pang et al., 2020; Nijkamp et al., 2020) for a theoretical understanding of the learning algorithm based on the finite-step MCMC. See also [14, 15] for learning EBMs at multiple noise levels for effective modeling and sampling of multimodal density. The learning algorithm is summarized in Algorithm 1. ``` 0:Learning iterations \(T\), learning rates for the prior, generation, and regression models \(\{\eta_{0},\eta_{1},\eta_{2}\}\), initial parameters \(\theta_{0}=(\alpha_{0},\beta_{0},\gamma_{0})\), observed examples \(\{(x_{i},y_{i})\}_{i=1}^{T}\), batch size \(m\), number of prior and posterior sampling steps \(\{\Gamma_{0},\Gamma_{1}\}\), and prior and posterior sampling step sizes \(\{s_{0},s_{1}\}\). 0:\(\theta_{T}=(\alpha_{T},\beta_{T},\gamma_{T})\). for\(t=0:T-1\)do 1. **Mini-batch**: Sample observed examples \(\{(x_{i},y_{i})\}_{i=1}^{m}\). 2. **Prior sampling**: For each \(i\), sample \(z_{i}^{-}\sim p_{\alpha_{t}}(z)\) using Equation (14), where the target distribution \(\pi(z)=p_{\alpha_{t}}(z)\), and \(s=s_{0}\), \(\Gamma=\Gamma_{0}\). 3. **Posterior sampling**: For each \((x_{i},y_{i})\), sample \(z_{i}^{+}\sim p_{\theta_{t}}(z|x_{i},y_{i})\) using Equation (14), where the target distribution \(\pi(z)=p_{\theta_{t}}(z|x_{i},y_{i})\), and \(s=s_{1}\), \(\Gamma=\Gamma_{1}\). 4. **Update prior model**: \(\alpha_{t+1}=\alpha_{t}+\eta_{0}\frac{1}{m}\sum_{i=1}^{m}[\nabla_{\alpha}f_{ \alpha_{t}}(z_{i}^{+})-\nabla_{\alpha}f_{\alpha_{t}}(z_{i}^{-})]\). 5. **Update generation model**: \(\beta_{t+1}=\beta_{t}+\eta_{1}\frac{1}{m}\sum_{i=1}^{m}\nabla_{\beta}\log p_{ \beta_{t}}(x_{i}|z_{i}^{+})\). 6. **Update regression model**: \(\gamma_{t+1}=\gamma_{t}+\eta_{2}\frac{1}{m}\sum_{i=1}^{m}\nabla_{\gamma}\log p _{\gamma_{t}}(y_{i}|z_{i}^{+})\). ``` **Algorithm 1**Learning joint distribution. ### Sampling with gradual distribution shifting (Sgds) To tackle the single-objective optimization problem (Equation (1)), one naive approach is to perform ancestral sampling with two steps, given some desirable property value \(y^{*}\), \[\mathrm{(i)}\ z^{*}\sim p_{\theta}(z|y=y^{*})\propto p_{\alpha}(z )p_{\gamma}(y=y^{*}|z), \tag{15}\] \[\mathrm{(ii)}\ x^{*}\sim p_{\beta}(x|z=z^{*}), \tag{16}\] where \(\mathrm{(i)}\) is an application of Bayes rule, with \(p_{\alpha}(z)\) as the prior and \(p_{\gamma}(y|z)\) as the likelihood. Sampling from \(p_{\theta}(z|y)\) can be carried out by Langevin dynamics in Equation (14) by replacing the target distribution \(\pi(z)\) with \(p_{\theta}(z|y)\). Our model \(p_{\theta}(x,y,z)\) is learned to capture the data distribution. In real-world settings, \(y^{*}\) might not be within the support of the data distribution. Therefore, sampling following Equation (15) does not work well since it involves extrapolating the learned distribution. We propose an iterative updating method called _sampling with gradual distribution shifting_ (SGDS) to address this issue. In particular, we first leverage the \(n\) samples collected from the common dataset \(\{(x_{i}^{0},y_{i}^{0})\}_{i=1}^{n}\) (e.g. ZINC, \(n=250,000\)) to learn the initial joint distribution \(p_{\theta_{0}}(x,y)\) as a valid starting point. 
Then we shift the joint distribution progressively using a smaller number \(k\) (e.g., \(k=10,000\)) of synthesized samples \(\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{k}\) from distribution \(p_{\theta_{t-1}}\) at the previous iteration, where \(k\ll n\). Therefore, by progressively learning the joint distribution with \(T\) (e.g., \(T=30\)) shift iterations, \(p_{\theta_{1}},\ldots,p_{\theta_{T}}\), at the last several iterations, we expect to generate molecules with desirable properties that are significantly distant from the initial distribution as shown in Figure 0(c). Next, we shall explain in detail the steps to generate \(\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{k}\) given \(p_{\theta_{t-1}}\). In a property maximization task, we shift the support slightly by adding a small \(\Delta_{y}\) to all \(y\)'s, \[\tilde{y}^{t}=y^{t-1}+\Delta_{y}, \tag{17}\] and generate \(x^{t}\) conditional on shifted \(\tilde{y}^{t}\), following Equation (15), \[\mathrm{(i)}\ z^{t}\sim p_{\theta_{t-1}}(z|y=\tilde{y}^{t}), \tag{18}\] \[\mathrm{(ii)}\ x^{t}\sim p_{\beta_{t-1}}(x|z=z^{t}). \tag{19}\] \(\Delta_{y}\) can be chosen as a fixed small value. After generating \(x^{t}\), its ground-truth property value \(y^{t}\) can be computed by calling the corresponding engines such as RDKit and AutoDock-GPU. In Equation (18), sampling can be achieved by langevin dynamics as in Equation (14). For the sake of efficiency, we propose to run persistent chain by initializing the Langevin dynamics from the latent vectors generated in the previous iteration Han et al. [2017]. This is also called warm start. Specifically, we have \[z_{0}^{t}=z_{\Gamma}^{t-1},\] \[z_{\tau+1}^{t}=z_{\tau}^{t}+s\nabla_{z}\log p_{\theta_{t-1}}(z| \tilde{y}^{t})+\sqrt{2s}\epsilon_{\tau}, \tag{20}\] for \(\tau=1,...,\Gamma\), where \(\Gamma\) is the length of Markov chain in each iteration. With warm start, we use \(\Gamma=2\) in our experiments. For more efficient optimization via distribution shifting, we further introduce a rank-and-select scheme by maintaining a buffer of top-\(k\) samples of \((z,x,y)\) (where \(z\) is the sampled latent vector, \(x\) is the generated molecule, and \(y\) is the ground-truth property value of \(x\)). Specifically, we maintain a buffer which consists of \(k\) samples of \((z,x,y)\) with the highest values of \(y\) in the past shifting iterations. In each shift iteration, conditioned on the shifted property values, with warm start, initialized from these \(k\) vectors \(z\) in the buffer, a new batch of \(k\) latent vectors \(z\), molecules \(x\), and their ground-truth values \(y\) can be produced in the new shift iteration. We rank all the \(2k\) samples of \((z,x,y)\) (including \(k\) newly generated ones and \(k\) samples in the buffer) and select the top-\(k\) samples of \((z,x,y)\) based on the ground-truth values \(y\). The \(k\) samples of \((z,x,y)\) in the buffer are then updated by those newly selected \(k\) samples of \((z,x,y)\). We call this procedure as _rank-and-select_. This rank-and-select procedure can also be applied to constrained optimization tasks, where we select those sampled molecules that satisfy the given constraints. With the selected samples, we then shift the model distribution by learning from these samples with several learning iterations. The SGDS algorithm is summarized in Algorithm 2. 
```
0: Shift iterations \(T\), initial pretrained parameters \(\theta_{0}=(\alpha_{0},\beta_{0},\gamma_{0})\), initial examples \(\{(x_{i}^{0},y_{i}^{0})\}_{i=1}^{k}\) from the data distribution boundary, shift magnitude \(\Delta_{y}\), \(\mathrm{PropertyComputeEngine}=\mathrm{RDKit}\) or \(\mathrm{AutoDock-GPU}\), \(\mathrm{LearningAlgorithm}=\mathrm{Algorithm}\) 1.
output: \(\{(x_{i}^{T},y_{i}^{T})\}_{i=1}^{k}\).
for \(t=1:T\) do
1. Property shift: For each \(y_{i}^{t-1}\), \(\tilde{y}_{i}^{t}=y_{i}^{t-1}+\Delta_{y}\).
2. Latent sampling with warm start: For each \(\tilde{y}_{i}^{t}\), sample \(z_{i}^{t}\sim p_{\theta_{t-1}}(z|\tilde{y}_{i}^{t})\) using Equation (20).
3. Molecule generation: For each \(z_{i}^{t}\), sample \(x_{i}^{t}\sim p_{\theta_{t-1}}(x|z_{i}^{t})\).
4. Property computation: For each \(x_{i}^{t}\), compute \(y_{i}^{t}=\mathrm{PropertyComputeEngine}(x_{i}^{t})\).
5. Rank-and-select: Update the buffer of top-\(k\) samples \(\{z_{i}^{t},x_{i}^{t},y_{i}^{t}\}_{i=1}^{k}\) by rank-and-select.
6. Distribution shift: \(\theta_{t}=\mathrm{LearningAlgorithm}(\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{k},\theta_{t-1})\).
```
**Algorithm 2** SGDS for single property optimization. ### Multi-objective optimization We next consider the multi-objective optimization problem. Suppose we optimize for a set of properties \(\{y_{j}\}_{j=1}^{m}\); then we learn a property regression model for each property \(y_{j}\), \[p_{\gamma_{j}}(y_{j}|z)=\frac{1}{\sqrt{2\pi\sigma_{j}^{2}}}\exp\left(-\frac{1}{2\sigma_{j}^{2}}(y_{j}-s_{\gamma_{j}}(z))^{2}\right), \tag{21}\] where each \(s_{\gamma_{j}}\) is a small MLP with parameters \(\gamma_{j}\). We assume that, given \(z\), the properties are conditionally independent, so the joint distribution is \[p_{\theta}(x,z,y_{1},...,y_{m})=p_{\alpha}(z)p_{\beta}(x|z)\prod_{j=1}^{m}p_{\gamma_{j}}(y_{j}|z). \tag{22}\] Under our framework, both the learning and the sampling algorithms for the single-objective problem can be straightforwardly extended to the multi-objective setting. In SGDS, we shift the values of the multiple properties simultaneously, and generate molecules conditional on the multiple properties. ## 4 Experiments To demonstrate the effectiveness of our proposed method, SGDS, we compare our model with previous SOTA methods for molecule design, including single-objective optimization (§4.2), multi-objective optimization (§4.3) and constrained optimization (§4.4). In molecule design experiments, we consider both non-biological and biological properties. Finally, we add ablation studies to analyze the effects of different components of SGDS. We also conduct unconditional molecule generation experiments as a sanity check of the model and discuss mode traversal in the latent space at the end of this section. ### Experimental setup **Datasets.** For the molecule property optimization task, we report results on ZINC [11] and MOSES [17], which consist of around \(250\)k and \(2\)M molecules respectively. Encoding systems in molecular studies typically include SMILES [20], SELFIES [13], and graph representations. SMILES and SELFIES linearize a molecular graph into character strings. SMILES has historically faced challenges regarding validity (the percentage of molecules that satisfy the chemical valency rules). Recently, SELFIES was introduced, offering an encoding system where each string inherently corresponds to a valid molecule. We use the SELFIES representation in our work. The non-biological properties (such as penalized logP, QED, etc.)
can be computed using RDKit [12]. Following [14], we use the docking scores from AutoDock-GPU [21] to approximate the binding affinity to two protein targets, human estrogen receptor (ESR1) and human peroxisomal acetyl-CoA acyl transferase 1 (ACAA1). **Training Details.** There are three modules in our method: the molecule generation model \(p_{\beta}(x|z)\), the energy-based prior model \(p_{\alpha}(z)\), and the property regression models \(\{p_{\gamma_{j}}(y|z)\}_{j=1}^{m}\), where \(m\) is the total number of properties we aim to optimize. The generation model \(p_{\beta}(x|z)\) is parameterized by a single-layer LSTM with \(1024\) hidden units, where the dimension of the latent vector \(z\) is \(100\). The energy-based prior model \(p_{\alpha}(z)\) is a \(3\)-layer MLP. Each of the property regression models \(p_{\gamma_{j}}(y|z)\) is a \(3\)-layer MLP. It is worth mentioning that, compared to most previous models, SGDS is characterized by its simplicity: it requires neither inference networks for sampling nor RL-related modules for optimization. In order to get a valid initial distribution \(\theta_{0}\) for SGDS, we first train our model for \(30\) epochs on ZINC. We use the Adam optimizer [14] to train our models, with learning rates of \(10^{-4}\) for the energy-based prior model and \(10^{-3}\) for the molecule generation and property regression models. During SGDS, we use \(30\) shifting iterations for single-objective optimization and \(20\) for multi-objective optimization. For each iteration of distribution shifting, we sample \(10^{4}\) boundary examples, except for binding affinity, where \(2\times 10^{3}\) examples are used to speed up calculation, and then we update the model parameters \(\theta\) for \(10\) iterations using Algorithm 1 with the Adam optimizer and the same learning rates mentioned above. All experiments are conducted on an Nvidia Titan XP GPU. ### Single-Objective Optimization **Penalized logP and QED Maximization.** For non-biological properties, we are interested in Penalized logP and QED, both of which can be calculated by RDKit [11]. Since Penalized logP scores have a positive relationship with the lengths of molecules, we maximize Penalized logP either with or without a maximum length limit. Following [1], the maximum length is set to be the maximum length of molecules in ZINC using SELFIES. From Table 1, we can see that with the length limit, SGDS outperforms previous methods by a large margin. We also achieve the highest QED with and without the length limit. These observations demonstrate the effectiveness of our method. We also illustrate our distribution shifting method in Figure 0(c). One can notice that the distribution of the property is gradually shifted towards the region with higher values, and the final distribution is significantly distant from the initial one. **Biological Property Optimization.** ESR1 and ACAA1 are two human proteins. We aim to design ligands (molecules) that have the maximum binding affinities towards those target proteins. ESR1 is well studied and has many existing binders. However, we do not use any binder-related information in SGDS. Binding affinity is measured by the estimated dissociation constant \(\mathrm{K}_{\mathrm{D}}\), which can be approximated by docking scores from AutoDock-GPU [22]. Large binding affinities correspond to small \(\mathrm{K}_{\mathrm{D}}\); that is, we aim to minimize \(\mathrm{K}_{\mathrm{D}}\). Table 2 shows that our model outperforms previous methods on both ESR1 and ACAA1 binding affinity maximization tasks by large margins.
Comparing to existing methods, much more molecules with high binding affinities can be directly sampled from the last several shifting iterations. See Supplement for more examples. Producing those ligands with high binding affinity plays a vital role in the early stage of drug discovery. ### Multi-Objective Optimization Multi-objective Binding Affinity Optimization.We consider maximizing binding affinity, QED and minimizing synthetic accessibility score (SA) simultaneously. Following Eckmann et al. [2022], we exclude molecules with abnormal behaviors 1 to encourage the joint distribution shifts towards a desirable region in terms of pharmacologic and synthetic properties. Those heuristics can be conveniently added in our _rank-and-select_ step. Table 3 shows our multi-objective results compared to LIMO and GCPN. From the results, we can see that SGDS is able to find the ligands with desired properties while keeping the pharmacologic structures. For ESR1, we have two existing binders on the market, Tamoxifen and Raloxifene. Our designed ligands have similar QED and SA, with very low \(\mathrm{K}_{\mathrm{D}}\). Compared to existing methods, SGDS obtains better results in overall adjustments. For ACAA1, we do not have any existing binders. Compared with prior SOTA methods, our optimized ligands outperform those by a large margin in terms of \(\mathrm{K}_{\mathrm{D}}\). When comparing with single-objective optimization, we find that multi-objective optimization is more complicated, but it may be more useful in real world molecule design. While we still need domain expertise to determine the effectiveness of those ligands discovered by SGDS, we believe the ability \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**LL**} & \multicolumn{3}{c}{**Penalized logP (\(\uparrow\))**} & \multicolumn{3}{c}{**QED (\(\uparrow\))**} \\ & & 1st & 2rd & 3rd & 1st & 2rd & 3rd \\ \hline JT-VAE & ✗ & 5.30 & 4.93 & 4.49 & 0.925 & 0.911 & 0.910 \\ GCPN & ✓ & 7.98 & 7.85 & 7.80 & **0.948** & 0.947 & 0.946 \\ MolDQN & ✓ & 11.18 & 11.8 & **0.948** & 0.943 & 0.943 \\ MARS & ✗ & 45.0 & 44.3 & 43.8 & **0.948** & **0.948** & **0.948** \\ GraphDF & ✗ & 13.7 & 13.2 & 13.2 & **0.948** & **0.948** & **0.948** \\ LIMO & ✓ & 10.5 & 9.69 & 9.60 & 0.947 & 0.946 & 0.945 \\ \hline **SGDS** & ✓ & **26.4** & **25.8** & **25.5** & **0.948** & **0.948** & **0.948** \\ **SGDS** & ✗ & **158.0** & **157.8** & **157.5** & **0.948** & **0.948** & **0.948** \\ \hline \hline \end{tabular} \end{table} Table 1: Non-biological single-objective optimization. Report top-3 highest scores found by each model. LL (Length Limit) denotes whether it has the maximum length limit. Baseline results obtained from [1, 14, 15]. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**ESR1 \(\mathrm{K}_{\mathrm{D}}\) (\(\downarrow\))**} & \multicolumn{3}{c}{**ACAA1 \(\mathrm{K}_{\mathrm{D}}\) (\(\downarrow\))**} \\ & 1st & 2rd & 3rd & 1st & 2rd & 3rd \\ \hline GCPN & 6.4 & 6.6 & 8.5 & 75 & 83 & 84 \\ MolDQN & 373 & 588 & 1062 & 240 & 337 & 608 \\ MARS & 25 & 47 & 51 & 370 & 520 & 590 \\ GraphDF & 17 & 64 & 69 & 163 & 203 & 236 \\ LIMO & 0.72 & 0.89 & 1.4 & 37 & 37 & 41 \\ \hline **SGDS** & **0.03** & **0.03** & **0.04** & **0.11** & **0.11** & **0.12** \\ \hline \hline \end{tabular} \end{table} Table 2: Biological single-objective optimization. Report top-3 lowest \(\mathrm{K}_{\mathrm{D}}\) (in nanomoles/liter) found by each model. 
Baseline results obtained from [1].
of SGDS to generate many high-quality molecules given multiple metrics is extremely useful in the early stage of drug discovery. ### Constrained Optimization To optimize a single objective under constraints, we use the original SGDS steps and, in _rank-and-select_, we only keep the molecules that satisfy the constraints. **Similarity-constrained Penalized logP Maximization.** Following JT-VAE (Jin et al., 2018), this experiment aims to generate molecules with high penalized logP while being similar to the target molecules. Similarity is measured by Tanimoto similarity between Morgan fingerprints with a cutoff value \(\delta\). We compare our results with previous SOTA methods in Table 4. The results show that SGDS tends to obtain better results with weak constraints (i.e. \(\delta=0,0.2\)) with a \(100\%\) success rate since, unlike the optimized property, the constraints are only enforced implicitly. **logP Targeting.** In Table 5, compared to previous methods, SGDS is able to obtain competitive diversity scores with significantly better success rates in both ranges. That is because, after SGDS, our model is shifted towards the region that is supported by molecules satisfying the logP constraints. Due to the flexibility of our EBM prior, SGDS achieves high diversity scores while keeping most of the sampled molecules within the logP range. ### Ablation Studies SGDS outperforms previous methods by a significant margin, especially on binding-affinity-related experiments. Hence, we conduct ablations on the key components of our method on a challenging single-objective ACAA1 maximization experiment. Since SGDS optimizes by shifting the joint distribution rather than by per-molecule optimization (such as Eckmann et al. (2022)), we use summarized statistics (i.e. the mean and standard deviation of the \(100\) lowest \(\mathrm{K_{D}}\) values from uniquely generated molecules) of the last three shifted distributions as our metric for comparing the key components, rather than the top-3 optimized \(\mathrm{K_{D}}\) of §4.2. Ablation studies are discussed as follows. (1) _Without EBM Prior_: for the joint distribution, we replace the learnable EBM prior by a fixed \(\mathcal{N}(0,I_{d})\). (2) _Without Property Regression_ \(p_{\gamma}(y|z)\): we only learn the distribution of molecules as \(p_{\alpha}(z)p_{\beta}(x|z)\). For each iteration of distribution shifting, we only use rank-and-select and update the model parameters based on those molecules with high property values. Molecules can be generated by first sampling \(z\sim p_{\alpha}(z)\) and then \(x\sim p_{\beta}(x|z)\). (3) _Without Gradual Shifting_: rather than iterative distribution shifting as in SGDS, we directly sample \(z\sim p_{\theta}(z|y=y^{\star})\), where \(y^{\star}\) is set to be the minimal value obtained in §4.2. (4) _Without Rank-and-Select_: we skip the rank-and-select step in Algorithm 2. (5) _Without Warm Start_: when sampling \(z\sim p_{\theta}(z|y)\) in the current iteration, we replace the warm-start update in Equation (20) by 20-step Langevin dynamics as in Equation (14) with the same step size. The ablation studies are displayed in Table 6.
It is clear that \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & \(28\)h & \(29\)h & \(30\)h \\ \hline SGDS & \(0.74\pm 0.04\) & \(0.61\pm 0.03\) & \(0.59\pm 0.03\) \\ Without EBM Prior & \(47.8\pm 26.9\) & \(38.8\pm 21.1\) & \(35.1\pm 20.2\) \\ Without Property Regression & \(140\pm 74.8\) & \(114\pm 67.0\) & \(103\pm 56.3\) \\ Without Gradual Shifting & \(211\pm 125\) & \(166\pm 97.9\) & \(137\pm 74.8\) \\ Without Rank-and-Select & \(9.71\pm 5.52\) & \(5.75\pm 3.39\) & \(3.91\pm 2.17\) \\ Without Warm Start & \(6.27\pm 3.92\) & \(3.27\pm 2.99\) & \(2.37\pm 1.35\) \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation Studies. Report the mean and standard deviation of \(100\) uniquely generated molecules with the lowest \(\mathrm{K_{D}}\)(in \(10^{-9}\)mol/L) from last three shifted iterations (i.e. the 28th, 29th, 30th iterations of total 30 iterations). \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{\(\delta\)} & \multicolumn{2}{c}{GraphDF} & \multicolumn{2}{c}{LIMO} & \multicolumn{2}{c}{SGDS} \\ & Improv. & \(\%\) Succ. & Improv. & \(\%\) Succ. & Improv. & \(\%\) Succ. \\ \hline 0.0 & \(5.9\pm 2.0\) & 100 & \(10.1\pm 2.3\) & 100 & \(\mathbf{19.1\pm 2.1}\) & 100 \\ 0.2 & \(5.6\pm 1.7\) & 100 & \(5.8\pm 2.6\) & 99.0 & \(\mathbf{7.4\pm 1.9}\) & 100 \\ 0.4 & \(\mathbf{4.1\pm 1.4}\) & 100 & \(3.6\pm 2.3\) & 93.7 & \(3.8\pm 1.4\) & 97.5 \\ 0.6 & \(1.7\pm 1.2\) & 93.0 & \(1.8\pm 2.0\) & 85.5 & \(\mathbf{2.6\pm 2.0}\) & 95.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Similarity-constrained optimization results. LIMO results obtained from (Eckmann et al., 2022; Luo et al., 2021). \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{\(-2.5\leq\) logP \(\leq-2\)} & \multicolumn{2}{c}{\(5\leq\) logP \(\leq 5.5\)} \\ & Success & Diversity & Success & Diversity \\ \hline ZINC & 0.4\(\%\) & 0.919 & \(1.3\%\) & 0.901 \\ \hline JT-VAE & 11.3\(\%\) & 0.846 & \(7.6\%\) & 0.907 \\ ORGAN & 0 & \(-\) & \(0.2\%\) & **0.909** \\ GCPN & \(85.5\%\) & \(0.392\) & \(54.7\%\) & 0.855 \\ LIMO & \(10.4\%\) & **0.914** & \(-\) & \(-\) \\ \hline **SGDS** & \(\mathbf{86.0\%}\) & 0.874 & \(\mathbf{62.2\%}\) & 0.858 \\ \hline \hline \end{tabular} \end{table} Table 5: logP targeting to a certain range (Eckmann et al., 2022; You et al., 2018; Luo et al., 2021; Xie et al., 2021). all the proposed components contribute significantly to the good performance of our method. ### Unconditional Generation We employ unconditional molecule generation tasks as a sanity check of the latent space EBM. The goal is to model the molecules in the training dataset and generate similar molecules. We evaluate the model based on validity (the percentage of molecules that satisfy the chemical valency rules), uniqueness (the percentage of unique molecules in all generated samples) and novelty (the percentage of generated molecules that are not in the training set) of generated molecules. Note that we are not concerned with optimization of molecular properties in this subsection. Following previous work, we randomly sample \(10\)k molecules for ZINC and \(30\)k for MOSES, comparing the results based on the aforementioned metrics. Generation results are shown in Table 7 for ZINC and Table 8 for MOSES. In Table 7, we present generation results for both SMILES and SELFIES. 
Despite lacking a validity constraint during generation, our model attains \(95.5\%\) validity using SMILES, outperforming other SMILES-based methods and rivaling those with valency checks. This demonstrates our model's effective and implicit capture of valency rules. Furthermore, our model's samples exhibit perfect uniqueness and novelty. Then we randomly sample \(10\)k molecules from the learned latent space EBM and compute their PlogP and QED using RDKit. Their empirical densities are then compared not only with the molecule property densities from the test split but also with the predictions made by the regression model \(p_{\gamma}(y|z)\). As shown in Figure 2, the property densities from both our learned model and predicted values from regression model align closely with those of the data, suggesting that our model effectively captures regularities in the data. In our experiments, we employ short-run MCMC [11] with a Markov chain length of \(K=20\) and step size \(s=0.1\) for all tests. As shown in Figure 3, with increasing Markov chain length, the molecules evolve correspondingly, suggesting that the Markov chain is not trapped in local modes. ## 5 Conclusion and Discussion We propose a deep generative model for the joint distribution of molecules and their properties. It assumes an energy-based prior model in a low-dimensional continuous latent space, and the latent vector can generate the molecule and predict its property value. We then design a sampling with gradual distribution shifting method to shift the learned distribution to a region with high property values. Molecule design can then be achieved by conditional sampling. Our experiments demonstrate that our method outperforms previous SOTA methods on some tasks by significant margins. ## Acknowledgements Y. N. Wu was partially supported by NSF DMS-2015577 and a gift fund from Amazon. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Representation** & **Validity** & **Novelty** & **Uniqueness** \\ \hline JT-VAE & Graph & 1.000\({}^{*}\) & 1.000 & 1.000 \\ GPN & Graph & 1.000\({}^{*}\) & 1.000 & 1.000 \\ GraphNFP & Graph & 0.426 & 1.000 & 0.948 \\ GraphAF & Graph & 1.000\({}^{*}\) & 1.000 & 0.991 \\ GraphDF & Graph & 1.000\({}^{*}\) & 1.000 & 1.000 \\ \hline ChemVAE & SMILES & 0.170 & 0.980 & 0.310 \\ GrammarVAE & SMILES & 0.310 & 1.000 & 0.108 \\ **Ours** & SMILES & 0.955 & 1.000 & 1.000 \\ **Ours** & SELFIES & 1.000 & 1.000 & 1.000 \\ \hline \hline \end{tabular} \end{table} Table 7: Unconditional generation on ZINC. \({}^{*}\) denotes valency check. [11, 12, 13, 14], [15, 16], [17] \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Representation** & **Validity** & **Novelty** & **Uniqueness** \\ \hline JT-VAE & Graph & 1.000\({}^{*}\) & 0.914 & 1.000 \\ GraphAF & Graph & 1.000\({}^{*}\) & 1.000 & 0.991 \\ GraphDF & Graph & 1.000\({}^{*}\) & 1.000 & 1.000 \\ LIMO & SELFIES & 1.000 & 1.000 & 0.976 \\ **Ours** & SELFIES & 1.000 & 1.000 & 1.000 \\ \hline \hline \end{tabular} \end{table} Table 8: Unconditional generation on MOSES. \({}^{*}\) denotes valency check. Results obtained from [11, 12]. Figure 3: Two sequences of sampled molecules with different lengths of Markov chain. Figure 2: Property distributions of PlogP (left) and QED (right).
2308.02414
A State-Space Perspective on Modelling and Inference for Online Skill Rating
We summarise popular methods used for skill rating in competitive sports, along with their inferential paradigms and introduce new approaches based on sequential Monte Carlo and discrete hidden Markov models. We advocate for a state-space model perspective, wherein players' skills are represented as time-varying, and match results serve as observed quantities. We explore the steps to construct the model and the three stages of inference: filtering, smoothing and parameter estimation. We examine the challenges of scaling up to numerous players and matches, highlighting the main approximations and reductions which facilitate statistical and computational efficiency. We additionally compare approaches in a realistic experimental pipeline that can be easily reproduced and extended with our open-source Python package, https://github.com/SamDuffield/abile.
Samuel Duffield, Samuel Power, Lorenzo Rimella
2023-08-04T16:03:50Z
http://arxiv.org/abs/2308.02414v3
# A State-Space Perspective on Modelling and Inference for Online Skill Rating ###### Abstract This paper offers a comprehensive review of the main methodologies used for skill rating in competitive sports. We advocate for a state-space model perspective, wherein players' skills are represented as time-varying, and match results serve as the sole observed quantities. The state-space model perspective facilitates the decoupling of modeling and inference, enabling a more focused approach highlighting model assumptions, while also fostering the development of general-purpose inference tools. We explore the essential steps involved in constructing a state-space model for skill rating before turning to a discussion on the three stages of inference: filtering, smoothing and parameter estimation. Throughout, we examine the computational challenges of scaling up to high-dimensional scenarios involving numerous players and matches, highlighting approximations and reductions used to address these challenges effectively. We provide concise summaries of popular methods documented in the literature, along with their inferential paradigms and introduce new approaches to skill rating inference based on sequential Monte Carlo and finite state-spaces. We close with numerical experiments demonstrating a practical workflow on real data across different sports. Introduction In the quantitative analysis of competitive sports, a fundamental task is to estimate the skills of the different agents ('players') involved in a given competition based on the outcome of pairwise comparisons ('matches') between said players, often in an online setting. Skill estimation facilitates the prediction of various relevant outcomes of subsequent matches, which can then be applied towards high-level decision-making for the competition, including player seeding, fair team matching, and more. There are several established approaches to the task of skill estimation, including among others the Bradley-Terry model (Bradley and Terry, 1952), the Elo rating system (Elo, 1978), the Glicko rating system (Glickman, 1999), and TrueSkill (Herbrich et al., 2006) each with various levels of complexity and varying degrees of statistical motivation. Skill rating is of paramount importance in the world of competitive sports as it serves as a foundational tool for assessing and comparing the abilities of players and how they vary over time. By accurately quantifying skill levels, skill rating systems enable fair and balanced competition, inform strategic decision-making, and enhance the overall sporting level. Popular skill rating systems applications include chess (FIDE, 2023), online gaming (Herbrich et al., 2006), education (Pelanek, 2016), tennis (Kovalchik, 2016) and team-based sports like football (Hvattum and Arntzen, 2010), basketball (Strumbelj and Vracar, 2012) and many more (Stefani, 2011). Skill ratings not only facilitate player ranking, but also serve as a basis for dynamic matchmaking and player seeding, ensuring that competitive matches are engaging and well-matched. Moreover, in professional sports, skill rating plays a pivotal role in talent scouting and gauging overall performance, providing data-driven insights for coaches, analysts, bettors and fanatics. As technology and statistical methodologies continue to advance, skill rating systems are expected to evolve further, benefiting an ever-widening spectrum of sports and competitive domains. 
In this work, we argue for an explicitly model-based, statistical approach to this task, centred around state-space models. We model the skills of the players in the competition as time-varying, and we model the outcomes of matches between players as indirect observations of these skills. Such models are often left implicit in the presentation of popular approaches to the skill estimation task. Our viewpoint is that by emphasising the role of the underlying models, users can more easily incorporate additional structure into their estimation routines. Moreover, this emphasis encourages the decoupling of the algorithmic aspects of estimation from the modeling aspects, which again affords additional flexibility and robustness. The paper will proceed as follows. In Section 2, we describe a high-level formalism for state-space formulations of the skill estimation problem. In Section 3, we outline the possible inference objectives for this problem and how they interact, as well as the general structure of tractable inference, induced approximations and relevant complexity considerations. In Section 4, we review a variety of concrete procedures for the skill estimation problem, combining probabilistic models and inference algorithms, unifying existing approaches and introducing new ones. In Section 5, we perform some numerical experiments, demonstrating and contrasting some of the varied approaches on real data. Finally, in Section 6, we conclude with a discussion on general recommendations and extensions. The paper is accompanied by a Python package for reproducibility and extension of discussed techniques, found at github.com/SamDuffield/abile. ## 2 Model Specification Throughout, we will use various terminology which is specific to the skill estimation problem. We will use 'sport' to denote a sport in the abstract (e.g. football), and will use 'match' to denote a specific instance of this sport being played (e.g. a match between two football teams). All matches will be contested between two competing 'players' (e.g. a football team is a 'player'), one of whom is designated as the 'home' player, and the other as the 'away' player (even for sports in which there is no notion of 'home ground' or 'home advantage'). A 'competition' refers to a specific sport, a collection of players of that sport, a set of matches between these players, and a set of results of these matches (e.g. the results of the English Premier League). Each match is also associated with a time \(t\in[0,\infty)\) denoting when the match was played, which we call a 'matchtime'. Notationally, given an integer \(N\in\mathbf{N}\) we use \([N]\) for the set of integers from \(1\) to \(N\). The total number of players in the competition is denoted \(N\in\mathbf{N}\), and \(K\in\mathbf{N}\) denotes the total number of matches played in the competition. We order the matchtimes as \(t_{1}\leqslant t_{2}\leqslant\dots\leqslant t_{K}\) (noting that matches may take place contemporaneously). Matches are then indexed in correspondence with this ordering, i.e. match \(1\) took place at time \(t=t_{1}\), etc.; we also adopt the convention that \(t_{0}=0\). We explicitly model the skills of all players over the full time window \([0,T]\), for some terminal time \(T\geqslant t_{K}\), even if individual players may enter the competition at a later time. We now discuss modelling choices. Our key interest is to infer the skills of players in a fixed competition, given access to the outcomes of matches which are played in that competition.
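To fix ideas, the observed part of a competition can be stored as a handful of parallel arrays indexed by match; the toy layout below is purely illustrative (using zero-based player indices and hypothetical numbers) and is not part of the model itself.

```
import numpy as np

# A toy competition with N = 3 players and K = 4 matches, sorted by matchtime.
match_times = np.array([0.5, 1.0, 1.0, 2.5])   # t_1 <= t_2 <= ... <= t_K
home_player = np.array([0, 1, 2, 0])           # h(k), using players 0, ..., N-1
away_player = np.array([1, 2, 0, 2])           # a(k)
outcomes    = np.array([0, 2, 1, 0])           # e.g. 0 = home win, 1 = draw, 2 = away win
```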
We model player skills as taking values in a totally-ordered set \(\mathcal{X}\), i.e. skills are treated as ordinal, with the convention that higher skill ratings are indicative of more favourable match outcomes for a player. We model match outcomes as taking values in a finite, discrete set \(\mathcal{Y}\), e.g. \(\mathcal{Y}=\{\text{draw},\text{home win},\text{away win}\}\), although the framework is general and can be extended to more complex scenarios. We also assume direct observation of matchtimes, who the home and away players are in a given match, and the outcome of the match. Our model will consist of the players' skills, and the match outcomes; we detail here the construction of the full joint likelihood. Players' skills are allowed to vary with time, and we write \(x_{t}^{i}\in\mathcal{X}\) for the skill value of the \(i\)th player at time \(t\); in an abuse of notation, we will also use \(x_{k}^{i}\) to denote the skill value of the \(i\)th player at observation time \(t_{k}\). We will, where possible, index skills with letters \(t\) or \(k\) so that the continuous/discrete nature of the time index is clear from context. For discrete time indices, we also make use of the notation \(x_{0:k}\) or \(y_{1:k}\) to denote the joint vector of skills or observations at discrete times. We assume that the initial skills of the players are all drawn mutually independently of one another, i.e. for \(i\in[N]\), \(x_{0}^{i}\sim m_{0}^{i}\) independently for some initial distribution \(m_{0}^{i}\). We also assume that the evolutions of the players' skills over time are independent of one another. We further assume that each of these evolutions is Markovian, i.e. for each \(i\in[N]\), there is a semigroup of \(\mathcal{X}\)-valued Markov transition kernels \(\left(M_{t,t^{\prime}}^{i}:t\leqslant t^{\prime}\right)\) such that for \(t\leqslant t^{\prime}\), \[x_{t^{\prime}}^{i}\mid x_{t}^{i}\sim M_{t,t^{\prime}}^{i}\left(x_{t}^{i},\cdot\right),\] where \(t,t^{\prime}\) represent matchtimes. Again, we will often abuse notation to write \(M_{k,k^{\prime}}^{i}=M_{t_{k},t_{k^{\prime}}}^{i}\) with \(t_{k}\leqslant t_{k^{\prime}}\) and the nature of the index clear from context. For the \(k\)th match, we write \(h(k),a(k)\in[N]\) for the indices of the home and away players in that match, and write \(y_{k}\in\mathcal{Y}\) for the outcome of the match. We assume that given the skills of players \(h(k)\) and \(a(k)\) at time \(t_{k}\), the match's outcome \(y_{k}\) is conditionally independent of all other player skills and match outcomes (depicted in Fig. 1), and is drawn according to some probability distribution \(G_{k}\left(y_{k}\mid x_{k}^{h(k)},x_{k}^{a(k)}\right)\). Assembling these various assumptions, the joint law of all players' skills at all matchtimes, and of all match results, is given by \[\begin{split}\mathbf{P}\left(x_{0:K}^{[N]},y_{1:K}\right)&=\prod_{i\in[N]}\left\{m_{0}^{i}\left(x_{0}^{i}\right)\cdot\prod_{k=1}^{K}M_{k-1,k}^{i}\left(x_{k-1}^{i},x_{k}^{i}\right)\right\}\\ &\quad\cdot\prod_{k=1}^{K}G_{k}\left(y_{k}\mid x_{k}^{h(k)},x_{k}^{a(k)}\right).\end{split} \tag{1}\]
Figure 1: On the left, the conditional independence structure of an SSM. On the right, the conditional independence structure of an fSSM (1), with \(N=4\) and a pairwise observation model.
Some simplifications which we will make in all subsequent examples are that i) we will model the initial laws of all players' skills as being identical across players (i.e. \(m_{0}^{i}\) will not depend on \(i\)), ii) the dynamics of all players' skills will also be identical across players, i.e. for \(t<t^{\prime}\), we can write \(M_{t,t^{\prime}}^{i}\left(x,x^{\prime}\right)=M_{t,t^{\prime}}\left(x,x^{\prime}\right)\), and iii) the observation model \(G_{k}\) will not depend on \(k\) (although we will still use the notation \(G_{k}\) to emphasise dependence on \(y_{k}\)). These simplifications are made for ease of presentation, and deviations from each of these simplifications are typically straightforward to accommodate in the algorithms which we present. Such deviations are often relevant in practical scenarios, e.g. representing the off-season in seasonal sports, different match types (e.g. 3-set and 5-set tennis), and so on. There are also a number of model features which we insist on, namely i) we insist on modeling player skills as evolving in continuous time, and ii) we insist on the possibility of observations which occur at irregularly-spaced intervals in time. We do this because these settings are of practical relevance, and because they pose particular computational challenges which deserve proper attention. Terminologically, we use the term 'state-space model' (SSM) to denote a model, such as (1), in which there is an unobserved state \(x\), taking values in a general space, which evolves in time according to a Markovian evolution, and is observed indirectly. We use the term 'hidden Markov model' (HMM) to refer to an SSM where the unobserved state \(x\) takes values on a finite state space. The term factorial state-space model (fSSM, or indeed fHMM) refers to a state-space model in which the state \(x\) is naturally partitioned into a collection of sub-states, each of which evolves independently of the others (Ghahramani and Jordan, 1995), see Fig. 1. ## 3 Inference In this section, we discuss the problem of inference in the skill rating setting: which features of the problem one might seek to understand, and how one might go about representing these features. ### Inference Objectives Broadly speaking, inference in general state-space models tends to involve the solution of (some subset of) three related tasks, presented in (roughly) increasing order of complexity: 1. Filtering: inferring the current latent states, given the observations thus far, i.e. \[\text{Filter}_{k}\left(x_{k}\right):=\mathbf{P}\left(x_{k}\mid y_{1:k}\right),\] which is closely related to prediction, i.e. for \(t_{k}<t_{k+1}\) \[\text{Predict}_{k+1|k}\left(x_{k+1}\right)=\mathbf{P}\left(x_{k+1}\mid y_{1:k}\right).\] 2. Smoothing: inferring past latent skills, given the observations thus far, i.e. for \(t_{k+1}\leqslant t_{K}\) \[\mathrm{Smooth}_{k|K}\left(x_{k}\right)=\mathbf{P}\left(x_{k}\mid y_{1:K}\right),\] \[\mathrm{Smooth}_{k,k+1|K}\left(x_{k},x_{k+1}\right)=\mathbf{P}\left(x_{k},x_{k+1}\mid y_{1:K}\right),\] \[\mathrm{Smooth}_{0:K|K}\left(x_{0:K}\right)=\mathbf{P}\left(x_{0:K}\mid y_{1:K}\right).\] 3. Parameter Estimation: when the dynamical and/or observational structure of the model depends on unknown parameters \(\theta\), one can calibrate these models based on the observed data by e.g. maximum likelihood estimation: \[\operatorname*{arg\,max}_{\theta\in\Theta}\mathbf{P}\left(y_{1:K}\mid\theta\right),\] where \(\mathbf{P}(y_{1:K}\mid\theta)=\int\mathbf{P}\left(x_{0:K}^{[N]},y_{1:K}\mid\theta\right)\,\mathrm{d}x_{0:K}^{[N]}\).
Note that for filtering and smoothing, \(\theta\) is treated as constant, hence its omission from the preceding descriptions. Depending on the application in question, each of these tasks can be of more or less interest. In our setting, because we are interested in using our model to make real-time decisions, filtering is directly relevant towards informing those decisions. However, this does not mean that we can immediately ignore the other two tasks. Firstly, without an accurate estimate of the parameters \(\theta\) which govern the dynamical and observation models for our process of interest, our estimate of the filtering distribution can be badly misspecified. As a result, without incorporating some elements of parameter estimation, inference of the latent states can be quite poor. Moreover, obtaining good estimates of these model parameters tends to require developing an understanding of the full trajectory of the latent process; indeed, many algorithms for parameter estimation in SSMs require some form of access to the smoothing distribution, which in turn requires the filtering distributions. As such, the three tasks are deeply interconnected. In Section 4, we will explicitly consider computational methods for addressing these inference objectives. For the problems considered in this paper, it is rare that the true filtering or smoothing distributions can even be represented, and as such, approximations will be adopted. It bears mentioning that the 'richness' of the chosen approximation will play an important role in determining how well the original inference goals are achieved. This will be treated further in Section 4. ### Techniques for Filtering We first present some generalities on algorithmic approaches to the filtering and smoothing problem, to contextualise the forthcoming developments. For a general SSM on \(\mathcal{X}\) with transitions \(M_{t,t^{\prime}}\) and observations \(G_{k}\), the following abstract filtering recursions hold \[\mathrm{Predict}_{k+1|k}\left(x_{k+1}\right) =\int\mathrm{Filter}_{k}\left(x_{k}\right)\cdot M_{k,k+1}\left(x_{ k},x_{k+1}\right)\,\mathrm{d}x_{k},\] \[\mathrm{Filter}_{k+1}\left(x_{k+1}\right) \propto\mathrm{Predict}_{k+1|k}\left(x_{k+1}\right)\cdot G_{k+1} \left(x_{k+1}\right),\] or more suggestively, \[\text{Predict}_{k+1|k} =\text{Propagate}\left(\text{Filter}_{k};M_{k,k+1}\right),\] \[\text{Filter}_{k+1} =\text{Assimilate}\left(\text{Predict}_{k+1|k};G_{k+1}\right),\] where we note that the operators Propagate and Assimilate act on probability measures. Note that the Propagate operator can also readily be applied to times which are not associated with matches; this can be useful for forecasting purposes. The same recursions also allow for computation of the likelihood of all observations so far, using that \(\mathbf{P}\left(y_{1:k+1}\right)=\mathbf{P}\left(y_{1:k}\right)\cdot\mathbf{P }\left(y_{k+1}\mid y_{1:k}\right)\) when \(t_{k}\leqslant t_{k+1}\); here, the last term carries the interpretation of a predictive likelihood or in the skill-rating setting, match outcome predictions. Various algorithms for approximate filtering have been derived by approximating each of these updates in turn. The computational cost varies depending on the algorithm being used; we will discuss these on a case-by-case basis. 
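As a deliberately simple illustration of these two operators, suppose that a skill takes one of finitely many ordered values, so that each distribution is just a probability vector; the sketch below (with hypothetical names and toy numbers) then performs a single Propagate/Assimilate cycle, and also returns the normalising constant, i.e. the predictive probability of the new observation.

```
import numpy as np

def propagate(filter_probs, transition_matrix):
    """Predict_{k+1|k}: push Filter_k through a (row-stochastic) Markov transition."""
    return filter_probs @ transition_matrix

def assimilate(predict_probs, likelihood):
    """Filter_{k+1} is proportional to Predict_{k+1|k} * G_{k+1}; the normalising
    constant is the predictive probability of the new observation."""
    unnormalised = predict_probs * likelihood
    evidence = unnormalised.sum()
    return unnormalised / evidence, evidence

# Toy example: a single skill on three ordered levels.
filter_k = np.array([0.2, 0.5, 0.3])               # Filter_k as a probability vector
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])                 # M_{k,k+1}
G = np.array([0.3, 0.5, 0.8])                      # G_{k+1}(y_{k+1} | x) at each level
predict_kp1 = propagate(filter_k, M)
filter_kp1, predictive_prob = assimilate(predict_kp1, G)
```

The parametric approaches discussed below replace these probability vectors by, for example, Gaussian mean and variance parameters, but the structure of the recursion is unchanged.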
### Techniques for Smoothing Similarly to filtering, there are 'backward' recursions which characterise the smoothing laws: if no observations occur in the interval \(\left(t_{k},t_{k+1}\right)\), then \[\text{Smooth}_{k,k+1|K}\left(x_{k},x_{k+1}\right) =\frac{\text{Filter}_{k}\left(x_{k}\right)\cdot M_{k,k+1}\left(x_{ k},x_{k+1}\right)\cdot\text{Smooth}_{k+1|K}\left(x_{k+1}\right)}{\text{Predict}_{k+1|k} (x_{k+1})},\] \[\text{Smooth}_{k|K}\left(x_{k}\right) =\int\text{Smooth}_{k,k+1|K}\left(x_{k},x_{k+1}\right)\,\text{d} x_{k+1},\] or \[\text{Smooth}_{k,k+1|K} =\text{Bridge}\left(\text{Filter}_{k},\text{Smooth}_{k+1|K};M_{k,k+ 1}\right)\] \[\text{Smooth}_{k|K} =\text{Marginalise}\left(\text{Smooth}_{k,k+1|K};k\right),\] where, as for filtering, the operators act on probability measures, and noting that \(M_{k,k+1}\) is interpreted as a density, rather than as a Markov kernel. Likewise, several algorithms for approximate smoothing are built upon approximation of these recursions. Smoothing between observation times is possible using the same approach, i.e. for \(t_{k}\leqslant t_{k^{\prime}}\leqslant t_{k+1}\) where \(t_{k^{\prime}}\) is not associated with an observation, we have \[\text{Smooth}_{k^{\prime},k+1|K}=\text{Bridge}\left(\text{Predict}_{k^{\prime }|k},\text{Smooth}_{k+1|K};M_{k^{\prime},k+1}\right).\] In general, exact implementation of any of \(\{\text{Propagate},\text{Assimilate},\text{Bridge},\text{Marginalise}\}\) tends to only be possible in models with substantial conjugacy properties, due to the general difficulty of integration and representation of probability measures of even moderate complexity. If one insists on exact implementation of all of these operations, then one tends to be restricted to working with linear-Gaussian SSMs or HMMs of moderate size. As such, practical algorithms must often make approximations which restore some level of tractability to the model. Observe also that, given the filtering distributions, the smoothing recursions require no further calls to the likelihood term \(G_{k}\), which can be noteworthy in the case that the likelihood is computationally expensive or otherwise complex. ### Standing Approximations and Reductions We are interested in developing procedures for filtering, smoothing, and parameter estimation whose complexity scales well with respect to the parameters of interest for skill rating. In particular, we want to be able to process competitions involving i) many players, i.e. \(N\rightarrow\infty\), and ii) many matches, i.e. \(K\rightarrow\infty\). We will therefore focus on procedures for which the computational cost scales at most _linearly_ in each of \(N\) and \(K\). #### 3.4.1 Decoupling Approximation In seeking procedures with stable behaviour as the number of players grows, there is one approximation which seems to be near-universal in the setting of pairwise comparisons, namely that the filtering (and smoothing) distributions over all of the players' skills are well-approximated by a decoupled representation. Using superscripts to index player-specific distributions (e.g. 
Filter\({}_{k}^{i}\) denoting the filtering distribution for the skill of player \(i\) at time \(t_{k}\), and so on), this corresponds to the approximation \[\text{Filter}_{k} =\mathbf{P}\left(x_{k}^{[N]}\mid y_{1:k}\right)\approx\prod_{i \in[N]}\mathbf{P}\left(x_{k}^{i}\mid y_{1:k}\right)=\prod_{i\in[N]}\text{Filter }_{k}^{i}\] \[\text{Smooth}_{\circ|K} =\mathbf{P}\left(x_{\circ}^{[N]}\mid y_{1:K}\right)\approx\prod_{ i\in[N]}\mathbf{P}\left(x_{\circ}^{i}\mid y_{1:K}\right)=\prod_{i\in[N]}\text{Smooth}_{ \circ|K}^{i},\] where we have \(\circ=\left\{k\right\},\left\{k,k+1\right\}\) or \(\left\{0:K\right\}\) for the various marginal and joint smoothing objectives. This approximation is largely motivated by the practical difficulty of representing large systems of correlated random variables, and is further supported by the standing assumption that players' skills evolve independently of one another a priori. The quality of such approximations depends heavily on the ability to control the strength of interactions between players, that is, the sensitivity of the conditional law of any one player's skill to perturbations in any single other player's skill. If such control is possible (which one expects for large-scale, high-frequency competitions, with weakly-informative match outcomes), then one can rigorously establish that the decoupling approximation has good fidelity to the true filtering law (Rebeschini and van Handel, 2015; Rimella and Whiteley, 2022). Whether richer approximations of the filtering and smoothing laws are practically feasible and worthwhile remains to be seen. We thus focus hereafter on inferential paradigms which adopt this decoupling approximation. #### 3.4.2 Match Sparsity Our general formulation of the joint model actually includes more information than is strictly necessary. Due to the conditional independence structure of the model, one sees that instead of monitoring the skills of all players during all matches, it is sufficient to keep track of only the skills of players at times when they are playing in matches. Reformulating a joint likelihood which reflects this simplicity requires the introduction of some additional notation, but dramatically reduces the cost of working with the model, and is computationally crucial. To this end, for \(i\in[N]\), write \(L^{i}\subseteq\{0,\ldots,K\}\) for the ordered indices of matches in which player \(i\) has played, and for \(\ell^{i}\in L^{i}\), write \(\ell^{i,-}\) for the element of \(L^{i}\) immediately before \(\ell^{i}\), i.e. \(\ell^{i,-}=\sup\{\ell\in L^{i}:\ell<\ell^{i}\}\). It then holds that \[\mathbf{P}\left(\left\{x_{\ell^{i}}^{i}:\ell^{i}\in L^{i},i\in[N] \right\},y_{1:K}\right)= \prod_{i\in[N]}\left\{m_{0}\left(x_{0}^{i}\right)\cdot\prod_{ \ell^{i}\in L^{i}}M_{\ell^{i,-},\ell^{i}}\left(x_{\ell^{i,-}}^{i},x_{\ell^{i} }^{i}\right)\right\}\] \[\cdot\prod_{k=1}^{K}G_{k}\left(y_{k}\mid x_{k}^{h(k)},x_{k}^{a(k) }\right). \tag{2}\] Some careful bookkeeping reveal that while our original likelihood contained \(\mathcal{O}(N\cdot K)\) terms, this new representation involves only \(\mathcal{O}(N+K)\) terms. Given that the competition consists of \(N\) players and \(K\) matches, we see that this representation is essentially minimal. #### 3.4.3 Pairwise Updates A consequence of these two features is that when carrying out filtering and smoothing computations, only sparse access to the skills of players is required. 
In particular, consider assimilating the result of the \(k\)th match into our beliefs about the players' skills. Since this match involves the players \(h(k)\) and \(a(k)\), which we refer to as \(h\) and \(a\), we have \(k\in L^{h}\cap L^{a}\) and the filtering update requires only the following steps: 1. Compute the last matchtime indices on which the two players played, \(k^{h,-}\in L^{h}\) and \(k^{a,-}\in L^{a}\). 2. Retrieve the filtering distributions of the two players' skills at these matchtimes, i.e. \(\operatorname{Filter}_{k^{h,-}}^{h}\) and \(\operatorname{Filter}_{k^{a,-}}^{a}\) respectively. 3. Compute the predictive distributions of the two players' skills just prior to the current matchtime by propagating them through the dynamics, i.e. compute \(\text{Predict}_{k|k^{h,-}}^{h}=\text{Propagate}\left(\text{Filter}_{k^{h,-}}^{h};M_{k^{h,-},k}\right)\) for the home player and \(\text{Predict}_{k|k^{a,-}}^{a}=\text{Propagate}\left(\text{Filter}_{k^{a,-}}^{a};M_{k^{a,-},k}\right)\) for the away player. 4. Compute the current filtering distributions of the two players' skills immediately following the current matchtime by assimilating the new result \[\text{Filter}_{k}^{h,a}=\text{Assimilate}\left(\left(\begin{array}{c}\text{Predict}_{k|k^{h,-}}^{h}\\ \text{Predict}_{k|k^{a,-}}^{a}\end{array}\right);G_{k}\right),\] where \(\text{Assimilate}\) denotes a generic Bayesian procedure for converting predictive distributions for a pair of players and an observation likelihood into a joint filtering distribution. 5. Marginalise to regain the factorial approximation \[\left(\begin{array}{c}\text{Filter}_{k}^{h}\\ \text{Filter}_{k}^{a}\end{array}\right)=\left(\begin{array}{c}\text{Marginalise}\left(\text{Filter}_{k}^{h,a};h\right)\\ \text{Marginalise}\left(\text{Filter}_{k}^{h,a};a\right)\end{array}\right),\] where \(\text{Marginalise}\) is used for a marginalisation in space and not in time as in Section 3.3. Note briefly that if, at a specific time, multiple matches are being played between disjoint sets of players, then this structure implies that the results of all of these matches can be assimilated independently and in parallel; for high-frequency competitions in which many matches are played simultaneously, this is an important simplification. A key takeaway from this observation is then that the cost of assimilating the result of a single match is independent of both \(N\) and \(K\). Similar benefits are also available for smoothing updates. Indeed, the benefits of sparsity are even more dramatic in this case, since the smoothing recursions (Section 3.3) decouple _entirely_ across players, implying that all smoothing distributions can be computed independently and in parallel. Given access to sufficient compute parallelism, this implies the possibility of computing these smoothing laws in time \(\mathcal{O}\left(\max\left\{\mathbf{card}(L^{i}):i\in[N]\right\}\right)\), which is potentially much smaller than the serial complexity of \(\mathcal{O}\left(K\right)\). For example, if each player is involved in the same number of matches (\(\sim K/N\)), then the real-time complexity is reduced by a factor of \(N\). For ease of exposition, in what follows, we will present all considered methods with notation corresponding to the \(\mathcal{O}(N\cdot K)\) posterior (1), rather than the memory-efficient \(\mathcal{O}(N+K)\) posterior (2) which should be applied in practice, e.g. we will describe the Propagate step as taking us from time \(k-1\) to time \(k\), and so on. ### Techniques for Parameter Estimation Recall that the goal of parameter estimation is to infer the unknown static (i.e. non-time-varying) parameters \(\theta\) of the state-space model.
A complete discussion of parameter estimation in state-space models would be time-consuming. In this work, we simply focus on estimation following the principle of maximum likelihood, due to its generality and compatibility with typical approximation schemes. Careful discussion of other approaches (e.g. composite likelihoods, Bayesian estimation, etc.; see e.g. Varin and Vidoni (2008); Varin et al. (2011); Andrieu et al. (2010)) would be interesting but is omitted for reasons of space. Additionally, we limit our attention to offline parameter estimation, where we have access to historical match outcomes and look to find static parameters \(\theta\) which model them well, with the goal of subsequently using these parameters in the online setting (i.e. filtering). Techniques for online parameter estimation (with a focus on particle methods) are reviewed in Kantas et al. (2015). Given the general intractability of the filtering and smoothing distributions, it should not be particularly surprising that the likelihood function of a state-space model is also typically unavailable. Fortunately, many approximation schemes for filtering and smoothing also enable the computation of an approximate or surrogate likelihood, which can then be used towards parameter estimation. That is, in addition to approximating the filtering distribution (for example), an algorithm may also provide access to a tractable approximation \(\hat{\mathbf{P}}\left(y_{1:K}\mid\theta\right)\), which can then be optimised directly by numerical methods. In cases where a 'direct' approximate likelihood is not available, a popular option is to adopt an expectation-maximisation (EM) strategy for maximising the likelihood \(\mathbf{P}\left(y_{1:K}\mid\theta\right)\); see e.g. Neal and Hinton (1998), Chapter 14 in Chopin and Papaspiliopoulos (2020) for overview. EM is an iterative approach, with each iteration consisting of two steps, known as the E-step and M-step respectively. The usual expectation step ('E-step') consists of taking the current parameter estimate \(\hat{\theta}\), using it to form the smoothing distribution \(\mathbf{P}\left(x_{0:K}^{[N]}\mid y_{1:K},\hat{\theta}\right)\), and constructing the surrogate objective function \[\mathbf{Q}\left(\theta\mid\hat{\theta}\right):=\int\mathbf{P}\left(x_{0:K}^{[ N]}\mid y_{1:K},\hat{\theta}\right)\cdot\log\mathbf{P}\left(x_{0:K}^{[N]},y_{1:K} \mid\theta\right)\,\mathrm{d}x_{0:K}^{[N]}, \tag{3}\] which is a lower bound on \(\log\mathbf{P}\left(y_{1:K}\mid\theta\right)\). The maximization step ('M-step') then consists of deriving a new estimate by maximising this surrogate objective. When these steps are carried out exactly, each iteration of the EM algorithm is then guaranteed to ascend the likelihood \(\mathbf{P}(y_{1:K}\mid\theta)\), and will thus typically yield a local maximiser when iterated until convergence. Depending on the complexity of the model at hand, it can be the case that either the E-step, the M-step, or both cannot be carried out exactly. For the models considered in this work, the intractability of the smoothing distribution means that the E-step cannot be carried out exactly. As such, we will simply approximate the E-step by treating our approximate smoothing distribution as exact (noting that this compromises EM's usual guarantee of ascending the likelihood function). 
Similarly, when the M-step cannot be carried out in closed form, one can often approximate the maximiser of \(\theta\mapsto\mathbf{Q}\left(\theta\mid\hat{\theta}\right)\) through the use of numerical optimisation schemes. For a broad perspective on the EM algorithm and its approximations, we recommend Neal and Hinton (1998). In some cases, constructing the smoothing distribution in a way that provides a tractable M-step may be expensive, and it is cheaper to directly form the log-likelihood gradient. By Fisher's identity (see e.g. Del Moral et al. (2010)), this gradient takes the following form: \[\nabla_{\theta}\log\mathbf{P}\left(y_{1:K}\mid\theta\right)=\int\mathbf{P}\left(x_{0:K}^{[N]}\mid y_{1:K},\theta\right)\cdot\nabla_{\theta}\log\mathbf{P}\left(x_{0:K}^{[N]},y_{1:K}\mid\theta\right)\,\mathrm{d}x_{0:K}^{[N]}. \tag{4}\] As such, when the smoothing distribution is directly available, this offers a route to implementing a gradient method for optimising \(\log\mathbf{P}\left(y_{1:K}\mid\theta\right)\). When only approximate smoothing distributions are available, one obtains an inexact gradient method, which may nevertheless be practically useful. It is important to note that the logarithmic nature of the expectations in (3)-(4) means that for many models, the components of \(\theta\) can be treated independently. For example, when the parameters which influence \(m_{0}\), \(M_{t,t^{\prime}}\) and \(G_{k}\) are disjoint, the intermediate objective \(\mathbf{Q}\left(\theta\mid\hat{\theta}\right)\) will be separable with respect to this structure, which can simplify implementations. Moreover, depending on the tractability of solving each sub-problem, one can seamlessly blend analytic maximisation of some parameters with gradient steps for others, as appropriate. ## 4 Methods In this section, we turn to some concrete models for two-player competition as well as natural inference procedures. Here we focus on the key components of the approaches, highlighting the probabilistic model used (for model-based methods) and the inference paradigm. Detailed recursions for each approach can be found in the supplementary material. The presented methods, alongside some notable features, are summarised in Table 1. Potentially the simplest model for latent skills is a (static) Bradley-Terry model\({}^{1}\) (Bradley and Terry, 1952), wherein skills take values in \(\mathcal{X}=\mathbf{R}\) and (binary) match outcomes are modeled with the likelihood Footnote 1: Note that Bradley-Terry models are often (and indeed originally) described on an exponential scale, i.e. in terms of \(z=\exp(x)\in\mathbf{R}_{+}\). \[G^{\mathrm{BT}}\left(y\mid x^{h},x^{a}\right)=\begin{cases}\mathbf{sigmoid}\left(x^{h}-x^{a}\right)&\text{if }y=\mathrm{h},\\ 1-\mathbf{sigmoid}\left(x^{h}-x^{a}\right)&\text{if }y=\mathrm{a},\end{cases}\] where \(\mathbf{sigmoid}:\mathbf{R}\rightarrow[0,1]\) is an increasing function which maps real values to normalised probabilities, such that \(\mathbf{sigmoid}(x)+\mathbf{sigmoid}(-x)=1\). The full likelihood thus takes the form \[\mathbf{P}\left(x^{[N]},y_{1:K}\right)=\prod_{i\in[N]}m_{0}^{i}\left(x^{i}\right)\cdot\prod_{k=1}^{K}G^{\mathrm{BT}}\left(y_{k}\mid x^{h(k)},x^{a(k)}\right),\] with prior \(m_{0}^{i}\). In practice, it is relatively common to neglect the prior and estimate the players' skills through pure maximum likelihood; see e.g. Kiraly and Qian (2017). In contrast to the other models considered in this work, this Bradley-Terry model treats player skills as static in time.
As such, as the 'career' of each player progresses, our uncertainty over their skill level generally collapses to a point mass. We take the viewpoint that in many practical scenarios, this phenomenon is unrealistic, and so we advocate for models which explicitly model skills as varying dynamically in time. This leads naturally to the state-space model framework. ### Elo The Elo rating system (Elo, 1978) is a simple and transparent system for updating a database of player skill ratings as they partake in two-player matches. Elo implicitly represents players' skills as real-valued, i.e. \(\mathcal{X}=\mathbf{R}\), though typical presentations of Elo tend to eschew an explicit model. Skill estimates are updated incrementally in time according to the rule (for binary match outcomes) \[\begin{pmatrix}x_{k}^{h}\\ x_{k}^{a}\end{pmatrix}=\begin{pmatrix}x_{k-1}^{h}+K\cdot\left(\mathbb{I}\left[y_{k}=\mathrm{h}\right]-\mathbf{sigmoid}\left(\frac{x_{k-1}^{h}-x_{k-1}^{a}}{s}\right)\right)\\ x_{k-1}^{a}+K\cdot\left(\mathbb{I}\left[y_{k}=\mathrm{a}\right]-\mathbf{sigmoid}\left(\frac{x_{k-1}^{a}-x_{k-1}^{h}}{s}\right)\right)\end{pmatrix},\] where the sigmoid function is usually taken to be the logistic \(\mathbf{sigmoid}_{\mathrm{L}}(x)=\left(1+\exp(-x)\right)^{-1}\). Here \(s\) is a scaling parameter and \(K\) is a learning rate parameter; these are each typically set empirically for each competition, e.g. for chess (FIDE, 2023), one takes \(s=400/\log(10)\) and \(K\in\{10,20,40\}\) depending on a player's level of experience. Note that rescaling \((K,s)\) by a common factor leads to an essentially equivalent algorithm; it is thus mathematically convenient to work with \(s=1\) for purposes of identifiability. In practice, the ratio \(K/s\) is identifiable and carries an interpretation of the speed at which player skills vary on their intrinsic scale per unit time. The **sigmoid** term can be interpreted as a prediction probability for the match outcome; this enables an interpretation of Elo as a stochastic gradient method with respect to a logistic loss; see e.g. Morse (2019) for details on this connection. The Elo rating system can be generalised to give valid normalised prediction probabilities in the case of draws (i.e. ternary match outcomes) via the Elo-Davidson system (Davidson, 1970; Szczecinski and Djebbi, 2020), the recursions for which we outline in the supplementary material. The Elo rating system has been used in a variety of settings, most famously in chess (Elo, 1978), but also in a wide range of other sports (see Stefani (2011) for a review), for many of which it remains the official rating system used for e.g. seeding and matchmaking. The popularity of Elo arguably stems from its simplicity, as it can be well understood without a statistical background. Interestingly, it has also been shown to provide surprisingly hard-to-beat predictions in many cases (Hvattum and Arntzen, 2010; Kovalchik, 2016). However, we emphasise that Elo is not explicitly model-based, and this can make it difficult to extend to more complex scenarios, or to critique the assumptions by which it is underpinned. ### Glicko and Extended Kalman It was noted in Glickman (1999) that the Elo rating system is reminiscent of a Bradley-Terry model with a dynamic element. This observation led to the development of the Glicko rating system, which explicitly seeks to take into account time-varying uncertainty over each player's latent skill ratings, i.e.
representing \(\text{Filter}_{k}^{i}\approx\mathcal{N}\left(x_{k}^{i}\mid\mu_{k}^{i},\sigma_{k}^{ i\,2}\right)\). This enriches the Elo approach by tracking both the location and spread of player skills at a given instant. The Propagate and Assimilate steps used in Glicko invite comparison to a (local, marginal) variant of the (Extended) Kalman filter (see e.g. Chapter 7 of Sarkka and Svensson (2023)), a connection which was made formal in Ingram (2021) and later Szczecinski and Tihon (2023). Further details on Glicko can be found in supplementary material and Glickman (1999). In Glicko, sports which permit draws are only heuristically permitted by treating them as 'half-victories'; this implementation does not provide normalised prediction probabilities for sports with draws, which is undesirable. By adopting the Extended Kalman filter perspective, we can readily provide a principled approach to sports with draws by considering the following (non-linear) factorial state-space model (considered in Dangauthier et al. (2008) and Minka et al. (2018)) \[m_{0}\left(x_{0}^{i}\right)=\mathcal{N}\left(x_{0}^{i}\mid\mu_{ 0},\sigma_{0}^{2}\right),\qquad M_{t,t^{\prime}}\left(x_{t}^{i},x_{t^{\prime}}^ {i}\right)=\mathcal{N}\left(x_{t^{\prime}}^{i}\mid x_{t}^{i},\tau^{2}\cdot(t^ {\prime}-t)\right), \tag{5}\] \[G_{k}\left(y_{k}\mid x^{h},x^{a}\right)=\begin{cases}\mathbf{ sigmoid}\left(\frac{x^{h}-x^{a}+\epsilon}{s}\right)-\mathbf{sigmoid}\left(\frac{x^{h}-x^{a}- \epsilon}{s}\right)&\text{if }y_{k}=\text{draw},\\ \mathbf{sigmoid}\left(\frac{x^{h}-x^{a}-\epsilon}{s}\right)&\text{if }y_{k}= \text{h},\\ 1-\mathbf{sigmoid}\left(\frac{x^{h}-x^{a}+\epsilon}{s}\right)&\text{if }y_{k}= \text{a}.\end{cases}\] As with Elo, the logistic sigmoid function is typically used. Similarly, the parameters \(\mu_{0}\), \(\sigma_{0}\), \(\tau\) and \(s\) are not jointly identifiable, due to a translation and scaling equivariance. For mathematical simplicity, we break this symmetry by setting \(\mu_{0}=0\), \(s=1\); note that in practice, implementations of Glicko will often use alternative numerical values in service of interpretability, comparability with Elo ratings, and so on. The state-space model perspective elucidates the interpretation of the parameters \(\sigma_{0}\), \(\tau\) and \(\epsilon\). The initial variance \(\sigma_{0}^{2}\) controls the uncertainty over the skill rating of a new player entering the database, \(\tau\) is a rate parameter that controls how quickly players' skill vary over time (note that \(\tau=0\) recovers a static Bradley-Terry model) and \(\epsilon\) is a draw parameter that dictates how common draws are for the given sport (for sports without draws, \(\epsilon=0\)). The original presentation of Glicko in Glickman (1999) observed that smoothing can be easily applied using the standard backward Kalman smoother (Sarkka and Svensson, 2023), which they used in service of 'pure smoothing', rather than for parameter estimation. The parameters for Glicko (\(\sigma_{0}\) and \(\tau\)) can be inferred by numerically minimising a cross-entropy-type loss function which contrasts predicted and realised match outcomes (see Glickman (1999), Section 4); this works reasonably in its own context, but does not necessarily scale well to more complex models with a larger parameter set. Adopting the framework described in Section 3.5, we determine maximum-likelihood estimators for the static parameters in a manner which scales gracefully to more complex models and parameter sets. 
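For instance, under Gaussian random-walk dynamics of the form \(M_{t,t^{\prime}}(x,x^{\prime})=\mathcal{N}\left(x^{\prime}\mid x,\tau^{2}\cdot(t^{\prime}-t)\right)\), the M-step for the rate \(\tau\) has a closed form in terms of smoothed second moments. The sketch below is our own illustrative rendering (array layout and names are assumptions), written for a single player's trajectory.

```python
import numpy as np

def m_step_tau(times, means, variances, lag1_covs):
    """Closed-form EM update for tau in M_{t,t'}(x, x') = N(x' | x, tau^2 * (t' - t)).

    times:      (K+1,) match times for one player
    means:      (K+1,) smoothed means       E[x_k | y_{1:K}]
    variances:  (K+1,) smoothed variances   Var(x_k | y_{1:K})
    lag1_covs:  (K,)   smoothed covariances Cov(x_{k-1}, x_k | y_{1:K})
    """
    dts = np.diff(times)
    dmu = np.diff(means)
    # E[(x_k - x_{k-1})^2 | y_{1:K}] assembled from the joint smoothing law
    sq_increments = dmu**2 + variances[1:] + variances[:-1] - 2.0 * lag1_covs
    # maximiser of sum_k E[log N(x_k | x_{k-1}, tau^2 * dt_k)] over tau^2
    tau2 = np.mean(sq_increments / dts)
    return np.sqrt(tau2)
```

In the factorial multi-player setting the same average is taken over all players' increments, and the analogous closed form for \(\sigma_{0}\) uses the time-zero smoothed moments.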
In particular, for (8), the convenient Gaussian form of the smoothing approximation means the maximisation step of EM can be carried out analytically for \(\sigma_{0}\) and \(\tau\), and efficiently numerically for \(\epsilon\). ### TrueSkill and Expectation Propagation/Moment-Matching It was noted in Herbrich et al. (2006) that if we instead choose the sigmoid function for a static Bradley-Terry model to be the inverse probit function \(\mathbf{sigmoid}_{\mathrm{IP}}\left(x\right)=\Phi\left(x\right)\) (where \(\Phi\) is the CDF of a standard Gaussian), then certain integrals of interest become analytically tractable. In particular, we can use the identity \[\int\mathcal{N}(z\mid\mu,\sigma^{2})\cdot\Phi(z)\,\mathrm{d}z=\Phi\left(\frac{ \mu}{\sqrt{1+\sigma^{2}}}\right), \tag{6}\] to analytically calculate the marginal filtering means and variances of the non-Gaussian joint filtering posterior \(\mathrm{Filter}_{k}^{h,a}\). This naturally motivates the moment-matching approach of Herbrich et al. (2006), wherein an approximate factorial posterior \(\mathrm{Filter}_{k}^{h,a}\approx\mathrm{Filter}_{k}^{h}\cdot\mathrm{Filter}_{k}^{a}\) can be defined by simply extracting the marginal means and variances. This moment-matching strategy can be seen as a specific instance of _assumed density filtering_ (see e.g. Chapter 1 of Minka (2001b) and references therein) or its more general cousin, Expectation Propagation (Minka, 2001a). We provide some details on this connection in the supplementary material. One can also reasonably consider applying other approximate filtering strategies based on Gaussian principles (e.g. Unscented Kalman Filter (Julier and Uhlmann, 2004), Ensemble Kalman Filter (Evensen, 2009), etc.) to the same model class; we do not explore this further here. In the original TrueSkill (Herbrich et al., 2006), this procedure was applied as an approximate inference procedure in the static Bradley-Terry model. In the follow-up works TrueSkillThroughTime (Dangauthier et al., 2008) and TrueSkill2 (Minka et al., 2018), the model was extended to allow the latent skills to vary over time. The resulting procedure is a treatment of the state-space model in (8), where the filtering distributions are formed by i) applying the decoupling approximation, and ii) assimilating observations with predictions through the aforementioned moment-matching procedure. Smoothing is handled analogously to the Glicko and Extended Kalman setting, i.e. by running the Kalman smoother backwards from the terminal time. TrueSkill2 (Minka et al., 2018) applies a gradient-based version of the parameter estimation techniques presented in Section 3.5, although there it is not presented explicitly in the state-space model context. We note that the above description is TrueSkill (Herbrich et al., 2006; Minka et al., 2018) in its most basic form. The TrueSkill approach has been successfully applied in significantly more complex settings, notably for online multiplayer games. ### Sequential Monte Carlo The preceding strategies all hinge on the availability of a suitable parametric family for approximating the relevant probability distributions. This has clear appeal in terms of enabling explicit computations and general ease of construction. This is counter-balanced by the necessarily limited flexibility of parametric approximations, where even in the presence of an increased computational budget, it is not always clear how to obtain improved estimation performance. 
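For concreteness, the moment-matching Assimilate step for a home win under the probit likelihood described earlier in this section can be written down directly from the identity (6) and the Gaussian tail functions. The sketch below is our own rendering of the standard assumed-density-filtering update (names and the unit probit noise are assumptions), not the TrueSkill code itself.

```python
import numpy as np
from scipy.stats import norm

def probit_win_update(mu_h, var_h, mu_a, var_a):
    """Moment-matching update of two Gaussian skill marginals after a home win,
    under the likelihood P(y = h | x^h, x^a) = Phi(x^h - x^a).

    Returns the updated (mu_h, var_h, mu_a, var_a)."""
    c2 = 1.0 + var_h + var_a          # the 1 is the variance of the implicit probit noise
    c = np.sqrt(c2)
    t = (mu_h - mu_a) / c             # Phi(t) is the marginal win probability, by (6)
    v = norm.pdf(t) / norm.cdf(t)     # d log Phi(t) / dt
    w = v * (v + t)
    mu_h_new = mu_h + (var_h / c) * v
    mu_a_new = mu_a - (var_a / c) * v
    var_h_new = var_h * (1.0 - (var_h / c2) * w)
    var_a_new = var_a * (1.0 - (var_a / c2) * w)
    return mu_h_new, var_h_new, mu_a_new, var_a_new

# example: equal prior skills, a home win pulls the means apart and shrinks both variances
print(probit_win_update(0.0, 1.0, 0.0, 1.0))
```

An away win uses the mirror-image update, and draws (via a margin parameter) replace \(\Phi\) by a difference of Gaussian CDFs, with moments obtained in the same way. Such closed-form updates are what make the parametric approaches fast, at the price of the limited flexibility noted above.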
This can sometimes be ameliorated by the use of nonparametric approximations, in the form of particle methods. In particular, we will consider a sequential Monte Carlo (SMC) strategy based on importance sampling. SMC can be applied to any state-space model for which we can i) simulate from both the initial distribution \(m_{0}\) and the Markovian dynamics \(M_{t,t^{\prime}}\) and ii) evaluate the likelihood \(G_{k}\). Naturally, in this work, it is of particular interest to consider the application of SMC strategies to the (factorial) state-space model in (8). SMC encompasses a diverse range of algorithms which exhibit variations through their choice of proposal distribution and resampling scheme. It is out of the scope of this paper to review multiple available variants of SMC. We instead prioritize conciseness by concentrating on the most widely-used instance of SMC, the bootstrap particle filter. SMC filtering maintains a (potentially weighted) particle approximation to the filtering distributions which in our context has an additional factorial approximation, reminiscent of a 'local' or 'blocked' SMC approach (Rebeschini and van Handel, 2015): \[\text{Filter}^{i}_{k}(x^{i}_{k})\approx\sum_{j\in[J]}w^{i\,j}_{k}\cdot\delta \left(x^{i}_{k}\mid x^{i\,j}_{k}\right),\] where \(\delta(x\mid y)\) is a Dirac measure in \(x\) at point \(y\), \(J\) is the number of particles, and \(j\) indexes each particle. The bootstrap particle filter then executes Propagate by simply simulating from the dynamics \(M_{k,k+1}\) to provide a new particle approximation to \(\text{Predict}^{i}_{k+1|k}\). Before applying the Assimilate step, the distributions \(\text{Predict}^{h}_{k+1|k}\) and \(\text{Predict}^{a}_{k+1|k}\) are paired together to form a joint distribution \(\text{Predict}^{h,a}_{k+1|k}\). The Assimilate step then consists of a reweighting step and a resampling step (to encourage only high-probability particles to be carried forward). The result is a joint weighted particle approximation to \(\text{Filter}^{h,a}_{k}\). The factorial approximation can then be regained by a simple Marginalise operation which unpairs the joint particles. Note that the factorial approximation is nonstandard in an SMC context, and is adopted here as a natural means of avoiding the well-known curse of dimensionality that affects SMC (Rebeschini and van Handel, 2015). Smoothing can be applied using a similar iterative importance sampling approach (Godsill et al., 2004) that sweeps backwards for \(k=K{-}1,\ldots,0\) recycling the filtering approximations \(\text{Filter}^{i}_{k}\) into joint smoothing approximations \(\text{Smooth}^{i}_{0:K|K}\). This procedure is (embarrassingly) parallel in both the number of players \(N\) and the number of particles \(J\), given the filtering approximations; see Finke and Singh (2017) for a similar scenario. Parameter estimation can also be achieved by applying general-purpose expectation-maximisation or gradient ascent techniques from Section 3.5, where the integrals (11-4) are approximated using particle approximations to the smoothing law. Full details can be found in the supplementary material and Chopin and Papaspiliopoulos (2020) provides a thorough review of the field. ### Finite State-Space From a modelling perspective, it is conceptually simple to consider skills which take values in a finite state space. Recalling the general model formulated in (1), by choosing to work with \(\mathcal{X}=[S]\) for some \(S\in\mathbf{N}\), one obtains a factorial hidden Markov model. 
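Before continuing with the finite state-space construction, here is a sketch of a single bootstrap-filter match update for the sequential Monte Carlo approach above, with a Gaussian random-walk Propagate step and the joint pairing, reweighting, resampling and unpairing of the Assimilate and Marginalise steps. It is illustrative code under our own naming conventions (and it simplifies by using a common elapsed time for both players); the package recursions may differ.

```python
import numpy as np

def bootstrap_match_update(parts_h, parts_a, y, loglik, tau, dt, rng):
    """One bootstrap-filter update for a match between players h and a.

    parts_h, parts_a: (J,) equally-weighted particle clouds for each player
    loglik:           vectorised function (y, x_h, x_a) -> log G_k(y | x_h, x_a)
    tau, dt:          random-walk rate and elapsed time for the Propagate step
    """
    J = len(parts_h)
    # Propagate: simulate the Gaussian random-walk dynamics for each player
    parts_h = parts_h + tau * np.sqrt(dt) * rng.standard_normal(J)
    parts_a = parts_a + tau * np.sqrt(dt) * rng.standard_normal(J)
    # Assimilate: pair the clouds into joint particles and reweight by the likelihood
    logw = loglik(y, parts_h, parts_a)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample joint particles, then Marginalise back to per-player clouds
    idx = rng.choice(J, size=J, p=w)
    return parts_h[idx], parts_a[idx]

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))
loglik = lambda y, xh, xa: np.log(sig(xh - xa) if y == "h" else 1.0 - sig(xh - xa))
new_h, new_a = bootstrap_match_update(rng.standard_normal(1000), rng.standard_normal(1000),
                                      "h", loglik, tau=0.1, dt=1.0, rng=rng)
```

Returning to the finite state-space model introduced above: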
By varying \(S\), one has the freedom to adapt the flexibility of the model to the richness of the data. In this finite state-space, it is natural to model the skills of player \(i\) as evolving according to a continuous time Markov jump process. In the time-homogeneous case, such processes can be specified in terms of their so-called "generator matrix" \(Q_{S}\), an \(S\times S\) matrix which encodes the rates at which the player's skill level moves up and down, and is typically sparse. Given such a matrix, it is typically straightforward to construct the corresponding transition kernels \(M_{t,t^{\prime}}\) at a cost of \(\mathcal{O}(S^{3})\) by diagonalisation and matrix exponentiation. In many settings, such as filtering, one is not interested in the transition matrix \(M_{t,t^{\prime}}\) itself but rather its action on probability vectors. In this case, given appropriate pre-computations, the cost can often be controlled at the much lower \(\mathcal{O}(S^{2})\). We provide further details in the supplementary material. In these models, the likelihood \(G_{k}\) then takes the form of an \(S\times S\times\mathbf{card}(\mathcal{Y})\) array, representing the probabilities of observing a certain outcome given a certain pair of player skills. For modeling coherence, it is natural that this array satisfies certain monotonicity constraints, so that i.e. if a player's skill level increases, then they should become more likely to win matches. In the context of skill ratings with pairwise comparisons, we can then consider the following fHMM inspired by (8). Writing \(Q_{S}\) for the generator matrix of the continuous-time random walk with reflection on \([S]\) (see the supplementary material) for details and using \(\exp\) to denote the matrix exponential, we can define \[m_{0} =\nu\cdot M_{0,\sigma_{d}},\qquad M_{t,t^{\prime}}=\exp\left( \tau_{d}\cdot\left(t^{{}^{\prime}}-t\right)\cdot Q_{S}\right), \tag{7}\] \[G_{k}\left(y_{k}\mid x^{h},x^{a}\right) =\begin{cases}\mathbf{sigmoid}\left(\frac{x^{h}-x^{a}+\epsilon_{ d}}{s_{d}}\right)-\mathbf{sigmoid}(\frac{x^{h}-x^{a}-\epsilon_{d}}{s_{d}})& \text{if }y_{k}=\text{draw},\\ \mathbf{sigmoid}\left(\frac{x^{h}-x^{a}-\epsilon_{d}}{s_{d}}\right)&\text{ if }y_{k}=\text{h},\\ 1-\mathbf{sigmoid}\left(\frac{x^{h}-x^{a}+\epsilon_{d}}{s_{d}}\right)&\text{ if }y_{k}=\text{a}.\end{cases}\] The subscript \(d\) is appended to all parameters to emphasise their connection to the discrete model. Here \(\nu\) is a probability vector whose mass is concentrated on the median state(s) \(\left\{\left\lfloor\frac{S}{2}\right\rfloor,\left\lceil\frac{S}{2}\right\rceil\right\}\), so that \(m_{0}\) resembles a (discrete) centered Gaussian law with standard deviation of order \(\sigma_{d}\) (for \(\sigma_{d}\ll\sqrt{S}\)). As one takes \(\sigma_{d}\to\infty\), this converges towards the uniform distribution on \([S]\). Similarly to (8), for the dynamical model, we have a rate parameter \(\tau_{d}\in\mathbf{R}^{+}\) which controls how quickly skill ratings vary over time, and for the observation model, we have a scaling parameter \(s_{d}\in\mathbf{R}^{+}\) and a draw propensity parameter \(\epsilon_{d}\in\mathbf{R}^{+}\), which can be set as \(\epsilon_{d}=0\) for sports without draws. For inference, we can follow the procedure described in Rimella and Whiteley (2022), which is well-suited to our highly-localised observation model. 
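The generator-matrix construction just described can be sketched directly; the code below is our own illustration (the normalisation of the reflecting random-walk generator used in the supplementary material may differ).

```python
import numpy as np
from scipy.linalg import expm

def reflecting_rw_generator(S):
    """Generator Q_S of a continuous-time random walk on {0, ..., S-1} with
    reflecting boundaries: unit jump rate to each available neighbour."""
    Q = np.zeros((S, S))
    for i in range(S):
        if i > 0:
            Q[i, i - 1] = 1.0
        if i < S - 1:
            Q[i, i + 1] = 1.0
        Q[i, i] = -Q[i].sum()          # rows of a generator sum to zero
    return Q

def transition_matrix(Q, tau_d, dt):
    """Transition kernel M_{t, t+dt} = expm(tau_d * dt * Q_S); O(S^3) in general."""
    return expm(tau_d * dt * Q)

S = 5
Q = reflecting_rw_generator(S)
M = transition_matrix(Q, tau_d=0.5, dt=2.0)
prior = np.full(S, 1.0 / S)            # a probability row vector over skill levels
predict = prior @ M                    # action on a probability vector (the Propagate step)
print(predict.sum())                   # M is stochastic, so this remains 1.0
```

For repeated use one would diagonalise \(Q_{S}\) once (or work directly with the action on probability vectors) to bring the per-step cost down, as noted above.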
For filtering, at time \(k\) we can perform the same steps described in Section 3.4.3, which in the fHMM scenario can be implemented in closed form via simple linear algebra operations, representing the filtering laws as normalised probability (row) vectors of length \(S\). Smoothing follows from observing that both the filtering distributions and the transition kernel factorise across players. That is, the equations in Section 3.3 can be applied exactly through matrix multiplications and independently on each player. Parameter estimation can be performed through the EM algorithm. For parameters not associated to the dynamical model, the \(\mathbf{Q}\) function can be formed using only the marginal smoothing laws Smooth\({}^{i}_{k|K}\), which are available at a cost of \(\mathcal{O}(S^{2})\). EM updates for the dynamical parameter \(\tau_{d}\) instead require joint laws of the form Smooth\({}^{i}_{k,k+1|K}\), which are more costly to assemble; we thus opt to instead update \(\tau_{d}\) by a cheaper gradient ascent step. See the supplementary material for additional information on how we conduct filtering, smoothing and parameter estimation in this model. ## 5 Experiments We now turn to the task of applying the aforementioned dynamic skill estimation techniques to some real-life data sets. We consider three sports; Women's Tennis Association (WTA) results (noting that WTA data is particularly convenient since draws are excluded and all matches have the same 3-set format), football data for the English Premier League (EPL) (as well as international data for Fig. 2) and finally professional (classical format) chess matches. We structure this section with the goal of replicating a realistic workflow. We start with an exploratory analysis with some trial static parameters, testing against basic coherence checks on how we expect the latent skills to behave. We then turn to parameter estimation and learning the static parameters from historical data. Finally, we describe and analyse how filtering and smoothing can be utilised for online decision-making and historical evaluation respectively. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Method & Skills & Filtering & Smoothing & Parameter & Sources of Error \\ & & & Estimation & (Beyond Factorial) \\ \hline \hline Elo & Continuous & Location, \(\mathcal{O}(1)\) & N/A & N/A & Not model-based \\ \hline Glicko & Continuous & Location and Spread, \(\mathcal{O}(1)\) & Location and Spread, \(\mathcal{O}(1)\) & N/A & Not model-based \\ \hline Extended Kalman & Continuous & Location and Spread, \(\mathcal{O}(1)\) & Location and Spread, \(\mathcal{O}(1)\) & EM & Gaussian Approximation \\ \hline TrueSkill2 & Continuous & Location and Spread, \(\mathcal{O}(1)\) & Location and Spread, \(\mathcal{O}(1)\) & EM & Gaussian Approximation \\ \hline SMC & General & Full Distribution, \(\mathcal{O}(J)\) & Full Distribution, \(\mathcal{O}(J)\)2 & EM & Monte Carlo Variance \\ \hline Discrete & Discrete & Full Distribution, \(\mathcal{O}(S^{2})\) & Full Distribution, \(\mathcal{O}(S^{2})\)3 & (Gradient) EM & N/A \\ \hline \end{tabular} \end{table} Table 1: Considered approaches and their features. All approaches are linear in the number of players \(\mathcal{O}(N)\) and the number of matches \(\mathcal{O}(K)\). At each stage of the workflow, we compare and highlight similarities or differences across sports and modelling or inference approaches as appropriate (but not exhaustively). We consider all (dynamic) methods discussed in Section 4. 
For the Extended Kalman approach, we use the state-space model (8) with the logistic sigmoid function to match the Elo and Glicko approaches. TrueSkill2 requires the inverse probit sigmoid function, which we also use for the SMC and fHMM approaches. We also comment on some global parameter choices. We run SMC with \(J=1000\) particles and the fHMM with \(S=500\) states. Naturally, increasing these resolution parameters can only increase accuracy at the cost of computational speed, whose complexities are laid out in Table 1. As mentioned previously, for the model (8), the parameters \(\mu_{0}\) and \(s\) are not identifiable given \(\sigma_{0},\tau\) and \(\epsilon\); we therefore set \(\mu_{0}=0\) and \(s=1\). For the fHMM case, we note that the boundary conditions of the dynamics make the scaling parameter \(s_{d}\) identifiable, and that it needs to scale with \(S\). It would be possible to tune \(s_{d}\) with parameter estimation techniques, but for ease of comparison with the continuous state-space approaches, we here fix it to \(s_{d}=S/5\), which was found to work well in practice. This leaves the following parameters to be learnt from data: \(K\) for Elo, \((\sigma_{0},\tau,\epsilon)\) for Extended Kalman, TrueSkill2 and SMC, and \((\sigma_{d},\tau_{d},\epsilon_{d})\) for fHMM. A Python package permitting easy application of the discussed techniques, as well as code to replicate all of the following simulations, can be found at github.com/SamDuffield/abile. ### Exploratory Analysis The first step in any state-space model fitting procedure is to explore the model with some preliminary (perhaps arbitrarily chosen) static parameters. The goal here is not a thorough evaluation of skill ratings, but rather to assess our prior intuitions. In Fig. 2, we depict Argentina's 2022 football World Cup campaign in terms of the evolution of their skill rating distribution with arbitrarily chosen static parameters (we run on international matches in 2020-2022, with only Argentina's post-match 2022 World Cup ratings displayed). We also take this opportunity to highlight the different approaches to encoding a probability distribution over skill ratings. We first see that Elo stores skill ratings as a point estimate without any uncertainty quantification and that it is a purely forward-based approach without any ability to update skill ratings based on future match results (i.e. smoothing). In contrast, the three model-based SSM approaches encode a (more informative) distribution over skill ratings. In the case of TrueSkill2 (Glicko and Extended Kalman would appear similar and are therefore omitted), a simple location and spread (Gaussian) distribution is used, whereas SMC and the discrete model can encode more complex distributions. In terms of verifying our intuitions, we can see that Argentine victories increase their skill ratings, draws (against similar sides) have little influence and defeats decrease the ratings. Particularly poignant is Argentina's defeat to Saudi Arabia (who are a low-ranked side), which in all approaches resulted in a sharp decrease in Argentina's estimated skill rating. Figure 2: Visualisation of the different (post-match) skill representations for Argentina's 2022 FIFA World Cup triumph. Each y-axis represents the skill-rating scale for the different approaches (only SMC and TrueSkill2 share the same model and therefore also y-axis scaling). We can also draw insights from the smoothing distributions. We first note that at the 
final match in the dataset, the filtering and smoothing distributions match exactly, by definition. We also see that the smoothing distributions show less uncertainty than the filtering distributions; this matches our intuition since smoothing \(\mathbf{P}(x_{k}\mid y_{1:K})\) has access to more data than filtering \(\mathbf{P}(x_{k}\mid y_{1:k})\). We finally observe that the smoothing distributions are much less reactive to individual results and instead track a _smooth_ trajectory of the team's skill rating over time. ### Parameter Estimation Having verified informally that the algorithms are behaving sensibly, we turn to applying the techniques discussed in Section 3.5 to learn the static parameters from historical data in an offline setting. Our goal is to maximise the log-likelihood \(\log\mathbf{P}(y_{1:K}\mid\theta)\). We initially consider the WTA tennis dataset, which we train on years 2019-2021 and leave 2022 as a test set for later. Draws do not occur in tennis, and therefore we can set \(\epsilon=\epsilon_{d}=0\), leaving only two parameters to tune for each (model-based) approach. This two-dimensional optimisation landscape of the log-likelihood can readily be visualised, as in Fig. 3. Indeed, as the static parameter is only two-dimensional the optimisation could be applied using a grid search (as is indeed the most natural option for Elo and Glicko). Furthermore, a filtering sweep provides an estimate of the optimisation objective \(\log\mathbf{P}(y_{1:K}\mid\theta)\), and therefore the grid search can be applied directly without running a smoothing routine. For more complex models and datasets, higher dimensional static parameters are inevitable, and a grid search will quickly become prohibitive. We therefore apply iterative expectation-maximisation to the tennis data in Fig. 3 to investigate some of the properties and differences between approaches, indicative of parameter estimation in more complex situations. For the three approaches considered (we again omit the Extended Kalman approach due to its similarity with TrueSkill2 in all steps beyond filtering), we display 1000 expectation-maximisation iterations starting from three different initialisations. Figure 3: Log-likelihood grid and parameter estimation for WTA tennis data. Note that TrueSkill2 and SMC share the same model. The TrueSkill2 and SMC approaches share the same model and differ only through their respective Gaussian and particle-based approximations to the skill distributions. The Gaussian approximation induces a significant bias, whereas the particle approximation is asymptotically unbiased, but induces Monte Carlo variance. We can see that the bias from the Gaussian approximation contorts the optimisation landscape for TrueSkill2 relative to SMC, whose landscape is fuzzy due to the stochastic nature of the algorithm. We see that the additional bias from the TrueSkill2 approach results in the EM trajectory evading the global optimum; by contrast, this optimum is successfully identified by both the SMC and discrete approaches, which do not exhibit any systematic bias beyond the factorial approximation. ### Filtering (for online decision-making) Now that we have a principled choice for our static parameters, we can apply the methods to online data. In Table 2, we use static parameters (trained using grid search for Elo and Glicko, and EM for the remaining, model-based approaches). 
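Both the grid search just described and the train/test evaluation reported below rely on the same quantity: a filtering sweep that accumulates \(\log\mathbf{P}\left(y_{1:K}\mid\theta\right)=\sum_{k}\log\mathbf{P}\left(y_{k}\mid y_{1:k-1},\theta\right)\). Below is a compact sketch for the probit-Gaussian model; it is our own illustrative code (the toy data, names and update simplifications are assumptions), not the package implementation.

```python
import numpy as np
from scipy.stats import norm

def filtering_log_lik(matches, n_players, sigma0, tau):
    """One Gaussian filtering sweep, returning the estimate of log P(y_{1:K} | theta).

    matches: list of (time, home, away, y) with y = 1 for a home win, 0 otherwise."""
    mu = np.zeros(n_players)
    var = np.full(n_players, sigma0**2)
    last_t = np.zeros(n_players)
    total = 0.0
    for t, h, a, y in matches:
        # Propagate: inflate each player's variance by the time since their last match
        var[h] += tau**2 * (t - last_t[h])
        var[a] += tau**2 * (t - last_t[a])
        last_t[h] = last_t[a] = t
        c2 = 1.0 + var[h] + var[a]
        z = (mu[h] - mu[a]) / np.sqrt(c2)
        p_home = norm.cdf(z)                      # predictive probability of a home win
        total += np.log(p_home if y == 1 else 1.0 - p_home)
        # Assimilate: moment-matching update, as in the TrueSkill section
        zs = z if y == 1 else -z
        v = norm.pdf(zs) / norm.cdf(zs)
        w = v * (v + zs)
        sign = 1.0 if y == 1 else -1.0
        mu[h] += sign * v * var[h] / np.sqrt(c2)
        mu[a] -= sign * v * var[a] / np.sqrt(c2)
        var[h] *= 1.0 - w * var[h] / c2
        var[a] *= 1.0 - w * var[a] / c2
    return total

# toy grid search over (sigma_0, tau): pick the pair maximising the filtering log-likelihood
toy = [(1.0, 0, 1, 1), (2.0, 1, 2, 1), (3.0, 0, 2, 1), (4.0, 1, 0, 0)]
grid = [(s0, tu) for s0 in (0.5, 1.0, 2.0) for tu in (0.01, 0.1, 0.5)]
print(max(grid, key=lambda theta: filtering_log_lik(toy, 3, *theta)))
```

Negating the same quantity and dividing by the number of matches on held-out data gives average negative log-likelihoods of the kind reported in Table 2.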
Recall that in the case of sports with draws, Glicko does not provide the normalised outcome predictions required for log-likelihood-based assessment. For the tennis data, we notice that all models broadly perform quite similarly, with the exception of TrueSkill2; we suspect that this stems from the parameter estimation optimisation issues discussed above. The tennis task without draws represents a simpler binary prediction problem, and it is therefore perhaps not surprising that (with principled parameter estimation) predictive performance saturates. For the more difficult tasks of Football and Chess (where draws do occur), we see that the model-based approaches equipped with uncertainty quantification significantly outperform Elo. Here we have assessed the accuracy of the approaches in predicting match outcomes, as this is a natural task in the context of online decision-making, and can be useful for a variety of purposes including seeding, scheduling and indeed betting. Predictive distributions can also be used for more sophisticated decision-making such as those based on multiple future \begin{table} \begin{tabular}{|l|c c|c c|c c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Tennis (WTA)} & \multicolumn{2}{c|}{Football (EPL)} & \multicolumn{2}{c|}{Chess} \\ & Train & Test & Train & Test & Train & Test \\ \hline \hline Elo-Davidson & 0.640 & 0.636 & 1.000 & 0.973 & 0.802 & 1.001 \\ \hline Glicko & 0.640 & 0.636 & - & - & - & - \\ \hline Extended Kalman & 0.640 & **0.635** & 0.988 & 0.965 & **0.801** & **0.972** \\ \hline TrueSkill2 & 0.650 & 0.668 & 1.006 & **0.961** & 0.802 & 0.978 \\ \hline SMC & 0.640 & 0.639 & 0.988 & 0.962 & **0.801** & 0.974 \\ \hline Discrete & **0.639** & 0.636 & **0.987** & **0.961** & **0.801** & 0.976 \\ \hline \end{tabular} \end{table} Table 2: Average negative log-likelihood (low is good) for presented models and algorithms across a variety of sports. In each case, the training period was 3 years and the test period was the subsequent year. Note the draw percentages were 0% for tennis, 22% for football and 65% for chess. matches (competition outcomes, promotion/relegation results, etc.). ### Smoothing (for historical evaluation) Smoothing represents an integral subroutine for parameter estimation, though can also be of interest in its own right (Glickman, 1999; Duffield and Singh, 2022). In particular, when analysing the historical evolution of a player's skill over time, it is more appropriate to consider the smoothing distributions, rather than the filtering distributions which do not update in light of recent match results. In Fig. 4, we display the historical evolution of Tottenham's EPL skill rating over time according to (8) and TrueSkill2 inference. When comparing filtering and smoothing, we immediately see that the smoothing distributions are less reactive, and provide a more realistic trajectory of how a team's underlying skill is expected to evolve over time. Noting that the model permits a certain amount of randomness to occur in each match which can result in surprise results, we observe that the smoothing distributions do a much better job of handling this noise or uncertainty. Fig. 4 only displays the TrueSkill2 method, but of course, similar takeaways hold for all of the aforementioned model-based methods (as is highlighted in Fig. 2). Figure 4: Filtering and smoothing with TrueSkill2 for Tottenham’s EPL matches from 2011-2023. 
Filtering in purple, smoothing in green (error bars represent one standard deviation) with the other teams’ mean skills in faded grey. Black dashed lines represent a change in Tottenham manager with long-serving ones named. Historical evaluation of skills can be particularly useful to analyse the impacts of various factors and how they impact the team's underlying skill level, relative to its competitors. In Fig. 4, we highlight the different managers or head coaches that have served Tottenham during the time period, depicting their competitors' mean skills in the background. This can be particularly useful in evaluating the ability of the managers and their impact on the team. We observe from the smoothing output that Tottenham's skill rating was in ascendance under Villas-Boas and early Pochettino, before descending towards the end of the Pochettino era (although perhaps not as sharply as the filtering output would suggest). We note that there are likely many further factors influencing the underlying skill that are not highlighted in Fig. 4, and also that (8) is relatively simple; it may of course be desirable to build a more complex model which can direct account for such additional factors or data. In practice, one may want to update full smoothing trajectories in an online fashion so that historical evaluation is possible without having to run full \(\mathcal{O}(K)\) backward sweeps. In this setting, a fixed-lag approximation is a natural option (Duffield and Singh, 2022). ## 6 Discussion In this work, we have advocated for a model-based approach to the skill rating problem. By taking this perspective, we are able to separate the tasks of modeling and inference. We have detailed a number of basic SSMs which are suitable for tackling the problem. We have also detailed a number of different approximate inference schemes for analysing such models, and discussed their relative strengths and shortcomings. We have conducted a number of case studies on how such methods apply to practical data, highlighting a simple workflow based around the SSM approach, and the different utilities of filtering, smoothing, and parameter estimation in this context. While we have focused here on relatively simple sporting models, there is of course ample potential to apply the same framework and procedures to more complex models, which we welcome. A simple concrete example would be to model 3-set and 5-set tennis matches with different observation scalings \(s_{\text{3-set}}>s_{\text{5-set}}\), to reflect the additional randomness associated with the shorter format. The data and models which we have described have been focused on binary and ternary match outcomes, but the techniques could be easily applied point-by-point or to model margin of victory (Kovalchik, 2020). For example, one could develop a model for cricket skills where the bowler and batter are the two 'players' with an asymmetric likelihood based on \(\mathcal{Y}=\{\text{wicket},\text{extras},0,1,2,3,4,6\}\). Higher-dimensional representations of player skill also represents a natural extension; e.g. home and away strength, surface-dependent strength for tennis players, cricket batter strength v.s. pace/spin, and more. In this setting, the various quantities for a single player may carry significant correlation, suggesting that the factorial approximation ought be applied _across_ player ratings but not _within_. More broadly, this raises considerations about the scalability of the inference techniques with respect to dimension. 
These considerations also apply to sports which go beyond pairwise observation models (such as those tackled by TrueSkill (Herbrich et al., 2006; Minka et al., 2018)) although the general joint Assimilate and Marginalise framework still applies. An appealing aspect of our framework is that the user can devote their time to carefully designing their model, and describing their data-generating process in state-space language. Having done this, the general-purpose inference schemes will typically apply directly, enabling the user to easily explore different model and parameter configurations, with the ability to refine the model in an iterative manner (Gelman et al., 2020). As a result of our specific interests in this problem, we tend to emphasise the role of filtering as a tool for online decision-making, and of smoothing for retrospective evaluation of policies (e.g. assessing coach efficacy, changes in conditions, etc.). For simplicity, we have given a comparatively lightweight treatment of parameter estimation; there are various extensions of our work in this direction which would be worthwhile to examine carefully, e.g. online parameter estimation (Cappe, 2011; Kantas et al., 2015), Bayesian approaches (Andrieu et al., 2010), and beyond. With regard to practical recommendations, at a coarse resolution we can offer that i) when speed and scalability are of primary interest, Extended Kalman inference offers a good default, whereas ii) when robustness and flexibility are a greater priority, the fHMM model with graph-based inference has many nice properties. We note that the task of evaluating the relative suitability of { Extended Kalman, SMC, HMM,... } approaches is not limited to the skill rating setting, and is relevant across many fields and applications wherein state-space models are fundamental. ## Data availability All data used is freely available online. Tennis data sourced from tennis-data.co.uk. Football data sourced from football-data.co.uk as well as international football data from github.com/martj42/international_results. Chess data sourced from github.com/huffyhenry/forecasting-candidates. Reproducible code can be found at github.com/SamDuffield/abile. ## Funding Sam Power and Lorenzo Rimella were supported by EPSRC grant EP/R018561/1 (Bayes4Health).
2308.12607
The incompressible Navier-Stokes-Fourier-Maxwell system limits of the Vlasov-Maxwell-Boltzmann system for soft potentials: the noncutoff cases and cutoff cases
We obtain the global-in-time and uniform in Knudsen number $\epsilon$ energy estimate for the cutoff and non-cutoff scaled Vlasov-Maxwell-Boltzmann system for the soft potential. For the non-cutoff soft potential cases, our analysis relies heavily on additional dissipative mechanisms with respect to velocity, which are brought about by the strong angular singularity hypothesis, i.e. $\frac12\leq s<1$. In the case of cutoff cases, our proof relies on two new kinds of weight functions and complex construction of energy functions, and here we ask $\gamma\geq-1$. As a consequence, we justify the incompressible Navier-Stokes-Fourier-Maxwell equations with Ohm's law limit.
Ning Jiang, Yuanjie Lei
2023-08-24T07:19:01Z
http://arxiv.org/abs/2308.12607v1
The incompressible Navier-Stokes-Fourier-Maxwell system limits of the Vlasov-Maxwell-Boltzmann system for soft potentials: the noncutoff cases and cutoff cases ###### Abstract We obtain the global-in-time and uniform in Knudsen number \(\varepsilon\) energy estimate for the cutoff and non-cutoff scaled Vlasov-Maxwell-Boltzmann system for the soft potential. For the non-cutoff soft potential cases, our analysis relies heavily on additional dissipative mechanisms with respect to velocity, which are brought about by the strong angular singularity hypothesis, i.e. \(\frac{1}{2}\leq s<1\). In the case of cutoff cases, our proof relies on two new kinds of weight functions and complex construction of energy functions, and here we ask \(\gamma\geq-1\). As a consequence, we justify the incompressible Navier-Stokes-Fourier-Maxwell equations with Ohm's law limit. Keywords. two-species Vlasov-Maxwell-Boltzmann system; cutoff and noncutoff soft potential; two-fluid incompressible Navier-Stokes-Fourier-Maxwell system; Ohm's law; global classical solutions; uniform energy bounds; convergence for classical solutions. AMS subject classifications. 35B45, 35B65, 35Q35, 76D03, 76D09, 76D10 \({}^{*}\) Corresponding author November 7, 2021 ###### Contents * 1 Introduction. * 1.1 The scaled VMB system * 1.2 Hydrodynamic limits of Boltzmann type equations * 1.3 Notations * 1.4 The structure * 2 Main results * 2.1 Non-cutoff cases * 2.2 Cutoff cases * 2.3 The limits * 2.4 The idea and the outline of the proof * 3 The non-cutoff VMB system * 3.1 Lyapunov inequality for the energy functional \(\mathcal{E}_{N}(t)\) * 3.2 The top-order energy estimates with weight * 3.3 The low-order energy estimates with weight * 3.4 Lyapunov inequality for the energy functionals * 3.5 The temporal time decay estimate on \(\mathcal{E}_{k\to N_{0}}(t)\) * 3.6 The estimates on the negative Sobolev space * 3.7 The a priori estimates * 4 The cutoff VMB system 
* 4.6 The a priori estimates * 5 Limit to two fluid incompressible Navier-Stokes-Fourier-Maxwell equations with Ohm's law * A Appendix A.1 Properties of the collision operator for non-cutoff cases * A.2 Properties of the collision operator for angular cutoff cases * A.3 The proof of Lemma 3.2 * Acknowledgement ## 1. Introduction. ### 1.1. The scaled VMB system. The two-species Vlasov-Maxwell-Boltzmann (VMB) system describes the evolution of a gas of two species of oppositely charged particles (cations of charge \(q^{+}>0\) and mass \(m^{+}>0\), and anions of charge \(-q^{-}<0\) and mass \(m^{-}>0\)), subject to auto-induced electromagnetic forces. Such a gas of charged particles equipped with a global neutrality condition is usually called a plasma. The unknowns \(F^{+}(t,x,v)\geq 0\) and \(F^{-}(t,x,v)\geq 0\) represent respectively the particle number densities of the positively charged ions (i.e., cations) and the negatively charged ions (i.e., anions), which are at position \(x\in\mathbb{R}^{3}\) with velocity \(v\in\mathbb{R}^{3}\), at time \(t\geq 0\). The VMB system reads as follows: \[\left\{\begin{aligned} \partial_{t}F^{+}+v\cdot\nabla_{x}F^{+}+\frac{q^{+}}{m^{+}}(E+v\times B)\cdot\nabla_{v}F^{+}&=Q(F^{+},F^{+})+Q(F^{+},F^{-})\,,\\ \partial_{t}F^{-}+v\cdot\nabla_{x}F^{-}-\frac{q^{-}}{m^{-}}(E+v\times B)\cdot\nabla_{v}F^{-}&=Q(F^{-},F^{-})+Q(F^{-},F^{+})\,,\\ \mu_{0}\varepsilon_{0}\partial_{t}E-\nabla_{x}\times B&=-\mu_{0}\int_{\mathbb{R}^{3}}(q^{+}F^{+}-q^{-}F^{-})v\,\mathrm{d}v\,,\\ \partial_{t}B+\nabla_{x}\times E&=0\,,\\ \mathrm{div}_{x}E&=\frac{1}{\varepsilon_{0}}\int_{\mathbb{R}^{3}}(q^{+}F^{+}-q^{-}F^{-})\,\mathrm{d}v\,,\\ \mathrm{div}_{x}B&=0\,.\end{aligned}\right. \tag{1.1}\] Here the evolution of the densities \(F^{\pm}\) is governed by the Vlasov-Boltzmann equations in the first two equations of (1.1). 
This means that the variations of densities along the particle trajectories are subject to the influence of an auto-induced Lorentz force and inter-particle collisions in the gas. The electric field \(E(t,x)\) and the magnetic field \(B(t,x)\), which are generated by the motion of the particles in the plasma itself, are governed by the Maxwell equation. It consists of the Ampere equation, Faraday's equation and Gauss' laws, representing in the third, fourth, and the last two equations, respectively. The vacuum permeability and permittivity (or say, the magnetic and electric constants) are denoted, respectively, by the physical coefficients \(\mu_{0},\varepsilon_{0}>0\). Both species of particles are assumed here to have the same mass \(m^{\pm}=m>0\) and charge \(q^{\pm}=q>0\). The Boltzmann collision operator, presented in the right-hand sides of the Vlasov-Boltzmann equations in (1.1), is a quadratic form, acting on the velocity variables, associated to the bilinear operator \(Q(F,G)(v)\). Before stating the results, we introduce the models, in particular the formats of the collision kernels. Since we treat both non-cutoff and cutoff kernels, for which the convenient representations are little different, we describe it separately in the following. #### 1.1.1. Non-cutoff cases. The Boltzmann collision operator \(Q(F,G)(v)\) is given by \[Q(F,G)(v)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mathbf{B}(v-u,\sigma)\{F( u^{\prime})G(v^{\prime})-F(u)G(v)\}\mathrm{d}\sigma\mathrm{d}u,\] where in terms of velocities \(v\) and \(u\) before the collision, velocities \(v^{\prime}\) and \(u^{\prime}\) after the collision are defined by \[v^{\prime}=\frac{v+u}{2}+\frac{|v-u|}{2}\sigma,\quad\,u^{\prime}=\frac{v+u}{ 2}-\frac{|v-u|}{2}\sigma,\quad\sigma\in\mathbb{S}^{2}.\] The Boltzmann collision kernel \(\mathbf{B}(v-u,\sigma)\) depends only on the relative velocity \(|v-u|\) and on the deviation angle \(\theta\) given by \(\cos\theta=\langle\sigma,\ (v-u)/|v-u|\rangle\), where \(\langle\cdot,\cdot\rangle\) is the usual dot product in \(\mathbb{R}^{3}\). Without loss of generality, we suppose that \(\mathbf{B}(v-u,\sigma)\) is supported on \(\cos\theta\geq 0\). Throughout the paper, the collision kernel \(\mathbf{B}(v-u,\sigma)\) is further supposed to satisfy the following assumptions: 1. \(\mathbf{B}(v-u,\sigma)\) takes the product form in its argument as \[\mathbf{B}(v-u,\sigma)=\Phi(|v-u|)\mathbf{b}(\cos\theta)\] with \(\Phi\) and \(\mathbf{b}\) being non-negative functions; 2. The angular function \(\sigma\to\mathbf{b}(\langle\sigma,(v-u)/|v-u|\rangle)\) is not integrable on \(\mathbb{S}^{2}\), i.e. \[\int_{\mathbb{S}^{2}}\mathbf{b}(\cos\theta)\mathrm{d}\sigma=2\pi\int_{0}^{ \pi/2}\sin\theta\mathbf{b}(\cos\theta)\mathrm{d}\theta=\infty.\] Moreover, there are two positive constants \(c_{b}>0,0<s<1\) such that \[\frac{c_{b}}{\theta^{1+2s}}\leq\sin\theta\mathbf{b}(\cos\theta)\leq\frac{1}{c _{b}\theta^{1+2s}};\] 3. The kinetic function \(z\to\Phi(|z|)\) satisfies \[\Phi(|z|)=C_{\Phi}|z|^{\gamma}\] for some positive constant \(C_{\Phi}>0.\) Here we should notice that the exponent \(\gamma>-3\) is determined by the intermolecular interactive mechanism. Usually, we call \(\mathbf{B}(v-u,\sigma)\) as hard potentials collision kernels when \(\gamma+2s\geq 0\), and soft potentials when \(-3<\gamma<-2s\) with \(0<s<1\). The current work is restricted to the case of \[\max\left\{-3,-\frac{3}{2}-2s\right\}<\gamma<-2s,\ \ \frac{1}{2}\leq s<1\,. 
\tag{1.2}\] We call the case (1.2) with \(\frac{1}{2}\leq s<1\) the strong angular singularity case, as treated in [23], and the case \(0<s<\frac{1}{2}\) the weak angular singularity case, as in [28]. #### 1.1.2. Cutoff cases For the cutoff collision kernels, \(Q(F,G)(v)\) is defined as \[Q(F,G)(v)= \int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|u-v|^{\gamma}\mathbf{b}\left(\frac{\omega\cdot(v-u)}{|u-v|}\right)\left\{F(v^{\prime})G(u^{\prime})-F(v)G(u)\right\}\mathrm{d}\omega\mathrm{d}u\] \[\equiv Q_{gain}(F,G)-Q_{loss}(F,G).\] Here \(\omega\in\mathbb{S}^{2}\) and \(\mathbf{b}\), the angular part of the collision kernel, satisfies the Grad angular cutoff assumption (cf. [35]) \[0\leq\mathbf{b}(\cos\vartheta)\leq C|\cos\vartheta| \tag{1.3}\] for some positive constant \(C>0\). Here the deviation angle \(\vartheta\) is given by \(\cos\vartheta=\omega\cdot(v-u)/|v-u|\). Moreover, \[v^{\prime}=v-[(v-u)\cdot\omega]\omega,\quad u^{\prime}=u+[(v-u)\cdot\omega]\omega,\] which denote the velocities after a collision of particles having velocities \(v,u\) before the collision, and vice versa. The exponent \(\gamma\in(-3,1]\) in the kinetic part of the collision kernel is determined by the potential of intermolecular forces, which is classified into the soft potential case for \(-3<\gamma<0\), the Maxwell molecular case for \(\gamma=0\), and the hard potential case for \(0<\gamma\leq 1\), which includes the hard sphere model with \(\gamma=1\) and \(\mathbf{b}(\cos\vartheta)=C|\cos\vartheta|\) for some positive constant \(C>0\). Here we focus on the cutoff case \(-1\leq\gamma<0\). There has been extensive research on the well-posedness of the VMB. In the late 80's, DiPerna and Lions developed the theory of global-in-time renormalized solutions with large initial data to the Boltzmann equation [22], Vlasov-Maxwell equations [21] and Vlasov-Poisson-Boltzmann equations [65, 66]. But for VMB there are severe difficulties, among which the major one is that the _a priori_ bounds coming from physical laws are not enough to prove the existence of global solutions, even in the renormalized sense. Recently, Arsenio and Saint-Raymond [6] eventually established global-in-time renormalized solutions with large initial data for VMB, for both cut-off and non-cutoff collision kernels. We emphasize that for Boltzmann type equations, renormalized solutions are still the only global-in-time solutions without any smallness requirements on initial data. In another direction, in the context of classical solutions, through the so-called nonlinear energy method, Guo [39] constructed classical solutions of VMB near the global Maxwellian. Guo's work inspired sequences of follow-up research on VMB with more general collision kernels, among which we only mention results for the most general collision kernels with or without angular cutoff assumptions, see [23, 24, 28]. ### Hydrodynamic limits of Boltzmann type equations One of the most important features of the Boltzmann equations (or more generally, kinetic equations) is their connections to the fluid equations. The so-called fluid regimes of the Boltzmann equations are those of asymptotic dynamics of the scaled Boltzmann equations when the Knudsen number \(\varepsilon\) is very small. Justifying these limiting processes rigorously has been an active research field since the late 70's. Among many results obtained, the main contributions are the incompressible Navier-Stokes and Euler limits. 
There are two types of results in this field: **(Type-I)**: first obtaining the solutions of the scaled Boltzmann equation with _uniform_ bounds in the Knudsen number \(\varepsilon\), then extracting a subsequence converging (at least weakly) to the solutions of the fluid equations as \(\varepsilon\to 0\). **(Type-II)**: first obtaining the solutions for the limiting fluid equations, then constructing a sequence of special solutions (near the Maxwellians) of the scaled Boltzmann equations for small Knudsen number \(\varepsilon\). The key difference between the results of type-I and type-II is: in type-I, the solutions of the fluid equations are _not_ known a priori, and are completely obtained from taking limits from the Boltzmann equations. In short, it is "from kinetic to fluid"; in type-II, the solutions of the fluid equations are _known_ first. In short, it is "from fluid to kinetic". Usually, type-I results are harder to achieve, since one must obtain enough uniform (with respect to \(\varepsilon\)) bounds for solutions of the scaled kinetic equations before compactness arguments give the convergence. This approach automatically provides the existence of solutions to both the original kinetic equations and the limiting macroscopic equations. On the other hand, the type-II approach needs to employ the information from the limiting equations, and then prove existence of solutions of the original kinetic equations with a _special_ form (usually a Hilbert expansion). We remark that this classification into two approaches also appears in other asymptotic problems. For example, in their work on Kac's program [73], Mischler and Mouhot called type-I "bottom-up" and type-II "top-down". We quote their words here: _...our answer is an "inverse" answer in the sense that our methodology is "top-down" from the limit equation to the many-particle system rather than "bottom-up" as was expected by Kac._ The most successful achievement in type-I is the so-called BGL (named after Bardos, Golse and Levermore) program. From the late 80's, Bardos, Golse and Levermore initialized the program to justify Leray's solutions of the incompressible Navier-Stokes equations from DiPerna-Lions' renormalized solutions [7, 8]. They proved the first convergence result with five additional technical assumptions. After ten years of efforts by Bardos, Golse, Levermore, Lions, Masmoudi and Saint-Raymond, see for example [9, 68, 69, 31], the first complete convergence result without any additional compactness assumption was proved by Golse and Saint-Raymond in [32] for the cutoff Maxwell collision kernel, and in [33] for hard cutoff potentials. Later on, it was extended by Levermore and Masmoudi [63] to include soft potentials. Recently, Arsenio obtained similar results for the non-cutoff case [3, 5] based on the existence result of [2]. Furthermore, by Jiang, Levermore, Masmoudi and Saint-Raymond, these results were extended to bounded domains where the Boltzmann equation was endowed with the Maxwell reflection boundary condition [71, 52, 56], based on the solutions obtained by Mischler [72]. Another direction in type-I is in the context of classical solutions. The first work of this type is Bardos-Ukai [10]. They started from the scaled Boltzmann equation for cut-off hard potentials, and proved the global existence of classical solutions \(g_{\varepsilon}\) uniformly in \(0<\varepsilon<1\). 
The key feature of Bardos-Ukai's work is that they only need the smallness of the initial data, and did not assume the smallness of the Knudsen number \(\varepsilon\) in the uniform estimate. After having the uniform in \(\varepsilon\) solutions \(g_{\varepsilon}\), taking limits provides a classical solution of the incompressible Navier-Stokes equations with small initial data. Bardos-Ukai's approach heavily depends on the sharp estimates from the spectral analysis on the linearized Boltzmann operator \(\mathcal{L}\), and the semigroup method (the semigroup generated by the scaled linear operator \(\varepsilon^{-2}\mathcal{L}+\varepsilon^{-1}v\cdot\nabla_{x}\)). It seems hard to extend to the cutoff soft potential case, and even harder to the non-cutoff cases, since it is well-known that the operator \(\mathcal{L}\) has continuous spectrum in those cases. On the torus, a semigroup approach was used by Briant [13] and Briant, Merino-Aceituno and Mouhot [16] to prove the incompressible Navier-Stokes limit by employing the functional analysis breakthrough of Gualdani-Mischler-Mouhot [37]. Again, their results are for cut-off kernels with hard potentials. Recently, there is a type-I convergence result on the incompressible Navier-Stokes limit of the Boltzmann equation by the first author and his collaborators. In [58], the uniform in \(\varepsilon\) global existence of the Boltzmann equation with or without cutoff assumption was obtained and the global energy estimates were established. Most of the type-II results are based on the Hilbert expansion and obtained in the context of classical solutions. This line of work started with Nishida and Caflisch's work on the compressible Euler limit [74, 17, 60]. Their approach was revisited by Guo, Jang and Jiang, combined with the nonlinear energy method, to apply to the acoustic limit [42, 43, 50]. After that, this process was used for the incompressible limits, for example [20] and [41]. In [20], De Masi-Esposito-Lebowitz considered the Navier-Stokes limit in dimension 2. More recently, using the nonlinear energy method, in [41] Guo justified the Navier-Stokes limit (and beyond, i.e. higher order terms in the Hilbert expansion). This was extended in [57] to more general initial data which allow the fast acoustic waves. These results basically say that, given the initial data which is needed for the classical solutions of the Navier-Stokes equations, one can construct solutions of the Boltzmann equation of the form \(F_{\varepsilon}=M+\sqrt{M}(\varepsilon g_{1}+\varepsilon^{2}g_{2}+\cdots+\varepsilon^{n}g_{n}+\varepsilon^{k}g_{\varepsilon}^{R})\), where \(g_{1},g_{2},\cdots\) can be determined by the Hilbert expansion, and \(g_{\varepsilon}^{R}\) is the error term. In particular, the first order fluctuation \(g_{1}=\rho_{1}+\mathrm{u}_{1}\!\cdot\!v+\theta_{1}(\frac{|v|^{2}}{2}-\frac{3}{2})\), where \((\rho_{1},\mathrm{u}_{1},\theta_{1})\) is the solution to the incompressible Navier-Stokes equations. Regarding the Vlasov-Poisson-Boltzmann (VPB) system and VMB, the corresponding fluid limits are more fruitful since the effects of electric and magnetic fields are considered. Analytically, the limits from scaled VPB are similar to the Boltzmann equations because VPB couples with an extra Poisson equation, which has good regularity. This usually does not bring many difficulties. For the limits of VPB, see the recent results [38, 59]. However, for the VMB, the situation is quite different. 
The corresponding hydrodynamic limits are much harder, even at the formal level, since the system is coupled with the Maxwell equations, which are essentially hyperbolic. In a recent remarkable breakthrough [6], Arsenio and Saint-Raymond not only proved the existence of renormalized solutions of VMB, but, more importantly, also justified various limits (depending on the scalings) towards incompressible viscous electro-magneto-hydrodynamics. Among these limits, the most singular one is to the two-fluid incompressible Navier-Stokes-Fourier-Maxwell (in brief, NSFM) system with Ohm's law. The proofs in [6] justifying the weak limit from a sequence of solutions of VMB to a dissipative solution of incompressible NSFM are extremely hard. Part of the reason is that, besides the many difficulties in the existence theory of renormalized solutions of VMB itself, the current understanding of the incompressible NSFM with Ohm's law is far from complete. From the point of view of mathematical analysis, NSFM behaves more like the much less understood incompressible Euler equations than like the Navier-Stokes equations. That is the reason why in [6] the authors consider the so-called dissipative solutions of NSFM rather than the usual weak solutions. Dissipative solutions were introduced by Lions for the 3-dimensional incompressible Euler equations (see Section 4.4 of [67]). The study of the incompressible NSFM started only in recent years (for an introduction to the physical background, see [11, 19]). For weak solutions, the existence of global-in-time Leray-type weak solutions is completely open, even in 2 dimensions. The first breakthrough came from Masmoudi [70], who proved, in the 2-dimensional case, the existence and uniqueness of global strong solutions of incompressible NSFM (in fact, the system considered in [70] differs slightly from the NSFM in this paper, but the analysis is basically the same) for initial data \((v^{in},E^{in},B^{in})\in L^{2}(\mathbb{R}^{2})\times(H^{s}(\mathbb{R}^{2}))^{2}\) with \(s>0\). It is notable that in [70] neither the divergence-free condition on the magnetic field \(B\) nor the decay property of the linear part coming from Maxwell's equations is used. Ibrahim and Keraani [47] considered the data \((v^{in},E^{in},B^{in})\in\dot{B}^{1/2}_{2,1}(\mathbb{R}^{3})\times(\dot{H}^{1/2}(\mathbb{R}^{3}))^{2}\) in the 3-dimensional case, and \((v_{0},E_{0},B_{0})\in\dot{B}^{0}_{2,1}(\mathbb{R}^{2})\times(L^{2}_{log}(\mathbb{R}^{2}))^{2}\) in the 2-dimensional case. Later on, Germain, Ibrahim and Masmoudi [29] refined these results by running a fixed-point argument to obtain mild solutions while taking the initial velocity field in the natural Navier-Stokes space \(H^{1/2}\). In their results the regularity of the initial velocity and electromagnetic fields is lowered. Furthermore, they employed an \(L^{2}L^{\infty}\)-estimate on the velocity field, which significantly simplifies the fixed-point argument used in [47]. For other related asymptotic problems, such as the derivation of the MHD equations from the Navier-Stokes-Maxwell system in the context of weak solutions, see Arsenio-Ibrahim-Masmoudi [4]. Recently, in [54] the authors of the current paper proved the existence of global classical solutions of the incompressible NSFM with small initial data, by using the decay properties of the electric field and of the linearly damped wave equation satisfied by the divergence-free magnetic field. This key idea was already used in [29].
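To illustrate this key point, here is a brief formal sketch (assuming smooth solutions, with the conductivity \(\sigma\) constant) of how the linearly damped wave equation for the magnetic field follows from the Maxwell equations and Ohm's law in the NSFM system (1.10) below. From \(\partial_{t}B=-\nabla_{x}\times E\) and \(\partial_{t}E=\nabla_{x}\times B-j\),
\[\partial_{t}^{2}B=-\nabla_{x}\times\partial_{t}E=-\nabla_{x}\times(\nabla_{x}\times B)+\nabla_{x}\times j=\Delta_{x}B+\nabla_{x}\times j\,,\]
using \(\nabla_{x}\times(\nabla_{x}\times B)=-\Delta_{x}B\) since \(\operatorname{div}_{x}B=0\). Substituting Ohm's law \(j=nu+\sigma\big(-\tfrac{1}{2}\nabla_{x}n+E+u\times B\big)\), and using \(\nabla_{x}\times\nabla_{x}n=0\) together with \(\nabla_{x}\times E=-\partial_{t}B\), one formally obtains
\[\partial_{t}^{2}B+\sigma\,\partial_{t}B-\Delta_{x}B=\nabla_{x}\times\big(nu+\sigma\,u\times B\big)\,,\]
a wave equation for \(B\) with linear damping whose right-hand side is quadratic in the unknowns.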
Regarding the hydrodynamic limits of VMB in the context of classical solutions, the only previous result belongs to Jang [49]. In fact, in [49] a very special scaling was taken, under which the magnetic effect appeared only at a higher order. As a consequence, it vanished in the limit as the Knudsen number \(\varepsilon\to 0\), so in the limiting equations derived in [49] there were no equations for the magnetic field at all. We emphasize that in [49] the author took the Hilbert expansion approach, and the classical solutions to the VMB were built upon those of the limiting equations. So the convergence results in [49] belong to the type-II results, as named in the previous subsection. The main purpose of this paper is to obtain type-I results for the hydrodynamic limits of the Cauchy problem of the Vlasov-Maxwell-Boltzmann system (1.4)-(1.5) for soft potentials, including both the noncutoff and the cutoff cases, as the Knudsen number \(\varepsilon\) tends to zero. The key point is that we do _not_ employ the Hilbert expansion method, which, as mentioned above, has two disadvantages: first, it only gives a special type of solution of the VMB, namely a solution in the form of an expansion with finitely many terms; second, the solution to the limiting equations must be known _before_ the solutions of the VMB are constructed. The approach employed in this paper is to obtain a family of global-in-time solutions \(F_{\varepsilon}^{\pm}(t,x,v)\) to the scaled VMB with energy estimates uniform in \(0<\varepsilon<1\). Based on this uniform energy estimate, the moments of the fluctuations of \(F_{\varepsilon}^{\pm}(t,x,v)\) around the global Maxwellian converge to the solutions of the incompressible Navier-Stokes-Fourier-Maxwell (NSFM) equations. This approach automatically provides a classical solution to the NSFM equations. The first named author of this paper and Luo did this for the VMB with the hard sphere collision kernel in [53]. This paper treats the technically much harder case of general soft potentials, for both noncutoff and cutoff collision kernels. #### 1.2.1. Incompressible NSFM limit of VMB To obtain the incompressible NSFM equations formally, we follow the scalings set in [6]. More specifically, we consider the following scaled two-species VMB system: \[\left\{\begin{aligned} &\partial_{t}F_{\varepsilon}^{\pm}+\frac{1}{\varepsilon}v\cdot\nabla_{x}F_{\varepsilon}^{\pm}\pm\frac{1}{\varepsilon}(\varepsilon E_{\varepsilon}+v\times B_{\varepsilon})\cdot\nabla_{v}F_{\varepsilon}^{\pm}=\frac{1}{\varepsilon^{2}}Q(F_{\varepsilon}^{\pm},F_{\varepsilon}^{\pm})+\frac{1}{\varepsilon^{2}}Q(F_{\varepsilon}^{\pm},F_{\varepsilon}^{\mp})\,,\\ &\partial_{t}E_{\varepsilon}-\nabla_{x}\times B_{\varepsilon}=-\frac{1}{\varepsilon^{2}}\int_{\mathbb{R}^{3}}(F_{\varepsilon}^{+}-F_{\varepsilon}^{-})v\mathrm{d}v\,,\\ &\partial_{t}B_{\varepsilon}+\nabla_{x}\times E_{\varepsilon}=0\,,\\ &\mathrm{div}_{x}E_{\varepsilon}=\frac{1}{\varepsilon}\int_{\mathbb{R}^{3}}(F_{\varepsilon}^{+}-F_{\varepsilon}^{-})\mathrm{d}v\,,\\ &\mathrm{div}_{x}B_{\varepsilon}=0\end{aligned}\right. \tag{1.4}\] with initial data \[F_{\varepsilon}^{\pm}(0,x,v)=F_{\varepsilon}^{\pm,in}(x,v)\in\mathbb{R}\,,\quad E_{\varepsilon}(0,x)=E_{\varepsilon}^{in}(x)\in\mathbb{R}^{3}\,,\quad B_{\varepsilon}(0,x)=B_{\varepsilon}^{in}(x)\in\mathbb{R}^{3}\,.
\tag{1.5}\] It is well-known that the global equilibrium for the two-species VMB is \([M(v),M(v)]\), where the normalized global _Maxwellian_ is \[M(v)=\tfrac{1}{(2\pi)^{\frac{3}{2}}}\exp(-\tfrac{|v|^{2}}{2})\,. \tag{1.6}\] Set \[F_{\varepsilon}^{\pm}(t,x,v)=M(v)+\varepsilon\sqrt{M(v)}f_{\varepsilon}^{\pm }(t,x,v),\] this leads to the perturbed two-species VMB \[\left\{\begin{aligned} &\partial_{t}f_{\varepsilon}+\tfrac{1}{ \varepsilon}\big{[}v\cdot\nabla_{x}f_{\varepsilon}+q_{0}(\varepsilon E_{ \varepsilon}+v\times B_{\varepsilon})\cdot\nabla_{v}f_{\varepsilon}\big{]}+ \tfrac{1}{\varepsilon^{2}}\mathscr{L}f_{\varepsilon}-\tfrac{1}{\varepsilon}(E _{\varepsilon}\cdot v)\sqrt{M}q_{1}\\ &\qquad=\tfrac{1}{2}q_{0}(E_{\varepsilon}\cdot v)f_{\varepsilon} +\tfrac{1}{\varepsilon}\mathscr{T}(f_{\varepsilon},f_{\varepsilon})\,,\\ &\partial_{t}E_{\varepsilon}-\nabla_{x}\times B_{\varepsilon}=- \tfrac{1}{\varepsilon}\int_{\mathbb{R}^{3}}f_{\varepsilon}\cdot q_{1}v\sqrt {M}\mathrm{d}v\,,\\ &\partial_{t}B_{\varepsilon}+\nabla_{x}\times E_{\varepsilon}=0 \,,\\ &\mathrm{div}_{x}E_{\varepsilon}=\int_{\mathbb{R}^{3}}f_{ \varepsilon}\cdot q_{1}\sqrt{M}\mathrm{d}v\,,\ \mathrm{div}_{x}B_{\varepsilon}=0\,,\end{aligned}\right. \tag{1.7}\] where \(f_{\varepsilon}=[f_{\varepsilon}^{+},f_{\varepsilon}^{-}]\) represents the vector in \(\mathbb{R}^{2}\) with the components \(f_{\varepsilon}^{\pm}\), the \(2\times 2\) diagonal matrix \(q_{0}=\mathit{diag}(1,-1)\), the vector \(q_{1}=[1,-1]\), the linearized collision operator \(\mathscr{L}f_{\varepsilon}\) and the nonlinear collision term \(\mathscr{T}(f_{\varepsilon},f_{\varepsilon})\) are respectively defined by \[\mathscr{L}f_{\varepsilon}=[\mathscr{L}_{+}f_{\varepsilon},\mathscr{L}_{-}f_{ \varepsilon}],\qquad\quad\mathscr{T}(f_{\varepsilon},g)=[\mathscr{T}_{+}(f_{ \varepsilon},g),\mathscr{T}_{-}(f_{\varepsilon},g)]\] with \[\mathscr{L}_{\pm}f_{\varepsilon}= -M^{-1/2}\left\{Q\left(M,M^{1/2}(f_{\varepsilon}^{\pm}+f_{ \varepsilon}^{\mp})\right)+2Q\left(M^{1/2}f_{\varepsilon}^{\pm},M\right) \right\},\] \[\mathscr{T}_{\pm}(f_{\varepsilon},g)= M^{-1/2}Q\left(M^{1/2}f_{\varepsilon}^{\pm},M^{1/2}g^{\pm} \right)+M^{-1/2}Q\left(M^{1/2}f_{\varepsilon}^{\pm},M^{1/2}g^{\mp}\right).\] For the linearized Boltzmann collision operator \(\mathscr{L}\), it is well known, cf. 
[39, 24], that it is non-negative and the null space \(\mathcal{N}\) of \(\mathscr{L}\) is given by \[\mathcal{N}=\mathrm{span}\left\{[1,0]M^{1/2},[0,1]M^{1/2},[v_{i},v_{i}]M^{1/2} (1\leq i\leq 3),[|v|^{2},|v|^{2}]M^{1/2}\right\}.\] If we define \(\mathbf{P}\) as the orthogonal projection from \(L^{2}(\mathbb{R}^{3}_{v})\times L^{2}(\mathbb{R}^{3}_{v})\) to \(\mathcal{N}\), then for any given function \(f(t,x,v)\in L^{2}(\mathbb{R}^{3}_{v})\), one has \[\mathbf{P}f_{\varepsilon}=\left\{\rho_{\varepsilon}^{+}(t,x)[1,0]+\rho_{ \varepsilon}^{-}(t,x)[0,1]+\sum_{i=1}^{3}u_{\varepsilon}^{i}(t,x)[1,1]v_{i}+ \theta_{\varepsilon}(t,x)[1,1](|v|^{2}-3)\right\}M^{1/2}\] with \[\rho_{\varepsilon}^{\pm}=\int_{\mathbb{R}^{3}}M^{1/2}f_{\varepsilon}^{\pm} \mathrm{d}v,\quad u_{\varepsilon,i}=\frac{1}{2}\int_{\mathbb{R}^{3}}v_{i}M^{1/2 }(f_{\varepsilon}^{+}+f_{\varepsilon}^{-})\mathrm{d}v,\quad\theta_{\varepsilon }=\frac{1}{12}\int_{\mathbb{R}^{3}}(|v|^{2}-3)M^{1/2}(f_{\varepsilon}^{+}+f_{ \varepsilon}^{-})\mathrm{d}v.\] Therefore, we have the following macro-micro decomposition with respect to the given global Maxwellian \(M\) which was introduced in [40] \[f_{\varepsilon}(t,x,v)=\mathbf{P}f_{\varepsilon}(t,x,v)+\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}(t,x,v) \tag{1.8}\] where \(\mathbf{I}\) denotes the identity operator and \(\mathbf{P}f_{\varepsilon}\) and \(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\) are called the macroscopic and the microscopic component of \(f_{\varepsilon}(t,x,v)\), respectively. Using the moment method, Arsenio and Saint-Raymond in [6] proved the following limit: \[f_{\varepsilon}\to(\rho+\tfrac{1}{2}n)\tfrac{q_{1}+q_{2}}{2}\sqrt{M}+(\rho- \tfrac{1}{2}n)\tfrac{q_{2}-q_{1}}{2}\sqrt{M}+u\cdot vq_{2}\sqrt{M}+\theta( \tfrac{|v|^{2}}{2}-\tfrac{3}{2})q_{2}\sqrt{M}\,, \tag{1.9}\] where \((\rho,n,u,\theta,E,B)\) satisfies the following two-fluid incompressible NSFM with Ohm's law: \[\left\{\begin{aligned} &\partial_{t}u+u\cdot\nabla_{x}u-\mu\Delta_{ x}u+\nabla_{x}p=\tfrac{1}{2}(nE+j\times B)\,,&&\operatorname{ div}_{x}u=0\,,\\ &\partial_{t}\theta+u\cdot\nabla_{x}\theta-\kappa\Delta_{x} \theta=0\,,&&\rho+\theta=0\,,\\ &\partial_{t}E-\nabla_{x}\times B=-j\,,&& \operatorname{div}_{x}E=n\,,\\ &\partial_{t}B+\nabla_{x}\times E=0\,,&& \operatorname{div}_{x}B=0\,,\\ & j-nu=\sigma\big{(}-\tfrac{1}{2}\nabla_{x}n+E+u\times B\big{)} \,,&& w=\tfrac{3}{2}n\theta\,,\end{aligned}\right. \tag{1.10}\] where the vectors \(q_{1}=[1,-1]\), \(q_{2}=[1,1]\), for a detailed definition of the viscosity \(\mu\), heat conductivity \(\kappa\), and electrical conductivity \(\sigma\), please refer to [6] for their derivation. We will not give detailed formal derivation here. In our proof of Theorem 2.1, we indeed provide how the NSFM can be derived from VMB with soft potential, including the noncutoff cases and cutoff cases. ### Notations * For convention, we index the usual \(L^{p}\) space by the name of the concerned variable. 
So we have, for \(p\in[1,+\infty]\), \[L^{p}_{[0,T]}=L^{p}([0,T])\,,\ \ L^{p}_{x}=L^{p}(\mathbb{R}^{3})\,,\ \ L^{p}_{v}=L^{p}(\mathbb{R}^{3})\,,\ \ L^{p}_{x,v}=L^{p}(\mathbb{R}^{3}\times\mathbb{R}^{3})\,.\] For \(p=2\), we use the notations \((\cdot\,,\cdot)_{L^{2}_{x}}\), \(\langle\cdot\,,\cdot\rangle_{L^{2}_{v}}\) and \((\cdot\,,\cdot)_{L^{2}_{x,v}}\) to represent the inner products on the Hilbert spaces \(L^{2}_{x}\), \(L^{2}_{v}\) and \(L^{2}_{x,v}\), respectively; * \(\langle\cdot\rangle=\sqrt{1+|\cdot|^{2}}\,\); * For any multi-indices \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) and \(\beta=(\beta_{1},\beta_{2},\beta_{3})\) in \(\mathbb{N}^{3}\) we denote the \((\alpha,\beta)^{th}\) partial derivative by \[\partial^{\alpha}_{\beta}=\partial^{\alpha}_{x}\partial^{\beta}_{v}=\partial^{\alpha_{1}}_{x_{1}}\partial^{\alpha_{2}}_{x_{2}}\partial^{\alpha_{3}}_{x_{3}}\partial^{\beta_{1}}_{v_{1}}\partial^{\beta_{2}}_{v_{2}}\partial^{\beta_{3}}_{v_{3}}\,;\] * As in [1], \[|g|^{2}_{L^{2}_{D}}=|g|^{2}_{D} \equiv\int_{\mathbb{R}^{6}\times\mathbb{S}^{2}}\mathbf{B}(v-u,\sigma)\mu(u)\left(g^{\prime}-g\right)^{2}\mathrm{d}u\mathrm{d}v\mathrm{d}\sigma\] \[\quad+\int_{\mathbb{R}^{6}\times\mathbb{S}^{2}}g(u)^{2}\left(\mu(u^{\prime})^{\frac{1}{2}}-\mu(u)^{\frac{1}{2}}\right)^{2}\mathrm{d}u\mathrm{d}v\mathrm{d}\sigma;\] and \(\|g\|_{D}=\|g\|_{L^{2}_{x}L^{2}_{D}}=\||g|_{L^{2}_{D}}\|_{L^{2}_{x}}\). * For \(l\in\mathbb{R}\), \(\langle v\rangle=\sqrt{1+|v|^{2}}\), \(L^{2}_{l}\) denotes the weighted space with norm \(|g|^{2}_{L^{2}_{l}}\equiv\int_{\mathbb{R}^{3}_{v}}|g(v)|^{2}\langle v\rangle^{2l}dv\). The weighted fractional Sobolev norm \(|g(v)|^{2}_{H^{s}_{l}}=|\langle v\rangle^{l}g(v)|^{2}_{H^{s}}\) is given by \[|g(v)|^{2}_{H^{s}_{l}}=|g|^{2}_{L^{2}_{l}}+\int_{\mathbb{R}^{3}}\mathrm{d}v\int_{\mathbb{R}^{3}}\mathrm{d}v^{\prime}\frac{[\langle v\rangle^{l}g(v)-\langle v^{\prime}\rangle^{l}g(v^{\prime})]^{2}}{|v-v^{\prime}|^{3+2s}}\chi_{|v-v^{\prime}|\leq 1},\] where \(\chi_{\Omega}\) is the standard indicator function of the set \(\Omega\). Moreover, in \(\mathbb{R}^{3}_{x}\times\mathbb{R}^{3}_{v}\), \(\|\cdot\|_{H^{s}_{l}}=\||\cdot|_{H^{s}_{l}}\|_{L^{2}_{x}}\) is used; * \(|g|_{\nu}\equiv|g\langle v\rangle^{\frac{\gamma}{2}}|_{L^{2}_{v}}\), \(\|g\|_{\nu}\equiv\|g\langle v\rangle^{\frac{\gamma}{2}}\|_{L^{2}_{x}L^{2}_{v}}\) for \(-3<\gamma\leq 1\); * We use \(\Lambda^{-\varrho}g(t,x,v)\) to denote \[\Lambda^{-\varrho}g(t,x,v)=\int_{\mathbb{R}^{3}_{\xi}}|\xi|^{-\varrho}\widehat{g}(t,\xi,v)e^{2\pi ix\cdot\xi}\mathrm{d}\xi,\] where \(\widehat{g}(t,\xi,v)=\int_{\mathbb{R}^{3}_{x}}g(t,x,v)e^{-2\pi ix\cdot\xi}\mathrm{d}x\). ### The structure In the next section, we state the main results for both non-cutoff soft potentials and cutoff soft potentials. In Sections 3 and 4, we establish the uniform bounds on the solutions, independent of time \(t\) and of \(\varepsilon\), for non-cutoff soft potentials and cutoff soft potentials, respectively. In Section 5, based on the uniform global-in-time energy bound, we take the limit to derive the incompressible NSFM system with Ohm's law. Some basic properties of the linear collision operator and of the bilinear symmetric operator are given in the Appendix. ## 2.
Main results ### Non-cutoff cases For brevity of statement, we introduce the following energy and dissipation rate functionals, respectively: for some large \(N\in\mathbb{N}\), \[\mathcal{E}_{N}(t) =\|f_{\varepsilon}\|^{2}_{H^{N}_{x}}+\|E_{\varepsilon}\|^{2}_{H^{N}_{x}}+\|B_{\varepsilon}\|^{2}_{H^{N}_{x}}, \tag{2.1}\] \[\mathcal{D}_{N}(t) =\tfrac{1}{\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{H^{N}_{x}L^{2}_{D}}+\|\nabla_{x}\mathbf{P}f_{\varepsilon}\|^{2}_{H^{N-1}_{x}L^{2}_{v}}+\|E_{\varepsilon}\|^{2}_{H^{N-1}_{x}}+\|\nabla_{x}B_{\varepsilon}\|^{2}_{H^{N-2}_{x}}. \tag{2.2}\] Due to the weaker dissipation of the Boltzmann operator for soft potentials compared with the hard potential cases, and in order to deal with the external force term coming from the Lorentz electromagnetic force, in particular the growth in the velocity \(v\), we need to introduce the time-velocity weight \[w_{\ell}(\alpha,\beta)=e^{\frac{q\langle v\rangle}{(1+t)^{\vartheta}}}\langle v\rangle^{4(\ell-|\alpha|-|\beta|)},\quad q\ll 1,\ \ell\geq N. \tag{2.3}\] The energy and dissipation rate functionals with respect to \(w_{\ell}(\alpha,\beta)\) are introduced, respectively, as \[\mathcal{E}_{N,\ell}(t) =\sum_{|\alpha|+|\beta|\leq N-1}\bigl\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\bigr\|^{2}+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\varepsilon^{2}\sum_{|\alpha|=N}\|w_{\ell}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|^{2}\] \[\quad+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\sum_{|\alpha|+|\beta|=N,\atop|\beta|\geq 1}\left\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}, \tag{2.4}\] \[\mathcal{D}_{N,\ell}(t) =\tfrac{1}{\varepsilon^{2}}\sum_{|\alpha|+|\beta|\leq N-1}\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{D}\] \[\quad+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\sum_{|\alpha|+|\beta|\leq N-1}\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{1}{2}}\|^{2}\] \[\quad+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\left\{\sum_{|\alpha|=N}\|w_{\ell}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|^{2}_{D}+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\varepsilon^{2}\sum_{|\alpha|=N}\|w_{\ell}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\langle v\rangle^{\frac{1}{2}}\|^{2}\right\}\] \[\quad+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\frac{1}{\varepsilon^{2}}\sum_{|\alpha|+|\beta|=N,\atop|\beta|\geq 1}\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{1}{2}}\|^{2}\] \[\quad+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\frac{q\vartheta}{(1+t)^{1+\vartheta}}\sum_{|\alpha|+|\beta|=N,\atop|\beta|\geq 1}\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{1}{2}}\|^{2}.
\tag{2.5}\] To obtain the desired temporal time decays, we introduce the energy and dissipation rate functional with the lowest \(k\)-order space-derivative \[\mathcal{E}_{k\to N_{0}}(t)=\sum_{|\alpha|=k}^{N_{0}}\left\|\partial^{\alpha}[ f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}, \tag{2.6}\] \[\mathcal{D}_{k\to N_{0}}(t) =\left\|\nabla^{k}[E_{\varepsilon},\rho_{\varepsilon}^{+}-\rho_{ \varepsilon}^{-}]\right\|^{2}+\sum_{k+1\leq|\alpha|\leq N_{0}-1}\left\|\partial^ {\alpha}[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}\] \[\quad+\sum_{|\alpha|=N_{0}}\left\|\partial^{\alpha}\mathbf{P}f_{ \varepsilon}\right\|^{2}+\tfrac{1}{\varepsilon^{2}}\sum_{k\leq|\alpha|\leq N_ {0}}\left\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{ D}^{2}, \tag{2.7}\] respectively. The first part of the theorems is the global existence of the scaled two-species VMB system (1.7) with uniform energy estimate with respect to the Knudsen number \(0<\varepsilon\leq 1\), both for non-cutoff and cutoff collision kernels. **Theorem 2.1**.: _Assume that_ * \(\max\left\{-3,-\frac{3}{2}-2s\right\}<\gamma<-2s,\;\;\frac{1}{2}\leq s<1\)_;_ * \(0<q\ll 1\)_,_ \(0<\varepsilon<1\)_;_ * \(\frac{1}{2}<\varrho<\frac{3}{2}\)_,_ \(0<\vartheta\leq\frac{\rho}{2}-\frac{1}{4}\) _;_ * \(0<\epsilon_{0}\leq 2(1+\varrho)\)_;_ * \(\bar{l}\) _is a properly large positive constant;_ * \(N_{0}\geq 4\)_,_ \(N=2N_{0}\) _and_ \(l\geq\bar{l}+N+\frac{1}{2}\)_;_ * _the initial data_ \[Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}(0) \equiv\sum_{|\alpha|\leq N}\left\|e^{q\langle v\rangle}\langle v \rangle^{4(l-|\alpha|-|\beta|)}\partial^{\alpha}f_{\varepsilon,0}\right\|\] \[\quad+\|[E_{\varepsilon,0},B_{\varepsilon,0}]\|_{H^{N}_{x}}+\| \Lambda^{-\varrho}[f_{\varepsilon,0},E_{\varepsilon,0},B_{\varepsilon,0}]\|\] (2.8) _is less than a sufficiently small positive constant, which is independent of_ \(\varepsilon\)_._ _Then the Cauchy problem to (1.7) admits admits the unique global-in-time solutions \([f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\in H^{N}_{x}L^{2}_{v} \times H^{N}_{x}\times H^{N}_{x}\), we can also deduce that there exist energy functionals \(\mathcal{E}_{N}(t)\), \(\mathcal{E}_{N,l}(t)\) and the corresponding energy dissipation functionals \(\mathcal{D}_{N}(t)\), \(\mathcal{D}_{N,l}(t)\) which satisfy (2.1), (2.2), (2.4) and (2.5) respectively such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t) \right\}+\mathcal{D}_{N}(t)+\mathcal{D}_{N-1,l}(t)\lesssim 0 \tag{2.9}\] _holds for all \(0\leq t\leq T\)._ _Meanwhile, we also get the large time behavior in the following result:_ \[\mathcal{E}_{k\to N_{0}}(t)\lesssim Y_{f_{\varepsilon},E_{\varepsilon},B_{ \varepsilon}}^{2}(0)(1+t)^{-k-\varrho},\;k=0,1,\cdots,N_{0}-2, \tag{2.10}\] _where \(\mathcal{E}_{k\to N_{0}}(t)\) is defined in (2.6)._ ### Cutoff cases We introduce the time-velocity weight: \[\overline{w}_{l}(\alpha,\beta)=\overline{w}_{l}(\alpha,\beta)(t,v)=\langle v \rangle^{l-|\alpha|-2|\beta|}e^{\frac{q\langle v\rangle^{2}}{(1+t)^{\varrho}}}\] and \[\overline{\mathcal{E}}_{N-1,l_{1}}(t) \equiv\sum_{|\alpha|+|\beta|\leq N-1}\left\|\overline{w}_{l_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right\|^{2}, \tag{2.11}\] \[\overline{\mathcal{D}}_{N-1,l_{1}}(t) \equiv\sum_{|\alpha|+|\beta|\leq N-1}\left\|\overline{w}_{l_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right\|_{\nu}^{2}\] 
\[\quad+\sum_{|\alpha|+|\beta|\leq N-1}\frac{q\vartheta}{(1+t)^{1+ \vartheta}}\left\|\overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle\right\|^{2}. \tag{2.12}\] Define another weight \[\widetilde{w}_{\ell}(\alpha,\beta)=\widetilde{w}_{\ell}(\alpha,\beta)(t,v)= \langle v\rangle^{\ell-|\alpha|-\frac{1}{2}|\beta|}e^{\frac{q\langle v\rangle^ {2}}{(1+t)^{\varrho}}}.\] When \(n\leq N\), \[\mathcal{E}_{\ell}^{n,j}(t) \equiv\chi_{n=N}(1+t)^{-\sigma_{N,0}}\sum_{|\alpha|=N}\|\widetilde{w }_{\ell}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|^{2}\] \[\quad+\sum_{\begin{subarray}{c}|\alpha|+|\beta|=n,\\ |\beta|\leq j\end{subarray}}(1+t)^{-\sigma_{n,|\beta|}}\left\|\widetilde{w}_{ \ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}\] \[\quad+\sum_{\begin{subarray}{c}|\alpha|+|\beta|=n,\\ |\beta|\leq j\end{subarray}}(1+t)^{-\sigma_{n,|\beta|}}\left\|\widetilde{w}_{ \ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}, \tag{2.13}\] \[\mathcal{D}_{\ell}^{n,j}(t) \equiv\sum_{\begin{subarray}{c}|\alpha|+|\beta|=n,\\ |\beta|\leq j\end{subarray}}(1+t)^{-\sigma_{n,|\beta|}}\left\|\widetilde{w}_{ \ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}_{\nu}\] \[\quad+\sum_{\begin{subarray}{c}|\alpha|+|\beta|=n,\\ |\beta|\leq j\end{subarray}}(1+t)^{-\sigma_{n,|\beta|}}\frac{q\vartheta\varepsilon ^{2}}{(1+t)^{1+\vartheta}}\|\widetilde{w}_{\ell}(\alpha,\beta)\partial^{ \alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle\|^{2}. \tag{2.14}\] Furthermore, for brevity, \[\mathbb{E}_{\ell}^{(n)}(t)\equiv\sum_{j\leq n}\mathcal{E}_{\ell}^{n,j}(t),\ \ \mathbb{D}_{\ell}^{(n)}(t)\equiv\sum_{j\leq n}\mathcal{D}_{\ell}^{n,j}(t). \tag{2.15}\] To obtain the desired temporal time decays, we introduce the energy and dissipation rate functional with the lowest \(k\)-order space-derivative \[\mathcal{E}_{k\to N_{0}}(t) =\sum_{|\alpha|=k}^{N_{0}}\|\partial^{\alpha}[f_{\varepsilon},E_{ \varepsilon},B_{\varepsilon}]\|^{2}\,, \tag{2.16}\] \[\mathcal{D}_{k\to N_{0}}(t) =\left\|\nabla^{k}[E_{\varepsilon},\rho_{\varepsilon}^{+}-\rho_ {\varepsilon}^{-}]\right\|^{2}+\sum_{k+1\leq|\alpha|\leq N_{0}-1}\|\partial^{ \alpha}[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2}\] \[\quad+\sum_{|\alpha|=N_{0}}\|\partial^{\alpha}\mathbf{P}f_{ \varepsilon}\|^{2}+\tfrac{1}{\varepsilon^{2}}\sum_{k\leq|\alpha|\leq N_{0}}\| \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{\nu}\,, \tag{2.17}\] respectively. Meanwhile, we also define \[\mathcal{E}_{1\to N_{0}-1,\ell}(t) =\sum_{\begin{subarray}{c}|\alpha|+|\beta|\leq N_{0}-1,\\ |\alpha|\geq 1\end{subarray}}\left\|\overline{w}_{\ell}(\alpha,\beta) \partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}, \tag{2.18}\] \[\mathcal{D}_{1\to N_{0}-1,\ell}(t) =\sum_{\begin{subarray}{c}|\alpha|+|\beta|\leq N_{0}-1,\\ |\alpha|\geq 1\end{subarray}}\left\|\overline{w}_{\ell}(\alpha,\beta) \partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_ {\nu}. 
\tag{2.19}\] **Theorem 2.2**.: _Assume that_ * \(-1\leq\gamma<0\)_;_ * \(N_{0}\geq 5\)_,_ \(N=2N_{0}\)_;_ * \(0<q\ll 1\)_,_ \(0<\varepsilon<1\)_;_ * \(\frac{1}{2}\leq\varrho<\frac{3}{2}\)_,_ \(0<\vartheta\leq\frac{2}{3}\rho\)_;_ * \(0<\epsilon_{0}\leq 2(1+\varrho)\)_;_ * \(\sigma_{N,0}=\frac{1+\epsilon_{0}}{2}\)_,_ \(\sigma_{n,0}=0\) _for_ \(n\leq N-1\)_,_ \(\sigma_{n,j+1}-\sigma_{n,j}=\frac{1+\epsilon_{0}}{2}\)_;_ * \(\tilde{l}\) _is a properly large positive constant;_ * \(l_{1}\geq N+\tilde{l}\)_,_ \(\tilde{\ell}\geq\frac{3}{2}\sigma_{N-1,N-1}\)_,_ \(\ell_{1}\geq l_{1}+\tilde{\ell}+\frac{1}{2}\)_,_ \(\overline{\ell}_{0}\geq\ell_{1}+\frac{3}{2}N\)_,_ \(l_{0}\geq\overline{\ell}_{0}+\frac{5}{2}\)_,_ \(\ell_{0}\geq l_{0}+\tilde{\ell}+\frac{1}{2}\)_,_ \(l^{H}\geq\ell_{0}+N\)_,_ _if the initial data_ \[Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}(0)\equiv\sum_{|\alpha|+| \beta|\leq N}\left\|e^{q\langle v\rangle^{2}}\langle v\rangle^{l^{H}}\partial ^{\alpha}_{\beta}f_{\varepsilon,0}\right\|\] \[+\|[E_{\varepsilon,0},B_{\varepsilon,0}]\|_{H^{N}_{x}}+\|\Lambda^{-\varrho}[f_{ \varepsilon,0},E_{\varepsilon,0},B_{\varepsilon,0}]\| \tag{2.20}\] _is less than a sufficiently small positive constant, which is independent of \(\varepsilon\), then the Cauchy problem to (1.7) admits the unique global-in-time solutions \([f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\in H^{N}_{x}L^{2}_{v}\times H ^{N}_{x}\times H^{N}_{x}\), and we can deduce that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\sum_{n\leq N_{0}}\mathbb{E }^{(n)}_{\ell_{0}}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{E}^{(n)}_{\ell_{1}}(t )+\varepsilon^{2}\mathbb{E}^{(N)}_{\ell_{1}}(t)\right\}\] \[+\frac{\mathrm{d}}{\mathrm{d}t}\left\{\overline{\mathcal{E}}_{N- 1,l_{1}}(t)+\overline{\mathcal{E}}_{N_{0}-1,l_{0}}(t)+\mathcal{E}_{N}(t)+\| \Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2} \right\}+\sum_{n\leq N_{0}}\mathbb{D}^{(n)}_{\ell_{0}}(t)\] \[+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}^{(n)}_{\ell_{1}}(t)+ \varepsilon^{2}\mathbb{D}^{(N)}_{\ell_{1}}(t)+\overline{\mathcal{D}}_{N-1,l_ {1}}(t)+\overline{\mathcal{D}}_{N-1,l_{0}}(t)+\mathcal{D}_{N}(t)\lesssim 0 \tag{2.21}\] _holds for all \(0\leq t\leq T\)._ _Meanwhile, we also get the large time behavior in the following result:_ \[\mathcal{E}_{k\to N_{0}}(t)\lesssim Y^{2}_{f_{\varepsilon},E_{\varepsilon},B_ {\varepsilon}}(0)(1+t)^{-k-\varrho},\ k=0,1,\cdots,N_{0}-2, \tag{2.22}\] _where \(\mathcal{E}_{k\to N_{0}}(t)\) is defined in (2.6)._ ### The limits The second is on the two-fluid incompressible NSFM limit with Ohm's law as \(\varepsilon\to 0\), taken from the solutions \((f_{\varepsilon},E_{\varepsilon},B_{\varepsilon})\) of system (1.7) which are constructed in the first theorem. **Theorem 2.3**.: _Take the assumption as in Theorem 2.1. Assume that the initial data \((f_{\varepsilon,0},E_{\varepsilon,0},B_{\varepsilon,0})\) satisfy_ 1. \(f_{\varepsilon,0}\in H^{N}_{x}L^{2}_{v}\)_,_ \(E_{\varepsilon,0}\)_,_ \(B_{\varepsilon,0}\in H^{N}_{x}\)_;_ 2. 
_there exist scalar functions_ \(\rho(0,x)\)_,_ \(\theta(0,x)\)_,_ \(n(0,x)\in H^{N}_{x}\) _and vector-valued functions_ \(u(0,x)\)_,_ \(E(0,x)\)_,_ \(B(0,x)\in H^{N}_{x}\) _such that_ \[\begin{array}{l}f_{\varepsilon,0}\to f(0,x,v)\quad\text{strongly in }H^{N}_{x}L^{2}_{v}\,,\\ E_{\varepsilon,0}\to E(0,x)\quad\text{strongly in }H^{N}_{x}\,,\\ B_{\varepsilon,0}\to B(0,x)\quad\text{strongly in }H^{N}_{x}\end{array} \tag{2.23}\] _as_ \(\varepsilon\to 0\)_, where_ \(f(0,x,v)\) _is of the form_ \[\begin{aligned}f(0,x,v)=&\,(\rho(0,x)+\tfrac{1}{2}n(0,x))\tfrac{\mathbf{q}_{1}+\mathbf{q}_{2}}{2}\sqrt{M}+(\rho(0,x)-\tfrac{1}{2}n(0,x))\tfrac{\mathbf{q}_{2}-\mathbf{q}_{1}}{2}\sqrt{M}\\ &+u(0,x)\cdot v\,\mathbf{q}_{2}\sqrt{M}+\theta(0,x)(\tfrac{|v|^{2}}{2}-\tfrac{3}{2})\mathbf{q}_{2}\sqrt{M}\,.\end{aligned} \tag{2.24}\] _Let \((f_{\varepsilon},E_{\varepsilon},B_{\varepsilon})\) be the family of solutions to the scaled two-species VMB (1.7) constructed in Theorem 2.1. Then, as \(\varepsilon\to 0\),_ \[f_{\varepsilon}\to(\rho+\tfrac{1}{2}n)\tfrac{\mathbf{q}_{1}+\mathbf{q}_{2}}{2}\sqrt{M}+(\rho-\tfrac{1}{2}n)\tfrac{\mathbf{q}_{2}-\mathbf{q}_{1}}{2}\sqrt{M}+u\cdot v\,\mathbf{q}_{2}\sqrt{M}+\theta(\tfrac{|v|^{2}}{2}-\tfrac{3}{2})\mathbf{q}_{2}\sqrt{M} \tag{2.25}\] _weakly-\(\star\) in \(t\geq 0\), strongly in \(H^{N-1}_{x}L^{2}_{v}\) and weakly in \(H^{N}_{x}L^{2}_{v}\), and_ \[E_{\varepsilon}\to E\quad\text{and}\quad B_{\varepsilon}\to B \tag{2.26}\] _strongly in \(C(\mathbb{R}^{+};H^{N-1}_{x})\), weakly-\(\star\) in \(t\geq 0\) and weakly in \(H^{N}_{x}\). Here_ \[(u,\theta,n,E,B)\in C(\mathbb{R}^{+};H^{N-1}_{x})\cap L^{\infty}(\mathbb{R}^{+};H^{N}_{x})\] _is the solution to the incompressible NSFM (1.10) with Ohm's law, with initial data_ \[u|_{t=0}=\mathcal{P}u(0,x)\,,\ \theta|_{t=0}=\tfrac{3}{5}\theta(0,x)-\tfrac{2}{5}\rho(0,x)\,,\ E|_{t=0}=E(0,x)\,,\ B|_{t=0}=B(0,x)\,, \tag{2.27}\] _where \(\mathcal{P}\) is the Leray projection. Moreover, the convergence of the moments holds:_ \[\begin{array}{l}\mathcal{P}\langle f_{\varepsilon},\tfrac{1}{2}\mathbf{q}_{2}v\sqrt{M}\rangle_{L^{2}_{v}}\to u\,,\\ \langle f_{\varepsilon},\tfrac{1}{2}\mathbf{q}_{2}(\tfrac{|v|^{2}}{5}-1)\sqrt{M}\rangle_{L^{2}_{v}}\to\theta\,,\end{array} \tag{2.28}\] _strongly in \(C(\mathbb{R}^{+};H^{N-1}_{x})\), weakly-\(\star\) in \(t\geq 0\) and weakly in \(H^{N}_{x}\) as \(\varepsilon\to 0\)._ ### The idea and the outline of the proof In order to better illustrate the difficulties encountered for the soft potentials considered in this paper, especially when compared with the hard sphere model, we first state the main strategies adopted in [53] to deal with the hard sphere model. #### 2.4.1. The strategy in [53] for the hard sphere model * First, under the hard sphere model, the following dissipative mechanism can be derived from the coercivity of the linearized collision operator: \[\|\cdot\|_{\nu}^{2}\equiv\|\cdot\langle v\rangle^{\frac{1}{2}}\|^{2},\] from which we see that the extra power of velocity in the dissipation norm is sufficient to control a single growth in velocity caused by the Lorentz force, namely in the terms \(E_{\varepsilon}\cdot vf_{\varepsilon}\), \(E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\) and \(\frac{1}{\varepsilon}v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\).
* Secondly, in order to control the singularity in \(\varepsilon\) in the magnetic field term, especially the singularity in the energy estimates of the macroscopic quantities, they observed that the contributions of the two species cancel, so that the corresponding estimate of the macroscopic quantities vanishes. For example, the following identity holds: \[\left(\frac{1}{\varepsilon}v\times\partial_{x}^{\alpha_{1}}B_{\varepsilon}\cdot\partial_{x}^{\alpha-\alpha_{1}}\mathbf{P}f_{\varepsilon},\partial_{x}^{\alpha}\mathbf{P}f_{\varepsilon}\right)=0.\] * Finally, in order to obtain a dissipative mechanism for the electromagnetic field, they used Ohm's law to derive an equation containing a damping term for the electric field. This observation played a very important role in their final estimates, which are uniform in both time \(t\) and \(\varepsilon\). #### 2.4.2. The difficulty for soft potentials However, for the soft potential case considered in this paper, we face more difficulties: we must control not only the growth in velocity, but also the growth in time and, more importantly, the singularity in \(\varepsilon\). The specific difficulties are as follows: * First, the dissipation mechanism of the linearized collision operator is much weaker than in the hard sphere model, namely \[\|\cdot\|_{\nu}\equiv\|\cdot\langle v\rangle^{\frac{\gamma}{2}}\|,\quad\gamma<0,\] and this dissipative mechanism cannot control the growth in velocity in the nonlinear terms containing the Lorentz force. * Secondly, we refer to previous practice for kinetic models with external forces other than the hard sphere model: Guo in [44] proposed algebraic weights to deal with the Coulomb potential for VPL, while Duan-Liu-Yang-Zhao in [23] and Duan-Lei-Yang-Zhao in [24] used the extra dissipation generated by exponential weights to deal with the soft potentials for VMB. Whether one uses algebraic or exponential weight functions, the weighted energy estimate of the linear operator \(\mathscr{L}\) at the highest order produces a singular term involving the highest-order macroscopic quantity, as follows: \[\frac{1}{\varepsilon^{2}}\left(\mathscr{L}\partial^{\alpha}f_{\varepsilon},w^{2}\partial^{\alpha}f_{\varepsilon}\right)\gtrsim\cdots-\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right\|^{2}, \tag{2.29}\] which leads to the new singular term \(\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right\|^{2}\). This new singular term can no longer be controlled! * Thirdly, in order to control the growth in velocity in the nonlinear terms caused by the Lorentz force, the only feasible method so far is the strategy of exponential weight functions \(w_{\ell}\), i.e. \(\langle v\rangle^{\ell}e^{\frac{q\langle v\rangle}{(1+t)^{\vartheta}}}\) or \(\langle v\rangle^{\ell}e^{\frac{q\langle v\rangle^{2}}{(1+t)^{\vartheta}}}\): we use the extra dissipative term generated by the exponential weight function to absorb the growth in velocity in the nonlinear terms. This requires the electromagnetic field to have a certain time decay rate.
For example, for the soft potential case under Grad's angular cutoff assumption, the weighted energy estimate of the nonlinear term containing the magnetic field involves a contribution of the form \[\frac{1}{\varepsilon}\|\partial_{x}^{e_{i}}B_{\varepsilon}\|_{L^{\infty}_{x}}\|\langle v\rangle^{\frac{1}{2}}w_{\ell}\partial_{\beta+e_{i}}^{\alpha-e_{i}}f_{\varepsilon}\|\|\langle v\rangle^{\frac{1}{2}}w_{\ell}\partial_{\beta}^{\alpha}f_{\varepsilon}\|. \tag{2.30}\] We see that even if the magnetic field term has a certain time decay, i.e. \(\frac{1}{(1+t)^{1+\vartheta}}\), this term cannot be controlled by the additional dissipative term \[\frac{q\vartheta}{(1+t)^{1+\vartheta}}\|\langle v\rangle w_{\ell}\partial_{\beta}^{\alpha}f_{\varepsilon}\|^{2} \tag{2.31}\] because of the singular factor \(\frac{1}{\varepsilon}\) in (2.30). Therefore, how to control the weighted energy estimate (2.30) is the most challenging problem in our proof. * Fourthly, for the transport term \(\frac{1}{\varepsilon}v\cdot\nabla_{x}f_{\varepsilon}\), under the hard sphere model or in the hard potential case, the energy estimate of this term can be controlled by the dissipative mechanism \(\frac{1}{\varepsilon}\|\cdot\|_{\nu}\) coming from the corresponding linear operator \(\frac{1}{\varepsilon^{2}}\mathscr{L}\), despite the singular factor, with or without weights. For the soft potential case, however, the situation is quite different. Indeed, \[\frac{1}{\varepsilon}(\partial_{\beta}^{\alpha}(v\cdot\nabla_{x}f_{\varepsilon}),w_{\ell}\partial_{\beta}^{\alpha}f_{\varepsilon})\leq\frac{1}{\varepsilon}\|w_{\ell}\partial_{\beta-e_{i}}^{\alpha+e_{i}}f_{\varepsilon}\|\|w_{\ell}\partial_{\beta}^{\alpha}f_{\varepsilon}\|, \tag{2.32}\] where we assume for simplicity that the weight function \(w_{\ell}\) produces neither gain nor loss in velocity; we see that there is a loss in velocity relative to the dissipative mechanism of the linear operator \(\mathscr{L}\). If we try to use the extra dissipation mechanism (2.31) to control (2.32), we find that, since the transport term \(\frac{1}{\varepsilon}v\cdot\nabla_{x}f_{\varepsilon}\) is linear, unlike (2.30) it lacks any time decay, and the singular factor \(\frac{1}{\varepsilon}\) is still present. So, in a sense, controlling (2.32) is harder than controlling (2.30). * Finally, from the above discussion, a basic premise is that the electromagnetic field must have a certain time decay rate. Due to the singularity \(\frac{1}{\varepsilon}\) in the nonlinear terms, linear analysis combined with Duhamel's principle should yield some time decay rates for the electromagnetic field, but these decay rates are \(O(\varepsilon^{-1})\). Looking again at (2.30), which carries a further singular factor \(\frac{1}{\varepsilon}\), the estimate becomes even harder. Therefore, can we obtain better time decay rates that are independent of \(\varepsilon\)? #### 2.4.3. The uniform estimates for the non-cutoff cases To overcome the above difficulties induced by the singular terms, the main strategies and novelties can be summarized as follows. 1.
For the \(L^{2}\)-energy estimate of spatial derivatives, assuming that the electromagnetic field has some time decay, we can obtain a direct estimation in the following : \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\] \[\lesssim M_{1}(1+t)^{-\varrho-\frac{3}{2}}\left\{\left\|\langle v \rangle^{\frac{7}{4}}\nabla_{x}^{N}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}+\sum_{|\alpha^{\prime}|=N-1}\|\langle v\rangle^{\frac{7}{4}} \partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \|^{2}\right\}+\cdots.\] 2. To control the growth of the velocity on the right-hand side of the above inequality, we need to obtain the energy estimate of the highest-order weighted spatial derivative. As (2.29), we multiply \(\varepsilon^{2}\) to the energy estimates with weight on the highest-order spatial derivatives, such that \(\frac{1}{\varepsilon^{2}}\left\|\nabla_{x}^{N}\mathbf{P}f_{\varepsilon}\right\| ^{2}\) can be controlled by the macroscopic dissipation terms. 3. To overcome singular factors \(\frac{1}{\varepsilon}\) and the velocity growth for the transport term \(\frac{1}{\varepsilon}v\cdot\nabla_{x}f_{\varepsilon}\) and the nonlinear term containing magnetic field \(\frac{1}{\varepsilon}v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\), we can design a time-velocity weight function as follows: \[w_{\ell}(\alpha,\beta)=e^{\frac{q(v)}{(1+t)^{\vartheta}}}\langle v\rangle^{4( \ell-|\alpha|-|\beta|)}.\] We make full use of the dissipative mechanism and weight function brought by the strong angular singularity i.e. \(\frac{1}{2}\leq s<1\), and have the following related estimates such as \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\cdot \nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],w_{l}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[= -\frac{1}{\varepsilon}\int_{\mathbb{R}_{\mathbb{R}}^{3}\times \mathbb{R}_{\mathbb{S}}^{3}}i\xi_{j}\mathcal{F}\left[\partial_{\beta-e_{i}}^{ \alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}w_{l}(\alpha+e_{i},\beta -e_{i})\langle v\rangle^{-\frac{3}{2}}\right]\] \[\times\overline{\mathcal{F}\left[\langle v\rangle^{\frac{3}{2}}w_ {l}(\alpha,\beta)\partial_{\beta-e_{i}}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right]}d\xi dx\] \[\lesssim\left\|w_{l}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e _{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2} +\frac{\eta}{\varepsilon^{2}}\left\|w_{l}(\alpha,\beta-e_{j})\partial_{\beta- e_{j}}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] and \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{ P}\}f_{\varepsilon}\right)\] \[= \frac{1}{\varepsilon}\sum_{|\alpha_{1}|=1,\beta_{1}=0}\left( \partial_{\beta_{1}}^{\alpha_{1}}\left[v\times B_{\varepsilon}\right]\cdot \partial_{\beta-\beta_{1}}^{\alpha-\alpha_{1}}\left[\nabla_{v}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)+\cdots\] \[\lesssim \frac{1}{\varepsilon}\|\partial^{e_{i}}B_{\varepsilon}\|_{L_{x}^{ \infty}}\left\|w_{l}(\alpha-e_{i},\beta)\partial_{\beta}^{\alpha-e_{i}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\frac{\eta}{ \varepsilon}\left\|w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- 
\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\cdots\] where we used the fact that \[v_{j}\langle v\rangle^{\frac{3}{2}}w_{l}(\alpha,\beta)\leq v_{j}\langle v \rangle^{-\frac{5}{2}}w_{l}(\alpha-e_{i},\beta)\leq\langle v\rangle^{-\frac{3} {2}}w_{l}(\alpha-e_{i},\beta).\] In this way, combined with other related estimates, we can obtain the estimates we want, \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\sum_{|\alpha|=N}\varepsilon ^{2}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\sum_{ \genfrac{}{}{0.0pt}{}{|\alpha|+|\beta|=N,\atop|\beta|\geq 1}}\left\|w_{l}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\| ^{2}\right\}+\cdots\] \[\lesssim \varepsilon^{2}\left\|\nabla_{x}^{N}E_{\varepsilon}\right\|\left\| \nabla_{x}^{N}f_{\varepsilon}\right\|_{\nu}+\cdots.\] (2.33) In order to overcome the regularization loss of the electromagnetic field, we multiply the above inequality by a time factor \((1+t)^{-\frac{1+t\alpha}{2}}\) so that it can be controlled in the following \[(1+t)^{-\frac{1+t\alpha}{2}}\varepsilon^{2}\left\|\nabla_{x}^{N}E_{\varepsilon} \right\|\left\|\nabla_{x}^{N}f_{\varepsilon}\right\|_{\nu}\lesssim(1+t)^{-(1+ \epsilon_{0})}\mathcal{E}_{N}(t)+\eta\mathcal{D}_{N}(t).\] 4. Except for the energy estimation of the highest spatial derivative, we must use the microscopic projection equation (3.20) to estimate them. Similar to the highest order estimate, we can obtain the following estimate: \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|+|\beta|\leq N-1}\left\|w_{l}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}+\cdots\lesssim\cdots+\mathcal{D}_{N}(t).\] (2.34) 5. In fact, our weighted energy estimate above heavily relies on the time decay of the low-order derivatives of the electromagnetic field. Therefore, we need to obtain time decay results that are independent of \(\varepsilon\). By using interpolation methods for derivatives and velocity, carefully treating the linear and nonlinear terms containing the singularity \(\frac{1}{\varepsilon}\), assuming the boundedness of the norm of the negative-order Sobolev space i.e. \(\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|\), we can obtain the following estimate: \[\mathcal{E}_{k\to N_{0}}(t)\lesssim\cdots(1+t)^{-(k+\varrho)},\quad 0\leq t \leq T.\] (2.35) To ensure the validity of the decay estimate above, we use standard estimates to obtain the boundedness of the norm of the negative-order Sobolev space i.e. \(\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|\). Finally, by combining the above strategies, we construct the a priori estimates and close it. #### 2.4.4. The uniform estimates for the angular cutoff cases 1. Unlike the non-angular truncated case with strong angular singularity, the extra difficulty in the angular truncated case lies in the fact that the dissipation of the linear collision operator does not have a dissipation mechanism for velocity derivatives, which makes it very difficult to obtain uniformity estimates for the solutions. Specifically, for the case of soft potential with angular truncation, we need to consider the singularity \(\frac{1}{\varepsilon}\), time growth, and velocity growth in the relevant energy estimates. 2. 
Similar to the non-angular truncated case, we can obtain the following estimate: \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\] \[\lesssim\|E_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\left\|\langle v \rangle^{\frac{3}{2}}\nabla_{x}^{N}f_{\varepsilon}\right\|^{2}+\|\nabla_{x}B _{\varepsilon}\|_{L^{\infty}_{x}}^{2}\left\|\langle v\rangle^{\frac{3}{2}} \nabla_{x}^{N-1}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{ 2}+\cdots\] (2.36) Therefore, we need a weighted energy estimate for the spatial-velocity highest-order derivatives. Due to the lack of dissipation mechanisms related to velocity derivatives, we need to design the following weight function related to time and velocity: \[\widetilde{w}_{\ell}(\alpha,\beta)(t,v)=\langle v\rangle^{\ell-|\alpha|-\frac {1}{2}|\beta|}e^{\frac{q\langle v\rangle^{2}}{(1+\varrho)^{\varrho}}}\] The greatest advantage of designing an algebraic weight function here is that it can balance singularity and velocity growth such as for \(|\alpha_{1}|=1\) \[\frac{1}{\varepsilon}\left|\left([v\times\partial^{\alpha_{1}}B _{\varepsilon}\cdot\nabla_{v}\partial^{\alpha-\alpha_{1}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^{2}(\alpha,0)\partial^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\lesssim\frac{1}{\varepsilon}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|_{L^{\infty}_{x}}\|\langle v\rangle^{\frac{1}{4}}\widetilde{w} _{\ell_{1}}(\alpha-\alpha_{1},e_{i})\partial^{\alpha-\alpha_{1}}_{e_{i}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\|\langle v\rangle^{\frac{1}{4}} \widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\} f_{\varepsilon}\|\] \[\lesssim\cdots+\frac{\eta}{\varepsilon}(1+t)^{-\frac{1+\varrho}{ 2}}\|\langle v\rangle^{\frac{1}{4}}w_{\ell}(\alpha,0)\partial^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2},\] (2.37) and \[\frac{1}{\varepsilon}\left(\partial^{\alpha}_{\beta}\left[v\cdot \nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\widetilde{w}_{\ell _{1}}^{2}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[=-\frac{1}{\varepsilon}\int_{\mathbb{R}^{3}_{x}\times\mathbb{R} ^{3}_{v}}\langle v\rangle^{\frac{1}{2}}\partial^{\alpha+e_{i}}_{\beta-e_{i}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\widetilde{w}_{\ell_{1}}(\alpha+e_{i}, \beta-e_{i})\widetilde{w}_{\ell_{1}}(\alpha,\beta)\partial^{\alpha}_{\beta}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}dvdx\] \[\lesssim\frac{1}{\varepsilon}(1+t)^{\frac{1+\varrho}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{i })\partial^{\alpha+e_{i}}_{\beta-e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}\] \[\quad+\frac{\eta}{\varepsilon}(1+t)^{-\frac{1+\varrho}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,\beta)\partial^{ \alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}\] (2.38) where we used the fact \[\langle v\rangle\widetilde{w}_{\ell_{1}}(\alpha,0)=\widetilde{w}_{\ell_{1}}( \alpha-\alpha_{1},e_{i})\langle v\rangle^{\frac{1}{2}},\widetilde{w}_{\ell_ {1}}(\alpha,\beta)=\langle v\rangle^{\frac{1}{2}}\widetilde{w}_{\ell_{1}}( \alpha+e_{i},\beta-e_{i}).\] (2.39) The last terms on the right-hand side of both (2.37) and (2.38) can be bounded by the dissipation terms like \[(1+t)^{-(1+\vartheta)}\left\|\langle v\rangle\widetilde{w}_{\ell_{1}}(\alpha, \beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} 
\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\widetilde{w}_{\ell_{1}}(\alpha, \beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}_{\nu},\] (2.40) provided that \(\gamma\geq-1\). To control the first term on the right-hand side of (2.38), we need to design different time increments for the various spatial velocity derivatives such that \[\varepsilon^{2}\frac{d}{dt}\mathbb{E}_{\ell_{1}}^{(N)}(t)+ \varepsilon^{2}\mathbb{D}_{\ell_{1}}^{(N)}(t)\lesssim\eta(1+t)^{-2\sigma_{N,0}} \left\|\nabla_{x}^{N}E_{\varepsilon}\right\|^{2}+\mathcal{E}_{N}(t)\mathcal{E} _{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\cdots, \tag{2.41}\] Here we take \[\sigma_{N,0}=\frac{1+\epsilon_{0}}{2},\ \sigma_{N,|\beta|}-\sigma_{N,|\beta|-1}= \frac{1+\vartheta}{2},|\beta|\geq 1.\] Following a similar approach, we have the following estimates established: \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{N_{0}+1\leq n\leq N-1}\mathbb{ E}_{\ell_{1}}^{(n)}(t)+\cdots\lesssim\mathcal{D}_{N}(t)+\mathcal{E}_{N}(t) \mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\cdots, \tag{2.42}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{n\leq N_{0}}\mathbb{E}_{\ell _{0}}^{(n)}(t)+\cdots\lesssim\mathcal{D}_{N_{0}+1}(t)+\cdots.\] 3. The above weighted energy estimate heavily relies on the time decay of the low-order derivatives of the solutions, such as \[\mathcal{E}_{k\to N_{0}}(t)\lesssim(1+t)^{-(k+\varrho)},\ \mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)\lesssim(1+t)^{-(1+\varrho)}\] These time decay estimates can be obtained using interpolation techniques, which is similar to the non-cutoff case. For the sake of brevity, we will not elaborate on this further here. To ensure the validity of these decay estimates, we need boundedness of certain weighted energy norm estimates. However, the weighted energy estimates in the aforementioned inequalities i.e. (2.41) and(2.42) are with respect to time growth with \((1+t)^{\sigma_{n,j}}\). 4. 
To do so, we introduce another weight function represented as follows: \[\overline{w}_{l}(\alpha,\beta)(t,v)=\langle v\rangle^{l-|\alpha|-2 |\beta|}e^{\frac{q\langle v\rangle^{2}}{(1+t)^{\varrho}}}.\] The advantage of this weight function is that the treatment of the transport term does not involve any growth in both velocity and time such as \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \cdot\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{ l_{1}-|\beta|}^{2}\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[= -\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{ x}^{3}}\langle v\rangle^{-1}\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\overline{w}_{l_{1}}(\alpha+e_{i},\beta-e_{i}) \overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}dvdx\] \[\lesssim \frac{1}{\varepsilon}\left\|\langle v\rangle^{\frac{\gamma}{2}} \overline{w}_{l_{1}}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e_{i}}^{\alpha+ e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\frac{\eta}{ \varepsilon}\left\|\langle v\rangle^{\frac{\gamma}{2}}\overline{w}_{l_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right\|^{2},\] (2.43) where we used the fact \[\overline{w}_{l_{1}}(\alpha,\beta)=\langle v\rangle^{-1}\overline{w}_{l_{1}}( \alpha+e_{i},\beta-e_{i}).\] (2.44) However, the disadvantage of this weight function is that when performing corresponding weighted energy estimates for nonlinear terms containing magnetic fields, the weight function will unavoidably introduce a linear growth in velocity such as \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right],\overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[= \frac{1}{\varepsilon}\sum_{|\alpha_{1}|=1,\beta_{1}=0}\left( \partial_{\beta_{1}}^{\alpha_{1}}\left[v\times B_{\varepsilon}\right]\cdot \partial_{\beta-\beta_{1}}^{\alpha-\alpha_{1}}\left[\nabla_{v}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{l_{1}}^{2}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)+\cdots\] In contrast to the previous equation (2.44), the weight function here results in linearly increasing velocity, i.e. \[\overline{w}_{l_{1}}(\alpha,\beta)=\langle v\rangle\overline{w}_{l_{1}}( \alpha-\alpha_{1},\beta+e_{i}),\ |\alpha_{1}|=1,\beta_{1}=0.\] To account for this, we apply Sobolev inequalities, interpolation method and Young inequalites to get \[\lesssim\sum_{|\alpha_{1}|=1}\frac{1}{\varepsilon}\|\partial^{\alpha_ {1}}B_{\varepsilon}\|_{L^{\infty}_{x}}\|\langle v\rangle^{\frac{5}{2}}\overline {w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\] \[\quad\times\|\langle v\rangle^{-\frac{1}{2}}\overline{w}_{l_{1}} (\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\|+\cdots\] \[\lesssim\sum_{|\alpha_{1}|=1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|_{L^{\infty}_{x}}^{\frac{2}{2}}\|\langle v\rangle^{\tilde{t}} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}+\cdots \tag{2.45}\] where we take \(\theta\) as \(\theta=\frac{3}{\ell+\frac{1}{2}}\). 
Here we note that, assuming the electromagnetic field has some decay, as the weight index \(\widetilde{\ell}\) increases the overall decay becomes fast enough that the following inequality holds: \[\sum_{|\alpha_{1}|=1}\|\partial^{\alpha_{1}}B_{\varepsilon}\|_{L^{\infty}_{x}}^{\frac{2}{\theta}}\lesssim(1+t)^{-\sigma_{N,N}}.\] At this point, we choose the weight index \(\ell_{1}\) in both (2.41) and (2.42) such that the following inequality holds: \[\langle v\rangle^{\tilde{\ell}}\overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\leq\widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},\beta+e_{i})\langle v\rangle^{-\frac{1}{2}}.\] Then, the first term on the right-hand side of (2.45) can be controlled by the corresponding dissipation terms on the left-hand side of both (2.41) and (2.42). Thus, one can deduce that \[\frac{d}{dt}\overline{\mathcal{E}}_{N-1,l_{1}}(t)+\overline{\mathcal{D}}_{N-1,l_{1}}(t)\lesssim\mathcal{D}_{N}(t)+\eta\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t)+\cdots. \tag{2.46}\] * Building upon the aforementioned techniques and combining them with other relevant estimates, we can construct the a priori estimate (4.51) and close it to obtain uniform bounds which are independent of time \(t\) and of \(\varepsilon\). #### 2.4.5. The limits Based on the uniform-in-\(\varepsilon\) estimates, we employ the moment method to rigorously justify the hydrodynamic limit from the perturbed VMB to the two-fluid incompressible NSFM system with Ohm's law, as in [53]. ## 3. The non-cutoff VMB system ### Lyapunov inequality for the energy functional \(\mathcal{E}_{N}(t)\) Without loss of generality, we take \(N=2N_{0}\) with \(N_{0}\geq 3\) for brevity, since we do not attempt to obtain the optimal regularity index. **Lemma 3.1**.: _For \(|\alpha|\leq N\), it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\partial^{\alpha}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] \[\lesssim\|E_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\left\|\langle v\rangle^{\frac{7}{4}}\nabla_{x}^{N}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\|\nabla_{x}B_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\sum_{|\alpha^{\prime}|=N-1}\|\langle v\rangle^{\frac{7}{4}}\partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\quad+\mathcal{E}_{N}(t)\left\{\left\|\langle v\rangle^{\frac{7}{4}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{H^{N-1}_{x}L^{2}_{v}}^{2}+\sum_{|\alpha^{\prime}|\leq N-2}\|\langle v\rangle^{\frac{7}{4}}\partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\right\}\] \[\quad+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\eta\mathcal{D}_{N}(t)\] _for all \(0\leq t\leq T\)._ Proof.: First of all, it is straightforward to establish the energy identity \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\partial^{\alpha}f_{\varepsilon}\|^{2}+\|\partial^{\alpha}[E_{\varepsilon},B_{\varepsilon}]\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left(\mathscr{L}\partial^{\alpha}f_{\varepsilon},\partial^{\alpha}f_{\varepsilon}\right)\] \[=\left(\partial^{\alpha}\left(\frac{q_{0}}{2}E_{\varepsilon}\cdot vf_{\varepsilon}\right),\partial^{\alpha}f_{\varepsilon}\right)-\left(\partial^{\alpha}\left(q_{0}E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\right),\partial^{\alpha}f_{\varepsilon}\right) \tag{3.1}\] \[\quad-\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times
B_{\varepsilon})\cdot\nabla_{v}f_{\varepsilon}\right),\partial^{ \alpha}f_{\varepsilon}\right)+\frac{1}{\varepsilon}\left(\partial^{\alpha} \mathscr{T}(f_{\varepsilon},f_{\varepsilon}),\partial^{\alpha}f_{\varepsilon }\right).\] The coercivity property of the linear operator \(\mathscr{L}\) i.e. (A.8) tells us that \[\frac{1}{\varepsilon^{2}}\left(\mathscr{L}\partial^{\alpha}f_{\varepsilon}, \partial^{\alpha}f_{\varepsilon}\right)\gtrsim\frac{1}{\varepsilon^{2}}\| \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}.\] For the four terms on the right-hand side of (3.1), we estimate them one by one in the following. **Case 1: \(\alpha=0\).** By applying macro-micro decomposition, \(L^{2}\)-\(L^{3}\)-\(L^{6}\) or \(L^{2}\)-\(L^{\infty}\)-\(L^{2}\) Sobolev inequalities and Cauchy inequalities, one has \[\left(\frac{q_{0}}{2}E_{\varepsilon}\cdot vf_{\varepsilon},f_{ \varepsilon}\right)\] \[=\left(\frac{q_{0}}{2}E_{\varepsilon}\cdot v\mathbf{P}f_{ \varepsilon},\mathbf{P}f_{\varepsilon}\right)+\left(\frac{q_{0}}{2}E_{ \varepsilon}\cdot v\mathbf{P}f_{\varepsilon},\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right) \tag{3.2}\] \[\lesssim\|f_{\varepsilon}\|_{H^{1}_{x}L^{2}_{v}}\|E_{\varepsilon }\|\|\nabla_{x}f_{\varepsilon}\|_{D}+\|E_{\varepsilon}\|_{L^{\infty}_{\infty}} ^{2}\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{3}{4}} \|^{2}+\eta\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}\] \[\lesssim\|f_{\varepsilon}\|_{H^{1}_{x}L^{2}_{v}}^{2}\|E_{ \varepsilon}\|^{2}+\eta\|\nabla_{x}f_{\varepsilon}\|_{D}^{2}+\|E_{\varepsilon }\|_{L^{\infty}_{x}}^{2}\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v \rangle^{\frac{7}{4}}\|^{2}+\eta\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{ D}^{2}\] \[\lesssim\mathcal{E}_{2}(t)\|E_{\varepsilon}\|^{2}+\eta\|\nabla_{ x}f_{\varepsilon}\|_{D}^{2}+\|E_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\|\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{7}{4}}\|^{2}+ \eta\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}\] \[\lesssim\mathcal{E}_{2}(t)\|E_{\varepsilon}\|^{2}+\|E_{\varepsilon }\|_{L^{\infty}_{x}}^{2}\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v \rangle^{\frac{7}{4}}\|^{2}+\eta\mathcal{D}_{2}(t).\] Integrating in part with respect to \(v\) yields that \[-\left(\left(q_{0}E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\right),f_{ \varepsilon}\right)-\frac{1}{\varepsilon}\left(\left(q_{0}(v\times B_{ \varepsilon})\cdot\nabla_{v}f_{\varepsilon}\right),f_{\varepsilon}\right)=0. \tag{3.3}\] By using Lemma A.2, we can deduce that the last term can be dominated by \[\mathcal{E}_{2}(t)\mathcal{D}_{2}(t)+\frac{\eta}{\varepsilon^{2}}\|\{\mathbf{I }-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}.\] By collecting the above related estimates, we arrive at \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|f_{\varepsilon}\| ^{2}+\|[E_{\varepsilon},B_{\varepsilon}])\|^{2}\right)+\frac{1}{\varepsilon^{2} }\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}\] \[\lesssim\mathcal{E}_{2}(t)\mathcal{D}_{2}(t)+\mathcal{E}_{2}(t) \|E_{\varepsilon}\|^{2}+\|E_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\|\{\mathbf{I} -\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{7}{4}}\|^{2}+\eta\mathcal{ D}_{2}(t). 
\tag{3.4}\] **Case 2: \(1\leq|\alpha|\leq N\).** By using Sobolev inequalities,Cauchy inequality and macro-micro decomposition, one can deduce that the first term can be bounded by \[\left(\partial^{\alpha}\left(\frac{q_{0}}{2}E_{\varepsilon}\cdot vf _{\varepsilon}\right),\partial^{\alpha}f_{\varepsilon}\right)\] \[\lesssim\|E_{\varepsilon}\|_{L^{\infty}_{x}}\left\|\langle v \rangle^{\frac{7}{4}}\partial^{\alpha}f_{\varepsilon}\right\|\Big{\|} \langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_{\varepsilon}\Big{\|}\] \[+\chi_{|\alpha|\geq 3}\sum_{1\leq|\alpha_{1}|\leq|\alpha|-2}\| \partial^{\alpha_{1}}E_{\varepsilon}\|_{L^{\infty}_{x}}\left\|\langle v \rangle^{\frac{7}{4}}\partial^{\alpha-\alpha_{1}}f_{\varepsilon}\right\|\Big{\|} \langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_{\varepsilon}\Big{\|}\] \[+\chi_{|\alpha|\geq 2}\sum_{|\alpha_{1}|=|\alpha|-1}\int_{ \mathbb{R}^{3}_{x}\times\mathbb{R}^{3}_{0}}|\partial^{\alpha_{1}}E_{ \varepsilon}||\langle v\rangle^{\frac{7}{4}}\partial^{\alpha-\alpha_{1}}f_{ \varepsilon}||\langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_{\varepsilon}| dvdx\] \[+\chi_{|\alpha|\geq 1}\int_{\mathbb{R}^{3}_{x}\times\mathbb{R}^{3}_{0}}| \partial^{\alpha}E_{\varepsilon}||\langle v\rangle^{\frac{7}{4}}f_{\varepsilon}|| \langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_{\varepsilon}|dvdx\] \[\lesssim\|E_{\varepsilon}\|_{L_{x}^{\infty}}^{2}\left\|\langle v \rangle^{\frac{7}{4}}\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\chi_{| \alpha|\geq 3}\sum_{1\leq|\alpha_{1}|\leq|\alpha|-2}\|\partial^{\alpha_{1}}E_{ \varepsilon}\|_{L_{x}^{\infty}}^{2}\left\|\langle v\rangle^{\frac{7}{4}} \partial^{\alpha-\alpha_{1}}f_{\varepsilon}\right\|^{2}+\eta\left\|\langle v \rangle^{-\frac{3}{4}}\partial^{\alpha}f_{\varepsilon}\right\|^{2}\] \[+\chi_{|\alpha|\geq 2}\sum_{|\alpha_{1}|=|\alpha|-1}\int_{ \mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}|\partial^{\alpha_{1}}E_{ \varepsilon}||\langle v\rangle^{\frac{3}{2}}\partial^{\alpha-\alpha_{1}} \mathbf{P}f_{\varepsilon}||\langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_ {\varepsilon}|dvdx\] \[+\chi_{|\alpha|\geq 1}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^ {3}}|\partial^{\alpha}E_{\varepsilon}||\langle v\rangle^{\frac{7}{4}}\mathbf{ P}f_{\varepsilon}||\langle v\rangle^{-\frac{3}{4}}\partial^{\alpha}f_{ \varepsilon}|dvdx\] \[+\chi_{|\alpha|\geq 2}\sum_{|\alpha_{1}|=|\alpha|-1}\int_{ \mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}|\partial^{\alpha_{1}}E_{ \varepsilon}||\langle v\rangle^{\frac{7}{4}}\partial^{\alpha-\alpha_{1}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}||\langle v\rangle^{-\frac{3}{4}} \partial^{\alpha}f_{\varepsilon}|dvdx\] \[+\chi_{|\alpha|\geq 1}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^ {3}}|\partial^{\alpha}E_{\varepsilon}||\langle v\rangle^{\frac{7}{4}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}||\langle v\rangle^{-\frac{3}{4}} \partial^{\alpha}f_{\varepsilon}|dvdx\] \[\lesssim\|E_{\varepsilon}\|_{L_{x}^{\infty}}^{2}\left\|\langle v \rangle^{\frac{7}{4}}\nabla_{x}^{N}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}+\mathcal{E}_{N}(t)\left\|\langle v\rangle^{\frac{7}{4}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{H_{x}^{N-1}L_{v}^{2}}^{2}\] \[+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\eta\left\|\partial^{\alpha }f_{\varepsilon}\right\|_{D}^{2}. 
\tag{3.5}\] By a similar way, note that \((E_{\varepsilon}\cdot\partial^{\alpha}\nabla_{v}f,\partial^{\alpha}f_{ \varepsilon})=0\), we can deduce that \[\left(\partial^{\alpha}\left(q_{0}E_{\varepsilon}\cdot\nabla_{v}f _{\varepsilon}\right),\partial^{\alpha}f_{\varepsilon}\right)\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\mathcal{E}_{N}(t) \left\|\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{ \frac{7}{4}}\right\|_{H_{x}^{N-1}L_{v}^{2}}^{2}+\eta\left\|\partial^{\alpha}f _{\varepsilon}\right\|_{D}^{2}. \tag{3.6}\] As for the third term on the right-hand side of (3.1), by using macro-micro decomposition, we have \[\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}f_{\varepsilon}\right),\partial^{\alpha}f_{ \varepsilon}\right)\] \[=\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right),\partial^{ \alpha}\mathbf{P}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v \times B_{\varepsilon})\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right), \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right), \partial^{\alpha}\mathbf{P}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right), \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v \times B_{\varepsilon})\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right), \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right), \partial^{\alpha}\mathbf{P}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right), \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right). \tag{3.7}\] where we use the fact that \[\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B_{\varepsilon}) \cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right),\partial^{\alpha}\mathbf{P}f_{ \varepsilon}\right)=0,\] due to the kernel structure of \(\mathbf{P}\) and the integral of oddness function with respect to velocity \(v\) over \(\mathbb{R}_{v}^{3}\). 
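To make the cancellation mechanisms used in (3.3) and in the last display explicit, we record the following elementary identity; it is stated for a generic, sufficiently decaying function \(g=g(x,v)\) and is only meant to illustrate the structure:
\[\left((v\times B_{\varepsilon})\cdot\nabla_{v}g,g\right)=\frac{1}{2}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}(v\times B_{\varepsilon})\cdot\nabla_{v}\left(g^{2}\right)dvdx=-\frac{1}{2}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}\nabla_{v}\cdot\left(v\times B_{\varepsilon}\right)g^{2}\,dvdx=0,\]
since \(\nabla_{v}\cdot(v\times B_{\varepsilon})=0\). The same computation with \(E_{\varepsilon}\) in place of \(v\times B_{\varepsilon}\) yields (3.3), while for the pairing of the two macroscopic parts in (3.7) the terms in which derivatives fall on \(B_{\varepsilon}\) are handled by the oddness in \(v\) just mentioned.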
Using various Sobolev inequalities and Cauchy inequality, we can get that the first two terms on the right-hand side of (3.7) can be controlled by \[\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right),\partial^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right), \partial^{\alpha}\mathbf{P}f_{\varepsilon}\right)\] \[\lesssim\|E_{\varepsilon}\|_{H^{N}_{x}}^{2}\left(t\right)\mathcal{D}_{N}(t)+ \frac{\eta}{\varepsilon^{2}}\sum_{|\alpha^{\prime}|\leq\alpha}\|\partial^{ \alpha^{\prime}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}, \tag{3.8}\] as for the last term on the right-hand side of (3.7), by using a similar way as the estimates on \(\left(\partial^{\alpha}\left(q_{0}E_{\varepsilon}\cdot\nabla_{v}f_{ \varepsilon}\right),\partial^{\alpha}f_{\varepsilon}\right)\), we can deduce that \[\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right),\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=\frac{1}{\varepsilon}\left(q_{0}(v\times B_{\varepsilon})\cdot \nabla_{v}\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon},\partial^ {\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\quad+\sum_{1\leq|\alpha_{1}|\leq|\alpha|}\frac{1}{\varepsilon} \left(q_{0}(v\times\partial^{\alpha_{1}}B_{\varepsilon})\cdot\nabla_{v} \partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon},\partial ^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=\sum_{1\leq|\alpha_{1}|\leq|\alpha|}\frac{1}{\varepsilon}\left(q _{0}(v\times\partial^{\alpha_{1}}B_{\varepsilon})\cdot\nabla_{v}\partial^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{ \frac{3}{4}},\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v \rangle^{-\frac{3}{4}}\right)\] \[\lesssim\|\nabla_{x}B_{\varepsilon}\|_{L^{\infty}_{x}}\sum_{| \alpha^{\prime}|=N-1}\|\partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{7}{4}}\|^{2}\] \[\quad+\|B_{\varepsilon}\|_{H^{N}_{x}}^{2}\sum_{|\alpha^{\prime}| \leq N-2}\|\partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\langle v\rangle^{\frac{7}{4}}\|^{2}+\frac{\eta}{\varepsilon^{2}} \|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}.\] Now we arrive at \[\frac{1}{\varepsilon}\left(\partial^{\alpha}\left(q_{0}(v\times B _{\varepsilon})\cdot\nabla_{v}f_{\varepsilon}\right),\partial^{\alpha}f_{ \varepsilon}\right) \tag{3.9}\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\|\nabla_{x}B_{ \varepsilon}\|_{L^{\infty}_{x}}\sum_{|\alpha^{\prime}|=N-1}\|\partial^{ \alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v \rangle^{\frac{7}{4}}\|^{2}\] \[\quad+\|B_{\varepsilon}\|_{H^{N}_{x}}^{2}\sum_{|\alpha^{\prime}| \leq N-2}\|\partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\langle v\rangle^{\frac{7}{4}}\|^{2}+\eta\mathcal{D}_{N}(t).\] By using Lemma A.2 and Cauchy inequality, one can get that the last term on the right-hand side of (3.7) can be dominated by \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\frac{\eta}{\varepsilon^{2}}\sum _{\alpha^{\prime}\leq\alpha}\|\partial^{\alpha^{\prime}}\{\mathbf{I}-\mathbf{ P}\}f_{\varepsilon}\|_{D}^{2}.\] Now we arrive at by collecting the above 
estimates \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{1\leq|\alpha|\leq N}\left(\| \partial^{\alpha}f_{\varepsilon}\|^{2}+\|\partial^{\alpha}[E_{\varepsilon},B_ {\varepsilon}]\|^{2}\right)+\sum_{1\leq|\alpha|\leq N}\frac{1}{\varepsilon^{2}} \left\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^ {2}\] \[\lesssim\|E_{\varepsilon}\|_{L^{\infty}_{x}}^{2}\left\|\langle v \rangle^{\frac{7}{4}}\nabla_{x}^{N}f_{\varepsilon}\right\|^{2}+\mathcal{E}_{N} (t)\left\|\langle v\rangle^{\frac{7}{4}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|_{H^{N-1}_{x}L^{2}_{v}}^{2}\] \[\quad+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\|\nabla_{x}B_{ \varepsilon}\|_{L^{\infty}_{x}}\sum_{|\alpha^{\prime}|=N-1}\|\partial^{\alpha^ {\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{ \frac{7}{4}}\|^{2}\] \[\quad+\mathcal{E}_{N}(t)\sum_{|\alpha^{\prime}|\leq N-2}\|\partial^ {\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v \rangle^{\frac{7}{4}}\|^{2}+\eta\mathcal{D}_{N}(t).\] By plugging (3.4) and (3.10) into (3.1), one has (3.1). Thus the proof of Lemma 3.1 is complete. **Lemma 3.2**.: _There exists an interactive energy functional \(\mathcal{E}_{N}^{int}(t)\) satisfying_ \[\mathcal{E}_{N}^{int}(t)\lesssim\sum_{|\alpha|\leq N}\|\partial^{\alpha}[f_{ \varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2}\] _such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}^{int}(t)+\left\| \nabla_{x}[\rho_{\varepsilon}^{\pm},u_{\varepsilon},\theta_{\varepsilon}] \right\|_{H_{x}^{N-1}}^{2}+\|\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}\|^ {2}+\|E_{\varepsilon}\|_{H_{x}^{N-1}}^{2}+\|\nabla_{x}B_{\varepsilon}\|_{H_{x }^{N-2}}^{2}\] \[\lesssim\frac{1}{\varepsilon^{2}}\sum_{|\alpha|\leq N}\|\partial ^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}+\mathcal{E}_{N}(t) \mathcal{D}_{N}(t). \tag{3.10}\] Proof.: This lemma can be proved by macroscopically projecting the original equation. For the sake of brevity in exposition, the details of the proof are provided in Appendix A.3. **Assumption 1:** \[\sup_{0<t\leq T}\left\{(1+t)^{\varrho+\frac{3}{2}}\left\|E_{ \varepsilon}\right\|_{L_{x}^{\infty}}^{2}+(1+t)^{\varrho+\frac{5}{2}}\left\| \nabla_{x}B_{\varepsilon}\right\|_{L_{x}^{\infty}}^{2}+\mathcal{E}_{N}(t) \right\}\leq M_{1} \tag{3.11}\] where \(M_{1}\) is a sufficiently small positive constant. **Proposition 3.1**.: _Under_ **Assumption 1**_, there exist an energy functional \(\mathcal{E}_{N}(t)\) and the corresponding energy dissipation functional \(\mathcal{D}_{N}(t)\) which satisfy (2.1), (2.2) respectively such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\] \[\lesssim M_{1}(1+t)^{-\varrho-\frac{3}{2}}\left\{\left\|\langle v \rangle^{\frac{7}{4}}\nabla_{x}^{N}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}+\sum_{|\alpha^{\prime}|=N-1}\|\langle v\rangle^{\frac{7}{4}} \partial^{\alpha^{\prime}}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \|^{2}\right\}\] \[+\mathcal{E}_{N}(t)\sum_{|\alpha^{\prime}|+|\beta^{\prime}|\leq N -1}\|\langle v\rangle^{\frac{7}{4}}\partial_{\beta^{\prime}}^{\alpha^{\prime}} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2} \tag{3.12}\] _holds for all \(0\leq t\leq T\)._ Proof.: A proper linear combination of (3.1) and (3.10) gives (3.12). ### The top-order energy estimates with weight To control the first term on the right-hand side of (3.12), one need the energy estimate with the weight. 
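Before turning to this weighted estimate, let us sketch where the extra time-weighted good term in Lemma 3.3 below comes from; the computation only uses the assumption that the weight \(w_{l}(\alpha,\beta)(t,v)\) contains the time-dependent exponential factor \(e^{\frac{q\langle v\rangle}{(1+t)^{\vartheta}}}\) (its precise definition, given earlier, is not repeated here). Differentiating the weighted norm in time gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}=2\left(\partial_{t}\partial^{\alpha}f_{\varepsilon},w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)-\frac{2q\vartheta}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2},\]
and the last term, once moved to the left-hand side, is exactly the good term \(\frac{\vartheta q}{(1+t)^{1+\vartheta}}\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|^{2}\) appearing in (3.13) and (3.14); it is this term that absorbs the linear growth in \(\langle v\rangle\) created by the electromagnetic field terms.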
To this end, we have the following result:

**Lemma 3.3**.: _It holds that_
\[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|=N}\varepsilon^{2}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\frac{\vartheta q\varepsilon^{2}}{(1+t)^{1+\vartheta}}\sum_{|\alpha|=N}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\sum_{|\alpha|=N}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|_{D}^{2}\]
\[\lesssim\mathcal{D}_{N}(t)+\varepsilon^{2}\left\|\partial^{\alpha}E_{\varepsilon}\right\|\left\|M^{\delta}\partial^{\alpha}f_{\varepsilon}\right\|+\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t) \tag{3.13}\]
_for all \(0\leq t\leq T\)._

Proof.: For this purpose, the standard energy estimate on \(\partial^{\alpha}f_{\varepsilon}\) with \(|\alpha|=N\), weighted by the time-velocity dependent function \(w_{l}(\alpha,0)(t,v)\), gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\frac{\vartheta q}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}}\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|_{D}^{2}\]
\[\lesssim\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\left\|\partial^{\alpha}E_{\varepsilon}\right\|\left\|M^{\delta}\partial^{\alpha}f_{\varepsilon}\right\| \tag{3.14}\]
\[\quad+\left|\left(\partial^{\alpha}(E_{\varepsilon}\cdot vf_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|+\left|\left(\partial^{\alpha}(E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|\]
\[\quad+\frac{1}{\varepsilon}\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|+\frac{1}{\varepsilon}\left|\left(\partial^{\alpha}\mathscr{T}(f_{\varepsilon},f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|,\]
where the coercivity estimate on \(\mathscr{L}\), i.e. (A.2), tells us that
\[\frac{1}{\varepsilon^{2}}\left(\mathscr{L}\partial^{\alpha}f_{\varepsilon},w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\gtrsim\frac{1}{\varepsilon^{2}}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|_{D}^{2}-\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right\|^{2}-\frac{1}{\varepsilon^{2}}\left\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}.\]
Multiplying (3.14) by \(\varepsilon^{2}\) and noticing that \(\|\partial^{\alpha}\mathbf{P}f_{\varepsilon}\|^{2}+\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}\lesssim\mathcal{D}_{N}(t)\), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\varepsilon^{2}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\frac{\vartheta q\varepsilon^{2}}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|_{D}^{2}\]
\[\lesssim\mathcal{D}_{N}(t)+\varepsilon^{2}\left\|\partial^{\alpha}E_{\varepsilon}\right\|\left\|M^{\delta}\partial^{\alpha}f_{\varepsilon}\right\|+\varepsilon^{2}\left|\left(\partial^{\alpha}(E_{\varepsilon}\cdot vf_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right| \tag{3.15}\]
\[\quad+\varepsilon^{2}\left|\left(\partial^{\alpha}(E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|+\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|\]
\[\quad+\varepsilon\left|\left(\partial^{\alpha}\mathscr{T}(f_{\varepsilon},f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|.\]
For the magnetic field term on the right-hand side of (3.15), the macro-micro decomposition and the Cauchy inequality imply
\[\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right|\]
\[\leq\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right)\right|+\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[\quad+\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\mathbf{P}f_{\varepsilon}\right)\right|+\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\eta\|\partial^{\alpha}f_{\varepsilon}\|_{D}^{2}+\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|, \tag{3.16}\]
where the last term can be bounded by
\[\varepsilon\left|\left(\partial^{\alpha}[v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[=\varepsilon\left|\left([v\times B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[\quad+\sum_{1\leq|\alpha_{1}|\leq N}\varepsilon\left|\left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[=\sum_{1\leq|\alpha_{1}|\leq N}\varepsilon\left|\left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],w_{l}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\]
\[\lesssim\sum_{1\leq|\alpha_{1}|\leq N}\varepsilon\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{\xi}^{3}}|\xi||\partial^{\alpha_{1}}B_{\varepsilon}|\left|\mathcal{F}_{v}\left[w_{l}(\alpha-\alpha_{1},0)\partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{5}{2}-4|\alpha_{1}|}\right]\right|\]
\[\quad\times\left|\mathcal{F}_{v}\left[w_{l}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{-\frac{3}{2}}\right]\right|d\xi dx\]
\[\lesssim\sum_{1\leq|\alpha_{1}|\leq N}\varepsilon\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{\xi}^{3}}|\xi||\partial^{\alpha_{1}}B_{\varepsilon}|\left|\mathcal{F}_{v}\left[w_{l}(\alpha-\alpha_{1},0)\partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{-\frac{3}{2}}\right]\right|^{2}d\xi dx\]
\[\quad+\eta\varepsilon\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{\xi}^{3}}|\xi|\left|\mathcal{F}_{v}\left[w_{l}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{-\frac{3}{2}}\right]\right|^{2}d\xi dx\]
\[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\eta\left\|w_{l}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}.\]
Similarly, one also has
\[\varepsilon^{2}\left|\left(\partial^{\alpha}(E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right| \tag{3.17}\]
\[\lesssim\varepsilon^{2}\|E_{\varepsilon}\|_{L_{x}^{\infty}}\left\|\langle v\rangle^{1/2}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\eta\left\|w_{l}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\]
\[\lesssim\varepsilon^{2}M_{1}^{\frac{1}{2}}(1+t)^{-\frac{3}{4}- \frac{\rho}{2}}\left\|\langle v\rangle^{1/2}w_{l}(\alpha,0)\partial^{\alpha}f _{\varepsilon}\right\|^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\eta\left\| w_{l}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D} ^{2}\] \[\lesssim\varepsilon^{2}M_{1}^{\frac{1}{2}}(1+t)^{-1-\vartheta} \left\|\langle v\rangle^{1/2}w_{l}(\alpha,0)\partial^{\alpha}f_{\varepsilon} \right\|^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\eta\left\|w_{l}(\alpha,0) \partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] where we use **Assumption 1** and take \[0<\vartheta\leq\frac{\varrho}{2}-\frac{1}{4}.\] The last term on the right-hand side of (3.15) can be bounded by \[\varepsilon\left|\left(\partial^{\alpha}\mathscr{T}(f_{\varepsilon},f_{\varepsilon}),w_{l}^{2}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right)\right| \tag{3.18}\] \[\lesssim \mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\eta\left\|w_{l}(\alpha,0) \partial^{\alpha}f_{\varepsilon}\right\|_{D}^{2}.\] By collecting the related inequalities into (3.15), one has (3.13), which complete the proof of Lemma 3.3. In addition to the above highest-order energy estimates with the wight \(w(\alpha,0)\) for \(|\alpha|=N\), to avoid the macroscopic part with the singularity factor \(\frac{1}{\epsilon^{2}}\), for the other cases with weight, we applying the micro projection equality. **Lemma 3.4**.: _It holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{\genfrac{}{}{0.0pt}{}{|\alpha|+| \beta|=N,}{|\alpha|\leq N-1}}\left\|w_{l}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\frac{1}{ \varepsilon^{2}}\sum_{\genfrac{}{}{0.0pt}{}{|\alpha|+|\beta|=N,}{|\alpha|\leq N-1}} \left\|w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|_{D}^{2}\] \[+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\sum_{\genfrac{}{}{0.0pt}{}{| \alpha|+|\beta|=N,}{|\alpha|\leq N-1}}\|\langle v\rangle^{\frac{1}{2}}w_{l}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \|^{2}\] \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{L}f_{ \varepsilon},w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right)\] \[\gtrsim\frac{1}{\varepsilon^{2}}\|w_{l}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D}^{2}-\frac{1}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\| _{D}^{2}. 
\tag{3.22}\] As for the transport term on the right-hand side of (3.21), one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \cdot\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],w_{l}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right)\] \[=-\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{ v}^{3}}\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}w_{l}(\alpha+e_{i},\beta-e_{i})w_{l}(\alpha,\beta)\partial_{\beta }^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}dvdx\] \[=-\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{ v}^{3}}\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}w_{l}(\alpha+e_{i},\beta-e_{i})\langle v\rangle^{-\frac{3}{2}}\Big{]}\] \[\lesssim\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{x}^{3}}|\xi| \left|\mathcal{F}_{v}\left[\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}w_{l}(\alpha+e_{i},\beta-e_{i})\langle v\rangle^{- \frac{3}{2}}\right]\right|^{2}d\xi dx\] \[+\frac{\eta}{\varepsilon^{2}}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{ \xi}^{3}}\left|\xi\right|\left|\overline{\mathcal{F}_{v}\left[\langle v\rangle^{ \frac{3}{2}}w_{l}(\alpha,\beta)\partial_{\beta-e_{j}}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right]}\right|^{2}d\xi dx\] \[\lesssim\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{\xi}^{3}} \left|\xi\right|\left|\mathcal{F}_{v}\left[\langle v\rangle^{-\frac{3}{2}}w_{l }(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right]\right|^{2}d\xi dx\] \[\quad+\frac{\eta}{\varepsilon^{2}}\int_{\mathbb{R}_{x}^{3}\times \mathbb{R}_{\xi}^{3}}\left|\xi\right|\left|\overline{\mathcal{F}_{v}\left[ \langle v\rangle^{-\frac{3}{2}}w_{l}(\alpha,\beta-e_{j})\partial_{\beta-e_{j} }^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right]}\right|^{2}d\xi dx\] \[\lesssim\left\|w_{l}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e _{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+ \frac{\eta}{\varepsilon^{2}}\left\|w_{l}(\alpha,\beta-e_{j})\partial_{\beta-e _{j}}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] \[\lesssim\left\|w_{l}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e _{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+ \eta\mathcal{D}_{N,l}(t). \tag{3.23}\] Here we used the fact \[\langle v\rangle^{\frac{3}{2}}w_{l}(\alpha,\beta)=\langle v\rangle^{\frac{3}{ 2}}w_{l}(\alpha,\beta-e_{j})\langle v\rangle^{-4}\leq\langle v\rangle^{-\frac {3}{2}}w_{l}(\alpha,\beta-e_{j}).\] For the second and third term on the right-hand side of (3.21), it is straightforward to compute that \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[E_{ \varepsilon}\cdot vM^{1/2}q_{1}\right],w_{l}^{2}(\alpha,\beta)\partial_{\beta }^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\|\partial^{\alpha}E_{\varepsilon}\|^{2}+\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2}\lesssim\mathcal{D}_{N}(t). 
\tag{3.24}\] and \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{P}(v\cdot\nabla_{x}f_{\varepsilon})-\frac{1}{\varepsilon}v\cdot \nabla_{x}\mathbf{P}f_{\varepsilon}\right],w_{l}^{2}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\|\nabla^{|\alpha|+1}f_{\varepsilon}\|_{D}^{2}+\frac{ \eta}{\varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\|_{D}^{2}\lesssim\mathcal{D}_{N}(t). \tag{3.25}\] As for the fourth term on the right-hand side of (3.21), one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[\{ \mathbf{I}-\mathbf{P}\}\left[v\times B_{\varepsilon}\cdot\nabla_{v}f_{ \varepsilon}\right]\right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right],w_{l}^{2 }(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{P}\left[v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\right] \right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right). \tag{3.26}\] Applying macroscopic part and the Cauchy inequality, one can deduce that \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\times B _{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}\right],w_{l}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{P}\left[v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}\right] \right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N}(t)+\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2}. 
\tag{3.27}\] For the first term on the right-hand side of (3.26), \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\times B _{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],w_{ l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[=\frac{1}{\varepsilon}\sum_{\alpha_{1}\leq\alpha,\beta_{1}\leq \beta}\left(\partial_{\beta_{1}}^{\alpha_{1}}\left[v\times B_{\varepsilon} \right]\cdot\partial_{\beta-\beta_{1}}^{\alpha-\alpha_{1}}\left[\nabla_{v}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],w_{l}^{2}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right),\] when \(|\alpha_{1}|=1,\beta_{1}=0\), we apply the Parseval identity to get \[\lesssim\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{x}^{3}} \left|\xi\right|\partial^{e_{i}}B_{\varepsilon}\right|\left|\mathcal{F}_{v} \left[v_{j}\langle v\rangle^{\frac{3}{2}}w_{l}(\alpha,\beta)\partial_{\beta}^{ \alpha-e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right]\right|\] \[\times\left|\langle v\rangle^{-\frac{3}{2}}w_{l}(\alpha,\beta) \mathcal{F}_{v}\left[\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right]\right|d\xi dv\] \[\lesssim\frac{1}{\varepsilon}\int_{\mathbb{R}_{\xi}^{3}\times \mathbb{R}_{\xi}^{3}}|\xi||\partial^{e}B_{\varepsilon}|\left|\mathcal{F}_{v} \left[v_{j}\langle v\rangle^{\frac{3}{2}}w_{l}(\alpha,\beta)\partial_{\beta}^{ \alpha-e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right]\right|^{2}d\xi dv\] \[\quad+\frac{1}{\varepsilon}\int_{\mathbb{R}_{\xi}^{3}\times \mathbb{R}_{\xi}^{3}}|\xi|\left|\langle v\rangle^{-\frac{3}{2}}w_{l}(\alpha, \beta)\mathcal{F}_{v}\left[\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\} f_{\varepsilon}\right]\right|^{2}d\xi dv\] \[\lesssim\frac{1}{\varepsilon}\|\partial^{e_{i}}B_{\varepsilon} \|_{L_{x}^{\infty}}\int_{\mathbb{R}_{\xi}^{3}\times\mathbb{R}_{\xi}^{3}}|\xi| \left|\mathcal{F}_{v}\left[\langle v\rangle^{-\frac{3}{2}}w_{l}(\alpha-e_{i}, \beta)\partial_{\beta}^{\alpha-e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right]\right|^{2}d\xi dv\] \[\quad+\frac{1}{\varepsilon}\int_{\mathbb{R}_{\xi}^{3}\times \mathbb{R}_{\xi}^{3}}|\xi|\left|\langle v\rangle^{-\frac{3}{2}}w_{l}(\alpha, \beta)\mathcal{F}_{v}\left[\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\} f_{\varepsilon}\right]\right|^{2}d\xi dv\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2} \tag{3.28}\] where we used the fact that \[v_{j}\langle v\rangle^{\frac{3}{2}}w_{l}(\alpha,\beta)\leq v_{j} \langle v\rangle^{-\frac{5}{2}}w_{l}(\alpha-e_{i},\beta)\leq\langle v\rangle^ {-\frac{2}{2}}w_{l}(\alpha-e_{i},\beta).\] The other cases have similar upper bound by utilizing a similar way. Thus, one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\times B _{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right], w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2}. 
\tag{3.29}\] By using the similar argument as the estimate on (3.26), one has \[\left(\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}\left[- \frac{1}{2}q_{0}v\cdot E_{\varepsilon}f_{\varepsilon}+q_{0}E_{\varepsilon} \cdot\nabla_{v}f_{\varepsilon}\right],w_{l}^{2}(\alpha,\beta)\partial_{\beta}^ {\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2}. \tag{3.30}\] By using (A.4)-(A.5), one can deduce that \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{T} (f_{\varepsilon},f_{\varepsilon}),w_{l}^{2}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)++\frac{\eta}{ \varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_ {D}^{2}. \tag{3.31}\] Now by collecting the above related estimates into (3.21), we arrive at \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|w_{l}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\frac{1}{ \varepsilon^{2}}\left\|w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I} -\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] \[+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle^{ \frac{1}{2}}w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right\|^{2} \tag{3.32}\] \[\lesssim \left\|w_{l}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e_{i}}^{ \alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+ \mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\mathcal{D}_{N}(t)+\eta\mathcal{D}_{N,l }(t).\] The other cases can be dominated by the same upper bound as (3.32). A proper linear combination (3.32) with \(|\alpha|+|\beta|\leq N,|\alpha|\leq N-1\) implies (3.19). A proper linear combination of Lemma 3.3 and Lemma 3.4 gives that **Proposition 3.2**.: _It holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\sum_{|\alpha|=N}\varepsilon^{2}\left\|w_{l }(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+\sum_{|\alpha|+|\beta|=N,\atop|\beta|\geq 1}\left\|w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|^{2}\right\}\] \[+\frac{\vartheta q\varepsilon^{2}}{(1+t)^{1+\vartheta}}\sum_{|\alpha|= N}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,0)\partial^{\alpha}f_{ \varepsilon}\right\|^{2}+\sum_{|\alpha|=N}\left\|w_{l}(\alpha,0)\partial^{ \alpha}f_{\varepsilon}\right\|^{2}_{D}\] \[+\frac{1}{\varepsilon^{2}}\sum_{|\alpha|+|\beta|=N,\atop|\beta| \geq 1}\left\|w_{l}(\alpha,\beta)\partial^{\alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{ \varepsilon}\right\|^{2}_{D}\] \[+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\sum_{|\alpha|+|\beta|=N, \atop|\beta|\geq 1}\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,\beta) \partial^{\alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{\varepsilon}\|^{2}\] \[\lesssim\varepsilon^{2}\left\|\partial^{\alpha}E_{\varepsilon} \right\|\left\|M^{\delta}\partial^{\alpha}f_{\varepsilon}\right\|+\mathcal{E} _{N}(t)\mathcal{D}_{N,l}(t)+\mathcal{D}_{N}(t). 
\tag{3.33}\] ### The low-order energy estimates with weight By utilizing a slightly different technique as Proposition 3.2, we also have **Proposition 3.3**.: _It holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{|\alpha|+|\beta|\leq N-1}\left\|w_{l}( \alpha,\beta)\partial^{\alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{\varepsilon} \right\|^{2}+\frac{1}{\varepsilon^{2}}\sum_{|\alpha|+|\beta|\leq N-1}\left\|w_ {l}(\alpha,\beta)\partial^{\alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{\varepsilon} \right\|^{2}_{D}\] \[+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\sum_{|\alpha|+|\beta| \leq N-1}\|\langle v\rangle^{\frac{1}{2}}w_{l}(\alpha,\beta)\partial^{\alpha }_{\beta}\{{\bf I}-{\bf P}\}f_{\varepsilon}\|^{2}\] \[\lesssim\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)+\mathcal{D}_{N}(t). \tag{3.34}\] ### Lyapunov inequality for the energy functionals **Proposition 3.4**.: _Under_ **Assumption 1**_, take \(l\geq N+\frac{1}{2}\), we can deduce that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t) \right\}+\mathcal{D}_{N}(t)+\mathcal{D}_{N,l}(t)\lesssim 0 \tag{3.35}\] _holds for all \(0\leq t\leq T\)._ Proof.: Recalling the definition of \(\mathcal{D}_{N,l}(t)\), (3.12) tells us that \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\] \[\lesssim M_{1}(1+t)^{-(1+\epsilon_{0})}(1+t)^{\frac{1+\epsilon_{0} }{2}}(1+t)^{-\frac{1+\epsilon_{0}}{2}}\] \[\quad\times\left\{\left\|\langle v\rangle^{\frac{7}{4}}\nabla^{N }_{x}\{{\bf I}-{\bf P}\}f_{\varepsilon}\right\|^{2}+\left\|\langle v\rangle^{ \frac{7}{4}}\nabla^{N-1}_{x}\nabla_{v}\{{\bf I}-{\bf P}\}f_{\varepsilon} \right\|^{2}\right\}\] \[\quad+\mathcal{E}_{N}(t)\sum_{|\alpha^{\prime}|+|\beta^{\prime}| \leq N-1}\|\langle v\rangle^{\frac{7}{4}}\partial^{\alpha^{\prime}}_{\beta^{ \prime}}\{{\bf I}-{\bf P}\}f_{\varepsilon}\|^{2}\] \[\lesssim M_{1}\mathcal{D}_{N,l}(t)+\mathcal{E}_{N}(t)\mathcal{D} _{N,l}(t). \tag{3.36}\] where we ask that \(l\geq N+\frac{1}{2}\). By multiplying \((1+t)^{-\epsilon_{0}}\) into (3.36), one has \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{-\epsilon_{0}} \mathcal{E}_{N}(t)\right\}+\epsilon_{0}(1+t)^{-1-\epsilon_{0}}\mathcal{E}_{N}( t)+(1+t)^{-\epsilon_{0}}\mathcal{D}_{N}(t)\] \[\lesssim M_{1}\mathcal{D}_{N,l}(t)+(1+t)^{-\epsilon_{0}}\mathcal{ E}_{N}(t)\mathcal{D}_{N,l}(t). 
\tag{3.37}\] By multiplying \((1+t)^{-\frac{1+\epsilon_{0}}{2}}\) into (3.33), one has \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{-\frac{1+\epsilon_{0}}{2}}\left\{ \sum_{|\alpha|=N}\varepsilon^{2}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{ \varepsilon}\right\|^{2}+\sum_{|\alpha|+|\beta|=N,\atop|\alpha|\leq N-1}\left\| w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{ \varepsilon}\right\|^{2}\right\}\right\}\] \[+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\frac{\vartheta q\varepsilon^{2}}{(1+t )^{1+\vartheta}}\sum_{|\alpha|=N}\left\|\langle v\rangle^{\frac{1}{2}}w_{l}( \alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}+(1+t)^{-\frac{1+\epsilon _{0}}{2}}\sum_{|\alpha|=N}\left\|w_{l}(\alpha,0)\partial^{\alpha}f_{ \varepsilon}\right\|_{D}^{2}\] \[+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\frac{1}{\varepsilon^{2}}\sum_{| \alpha|+|\beta|=N}\left\|w_{\ell}(\alpha,\beta)\partial^{\alpha}_{\beta}\{{ \bf I}-{\bf P}\}f_{\varepsilon}\right\|_{D}^{2}\] \[+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\frac{q\vartheta}{(1+t)^{1+ \vartheta}}\sum_{|\alpha|+|\beta|=N}\left\|w_{\ell}(\alpha,\beta)\partial^{ \alpha}_{\beta}\{{\bf I}-{\bf P}\}f_{\varepsilon}\langle v\rangle^{\frac{1}{2 }}\right\|^{2}\] \[\lesssim(1+t)^{-\frac{1+\epsilon_{0}}{2}}\mathcal{D}_{N}(t)+(1+ t)^{-\frac{1+\epsilon_{0}}{2}}\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t)\] \[+(1+t)^{-\frac{1+\epsilon_{0}}{2}}\sum_{|\alpha|=N}\varepsilon^ {2}\left\|\partial^{\alpha}E_{\varepsilon}\right\|\left\|M^{\delta}\partial^ {\alpha}f_{\varepsilon}\right\|\] \[\lesssim\mathcal{D}_{N}(t)+\mathcal{E}_{N}(t)\mathcal{D}_{N,l}(t )+\eta(1+t)^{-1-\epsilon_{0}}\|\nabla^{N}_{x}E_{\varepsilon}\|^{2}. \tag{3.38}\] Thus (3.35) follows from (3.36), (3.37), (3.38) and (3.34), which complete the proof of this proposition. ### The temporal time decay estimate on \(\mathcal{E}_{k\to N_{0}}(t)\) To ensure **Assumption 1** in (3.11), this subsection is devoted into the temporal time decay rates for \([f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\) to the Cauchy problem (1.7). **Assumption 2:** \[\sup_{0<t\leq T}\left\{\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B _{\varepsilon}]\|^{2}+\mathcal{E}_{N,l}(t)\right\}\leq M_{2},\] where \(M_{2}\) is a sufficiently small positive constant. **Lemma 3.5**.: _Under_ **Assumption 1** _and_ **Assumption 2**_, there exists a suitably large constant \(\bar{l}\), and take \(l\geq\bar{l}\), \(\widetilde{k}=\min\{k+1,N_{0}-1\}\) let \(N_{0}\geq 4\), \(N=2N_{0}\), one has the following estimates:_ (i). _For \(k=0,1,\cdots,N_{0}-1\), it holds that_ \[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\nabla^{k} f_{\varepsilon}\right\|^{2}+\left\|\nabla^{k}[E_{\varepsilon},B_{ \varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\nabla^{k}\{{ \bf I}-{\bf P}\}f_{\varepsilon}\right\|_{D}^{2}\\ \lesssim&\max\{M_{1},M_{2}\}\left(\left\|\nabla^{ \widetilde{k}}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{ \widetilde{k}}{\bf P}f_{\varepsilon}\right\|^{2}\right)+\eta\left\|\nabla^{ \widetilde{k}}f_{\varepsilon}\right\|_{D}^{2}.\end{split} \tag{3.39}\] (iii). 
_For \(k=N_{0}\), it follows that_ \[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\nabla^{N_{ 0}}f_{\varepsilon}\right\|^{2}+\left\|\nabla^{N_{0}}[E_{\varepsilon},B_{ \varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\nabla^{N_{0} }\{{\bf I}-{\bf P}\}f_{\varepsilon}\right\|_{D}^{2}\\ \lesssim&\max\{M_{1},M_{2}\}\left(\left\|\nabla^{N_ {0}-1}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{N_{0}}f_{ \varepsilon}\right\|_{D}^{2}\right)+\eta\left\|\nabla^{N_{0}}f_{\varepsilon} \right\|_{D}^{2}.\end{split} \tag{3.40}\] (iii). _For \(k=0,1,2\cdots,N_{0}-1\), there exist interactive energy functionals \(G^{k}_{f}(t)\) satisfying_ \[G^{k}_{f}(t)\lesssim\left\|\nabla^{k}[f_{\varepsilon},E_{\varepsilon},B_{ \varepsilon}]\right\|^{2}+\left\|\nabla^{k+1}[f_{\varepsilon},E_{\varepsilon},B _{\varepsilon}]\right\|^{2}+\left\|\nabla^{k+2}E_{\varepsilon}\right\|^{2}\] _such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}G^{k}_{f}(t)+\left\|\nabla^{k}[E_{ \varepsilon},\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}]\right\|_{H^{1}}^{2}+ \left\|\nabla^{k+1}[{\bf P}f_{\varepsilon},B_{\varepsilon}]\right\|^{2}\] \[\lesssim\max\{M_{1},M_{2}\}\left(\left\|\nabla^{\widetilde{k}}[E _{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{\widetilde{k}}f_{ \varepsilon}\right\|_{D}^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\nabla^{k} \{{\bf I}-{\bf P}\}f_{\varepsilon}\right\|_{H^{2}_{2}L^{2}_{D}}^{2}. \tag{3.41}\] Proof.: We can use a similar approach to Lemma 4.3 to prove this lemma. For brevity, we omit its detailed proof. **Proposition 3.5**.: _Under_ **Assumption 1** _and_ **Assumption 2**_, there exist an energy functional \(\mathcal{E}_{k\to N_{0}}(t)\) and the corresponding energy dissipation rate functional \(\mathcal{D}_{k\to N_{0}}(t)\) satisfying (2.6) and (2.7) respectively such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{k\to N_{0}}(t)+\mathcal{D}_{k\to N_ {0}}(t)\leq 0 \tag{3.42}\] _holds for \(k=0,1,2,\cdots,N_{0}-2\) and all \(0\leq t\leq T\)._ _Furthermore, we can get that_ \[\mathcal{E}_{k\to N_{0}}(t)\lesssim\max\{M_{1},M_{2}\}(1+t)^{-(k+\varrho)}, \quad 0\leq t\leq T. \tag{3.43}\] Proof.: (3.42) follows from (3.39), (3.40) and (3.41). 
To deduce (3.43), \[\left\|\nabla^{k}[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}] \right\|\leq\left\|\nabla^{k+1}[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{ \varepsilon}]\right\|^{\frac{k+\varrho}{k+1+\varrho}}\left\|\Lambda^{-\varrho }[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{\frac{1 }{k+1+\varrho}}.\] The above inequality together with the facts that \[\left\|\nabla^{m}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|\leq\left\| \langle v\rangle^{-\frac{1}{2}}\nabla^{m}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{\frac{k+\varrho}{k+1+\varrho}}\left\|\langle v\rangle^{ -\frac{(\gamma+2)k+\frac{1}{2}}}\nabla^{m}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{\frac{1}{k+1+\varrho}},\] \[\left\|\nabla^{N_{0}}[E_{\varepsilon},B_{\varepsilon}]\right\|\lesssim\left\| \nabla^{N_{0}-1}[E_{\varepsilon},B_{\varepsilon}]\right\|^{\frac{k+\varrho}{ k+1+\varrho}}\left\|\nabla^{N_{0}+k+\frac{1}{2}}[E_{\varepsilon},B_{ \varepsilon}]\right\|^{\frac{1}{k+1+\varrho}} \tag{3.44}\] imply \[\mathcal{E}_{k\to N_{0}}(t)\leq\left(\mathcal{D}_{k\to N_{0}}(t)\right)^{\frac {k+\varrho}{k+1+\varrho}}\left\{\max\{M_{1},M_{2}\}\right\}^{\frac{1}{k+1+ \varrho}}.\] Hence, we deduce that \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{k\to N_{0}}(t)+\left\{\max\{M_{1}, M_{2}\}\right\}^{-\frac{1}{k+\varrho}}\left\{\mathcal{E}_{k\to N_{0}}(t)\right\}^{ 1+\frac{1}{k+\varrho}}\leq 0\] and we can get by solving the above inequality directly that \[\mathcal{E}_{k\to N_{0}}(t)\lesssim\max\{M_{1},M_{2}\}(1+t)^{-k+\varrho}.\] This completes the proof of Lemma 3.5. ### The estimates on the negative Sobolev space To ensure **Assumption 2**, this subsection is devoted into bound on \(\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}](t)\|\). The first one is on the estimate on \(\|[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}](t)\|_{H^{-\varrho}}\). **Lemma 3.6**.: _It holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\Lambda^{-\varrho}f_{ \varepsilon}\right\|^{2}+\left\|\Lambda^{-\varrho}[E_{\varepsilon},B_{ \varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\Lambda^{- \varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\] \[\lesssim\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|\left( \left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}E_{\varepsilon}\right\|^{2}+ \left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}f_{\varepsilon}\right\|_{D}^{2}+ \left\|\Lambda^{\frac{3}{2}-\varrho}B_{\varepsilon}\right\|^{2}+\frac{1}{ \varepsilon^{2}}\left\|\Lambda^{1-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|_{D}^{2}\right)\] \[+\left\|\langle v\rangle^{\frac{3}{2}}f_{\varepsilon}\right\|_{L_{ x}^{2}H_{\varepsilon+\frac{3}{2}}^{2}}^{2}\left\|\langle v\rangle^{\frac{3}{2}} \Lambda^{\frac{3}{2}-\varrho}f_{\varepsilon}\right\|_{H_{s+\frac{\varepsilon }{2}}^{2}}^{2}. 
\tag{3.45}\]

Proof.: Taking the Fourier transform of the first equation of (1.7) with respect to \(x\), multiplying the resulting identity by \(|\xi|^{-2\varrho}\bar{\hat{f}}_{\pm}\) with \(\bar{\hat{f}}_{\pm}\) being the complex conjugate of \(\hat{f}_{\pm}\), and integrating the final result with respect to \(\xi\) and \(v\) over \(\mathbb{R}_{\xi}^{3}\times\mathbb{R}_{v}^{3}\), we can get, by using the coercivity property of \(\mathscr{L}\), that
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|^{2}+\left\|\Lambda^{-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\]
\[\lesssim\left|\left(v\cdot\mathcal{F}[q_{0}E_{\varepsilon}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\hat{f}_{\varepsilon}\right)\right|+\left|\left(\mathcal{F}[q_{0}E_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\hat{f}_{\varepsilon}\right)\right|\]
\[\quad+\left|\left(\mathcal{F}[E_{\varepsilon}\cdot vf_{\varepsilon}]\mid|\xi|^{-2\varrho}\hat{f}_{\varepsilon}\right)\right|\]
\[\lesssim\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|\left(\left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}E_{\varepsilon}\right\|^{2}+\left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}f_{\varepsilon}\right\|_{D}^{2}\right)\]
\[\quad+\left\|\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle^{\frac{3}{2}}\right\|^{2}\left\|\Lambda^{\frac{3}{2}-\varrho}E_{\varepsilon}\right\|^{2}+\eta\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}. \tag{3.50}\]
For the third term on the right-hand side of (3.46), we have, by repeating the argument used in deducing the estimate on the first two terms, that
\[\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\hat{f}_{\varepsilon}\right)\right|\]
\[\lesssim\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\{\mathbf{I}-\mathbf{P}\}\hat{f}_{\varepsilon}\right)\right|\]
\[\quad+\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\mathbf{P}\hat{f}_{\varepsilon}\right)\right|\]
\[\quad+\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\{\mathbf{I}-\mathbf{P}\}\hat{f}_{\varepsilon}\right)\right| \tag{3.51}\]
where we used the fact that
\[\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\mathbf{P}\hat{f}_{\varepsilon}\right)\right|=0.\]
Similarly to the argument used in (3.47), one has
\[\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\mathbf{P}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\{\mathbf{I}-\mathbf{P}\}\hat{f}_{\varepsilon}\right)\right|\]
\[+\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\mathbf{P}\hat{f}_{\varepsilon}\right)\right|\]
\[\lesssim\left(\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|^{2}+\left\|f_{\varepsilon}\right\|^{2}\right)\left(\left\|\Lambda^{\frac{3}{2}-\varrho}B_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\Lambda^{1-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\right)\]
\[\quad+\frac{\eta}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}.
\tag{3.52}\] By using the similar way as (3.48), one has \[\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{ \varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}]\mid|\xi| ^{-2\varrho}\{\mathbf{I}-\mathbf{P}\}\hat{f}_{\varepsilon}\right)\right|\] \[\lesssim\left\|\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f\langle v \rangle^{\frac{5}{2}}\right\|^{2}\left\|\Lambda^{\frac{3}{2}-\varrho}B_{ \varepsilon}\right\|^{2}+\frac{\eta}{\varepsilon^{2}}\left\|\Lambda^{-\varrho }\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}, \tag{3.53}\] consequently, one has \[\frac{1}{\varepsilon}\left|\left(\mathcal{F}[q_{0}v\times B_{ \varepsilon}\cdot\nabla_{v}f_{\varepsilon}]\mid|\xi|^{-2\varrho}\hat{f}_{ \varepsilon}\right)\right|\] \[\lesssim\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|\left( \left\|\Lambda^{1-\varrho}B_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}} \left\|\Lambda^{1-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D }^{2}\right)\] \[\quad+\left\|f\langle v\rangle^{\frac{5}{2}}\right\|^{2}\left\| \Lambda^{1-\varrho}B_{\varepsilon}\right\|^{2}+\frac{\eta}{\varepsilon^{2}} \left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^ {2}\] \[\lesssim\left\{\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|+ \left\|f\langle v\rangle^{\frac{5}{2}}\right\|^{2}\right\}\left(\left\| \Lambda^{1-\varrho}B_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}} \left\|\Lambda^{1-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^ {2}\right)\] \[\quad+\left\|\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f\langle v \rangle^{\frac{5}{2}}\right\|^{2}\left\|\Lambda^{\frac{3}{2}-\varrho}B_{ \varepsilon}\right\|^{2}+\frac{\eta}{\varepsilon^{2}}\left\|\Lambda^{-\varrho} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}. \tag{3.54}\] As to the last term on the right-hand side of (3.46), one has \[\frac{1}{\varepsilon}\left(\mathcal{F}[\mathscr{T}(f_{\varepsilon},f_{\varepsilon})]\mid|\xi|^{-2\varrho}\mathcal{F}\left[\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\right]\right)\] \[\lesssim\left\|\Lambda^{-\varrho}\left(\langle v\rangle^{\frac{3}{2}} \mathscr{T}(f_{\varepsilon},f_{\varepsilon})\right)\right\|^{2}+\frac{\eta}{ \varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|_{D}^{2}\] \[\lesssim\left\|\langle v\rangle^{\frac{3}{2}}f_{\varepsilon} \right\|_{L_{x}^{3}+\frac{\eta}{2}}^{2}\left\|\langle v\rangle^{\frac{3}{2}} \Lambda^{\frac{3}{2}-\varrho}f_{\varepsilon}\right\|_{H_{s+\frac{\eta}{2}}^{2}}^ {2}+\frac{\eta}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\right\|_{D}^{2}.\] Substituting the estimates into (3.46) yields (3.45), which complete the proof of Lemma 3.6. 
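For the reader's convenience, we record the elementary identity that produces the \(\Lambda^{-\varrho}\) energy terms in the proof above; it is a standard consequence of Plancherel's theorem (recall that \(\Lambda^{-\varrho}\) acts as the Fourier multiplier with symbol \(|\xi|^{-\varrho}\)), stated here only as a reminder and up to an unessential constant depending on the normalization of the Fourier transform:
\[\mathrm{Re}\int_{\mathbb{R}_{\xi}^{3}\times\mathbb{R}_{v}^{3}}\partial_{t}\hat{f}_{\varepsilon}\,|\xi|^{-2\varrho}\,\bar{\hat{f}}_{\varepsilon}\,\mathrm{d}\xi\,\mathrm{d}v=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}_{\xi}^{3}\times\mathbb{R}_{v}^{3}}\left||\xi|^{-\varrho}\hat{f}_{\varepsilon}\right|^{2}\mathrm{d}\xi\,\mathrm{d}v=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|^{2},\]
and the analogous computation for the equations of \(E_{\varepsilon}\) and \(B_{\varepsilon}\) yields the term \(\frac{\mathrm{d}}{\mathrm{d}t}\|\Lambda^{-\varrho}[E_{\varepsilon},B_{\varepsilon}]\|^{2}\).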
**Lemma 3.7**.: _There exists an interactive functional \(G_{f_{\varepsilon}}^{-\varrho}(t)\) satisfying_
\[G_{f_{\varepsilon}}^{-\varrho}(t)\lesssim\left\|\Lambda^{1-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\|\Lambda^{\frac{3}{2}}E_{\varepsilon}\|^{2}\]
_such that_
\[\frac{\mathrm{d}}{\mathrm{d}t}G_{f_{\varepsilon}}^{-\varrho}(t)+\left\|\Lambda^{1-\varrho}\mathbf{P}f_{\varepsilon}\right\|^{2}+\left\|\Lambda^{1-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\Lambda^{-\varrho}E_{\varepsilon}\right\|_{H_{x}^{1}}^{2}+\left\|\Lambda^{-\varrho}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})\right\|_{H_{x}^{1}}^{2}\]
\[\lesssim\frac{1}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{H_{x}^{2}L_{D}^{2}}^{2}+\mathcal{E}_{2}(t)\mathcal{D}_{2}(t) \tag{3.55}\]
_holds for any \(0\leq t\leq T\)._

Proof.: This lemma can be proved in a similar way to Lemma 3.2; we omit its proof for brevity.

Based on the above two lemmas, one has

**Proposition 3.6**.: _Under_ **Assumption 1** _and_ **Assumption 2**_, one has_
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|^{2}+\left\|\Lambda^{-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\kappa_{1}G_{f_{\varepsilon}}^{-\varrho}(t)\right)+\frac{1}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}\]
\[+\kappa_{1}\left\|\Lambda^{1-\varrho}\mathbf{P}f_{\varepsilon}\right\|^{2}+\kappa_{1}\left\|\Lambda^{1-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\kappa_{1}\left\|\Lambda^{-\varrho}E_{\varepsilon}\right\|_{H_{x}^{1}}^{2}+\kappa_{1}\left\|\Lambda^{-\varrho}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})\right\|_{H_{x}^{1}}^{2}\]
\[\lesssim\left(M_{2}^{\frac{1}{2}}+M_{2}\right)\left(\left\|\nabla_{x}[\mathbf{P}f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\left\|\langle v\rangle^{\frac{3}{2}}\nabla_{x}f_{\varepsilon}\right\|_{L_{x}^{2}H_{s+\frac{7}{2}}^{1}}^{2}\right)\]
\[+\frac{1}{\varepsilon^{2}}\left\|\nabla_{x}^{2}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\mathcal{E}_{2}(t)\mathcal{D}_{2}(t).
\tag{3.56}\]

Proof.: By interpolation with respect to the spatial derivatives, one has
\[\frac{1}{\varepsilon^{2}}\left\|\Lambda^{1-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{H_{x}^{1}L_{D}^{2}}^{2}\]
\[\lesssim\frac{1}{\varepsilon^{2}}\left\|\nabla_{x}^{2}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}+\frac{\eta}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{D}^{2}, \tag{3.57}\]
and, since \(\frac{3}{4}-\frac{\varrho}{2}>1-\varrho\) for \(\frac{1}{2}<\varrho<\frac{3}{2}\),
\[\left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}E_{\varepsilon}\right\|^{2}+\left\|\Lambda^{\frac{3}{4}-\frac{\varrho}{2}}B_{\varepsilon}\right\|^{2}\lesssim\eta\left\|\Lambda^{1-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+C_{\eta}\left\|\nabla_{x}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}.\]
Similarly,
\[\left\|\Lambda^{\frac{3}{2}-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}\lesssim\eta\left\|\Lambda^{1-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+C_{\eta}\left\|\nabla_{x}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}.\]
By using the above estimates, one can deduce (3.56) from a proper linear combination of (3.45) and (3.55).

### The a priori estimates

Based on Propositions 3.4 and 3.6, we are ready to construct the a priori estimates
\[\sup_{0<t\leq T}\left\{\left\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t)\right\}\leq\overline{M} \tag{3.58}\]
where \(\overline{M}\leq\min\{M_{1},M_{2}\}\) is a sufficiently small positive constant. One has
\[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\left\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t)\right\}+\mathcal{D}_{N}(t)+\mathcal{D}_{N,l}(t)\lesssim 0, \tag{3.59}\]
which gives that
\[\left\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}](t)\right\|^{2}+\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t)\]
\[\lesssim\left\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}](0)\right\|^{2}+\mathcal{E}_{N,l}(0)+\mathcal{E}_{N}(0)\lesssim Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}^{2}(0) \tag{3.60}\]
where \(Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}(0)\) is defined in (2.20); thus we close the a priori estimates provided the initial data \(Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}(0)\) is taken sufficiently small. Furthermore, we can get from Proposition 3.5 that
\[\mathcal{E}_{k\to N_{0}}(t)\lesssim Y_{f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}}^{2}(0)(1+t)^{-(k+\varrho)},\quad 0\leq t\leq T. \tag{3.61}\]

## 4. The cutoff VMB system
### Lyapunov inequality for the energy functional \(\mathcal{E}_{N}(t)\)

**Proposition 4.1**.: _There exist an energy functional \(\mathcal{E}_{N}(t)\) and a corresponding energy dissipation functional \(\mathcal{D}_{N}(t)\), satisfying (2.1) and (2.2) respectively, such that_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\]
\[\lesssim\left\|E_{\varepsilon}\right\|_{L^{\infty}_{x}}^{2}\left\|\langle v\rangle^{\frac{3}{2}}\nabla_{x}^{N}f_{\varepsilon}\right\|^{2}+\left\|\nabla_{x}B_{\varepsilon}\right\|_{L^{\infty}_{x}}^{2}\left\|\langle v\rangle^{\frac{3}{2}}\nabla_{x}^{N-1}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}\]
\[+\mathcal{E}_{N}(t)\sum_{|\alpha^{\prime}|+|\beta^{\prime}|\leq N-1}\left\|\langle v\rangle^{\frac{3}{2}}\partial_{\beta^{\prime}}^{\alpha^{\prime}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t). \tag{4.1}\]

Proof.: This proposition can be proved in a similar way to (3.12); we omit its proof for brevity.

### The bound for the energy functionals \(\overline{\mathcal{E}}_{N-1,l_{1}}(t)\) and \(\overline{\mathcal{E}}_{N_{0}-1,l_{0}}(t)\)

**Assumption I:**
\[\sup_{0<t\leq T}\left\{\sum_{k\leq N_{0}-2}(1+t)^{k+\varrho}\mathcal{E}_{k\to N_{0}}(t)+(1+t)^{1+\epsilon_{1}}\overline{\mathcal{E}}_{N_{0}-1,l_{0}}(t)+\mathcal{E}_{N}(t)\right\}\leq\mathcal{M}_{1} \tag{4.2}\]
where \(\mathcal{M}_{1}\) is a sufficiently small positive constant.

**Proposition 4.2**.: _Under_ **Assumption I**_, assume \(\tilde{\ell}\geq\frac{3}{2}\sigma_{N-1,N-1}\), \(\theta=\frac{3}{\tilde{\ell}+\frac{1}{2}}\) and \(\ell_{1}\geq l_{1}+\tilde{\ell}+\frac{1}{2}\); then one has_
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{E}}_{N-1,l_{1}}(t)+\overline{\mathcal{D}}_{N-1,l_{1}}(t)\]
\[\lesssim\mathcal{D}_{N}(t)+\overline{\mathcal{E}}_{N-1,l_{1}}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+\mathcal{M}_{1}^{\frac{2}{\theta}}\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t).
\tag{4.3}\] Proof.: Applying \(\partial_{\beta}^{\alpha}\) into (3.20), integrating the result identity over \(\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}\) by multiplying \(\overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\), then we have \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\overline{w}_{l_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\|^{2}+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle \overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|^{2}\] \[+\frac{1}{\varepsilon^{2}}\left(\partial_{\beta}^{\alpha} \mathscr{L}f_{\varepsilon},\overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=-\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}[v\cdot \nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\overline{w}_{l_{1}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[E_{ \varepsilon}\cdot vM^{1/2}q_{1}\right],\overline{w}_{l_{1}}^{2}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{P}(v\cdot\nabla_{x}f_{\varepsilon})-\frac{1}{\varepsilon}v\cdot \nabla_{x}\mathbf{P}f_{\varepsilon}\right],\overline{w}_{l_{1}}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{I}-\mathbf{P}\right]\left[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}f_ {\varepsilon}\right]\right],\overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\left(\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}\left[- \frac{1}{2}q_{0}v\cdot E_{\varepsilon}f_{\varepsilon}+q_{0}E_{\varepsilon} \cdot\nabla_{v}f_{\varepsilon}\right],\overline{w}_{l_{1}}^{2}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{\eta}{\varepsilon^{2}}\|\overline{w}_{l_{1}}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}\] \[\lesssim\sum_{|\alpha_{1}|=1}\left\{\left\{\|\partial^{\alpha_{1}} B_{\varepsilon}\|_{L_{\infty}^{2}}^{\frac{2}{\ell}}\|\langle v\rangle^{\ell} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\right\}^{\theta}\right.\] \[\left.\times\left\{\|\langle v\rangle^{-\frac{1}{2}}\overline{w} _{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{\alpha-\alpha_{ 1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\right\}^{1-\theta}\right.\] \[\left.+\frac{\eta}{\varepsilon^{2}}\|\overline{w}_{l_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\|_{\nu}^{2}\right.\] \[\lesssim\sum_{|\alpha_{1}|=1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|_{L_{\infty}^{2}}^{\frac{2}{\ell}}\|\langle v\rangle^{\ell} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\left.+\eta\sum_{|\alpha_{1}|=1}\|\overline{w}_{l_{1}}(\alpha- \alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{\alpha-\alpha_{1}}\{\mathbf{I }-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}+\frac{\eta}{\varepsilon^{2}}\| \overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- 
\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2},\right. \tag{4.7}\] where we take \(\theta\) as \[\theta=\frac{3}{\ell+\frac{1}{2}}\] and use the fact that \[\langle v\rangle^{\frac{3}{2}}\overline{w}_{l_{1}}(\alpha,\beta)\leq\langle v \rangle^{\frac{5}{2}}\overline{w}_{l_{1}}(\alpha-e_{i},\beta+e_{i}),\ |\alpha_{1}|=1,\beta_{1}=0.\] When \(|\alpha_{1}|=2,\beta_{1}=0\) or \(|\alpha_{1}|=3,\beta_{1}=0\), one has the similar bound \[\lesssim\sum_{|\alpha_{1}|=2,3}\|\partial^{\alpha_{1}}B_{\varepsilon }\|_{L^{\infty}_{x}}^{\frac{2}{2}}\|\langle v\rangle^{\tilde{t}}\overline{w}_{ l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{\alpha-\alpha_{1}} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[+\eta\sum_{|\alpha_{1}|=2,3}\|\overline{w}_{l_{1}}(\alpha-\alpha_ {1},\beta+e_{i})\partial_{\beta+e_{i}}^{\alpha-\alpha_{1}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}+\frac{\eta}{\varepsilon^{2}}\| \overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}. \tag{4.8}\] While for \(4\leq|\alpha_{1}|\leq N-2\), since \[\langle v\rangle^{\frac{3}{2}}\overline{w}_{l_{1}}(\alpha,\beta)\leq\langle v \rangle^{-\frac{1}{2}}\overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i}),\] one deduces that \[\lesssim\sum_{4\leq|\alpha_{1}|\leq N-2}\frac{1}{\varepsilon}\| \partial^{\alpha_{1}}B_{\varepsilon}\|_{L^{\infty}_{x}}\|\langle v\rangle^{- \frac{1}{2}}\overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{ \beta+e_{i}}^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\] \[\quad\times\|\langle v\rangle^{-\frac{1}{2}}\overline{w}_{l_{1}} (\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\] \[\lesssim\sum_{4\leq|\alpha_{1}|\leq N-2}\|\partial^{\alpha_{1}}B_ {\varepsilon}\|_{L^{\infty}_{x}}^{2}\|\overline{w}_{l_{1}}(\alpha-\alpha_{1}, \beta+e_{i})\partial_{\beta+e_{i}}^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\|_{\nu}^{2}\] \[\quad+\frac{\eta}{\varepsilon^{2}}\|\overline{w}_{l_{1}}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{ \nu}^{2}\] \[\lesssim\mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+ \frac{\eta}{\varepsilon^{2}}\|\overline{w}_{l_{1}}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}.\] If \(|\alpha_{1}|=N-1\), one also has that the term can be bounded by \[\mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+\frac{\eta}{ \varepsilon^{2}}\|\overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}.\] Thus, one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\times B _{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right], \overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\sum_{|\alpha_{1}|=1,2,3}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|_{L^{\infty}_{x}}^{\frac{2}{2}}\|\langle v\rangle^{\tilde{t}} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\quad+\mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+ \eta\overline{\mathcal{D}}_{N-1,l_{1}}(t). 
\tag{4.9}\] By using the similar argument, one also has \[\left(\partial_{\beta}^{\alpha}\left[E_{\varepsilon}\cdot\nabla_{ v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{l_{1}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\sum_{|\alpha_{1}|\leq 3}\|\partial^{\alpha_{1}}E_{ \varepsilon}\|_{L^{\infty}_{x}}^{\frac{2}{2}}\|\langle v\rangle^{\tilde{t}} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta+e_{i})\partial_{\beta+e_{i}}^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\quad+\mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+ \eta\overline{\mathcal{D}}_{N-1,l_{1}}(t) \tag{4.10}\] and \[\left(\partial_{\beta}^{\alpha}\left[E_{\varepsilon}\cdot v\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{l_{1}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\sum_{|\alpha_{1}|\leq 3}\|\partial^{\alpha_{1}}E_{ \varepsilon}\|_{L^{\infty}_{x}}^{\frac{2}{2}}\|\langle v\rangle^{\tilde{t}} \overline{w}_{l_{1}}(\alpha-\alpha_{1},\beta)\partial_{\beta}^{\alpha-\alpha_{1 }}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\quad+\mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+ \eta\overline{\mathcal{D}}_{N-1,l_{1}}(t). \tag{4.11}\] For the last term, one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{T}(f_{\varepsilon},f _{\varepsilon}),\overline{w}_{l_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\lesssim\overline{\mathcal{E}}_{N -1,l_{1}}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t)+\eta\overline{\mathcal{D}}_{N-1,l_{1}}(t). \tag{4.12}\] Now by collecting the above related estimates into (4.4), we arrive at \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\overline{w}_{l_{1}}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\frac{1}{ \varepsilon^{2}}\left\|\overline{w}_{l_{1}}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2}\] \[+\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\vartheta}\|\partial^{\alpha_{1} }[E_{e},B_{\varepsilon}]\|_{L^{2}_{x}}^{2}\sum_{N-1\leq n\leq N}(1+t)^{\sigma_{n,1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)\] \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}[E_{ \varepsilon},B_{\varepsilon}]\|^{2}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{ n,1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)\] \[+\mathcal{E}_{N}(t)\mathcal{E}_{1\to N_{0}-1,\overline{f}_{0}}(t)+ \mathcal{E}_{N-1,l_{1}}(t)\sum_{n\leq N}(1+t)^{\sigma_{n,0}}\mathcal{D}_{\ell_ {1}}^{n,0}(t) \tag{4.15}\] _for all \(0\leq t\leq T\)._ Proof.: The proofs of this lemma is similar to Lemma 3.3. For the sake of brevity, let's just point out the differences from Lemma 3.3. 
Recalling the definition of the weight \(\widetilde{w}_{\ell_{1}}(\alpha,\beta)(t,v)\), one has \[\langle v\rangle\widetilde{w}_{\ell_{1}}(\alpha,0)=\langle v\rangle\widetilde{w} _{\ell_{1}}(\alpha-e_{i},e_{i})\langle v\rangle^{-\frac{1}{2}}\leq\langle v \rangle^{\frac{1}{2}}\widetilde{w}_{\ell_{1}}(\alpha-e_{i},e_{i}),\] it follows that \[\frac{1}{\varepsilon}\left|\left(\partial^{\alpha}[v\times B_{ \varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}], \widetilde{w}_{\ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\right|\] \[=\frac{1}{\varepsilon}\left|\left([v\times B_{\varepsilon}\cdot \nabla_{v}\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}], \widetilde{w}_{\ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{1\leq|\alpha_{1}|\leq N}\frac{1}{\varepsilon}\left| \left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{ \ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\right|\] \[=\sum_{|\alpha_{1}|=1}\frac{1}{\varepsilon}\left|\left([v\times \partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha-\alpha_{1 }}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^{2}( \alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{|\alpha_{1}|=2}\frac{1}{\varepsilon}\left|\left([v \times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^ {2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{3\leq|\alpha_{1}|\leq N_{0}-2}\frac{1}{\varepsilon} \left|\left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v} \partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}], \widetilde{w}_{\ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}}\frac{1}{\varepsilon}\left|\left([ v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^ {2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}}\frac{1}{\varepsilon}\left|\left([ v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^ {2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}+1}\frac{1}{\varepsilon}\left|\left( [v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{\ell_{1}}^ {2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\right|\] \[\quad+\sum_{N_{0}+2\leq|\alpha_{1}|\leq N}\frac{1}{\varepsilon} \left|\left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v} \partial^{\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}], \widetilde{w}_{\ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\} f_{\varepsilon}\right)\right|\] \[\lesssim\sum_{|\alpha_{1}|=1}\frac{1}{\varepsilon}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{\frac{1}{4}} \widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^{\alpha- 
\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\|\langle v\rangle^{\frac {1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{|\alpha_{1}|=2}\frac{1}{\varepsilon}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{-\frac{1}{4}} \widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\|\langle v\rangle^{- \frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{3\leq|\alpha_{1}|\leq N_{0}-2}\frac{1}{\varepsilon}\| \partial^{\alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{2-| \alpha_{1}|}\widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^ {\alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|\|\langle v\rangle^{- \frac{1}{2}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}-1}\frac{1}{\varepsilon}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{2-|\alpha_{1}|} \widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^{\alpha- \alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v \rangle^{-\frac{1}{2}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{ I}-\mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}+1}\frac{1}{\varepsilon}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|\|\langle v\rangle^{2-|\alpha_{1}|}\widetilde{w}_{ \ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^{\alpha-\alpha_{1}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{- \frac{1}{2}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{|\alpha_{1}|=N_{0}+1}\frac{1}{\varepsilon}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|\|\langle v\rangle^{2-|\alpha_{1}|}\widetilde{w}_{ \ell_{1}}(\alpha-\alpha_{1},e_{i})\partial_{e_{i}}^{\alpha-\alpha_{1}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{L_{x}^{\infty}}\|\langle v\rangle^{- \frac{1}{2}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|\] \[\quad+\sum_{N_{0}+2\leq|\alpha_{1}|\leq N}\frac{1}{\varepsilon}\left| \left([v\times\partial^{\alpha_{1}}B_{\varepsilon}\cdot\nabla_{v}\partial^{ \alpha-\alpha_{1}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\widetilde{w}_{ \ell_{1}}^{2}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right)\right|\] \[\lesssim\sum_{1\leq|\alpha_{1}|\leq 2}\frac{1}{\varepsilon}(1+t)^{\frac{1+ \vartheta}{2}}\|\partial^{\alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}^{2}\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha-\alpha_{1},e_{i}) \partial_{e_{i}}^{\alpha- \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N+1-|\alpha_{1}|,|\beta|+1}} \mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N+1-|\alpha_{1}|,|\beta|+1}} \mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N+1-|\alpha_{1}|,|\beta|+1}} \mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}\|\langle 
v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}+\eta\,\|\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{\nu}\]
\[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}\sum_{N-1\leq n\leq N}(1+t)^{\sigma_{n,1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)\]
\[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N+1-|\alpha_{1}|,|\beta|+1}}\mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\]
\[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}\|\langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}+\eta\,\|\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{\nu}\]
\[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{n,1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)+\mathcal{E}_{N}(t)\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)\]
\[+\frac{\eta}{\varepsilon}(1+t)^{-\frac{1+\vartheta}{2}}\|\langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}+\eta\,\|\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|^{2}_{\nu} \tag{4.16}\]
where the third term on the right-hand side of the above inequality can be dominated by
\[\frac{1}{\varepsilon^{2}}\left\|\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}_{\nu}+\frac{\vartheta q}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}.\]
By combining the other estimates, which are obtained similarly to Lemma 3.3, we complete the proof of Lemma 4.1.
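As an aside, we indicate where the interpolation exponent \(\theta=\frac{3}{\tilde{\ell}+\frac{1}{2}}\) used in the proof above comes from; the following elementary weight interpolation in \(v\) (a direct consequence of Hölder's inequality, recorded here only as a reminder, with \(g\) a generic function) explains it:
\[\left\|\langle v\rangle^{\frac{5}{2}}g\right\|\leq\left\|\langle v\rangle^{\tilde{\ell}}g\right\|^{\theta}\left\|\langle v\rangle^{-\frac{1}{2}}g\right\|^{1-\theta}\qquad\text{whenever}\qquad\tilde{\ell}\,\theta-\frac{1}{2}(1-\theta)=\frac{5}{2},\]
and the latter condition is precisely \(\theta\big(\tilde{\ell}+\frac{1}{2}\big)=3\), that is, \(\theta=\frac{3}{\tilde{\ell}+\frac{1}{2}}\) (which also requires \(\tilde{\ell}>\frac{5}{2}\) so that \(0<\theta<1\)).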
**Lemma 4.2**.: _Assume \(\overline{\ell}_{0}\geq\ell_{1}+\frac{3}{2}N\), for \(|\alpha|+|\beta|=N,|\beta|\geq 1\), one also has_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\widetilde{w}_{\ell_{1}}( \alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\widetilde{w}_{\ell_{1}}(\alpha, \beta)\partial^{\alpha}_{\beta}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_ {\nu}\] \[+\frac{q\vartheta}{(1+t)^{1+\theta}}\|\langle v\rangle \widetilde{w}_{\ell_{1}}(\alpha,\beta)\partial^{\alpha}_{\beta}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|^{2}\] \[\lesssim\frac{1}{\varepsilon}(1+t)^{\frac{1+\epsilon_{0}}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{i}) \partial^{\alpha+e_{i}}_{\beta-e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}\] \[+\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\theta}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N+1-|\alpha_{1}|,|\beta|+1}} \mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B_{ \varepsilon}\|^{2}_{L^{\infty}_{x}}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{n,| \beta|+1}}\mathcal{D}_{\ell_{1}}^{n,|\beta|+1}(t)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{T}(f_{ \varepsilon},f_{\varepsilon}),\widetilde{w}_{\ell_{1}}^{2}\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right). \tag{4.18}\] The coercivity estimates on the linear operator \(\mathscr{L}\) yields that \[\frac{1}{\varepsilon^{2}}\left(\partial_{\beta}^{\alpha}\mathscr{ L}f_{\varepsilon},\widetilde{w}_{\ell_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\gtrsim\frac{1}{\varepsilon^{2}}\|\widetilde{w}_{\ell_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\|_{\nu}^{2}-\frac{1}{\varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}. 
\tag{4.19}\] As for the transport term on the right-hand side of (4.18), one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \cdot\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\widetilde{w }_{\ell_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P }\}f_{\varepsilon}\right)\] \[=-\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_ {y}^{3}}\langle v\rangle^{\frac{1}{2}}\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\widetilde{w}_{\ell_{1}}(\alpha+e_{i}, \beta-e_{i})w_{l}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{ P}\}f_{\varepsilon}dvdx\] \[\lesssim\frac{1}{\varepsilon}(1+t)^{\frac{1+4\alpha}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{ i})\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}\] \[\quad+\frac{\eta}{\varepsilon}(1+t)^{-\frac{1+4\alpha}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha,\beta)\partial_{ \beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}, \tag{4.20}\] where we used the fact \[\widetilde{w}_{\ell_{1}}(\alpha,\beta)=\langle v\rangle^{\frac{1}{2}}\widetilde {w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{i}).\] For the nonlinear term induced by magnetic field \(B(t,x)\), simiar to (4.16), one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v \times B_{\varepsilon}\cdot\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right],\widetilde{w}_{\ell_{1}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=\frac{1}{\varepsilon}\sum_{\alpha_{1}\leq\alpha,\beta_{1}\leq \beta}\left(\partial_{\beta_{1}}^{\alpha_{1}}\left[v\times B_{\varepsilon} \right]\cdot\partial_{\beta-\beta_{1}}^{\alpha-\alpha_{1}}\left[\nabla_{v}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\widetilde{w}_{\ell_{1}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right)\] \[\lesssim\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\theta}\| \partial^{\alpha_{1}}B_{\varepsilon}\|_{L_{x}^{\infty}}^{2}(1+t)^{\sigma_{N+1-| \alpha_{1}|,|\beta|+1}}\mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[\quad+\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{\alpha_{1}}B _{\varepsilon}\|^{2}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{n,|\beta|+1}} \mathcal{D}_{\ell_{1}}^{n,|\beta|+1}(t)+\mathcal{E}_{N}(t)\mathcal{E}_{1\to N _{0}-1,\overline{f}_{0}}(t)\] \[\varepsilon^{2}\frac{\mathrm{d}}{\mathrm{d}t}\|w_{\ell}(\alpha,0) \partial^{\alpha}f_{\varepsilon}\|^{2}+\|w_{\ell}(\alpha,0)\partial^{\alpha}f_{ \varepsilon}\|^{2}_{\nu}+\frac{\vartheta q\varepsilon^{2}}{(1+t)^{1+\vartheta} }\,\|\langle v\rangle w_{\ell}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|^{2}\] \[\lesssim\|\nabla_{x}^{N}\mathbf{P}f_{\varepsilon}\|^{2}+\mathcal{ D}_{N}(t)+\|\partial^{\alpha}E_{\varepsilon}\|\left\|M^{\delta}\partial^{\alpha}f_{ \varepsilon}\right\|\] \[\quad+\varepsilon^{2}\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\vartheta} \|\partial^{\alpha_{1}}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}(1+t)^{\sigma_{N +1-|\alpha_{1}|,1}}\mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,1}(t)\] \[\quad+\varepsilon^{2}\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\| \partial^{\alpha_{1}}B_{\varepsilon}\|^{2}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{ \sigma_{n,1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)+\mathcal{E}_{N}(t)\mathcal{E}_{ \overline{\ell}_{0}}^{1\to N_{0}-1}(t)\] \[\lesssim\mathcal{D}_{N}(t)+\|\partial^{\alpha}E_{\varepsilon}\| 
\left\|M^{\delta}\partial^{\alpha}f_{\varepsilon}\right\|\] \[+\varepsilon^{2}\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\vartheta}\| \partial^{\alpha_{1}}B_{\varepsilon}\|_{L^{\infty}_{x}}^{2}(1+t)^{\sigma_{N+1-| \alpha_{1}|,1}}\mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\varepsilon^{2}\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|^{2}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{n, 1}}\mathcal{D}_{\ell_{1}}^{n,1}(t)+\mathcal{E}_{N}(t)\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t). \tag{4.26}\] When \(|\alpha|=N-1,\ |\beta|=1\), we hope that the term \[\frac{1}{\varepsilon}(1+t)^{\frac{1+\epsilon_{0}}{2}}\left\| \langle v\rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{ i})\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}\] on the right-hand side of (4.17) can be dominated by the corresponding dissipation terms on the left-hand side of (4.26), to do so, we multiply \(\varepsilon^{2}\) into (4.17) to get \[\varepsilon^{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\|\widetilde{w }_{\ell_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_ {\varepsilon}\right\|^{2}+\left\|\widetilde{w}_{\ell_{1}}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_ {\nu}\] \[+\frac{q\vartheta\varepsilon^{2}}{(1+t)^{1+\vartheta}}\| \widetilde{w}_{\ell_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\langle v\rangle_{\varepsilon}\|^{2}\] \[\lesssim\varepsilon(1+t)^{\frac{1+\epsilon_{0}}{2}}\left\|\langle v \rangle^{\frac{1}{4}}\widetilde{w}_{\ell_{1}}(\alpha+e_{i},\beta-e_{i}) \partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}\] \[+\varepsilon^{2}\sum_{1\leq|\alpha_{1}|\leq 2}(1+t)^{1+\vartheta}\| \partial^{\alpha_{1}}B_{\varepsilon}\|_{L^{\infty}_{x}}^{2}(1+t)^{\sigma_{N+1 -|\alpha_{1}|,|\beta|+1}}\mathcal{D}_{\ell_{1}}^{N+1-|\alpha_{1}|,|\beta|+1}(t)\] \[+\varepsilon^{2}\sum_{3\leq|\alpha_{1}|\leq N_{0}+1}\|\partial^{ \alpha_{1}}B_{\varepsilon}\|^{2}\sum_{N_{0}+1\leq n\leq N-1}(1+t)^{\sigma_{n, |\beta|+1}}\mathcal{D}_{\ell_{1}}^{n,|\beta|+1}(t)\] \[+\mathcal{D}_{N}(t)+\mathcal{E}_{N}(t)\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t). 
\tag{4.27}\] Furthermore, to control the time increasing rates \((1+t)^{\frac{1+\vartheta}{2}}\) for the first term on the right-hand side of (4.27), we multiply \((1+t)^{-\sigma_{|\alpha|+|\beta|,|\beta|}}\) into (4.27) and (4.26), and take a proper linear combination of the result equalities, then we have \[\varepsilon^{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{\sum_{|\alpha |=N}(1+t)^{-\sigma_{N,0}}\left\|\widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{ \alpha}f_{\varepsilon}\right\|^{2}+\sum_{\genfrac{}{}{0.0pt}{}{|\alpha|+|\beta| =N,}{|\beta|\geq 1}}(1+t)^{-\sigma_{N,|\beta|}}\left\|\widetilde{w}_{\ell_{1}}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{2}\right\}\] \[+\sum_{|\alpha|=N}(1+t)^{-\sigma_{N,0}}\left\{\|\widetilde{w}_{ \ell_{1}}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\|_{\nu}^{2}+\frac{\vartheta q \varepsilon^{2}}{(1+t)^{1+\vartheta}}\sum_{|\alpha|=N}\left\|\langle v\rangle \widetilde{w}_{\ell_{1}}(\alpha,0)\partial^{\alpha}f_{\varepsilon}\right\|^{2}\right\}\] \[+\sum_{\genfrac{}{}{0.0pt}{}{|\alpha|+|\beta|=N,}{|\beta|\geq 1}}(1+t)^{- \sigma_{N,|\beta|}}\left\{\left\|\widetilde{w}_{\ell_{1}}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{ \nu}^{2}+\frac{q\vartheta\varepsilon^{2}}{(1+t)^{1+\vartheta}}\|\widetilde{w}_{ \ell_{1}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\langle v\rangle_{\varepsilon}\|^{2}\right\}\] \[\lesssim\varepsilon^{2}(1+t)^{-\sigma_{N,0}}\left\|\nabla_{x}^{N}E_ {\varepsilon}\right\|\left\|M^{\delta}\nabla_{x}^{N}f_{\varepsilon}\right\|+ \mathcal{D}_{N}(t)\] \[\quad+\mathcal{M}_{1}\left\{(1+t)^{-(1+\epsilon_{0})}\mathcal{E}_{ N}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t)+\varepsilon^{2} \mathbb{D}_{\ell_{1}}^{(N)}(t)\right\}\] \[\lesssim\eta\varepsilon^{2}(1+t)^{-2\sigma_{N,0}}\left\|\nabla_{x }^{N}E_{\varepsilon}\right\|^{2}+\mathcal{D}_{N}(t)\] \[\quad+\mathcal{M}_{1}\left\{(1+t)^{-(1+\epsilon_{0})}\mathcal{E}_{ N}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t)+\varepsilon^{2} \mathbb{D}_{\ell_{1}}^{(N)}(t)\right\}. \tag{4.28}\] where we use (4.24). **Proposition 4.5**.: _For \(N_{0}+1\leq n\leq N-1\), it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{N_{0}+1\leq n\leq N-1}\mathbb{E }_{\ell_{1}}^{(n)}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t)\] \[\lesssim\mathcal{D}_{N}(t)+\mathcal{M}_{1}\left\{(1+t)^{-(1+\epsilon _{0})}\mathcal{E}_{N}(t)+\sum_{N_{0}+1\leq n\leq N_{0}}\mathbb{D}_{\ell_{1}}^{( n)}(t)\right\}. \tag{4.29}\] Proof.: This propositon can be proved by a similar way as Proposition 4.4, we omit its proof for brevity. When \(n\leq N_{0}\), one also has **Proposition 4.6**.: _For any \(\ell_{0}\geq N_{0}\), it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{n\leq N_{0}}\mathbb{E}_{\ell_{0}}^{(n)}(t )+\sum_{n\leq N_{0}}\mathbb{D}_{\ell_{0}}^{(n)}(t)\lesssim\mathcal{D}_{N_{0}+1 }(t)+\mathcal{M}_{1}\sum_{n\leq N_{0}}\mathbb{D}_{\ell_{0}}^{(n)}(t). \tag{4.30}\] The temporal time decay rates for \(\mathcal{E}_{k\to N_{0}}(t)\) and \(\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)\) To this end, we need **Assumption II:** \[\sup_{0\leq t\leq T}\left\{\overline{\mathcal{E}}_{N_{0}-1,\bar{\ell}_{0}+ \frac{5}{2}}(t)+\overline{\mathcal{E}}_{N-1,l_{1}}(t)+\|\Lambda^{-\varrho}[f_{ \varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2}\right\}\leq\mathcal{M}_{2},\] where \(\mathcal{M}_{2}\) is a sufficiently small positive constant. 
**Lemma 4.3**.: _Under_ **Assumption I** _and_ **Assumption II**_, there exist suitably large constants \(\bar{l}\); take \(l\geq\bar{l}\), \(\widetilde{k}=\min\{k+1,N_{0}-1\}\), and let \(N_{0}\geq 4\), \(N=2N_{0}\). Then one has the following estimates:_

_(i). For \(k=0,1,\cdots,N_{0}-1\), it holds that_
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\nabla^{k}f_{\varepsilon}\right\|^{2}+\left\|\nabla^{k}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2} \tag{4.31}\]

_(ii). For \(k=N_{0}\), it follows that_
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\nabla^{N_{0}}f_{\varepsilon}\right\|^{2}+\left\|\nabla^{N_{0}}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}\right)+\frac{1}{\varepsilon^{2}}\left\|\nabla^{N_{0}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2} \tag{4.32}\]
\[\lesssim\max\{\mathcal{M}_{1},\mathcal{M}_{2}\}\left(\left\|\nabla^{N_{0}-1}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{N_{0}}f_{\varepsilon}\right\|_{\nu}^{2}\right)+\eta\left\|\nabla^{N_{0}}f_{\varepsilon}\right\|_{\nu}^{2}.\]

_(iii). For \(k=0,1,2,\cdots,N_{0}-1\), there exist interactive energy functionals \(G_{f}^{k}(t)\) satisfying_
\[G_{f_{\varepsilon}}^{k}(t)\lesssim\left\|\nabla^{k}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{k+1}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\left\|\nabla^{k+2}E_{\varepsilon}\right\|^{2}\]
_such that_
\[\frac{\mathrm{d}}{\mathrm{d}t}G_{f_{\varepsilon}}^{k}(t)+\left\|\nabla^{k}[E_{\varepsilon},\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}]\right\|_{H_{\varepsilon}^{k}}^{2}+\left\|\nabla^{k+1}[\mathbf{P}f_{\varepsilon},B_{\varepsilon}]\right\|^{2} \tag{4.33}\]

Proof.: We can use the method of proving Lemmas 3.4-3.5 in [61] to prove this lemma. For the sake of brevity, we only provide some distinct and crucial estimates here; similar calculations can be carried out for the other related estimates. For the case \(1\leq j\leq k-1\leq N_{0}-2\), we can control
\[\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}\frac{1}{\varepsilon}\nabla_{x}^{j}[v\times B_{\varepsilon}]\cdot\nabla_{v}\nabla_{x}^{k-j}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\nabla_{x}^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}dxdv\]
by using interpolation with respect to both spatial-velocity derivatives and velocity.
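Before carrying out the computation, we record the Sobolev interpolation that will be used in the first step; it is a standard Gagliardo–Nirenberg-type inequality in \(\mathbb{R}^{3}_{x}\), recalled here only to explain the exponents appearing in the computation below (where \(\varrho=\frac{1}{2}\) is taken and \(k\geq 2\) in the case under consideration):
\[\left\|\nabla_{x}B_{\varepsilon}\right\|_{L_{x}^{\infty}}\lesssim\left\|\Lambda^{-\frac{1}{2}}B_{\varepsilon}\right\|^{\frac{k-\frac{3}{2}}{k+\frac{3}{2}}}\left\|\nabla^{k+1}B_{\varepsilon}\right\|^{\frac{3}{k+\frac{3}{2}}},\]
which is admissible because the regularity indices balance:
\[1+\frac{3}{2}=-\frac{1}{2}\cdot\frac{k-\frac{3}{2}}{k+\frac{3}{2}}+(k+1)\cdot\frac{3}{k+\frac{3}{2}}.\]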
Take \(\varrho=\frac{1}{2}\) for brevity, when \(j=1\), the estimate is given as follows: \[\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}}\frac{1}{\varepsilon }\nabla_{x}[v\times B_{\varepsilon}]\cdot\nabla_{v}\nabla_{x}^{k-1}\{\mathbf{I} -\mathbf{P}\}f_{\varepsilon}\nabla_{x}^{k}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}dxdv\] \[\lesssim\frac{1}{\varepsilon}\left\|\nabla_{x}B_{\varepsilon} \right\|_{L_{x}^{\infty}}\left\|\nabla_{v}\nabla_{x}^{k-1}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|\left\|\nabla_{x}^{k}\{\mathbf{I}-\mathbf{ P}\}f_{\varepsilon}\langle v\rangle\right\|\] \[\lesssim\frac{1}{\varepsilon}\left\|\Lambda^{-\frac{1}{2}}B_{ \varepsilon}\right\|^{\frac{k-\frac{3}{2}}{k+\frac{3}{2}}}\left\|\nabla^{k+1}B _{\varepsilon}\right\|^{\frac{3}{2}+\frac{3}{2}}\left\|\nabla_{v}^{m_{1}+1} \nabla_{x}^{k-1}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{1}{ m_{1}+1}}\] \[\quad\times\left\|\Lambda^{-\frac{1}{2}}\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{1}{k+2}}\left\| \nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{m_{1}}{m_{1 }+1}\times\frac{k-\frac{1}{2}}{k+\frac{1}{2}}}\left\|\nabla^{k}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\langle v\rangle\right\|\] \[\lesssim\frac{1}{\varepsilon}\left\|\Lambda^{-\frac{1}{2}}B_{ \varepsilon}\right\|^{\frac{k-\frac{3}{2}}{k+\frac{3}{2}}}\left\|\nabla_{v}^{m _{1}+1}\nabla_{x}^{k-1}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{ \frac{1}{m_{1}+1}}\left\|\Lambda^{-\frac{1}{2}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{1}{k+\frac{1}{2}}}\] \[\quad\times\left\|\nabla^{k+1}B_{\varepsilon}\right\|^{\frac{3}{ k+\frac{3}{2}}}\left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1}{2}}{k+\frac{1}{2}}} \left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\langle v\rangle\right\|\] \[\lesssim\frac{1}{\varepsilon}\left\|\Lambda^{-\frac{1}{2}}B_{ \varepsilon}\right\|^{\frac{k-\frac{3}{2}}{k+\frac{3}{2}}}\left\|\nabla_{v}^{m _{1}+1}\nabla_{x}^{k-1}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{ \frac{1}{m_{1}+1}}\left\|\Lambda^{-\frac{1}{2}}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{\frac{1}{m_{1}+1}\times\frac{1}{k+\frac{1}{2}}}\] \[\quad\times\left\|\nabla^{k+1}B_{\varepsilon}\right\|^{\frac{3}{ k+\frac{3}{2}}}\left\|\langle v\rangle^{-\frac{1}{2}}\nabla^{k}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1 }{2}}{k+\frac{1}{2}}\times\frac{1}{l_{1}+\frac{1}{2}}}\left\|\langle v\rangle^{ \bar{l}_{1}}\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac {m_{1}}{m_{1}+1}\times\frac{k-\frac{1}{2}}{k+\frac{1}{2}}\times\frac{1}{l_{1}+ \frac{1}{2}}}\] \[\quad\times\left\|\langle v\rangle^{-\frac{1}{2}}\nabla^{k}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{l_{2}-1}{l_{2}+\frac{1}{2 }}}\left\|\langle v\rangle^{\bar{l}_{2}}\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{\frac{3}{2}}\] \[=\frac{1}{\varepsilon}\left\|\Lambda^{-\frac{1}{2}}B_{\varepsilon} \right\|^{\frac{k-\frac{3}{2}}{k+\frac{3}{2}}}\left\|\nabla_{v}^{m_{1}+1} \nabla_{x}^{k-1}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{1}{m_ {1}+1}}\left\|\Lambda^{-\frac{1}{2}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{1}{k+\frac{1}{2}}}\] \[\quad\times\left\|\langle v\rangle^{\bar{l}_{2}}\nabla^{k}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{\frac{3}{2}}{l_{2}+\frac{ 1}{2}}}\left\|\langle 
v\rangle^{\bar{l}_{1}}\nabla^{k}\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1}{2}}{k+ \frac{1}{2}}\times\frac{1}{l_{1}+\frac{1}{2}}}\] \[\quad\times\left\|\nabla^{k+1}B_{\varepsilon}\right\|^{\frac{3}{ k+\frac{3}{2}}}\left\|\langle v\rangle^{-\frac{1}{2}}\nabla^{k}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|^{\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1 }{2}}{k+\frac{1}{2}}\times\frac{\bar{l}_{1}}{l_{1}+\frac{1}{2}}}\left\|\langle v \rangle^{-\frac{1}{2}}\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \right\|^{\frac{l_{2}-1}{l_{2}+\frac{1}{2}}}\] \[\lesssim\max\left\{\left\|\langle v\rangle^{\max\{\bar{l}_{1},\bar{ l}_{2}\}}\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|,\left\|\nabla_{v}^{m_{1}+1} \nabla_{x}^{k-1}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{\frac{1}{m_{1 }+1}},\left\|\Lambda^{-\frac{1}{2}}[f,B_{\varepsilon}]\right\|^{2}\right\}\] \[\quad\times\left(\left\|\nabla^{k+1}B_{\varepsilon}\right\|^{2}+ \left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_{\nu} \right)+\frac{\eta}{\varepsilon^{2}}\left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}_{\nu}\] \[\lesssim\max\{\mathcal{M}_{1},\mathcal{M}_{2}\}\left(\left\|\nabla^ {k+1}B_{\varepsilon}\right\|^{2}+\left\|\nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\right\|^{2}_{\nu}\right)+\frac{\eta}{\varepsilon^{2}}\left\| \nabla^{k}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_{\nu}.\] In fact, we can take proper \(m_{1}\), which satisfies \[m_{1}>\frac{k}{2}-1-\frac{3}{8k},\ m_{1}+k\leq 2N_{0}-1\] such that \[\frac{3}{k+\frac{3}{2}}+\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1}{2}}{k+ \frac{1}{2}}>1,\] so obviously there exists suitably large constants \(\bar{l}_{1}\) and \(\bar{l}_{2}\) such that the following equation holds \[\frac{3}{k+\frac{3}{2}}+\frac{m_{1}}{m_{1}+1}\times\frac{k-\frac{1}{2}}{k+ \frac{1}{2}}\times\frac{\bar{l}_{1}}{\bar{l}_{1}+\frac{1}{2}}+\frac{\bar{l}_ {2}-1}{\bar{l}_{2}+\frac{1}{2}}=2.\] Here we ask \(\bar{l}\geq\max\{\bar{l}_{1},\bar{l}_{2}\}\). Based on the aforementioned estimated ideas, we can prove (4.31), (4.32) and (4.33), thus the proof of this lemma is complete. **Proposition 4.7**.: _Under_ **Assumption I** _and_ **Assumption II**_, assume \(N_{0}\geq 5\), \(N=2N_{0}\), there exist an energy functional \(\mathcal{E}_{k\to N_{0}}(t)\) and the corresponding energy dissipation rate functional \(\mathcal{D}_{k\to N_{0}}(t)\) satisfying (2.6) and (2.7) respectively such that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{k\to N_{0}}(t)+\mathcal{D}_{k\to N_ {0}}(t)\leq 0 \tag{4.34}\] _holds for \(k=0,1,2,\cdots,N_{0}-2\) and all \(0\leq t\leq T.\)_ _Furthermore, we can get that_ \[\mathcal{E}_{k\to N_{0}}(t)\lesssim\min\{\mathcal{M}_{1},\mathcal{M}_{2}\}(1+ t)^{-(k+\varrho)}. \tag{4.35}\] Proof.: From (4.31), (4.31) and (4.33), one has (4.34). The decay estimtates (4.35) can be proved by a similar way as Proposition 3.5, we omit its proof for brevity. Besides, we need to deduce the temporal time decay rates for \(\mathcal{E}_{1\to N_{0}-1,\overline{t}_{0}}(t).\) **Proposition 4.8**.: _Under_ **Assumption I** _and_ **Assumption II**_, it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\mathcal{E}_{1\to N_{0}-1,\overline{t}_ {0}}(t)+\mathcal{E}_{1\to N_{0}}(t)\right\}+\mathcal{D}_{1\to N_{0}-1, \overline{t}_{0}}(t)+\mathcal{D}_{1\to N_{0}}(t)\lesssim 0. 
\tag{4.36}\] _Furthermore, one also deduces that_ \[\mathcal{E}_{1\to N_{0}-1,\overline{t}_{0}}(t)+\mathcal{E}_{1\to N_{0}}(t) \lesssim\min\{\mathcal{M}_{1},\mathcal{M}_{2}\}(1+t)^{-1-\varrho}. \tag{4.37}\] Proof.: Recalling the definition of \(\mathcal{E}_{1\to N_{0}-1,\overline{t}_{0}}(t)\) in (2.18), applying \(\partial_{\beta}^{\alpha}\) into (3.20), integrating the result identity over \(\mathbb{R}_{x}^{3}\times\mathbb{R}_{v}^{3}\) by multiplying \(\overline{w}_{\overline{\ell}_{0}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\) with \(|\alpha|\geq 1\), then we have \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\overline{w}_{\overline {\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_ {\varepsilon}\|^{2}+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\left\|\langle v \overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}\] \[+\frac{1}{\varepsilon^{2}}\left(\partial_{\beta}^{\alpha} \mathscr{L}f_{\varepsilon},\overline{w}_{\overline{\ell}_{0}}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[=-\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}[v\cdot \nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}],\overline{w}_{\overline{ \ell}_{0}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P} \}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{P}(v\cdot\nabla_{x}f_{\varepsilon})-\frac{1}{\varepsilon}v\cdot\nabla _{x}\mathbf{P}f_{\varepsilon}\right],\overline{w}_{\overline{\ell}_{0}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[ \mathbf{I}-\mathbf{P}\right\}\left[q_{0}v\times B_{\varepsilon}\cdot\nabla_{v}f_ {\varepsilon}\right]\right],\overline{w}_{\overline{\ell}_{0}}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\left(\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}\left[ \tfrac{1}{2}q_{0}(E_{\varepsilon}\cdot v)f_{\varepsilon}-q_{0}E_{\varepsilon} \cdot\nabla_{v}f_{\varepsilon}\right],\overline{w}_{\overline{\ell}_{0}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[+\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{T}(f _{\varepsilon},f_{\varepsilon}),\overline{w}_{\overline{\ell}_{0}}^{2}(\alpha, \beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right). \tag{4.38}\] The coercivity estimates on the linear operator \(\mathscr{L}\) yields that \[\frac{1}{\varepsilon^{2}}\left(\partial_{\beta}^{\alpha}\mathscr{ L}f_{\varepsilon},\overline{w}_{\overline{\ell}_{0}}^{2}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\gtrsim\frac{1}{\varepsilon^{2}}\|\overline{w}_{\overline{\ell}_{ 0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\|_{\nu}^{2}-\frac{1}{\varepsilon^{2}}\|\partial^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}. 
\tag{4.39}\] As for the transport term on the right-hand side of (4.38), since \[\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)=\langle v\rangle^{-1}\overline{ w}_{\overline{\ell}_{0}}(\alpha+e_{i},\beta-e_{i}),\] one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\left[v\cdot \nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{ \overline{\ell}_{0}}^{2}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right)\] \[=-\frac{1}{\varepsilon}\int_{\mathbb{R}_{x}^{3}\times\mathbb{R}_{ x}^{3}}\langle v\rangle^{-1}\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\overline{w}_{\overline{\ell}_{0}}(\alpha+e_{i}, \beta-e_{i})\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}dvdx\] \[\left(\partial_{\beta}^{\alpha}\left[E_{\varepsilon}\cdot\nabla_{v} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{\ell_{0}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\mathcal{D}_{1\to N_{0}}(t)\overline{\mathcal{E}}_{N_{0}-1, \overline{\ell}_{0}+\frac{3}{2}}(t)+\frac{\eta}{\varepsilon^{2}}\|\overline{ w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2} \tag{4.44}\] and \[\left(\partial_{\beta}^{\alpha}\left[E_{\varepsilon}\cdot v\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right],\overline{w}_{\ell_{0}}^{2}( \alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\right)\] \[\lesssim\mathcal{D}_{1\to N_{0}}(t)\overline{\mathcal{E}}_{N_{0}-1, \overline{\ell}_{0}+\frac{3}{2}}(t)+\frac{\eta}{\varepsilon^{2}}\|\overline {w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}, \tag{4.45}\] one has \[\frac{1}{\varepsilon}\left(\partial_{\beta}^{\alpha}\mathscr{T}( f_{\varepsilon},f_{\varepsilon}),\overline{w}_{\ell_{0}}^{2}(\alpha,\beta) \partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right)\] \[\lesssim\overline{\mathcal{E}}_{N_{0}-1,\overline{\ell}_{0}}(t) \mathcal{D}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\frac{\eta}{\varepsilon^{2 }}\|\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha} \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{\nu}^{2}. 
\tag{4.46}\]

Now by collecting the above related estimates into (4.38), we arrive at

\[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}+\frac{1}{\varepsilon^{2}}\left\|\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2}\]
\[+\frac{q\vartheta}{(1+t)^{1+\vartheta}}\left\|\langle v\rangle\overline{w}_{\overline{\ell}_{0}}(\alpha,\beta)\partial_{\beta}^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}\]
\[\lesssim\left\|\overline{w}_{\overline{\ell}_{0}}(\alpha+e_{i},\beta-e_{i})\partial_{\beta-e_{i}}^{\alpha+e_{i}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2}+\mathcal{D}_{1\to N_{0}}(t)\overline{\mathcal{E}}_{N_{0}-1,\overline{\ell}_{0}+\frac{5}{2}}(t)+\mathcal{D}_{|\alpha|\to N_{0}}(t). \tag{4.47}\]

Taking the summation of (4.47) over \(|\alpha|+|\beta|\leq N_{0}-1,|\alpha|\geq 1\), one has

\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\mathcal{D}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)\lesssim\mathcal{D}_{1\to N_{0}}(t), \tag{4.48}\]

which, combined with (4.34), yields that

\[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\mathcal{E}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\mathcal{E}_{1\to N_{0}}(t)\right\}+\mathcal{D}_{1\to N_{0}-1,\overline{\ell}_{0}}(t)+\mathcal{D}_{1\to N_{0}}(t)\lesssim 0. \tag{4.49}\]

Then (4.37) follows in a similar way as (4.35), and thus we complete the proof of Proposition 4.8.

### The bound for \(\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}](t)\|^{2}\)

To ensure **Assumption 2**, one also needs the following result:

**Proposition 4.9**.: _Under_ **Assumption 1** _and_ **Assumption 2**_, one has_

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\Lambda^{-\varrho}f_{\varepsilon}\right\|^{2}+\left\|\Lambda^{-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\kappa_{1}G_{f_{\varepsilon}}^{-\varrho}(t)\right)+\frac{1}{\varepsilon^{2}}\left\|\Lambda^{-\varrho}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|_{\nu}^{2}\]
\[+\kappa_{1}\left\|\Lambda^{1-\varrho}\mathbf{P}f_{\varepsilon}\right\|^{2}+\kappa_{1}\left\|\Lambda^{1-\varrho}[E_{\varepsilon},B_{\varepsilon}]\right\|^{2}+\kappa_{1}\left\|\Lambda^{-\varrho}E_{\varepsilon}\right\|_{H_{x}^{1}}^{2}+\kappa_{1}\left\|\Lambda^{-\varrho}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})\right\|_{H^{1}}^{2}\]
\[\lesssim\mathcal{M}_{2}\left\|\nabla_{x}f_{\varepsilon}\right\|_{\nu}^{2}. \tag{4.50}\]

Proof.: This proposition can be proved in a similar way as Proposition 3.6; we omit its proof for brevity.

### The a priori estimates

Now we are ready to construct the a priori estimates:

\[X(T)\equiv\sup_{0\leq t\leq T}\left\{\sum_{n\leq N_{0}}\mathbb{E}_{\ell_{0}}^{(n)}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{E}_{\ell_{1}}^{(n)}(t)+\varepsilon^{2}\mathbb{E}_{\ell_{1}}^{(N)}(t)\right\}\]
\[\quad+\sup_{0\leq t\leq T}\left\{\overline{\mathcal{E}}_{N-1,l_{1}}(t)+\overline{\mathcal{E}}_{N_{0}-1,l_{0}}(t)+\mathcal{E}_{N}(t)+\|\Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2}\right\}\leq\mathcal{M}.
\tag{4.51}\] **Proposition 4.10**.: _Assume that_ * \(N_{0}\geq 5\)_,_ \(N=2N_{0}\)_;_ * \(-1\leq\gamma<0\)_,_ \(\frac{1}{2}\leq\varrho<\frac{3}{2}\)_,_ \(0<\vartheta\leq\frac{2}{3}\rho\)_;_ * \(0<\epsilon_{0}\leq 2(1+\varrho)\)_;_ * \(\sigma_{N,0}=\frac{1+\epsilon_{0}}{2}\)_,_ \(\sigma_{n,0}=0\) _for_ \(n\leq N-1\)_,_ \(\sigma_{n,j+1}-\sigma_{n,j}=\frac{1+\epsilon_{0}}{2}\)_;_ * \(\tilde{l}\) _is a sufficiently large positive constant;_ * \(l_{1}\geq N+\tilde{l}\)_,_ \(\tilde{\ell}\geq\frac{3}{2}\sigma_{N-1,N-1}\)_,_ \(\ell_{1}\geq l_{1}+\tilde{\ell}+\frac{1}{2}\)_,_ \(\overline{\ell}_{0}\geq\ell_{1}+\frac{3}{2}N\)_,_ \(l_{0}\geq\overline{\ell}_{0}+\frac{5}{2}\)_,_ \(\ell_{0}\geq l_{0}+\tilde{\ell}+\frac{1}{2}\)_,_ _under the a priori estimates (4.51), we can deduce that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left\{\sum_{n\leq N_{0}}\mathbb{E }_{\ell_{0}}^{(n)}(t)+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{E}_{\ell_{1}}^{(n)}(t)+ \varepsilon^{2}\mathbb{E}_{\ell_{1}}^{(N)}(t)\right\}\] \[+\frac{\mathrm{d}}{\mathrm{d}t}\left\{\overline{\mathcal{E}}_{N-1,l_{1}}(t)+\overline{\mathcal{E}}_{N_{0}-1,l_{0}}(t)+\mathcal{E}_{N}(t)+\| \Lambda^{-\varrho}[f_{\varepsilon},E_{\varepsilon},B_{\varepsilon}]\|^{2} \right\}+\sum_{n\leq N_{0}}\mathbb{D}_{\ell_{0}}^{(n)}(t)\] \[+\sum_{N_{0}+1\leq n\leq N-1}\mathbb{D}_{\ell_{1}}^{(n)}(t)+ \varepsilon^{2}\mathbb{D}_{\ell_{1}}^{(N)}(t)+\overline{\mathcal{D}}_{N-1,l_{1} }(t)+\overline{\mathcal{D}}_{N_{0}-1,l_{0}}(t)+\mathcal{D}_{N}(t)\lesssim 0\] _holds for all \(0\leq t\leq T\)._ Proof.: Recalling the definition of \(\mathbb{D}_{\ell}^{(N)}(t)\) and \(\overline{\mathcal{D}}_{N-1,l}(t)\), (4.1) tells us that \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_{N}(t)+\mathcal{D}_{N}(t)\] \[\lesssim\left\|E_{\varepsilon}\right\|_{L_{L_{L}^{\infty}}}^{2} \left(1+t\right)^{\sigma_{N,0}}(1+t)^{-\sigma_{N,0}}\left\|\langle v\rangle^{ \frac{3}{2}}\nabla_{x}^{N}f_{\varepsilon}\right\|^{2}\] \[\begin{array}{l}+\|\nabla_{x}B_{\varepsilon}\|^{2}_{L^{\infty}_{x}}\,(1+t)^{ \sigma_{N,1}}(1+t)^{-\sigma_{N,1}}\left\|\langle v\rangle^{\frac{2}{\varepsilon} }\nabla^{N-1}_{x}\nabla_{v}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2 }\\ +\mathcal{E}_{N}(t)\sum_{|\alpha^{\prime}|+|\beta^{\prime}|\leq N-1}\|\partial ^{\alpha^{\prime}}_{\beta^{\prime}}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon} \langle v\rangle^{\frac{3}{2}}\|^{2}\\ \lesssim\mathcal{M}_{1}\varepsilon^{2}\mathbb{D}^{(N)}_{\ell_{1}}(t)+ \mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t).\end{array} \tag{4.52}\] By multiplying \((1+t)^{-\epsilon_{0}}\) into (4.52), one has \[\begin{array}{l}\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{- \epsilon_{0}}\mathcal{E}_{N}(t)\right\}+\epsilon_{0}(1+t)^{-1-\epsilon_{0}} \mathcal{E}_{N}(t)+(1+t)^{-\epsilon_{0}}\mathcal{D}_{N}(t)\\ \lesssim\mathcal{M}_{1}\varepsilon^{2}\mathbb{D}^{(N)}_{\ell_{1}}(t)+ \mathcal{E}_{N}(t)\overline{\mathcal{D}}_{N-1,l_{1}}(t).\end{array} \tag{4.53}\] Thus (4.52) follows from (4.52), (4.53), (4.25), (4.29), (4.30) and (4.50), which complete the proof of this proposition. ## 5. Limit to two fluid incompressible Navier-Stokes-Fourier-Maxwell equations with Ohm's law In this section, we will derive the two fluid incompressible NSFM equations (1.10) with Ohm's law from the perturbed two-species VMB as \(\varepsilon\to 0\). 
As [53], we first introduce the following fluid variables \[\begin{array}{l}\rho_{\varepsilon}=\frac{1}{2}\langle f_{ \varepsilon},q_{2}\sqrt{M}\rangle_{L^{2}_{x}}\,,\,\,u_{\varepsilon}=\frac{1}{2} \langle f_{\varepsilon},q_{2}v\sqrt{M}\rangle_{L^{2}_{x}}\,,\,\,\theta_{ \varepsilon}=\frac{1}{2}\langle f_{\varepsilon},q_{2}(\frac{|v|^{2}}{3}-1) \sqrt{M}\rangle_{L^{2}_{v}}\,,\\ n_{\varepsilon}=\langle f_{\varepsilon},q_{1}\sqrt{M}\rangle_{L^{2}_{v}}\,, \,\,j_{\varepsilon}=\frac{1}{\varepsilon}\langle f_{\varepsilon},q_{1}v\sqrt {M}\rangle_{L^{2}_{v}}\,,\,\,w_{\varepsilon}=\frac{1}{\varepsilon}\langle f _{\varepsilon},q_{1}(\frac{|v|^{2}}{3}-1)\sqrt{M}\rangle_{L^{2}_{v}}\,.\end{array} \tag{5.1}\] We use the similar argument as (A.18), we can deduce the following local conservation laws \[\begin{array}{l}\left\{\begin{array}{l}\partial_{t}\rho_{ \varepsilon}+\frac{1}{\varepsilon}\mathrm{div}_{x}\,u_{\varepsilon}=0\,,\\ \partial_{t}u_{\varepsilon}+\frac{1}{\varepsilon}\nabla_{x}(\rho_{\varepsilon} +\theta_{\varepsilon})+\mathrm{div}_{x}\left\langle\widehat{A}(v)\sqrt{M}\cdot q _{2},\frac{1}{\varepsilon}\mathscr{L}(\frac{f_{\varepsilon}}{2})\right\rangle _{L^{2}_{x}}=\frac{1}{2}(n_{\varepsilon}E_{\varepsilon}+j_{\varepsilon}\times B _{\varepsilon})\,,\\ \partial_{t}\theta_{\varepsilon}+\frac{2}{3}\frac{1}{\varepsilon}\mathrm{div }_{x}\,u_{\varepsilon}+\frac{2}{3}\mathrm{div}_{x}\left\langle\widehat{B}(v) \sqrt{M}\cdot q_{2},\frac{1}{\varepsilon}\mathscr{L}(\frac{f_{\varepsilon}}{2} )\right\rangle_{L^{2}_{v}}=\frac{\varepsilon}{3}j_{\varepsilon}\cdot E_{ \varepsilon}\,,\\ \partial_{t}n_{\varepsilon}+\mathrm{div}_{x}\,j_{\varepsilon}=0\,,\\ \partial_{t}E_{\varepsilon}-\nabla_{x}\times B_{\varepsilon}=-j_{\varepsilon}\,, \\ \partial_{t}B_{\varepsilon}+\nabla_{x}\times E_{\varepsilon}=0\,,\\ \mathrm{div}_{x}\,E_{\varepsilon}=n_{\varepsilon}\,,\quad\mathrm{div}_{x}B_{ \varepsilon}=0\,.\end{array}\right.\end{array} \tag{5.2}\] where \(\mathscr{L}[\widehat{A}(v)\sqrt{M}\cdot q_{2}]=\left(v\otimes v-\frac{|v|^{2}}{ 3}I_{3}\right)\sqrt{M}\cdot q_{2}\in\ker^{\perp}(\mathscr{L})\) with \(\widehat{A}(v)\sqrt{M}\cdot q_{2}\in\ker^{\perp}(\mathscr{L})\cdot q_{2}\) and \(\mathscr{L}[\widehat{B}(v)\sqrt{M}]=\left(v\left(\frac{|v|^{2}}{2}-\frac{5}{2} \right)\right)\sqrt{M}\cdot q_{2}\in\ker^{\perp}(\mathscr{L})\) with \(\widehat{B}(v)\sqrt{M}\cdot q_{2}\in\ker^{\perp}(\mathscr{L})\). Based on Theorem 2.1 and Theorem 2.2, the Cauchy problem to (1.7) admits a global solution \((f_{\epsilon},E_{\epsilon},B_{\epsilon})\) belonging to \(L^{\infty}(\mathbb{R}_{+};H^{N}_{x}L^{2}_{v})\), for the noncutoff cases, from (2.9), one has \[\mathcal{E}_{N,l}(t)+\mathcal{E}_{N}(t)\leq\mathcal{E}_{N,l}(0)+\mathcal{E}_{N} (0)\leq C \tag{5.3}\] and \[\int_{0}^{\infty}\left\{\mathcal{D}_{N}(t)+\mathcal{D}_{N-1,l}(t) \right\}dt\leq C \tag{5.4}\] where \(C\) is independent of \(\varepsilon\). In fact, (5.4) tells us that \[\sum_{|\alpha|\leq N}\int_{0}^{\infty}\left\|\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\right\|^{2}_{D}dt\lesssim C\varepsilon^{2}. \tag{5.5}\] which yields that \[\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\to 0,\,\,\text{strongly in $L^{2}( \mathbb{R}_{+};H^{N}_{x}L^{2}_{D})$ as $\varepsilon\to 0$}. \tag{5.6}\] As for the cutoff cases, from (2.21), one also deduces that \[\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\to 0,\,\,\text{strongly in $L^{2}( \mathbb{R}_{+};H^{N}_{x}L^{2}_{\nu})$ as $\varepsilon\to 0$}. 
\tag{5.7}\] By standard convergent method, there exist \(f,E,B,\rho,u,\theta,n,w,j\) such that \[f_{\epsilon}\to f,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N}L_{v}^{2},\] \[E_{\epsilon}\to E,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[B_{\epsilon}\to B,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[\rho_{\epsilon}\to\rho,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[u_{\epsilon}\to u,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[\theta_{\epsilon}\to\theta,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[n_{\epsilon}\to n,\text{weakly}{-}*\text{ for }t>0,\,\text{ weakly in }H_{x}^{N},\] \[w_{\epsilon}\to w,\text{weakly in }L^{2}(\mathbb{R}^{+};H_{x}^{N}L_{v}^{2}),\] \[j_{\epsilon}\to j,\text{weakly in }L^{2}(\mathbb{R}^{+};H_{x}^{N}L_{v}^{2}). \tag{5.8}\] where \[f=(\rho+n/2)\,\frac{q_{1}+q_{2}}{2}M^{1/2}+(\rho-n/2)\,\frac{q_{ 1}-q_{2}}{2}M^{1/2}\] \[\qquad+u\cdot vq_{2}M^{1/2}+\theta q_{2}(|v|^{2}-3)M^{1/2} \tag{5.9}\] with \[\rho=\tfrac{1}{2}\langle f,q_{2}\sqrt{M}\rangle_{L_{v}^{2}}\,,\,\,u=\tfrac{1} {2}\langle f,q_{2}v\sqrt{M}\rangle_{L_{v}^{2}}\,, \tag{5.10}\] \[\theta=\tfrac{1}{2}\langle f,q_{2}(\tfrac{|v|^{2}}{3}-1)\sqrt{M}\rangle_{L_{v }^{2}}\,,n=\langle f,q_{1}\sqrt{M}\rangle_{L_{v}^{2}}\,.\] In the sense of distributions, utilizing the uniform estimates (5.3), (5.4) and (5.5), applying Aubin-Lions-Simon Theorem and the similar argument as [53], we can deduce that \[(u,\theta,n,E,B)\in C(\mathbb{R}^{+};H_{x}^{N-1})\cap L^{\infty}(\mathbb{R}^{ +};H_{x}^{N})\] satisfy the following two fluid incompressible NSFM equations with Ohm's law \[\left\{\begin{aligned} &\partial_{t}u+u\cdot\nabla_{x}u-\mu \Delta_{x}u+\nabla_{x}p=\tfrac{1}{2}(nE+j\times B)\,,&\text{ \,div}_{x}\,u=0\,,\\ &\partial_{t}\theta+u\cdot\nabla_{x}\theta-\kappa\Delta_{x}\theta =0\,,&\rho+\theta=0\,,\\ &\partial_{t}E-\nabla_{x}\times B=-j\,,&\text{\,div}_{x }\,E=n\,,\\ &\partial_{t}B+\nabla_{x}\times E=0\,,&\text{\,div}_{x }\,B=0\,,\\ & j-nu=\sigma\big{(}-\tfrac{1}{2}\nabla_{x}n+E+u\times B\big{)} \,,& w=\tfrac{3}{2}n\theta\,,\end{aligned}\right.\] with initial data \[u(0,x)=\mathcal{P}u^{in}(x)\,,\,\,\theta(0,x)=\tfrac{3}{9}\theta^{in}(x)- \tfrac{2}{5}\rho^{in}(x)\,,\,\,E(0,x)=E^{in}(x)\,,\,\,B(0,x)=B^{in}(x)\,.\] We omit its detail proof for brevity. Moreover, from the uniform bound (2.9) in Theorem 2.1, (2.21) in Theorem 2.2 and the convergence (5.8), we have \[\sup_{t\geq 0}\big{(}\|f\|_{H_{x}^{N}L_{v}^{2}}^{2}+\|E\|_{H_{x}^{N }}^{2}+\|B\|_{H_{x}^{N}}^{2}\big{)}(t)\] \[\leq\sup_{t\geq 0}\big{(}\|f_{\epsilon}\|_{H_{x}^{N}L_{v}^{2}}^{2}+ \|E_{\epsilon}\|_{H_{x}^{N}}^{2}+\|B_{\epsilon}\|_{H_{x}^{N}}^{2}\big{)}(t)\] \[\lesssim Y_{f_{\epsilon},E_{\epsilon},B_{\epsilon}}^{2}(0)\to Y _{f,E,B}^{2}(0) \tag{5.11}\] as \(\varepsilon\to 0\). 
Hence \[\sup_{t\geq 0}\big{(}\|f\|_{H_{x}^{N}L_{v}^{2}}^{2}+\|E\|_{H_{x}^{N}}^{2}+\|B\|_{ H_{x}^{N}}^{2}\big{)}(t)\lesssim Y_{f,E,B}^{2}(0).\] Recalling the definition of \(f\) in (5.9), there are positive generic constants \(C_{h}\) and \(C_{l}\) such that \[C_{l}\big{(}\|u\|_{H_{x}^{N}}^{2}+\|\theta\|_{H_{x}^{N}}^{2}+\|n\|_{H_{x}^{N}}^{ 2}\big{)}\leq\|f\|_{H_{x}^{N}L_{v}^{2}}^{2}\leq C_{h}\big{(}\|u\|_{H_{x}^{N}}^{2} +\|\theta\|_{H_{x}^{N}}^{2}+\|n\|_{H_{x}^{N}}^{2}\big{)}\,.\] Consequently, the solution \((u,\theta,n,E,B)\) to the two fluid incompressible NSFM equations (1.10) with Ohm's law constructed above admits the energy bound \[\sup_{t\geq 0}\big{(}\|u\|_{H^{N}_{x}}^{2}+\|\theta\|_{H^{N}_{x}}^{2}+\|n\|_{H^{N} _{x}}^{2}+\|E\|_{H^{N}_{x}}^{2}+\|B\|_{H^{N}_{x}}^{2}\big{)}(t)\lesssim Y_{f,E, B}^{2}(0).\] Then the proof of Theorem 2.3 is complete. ## Appendix A Appendix ### Properties of the collision operator for non-cutoff cases Take \[w_{\ell,\theta}=\langle v\rangle^{\ell}e^{\frac{q(v)}{(1+t)^{\theta}}}.\] **Lemma A.1**.: _Let \(\ell\in\mathbb{R}\), \(\eta>0\), \(0<s<1\) and \(\max\{-3,-\frac{3}{2}-2s\}<\gamma<0\)._ (i). _It holds that_ \[\langle\mathscr{L}g,g\rangle\gtrsim|\{\mathbf{I}-\mathbf{P}\}g|_{L^{2}_{D}}^{ 2}\,.\] (A.1) (ii). _It holds that_ \[\left\langle w_{\ell,\theta}^{2}\mathscr{L}g,g\right\rangle\gtrsim|w_{\ell, \theta}g|_{D}^{2}-C|g|_{L^{2}_{B_{C}}}^{2}.\] (A.2) _For \(|\beta|\geq 1\), one has_ \[\left\langle w_{\ell,\theta}^{2}\partial_{\beta}\mathscr{L}g,\partial_{\beta }g\right\rangle\gtrsim|w_{\ell,\theta}\partial_{\beta}g|_{L^{2}_{D}}^{2}-C \sum_{\beta^{\prime}<\beta}\left|w_{\ell,\theta}\partial_{\beta^{\prime}}g \right|_{L^{2}_{D}}^{2}-C|g|_{L^{2}_{B_{C}}}^{2}.\] (A.3) Proof.: (A.1) has been shown in [1]. The relevant coercive estimate (A.2) and (A.3) with exponential weights can be found in [23]. 
**Lemma A.2**.: _For all \(0<s<1\), \(q>0\) and \(\ell\geq 0\),_ * _It holds that_ \[\left|\langle\partial_{\beta}^{\alpha}\mathscr{T}(f,g),w_{\ell, \theta}^{2}\partial_{\beta}^{\alpha}h\rangle\right|\] \[\lesssim \sum\left\{\left|w_{\ell,\vartheta}\partial_{\beta_{1}}^{\alpha_ {1}}f\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|\partial_{\beta_{2}}^{\alpha- \alpha_{1}}g\right|_{L^{2}_{D}}+\left|\partial_{\beta_{2}}^{\alpha-\alpha_{1 }}g\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|w_{\ell,\vartheta}\partial_{ \beta_{1}}^{\alpha_{1}}f\right|_{L^{2}_{D}}\right\}\left|w_{\ell,\vartheta} \partial_{\beta}^{\alpha}h\right|_{L^{2}_{D}}\] \[+\min\left\{\left|w_{\ell,\vartheta}\partial_{\beta_{1}}^{\alpha_ {1}}f\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|\partial_{\beta_{2}}^{\alpha- \alpha_{1}}g\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|\partial_{\beta_{2}}^{ \alpha-\alpha_{1}}g\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|w_{\ell, \vartheta}\partial_{\beta_{1}}^{\alpha_{1}}f\right|_{L^{2}_{\frac{\gamma}{2} +s}}\right\}\left|w_{\ell,\vartheta}\partial_{\beta}^{\alpha}h\right|_{L^{2}_{ D}}\] \[+\sum\left|e^{\frac{q(v)}{(1+t)^{\theta}}}\partial_{\beta_{2}}^{ \alpha_{1}}g\right|_{L^{2}_{x}}\left|w_{\ell,\vartheta}\partial_{\beta_{1}}^{ \alpha-\alpha_{1}}f\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|w_{\ell, \vartheta}\partial_{\beta}^{\alpha}h\right|_{L^{2}_{D}},\] (A.4) _where the summation_ \(\sum\) _is taken over_ \(\alpha_{1}+\alpha_{2}\leq\alpha\) _and_ \(\beta_{1}+\beta_{2}\leq\beta\)_._ _Furthermore, we have_ \[\left|\langle\mathscr{T}(f,g),h\rangle\right| \lesssim \left\{|f|_{L^{2}_{\frac{\gamma}{2}+s}}|g|_{L^{2}_{D}}+|g|_{L^{2}_ {\frac{\gamma}{2}+s}}|f|_{L^{2}_{D}}\right.\] (A.5) \[\left.+\min\left\{|f|_{L^{2}_{v}}|g|_{L^{2}_{\frac{\gamma}{2}+s}},|g| _{L^{2}}f|_{L^{2}_{\frac{\gamma}{2}+s}}\right\}\right\}|h|_{L^{2}_{D}}.\] * _For_ \(0<s<1,\ m\geq 4,\ \ell>0\)_, it holds that_ \[\left|w_{\ell,\vartheta}\partial^{\alpha}\mathscr{T}(f,f)\right|_{L^{2}_{v}} \lesssim\sum_{\alpha_{1}+\alpha_{2}\leq\alpha}\left|w_{\ell,\vartheta} \partial^{\alpha_{1}}f\right|_{H^{3}_{\frac{\gamma}{2}+s}}\left|w_{\ell, \vartheta}\partial^{\alpha_{2}}f\right|_{H^{1}_{\frac{\gamma}{2}+s}},\] (A.6) _or_ \[\left|w_{\ell,\vartheta}\partial^{\alpha}\mathscr{T}(f,f)\right|_{L^{2}_{v}} \lesssim\sum_{\alpha_{1}+\alpha_{2}\leq\alpha}\left|w_{\ell, \vartheta}\partial^{\alpha_{1}}f\right|_{L^{2}_{\frac{\gamma}{2}+s}}\left|w_{ \ell,\vartheta}\partial^{\alpha_{2}}f\right|_{H^{4}_{\frac{\gamma}{2}+s}}.\] (A.7) Proof.: Actually, for the case of weak angular singularity \(0<s<\frac{1}{2}\), (A.4) is shown as in [28]. For the case of strong angular singularity \(\frac{1}{2}\leq s<1\), we can follow the proof strategy of [28] and [23] with slight modifications to obtain (A.4). For the sake of brevity, we omit the detailed proof. (A.5) and (A.6)-(A.7) have been shown in [1] and [23]. 
### Properties of the collision operator for angular cutoff cases Take \[w_{l,\vartheta}=\langle v\rangle^{l}e^{\frac{q(v)^{2}}{(1+t)^{\vartheta}}}.\] **Lemma A.3**.: _Let \(-3<\gamma\leq 1\),_ * _One has_ \[\langle\mathscr{L}g,g\rangle\geq|\{\mathbf{I}-\mathbf{P}\}g|_{L_{\nu}^{2}}^{2}.\] (A.8) * _It holds that_ \[\left\langle w_{l,\vartheta}^{2}\mathscr{L}g,g\right\rangle\gtrsim|w_{l, \vartheta}g|_{L_{\nu}^{2}}^{2}-C\|g\|_{L_{B_{C}}^{2}}^{2}.\] (A.9) _For_ \(|\beta|\geq 1\)_, one has_ \[\left\langle w_{l,\vartheta}^{2}\partial_{\beta}\mathscr{L}g,\partial_{\beta }g\right\rangle\gtrsim|w_{l,\vartheta}\partial_{\beta}g|_{L_{\nu}^{2}}^{2}-C \sum_{\beta^{\prime}<\beta}\left|w_{l,\vartheta}\partial_{\beta^{\prime}}g \right|_{L_{\nu}^{2}}^{2}-C\|g\|_{L_{B_{C}}^{2}}^{2}.\] (A.10) Proof.: The detail proof of this lemma can be found in [39] and [24]. **Lemma A.4**.: _Let \(-3<\gamma<0\), \(N\geq 4\), one has_ \[\partial_{\beta}^{\alpha}\mathscr{T}_{\pm}(g_{1},g_{2})\] \[\equiv\sum C_{\beta}^{\beta_{0}\beta_{1}\beta_{2}}C_{\alpha}^{ \alpha_{1}\alpha_{2}}\mathscr{T}_{\pm}^{0}\left(\partial_{\beta_{1}}^{\alpha _{1}}g_{1},\partial_{\beta_{2}}^{\alpha_{2}}g_{2}\right)\] \[\equiv\sum C_{\beta}^{\beta_{0}\beta_{1}\beta_{2}}C_{\alpha}^{ \alpha_{1}\alpha_{2}}\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|v-u|^{\gamma} \mathbf{b}(\cos\theta)\partial_{\beta_{0}}[\mu(u)^{\frac{1}{2}}]\left\{ \partial_{\beta_{1}}^{\alpha_{1}}g_{1\pm}(v^{\prime})\partial_{\beta_{2}}^{ \alpha_{2}}g_{2\pm}(u^{\prime})\right.\] \[\left.+\partial_{\beta_{1}}^{\alpha_{1}}g_{1\pm}(v^{\prime}) \partial_{\beta_{2}}^{\alpha_{2}}g_{2\mp}(u^{\prime})-\partial_{\beta_{1}}^{ \alpha_{1}}g_{1\pm}(v)\partial_{\beta_{2}}^{\alpha_{2}}g_{2\pm}(u)-\partial_{ \beta_{1}}^{\alpha_{1}}g_{1\pm}(v)\partial_{\beta_{2}}^{\alpha_{2}}g_{2\mp}(u )\right\}d\omega du,\] (A.11) _where \(g_{i}(t,x,v)=[g_{i+}(t,x,v),g_{i-}(t,x,v)]\)\((i=1,2)\) and the summations are taken for all \(\beta_{0}+\beta_{1}+\beta_{2}=\beta,\alpha_{1}+\alpha_{2}=\alpha\)._ (i). 
_When \(|\alpha_{1}|+|\beta_{1}|\leq N\), we have_ \[\left\langle w_{l,\vartheta}^{2}\mathscr{T}_{\pm}^{0}\left( \partial_{\beta_{1}}^{\alpha_{1}}g_{1},\partial_{\beta_{2}}^{\alpha_{2}}g_{2} \right),\partial_{\beta}^{\alpha}g_{3}\right\rangle\] \[\lesssim \sum_{m\leq 2}\left\{\left|\nabla_{v}^{m}\left\{\mu^{\delta} \partial_{\beta_{1}}^{\alpha_{1}}g_{1}\right\}\right|_{L_{v}^{2}}+\left|e^{ \frac{q(v)^{2}}{(1+t)^{\vartheta}}}\partial_{\beta_{1}}^{\alpha_{1}}g_{1} \right|_{L_{v}^{2}}\right\}\left|w_{l,\vartheta}\partial_{\beta_{2}}^{\alpha_ {2}}g_{2}\right|_{L_{\nu}^{2}}\left|w_{l,\vartheta}\partial_{\beta}^{\alpha_{ 2}}g_{3}\right|_{L_{\nu}^{2}}\] (A.12) \[+\sum_{m\leq 2}\left\{\left|\nabla_{v}^{m}\left\{\mu^{\delta} \partial_{\beta_{1}}^{\alpha_{1}}g_{1}\right\}\right|_{L_{v}^{2}}+\left|w_{l, \vartheta}\partial_{\beta_{1}}^{\alpha_{1}}g_{1}\right|_{L_{v}^{2}}\right\} \left|e^{\frac{q(v)^{2}}{(1+t)^{\vartheta}}}\partial_{\beta_{2}}^{\alpha_{2}}g _{2}\right|_{L_{\nu}^{2}}\left|w_{l,\vartheta}\partial_{\beta}^{\alpha}g_{3} \right|_{L_{\nu}^{2}},\] \[\text{or}\] \[\lesssim \sum_{m\leq 2}\left\{\left|\nabla_{v}^{m}\left\{\mu^{\delta} \partial_{\beta_{2}}^{\alpha_{2}}g_{2}\right\}\right|_{L_{v}^{2}}+\left|e^{ \frac{q(v)^{2}}{(1+t)^{\vartheta}}}\partial_{\beta_{2}}^{\alpha_{2}}g_{2} \right|_{L_{v}^{2}}\right\}\left|w_{l,\vartheta}\partial_{\beta_{1}}^{\alpha_{1 }}g_{1}\right|_{L_{\nu}^{2}}\left|w_{l,\vartheta}\partial_{\beta}^{\alpha}g_{3} \right|_{L_{\nu}^{2}}\] (A.13) \[+\sum_{m\leq 2}\left\{\left|\nabla_{v}^{m}\left\{\mu^{\delta} \partial_{\beta_{2}}^{\alpha_{2}}g_{2}\right\}\right|_{L_{v}^{2}}+\left|w_{l, \vartheta}\partial_{\beta_{2}}^{\alpha_{2}}g_{2}\right|_{L_{v}^{2}}\right\} \left|e^{\frac{q(v)^{2}}{(1+t)^{\vartheta}}}\partial_{\beta_{1}}^{\alpha_{1}}g_{ 1}\right|_{L_{\nu}^{2}}\left|w_{l,\vartheta}\partial_{\beta}^{\alpha}g_{3} \right|_{L_{\nu}^{2}}.\] (ii). _Set \(\varsigma(v)=\langle v\rangle^{-\gamma}\equiv\nu(v)^{-1},\ l\geq 0\), it holds that_ \[\begin{split}\left|\varsigma^{l}\mathscr{T}(g_{1},g_{2})\right|_{L _{v}^{2}}^{2}&\lesssim\sum_{|\beta|\leq 2}\left|\varsigma^{l-|\beta|} \partial_{\beta}g_{1}\right|_{L_{\nu}^{2}}^{2}\left|\varsigma^{l}g_{2}\right|_ {L_{\nu}^{2}}^{2},\\ \left|\varsigma^{l}\mathscr{T}(g_{1},g_{2})\right|_{L_{v}^{2}}^{2}& \lesssim\sum_{|\beta|\leq 2}\left|\varsigma^{l}g_{1}\right|_{L_{\nu}^{2}} ^{2}\left|\varsigma^{l-|\beta|}\partial_{\beta}g_{2}\right|_{L_{\nu}^{2}}^{2}. \end{split}\] (A.14) Proof.: The detail proof of this lemma can be found in [24]. ### The proof of Lemma 3.2 Proof.: Note that the proof here is not only valid for the non-cutoff cases, but also for the grad cutoff cases. For the sake of convenience, we use \(\|\cdot\|_{D\lor\nu}\) to denote the dissipative norm \(\|\cdot\|_{D}\) under non-cutoff cases or \(\|\cdot\|_{\nu}\) under cutoff cases. 
By applying the macro-micro decomposition (1.8) introduced in [40] and by defining moment functions \(\mathcal{A}_{mj}(f_{\varepsilon})\) and \(\mathcal{B}_{j}(f_{\varepsilon}),\ 1\leq m,j\leq 3\), by \[\mathcal{A}_{mj}(f_{\varepsilon})=\int_{\mathbb{R}^{3}}\left(v_{m}v_{j}-1 \right)M^{1/2}f_{\varepsilon}dv,\quad\mathcal{B}_{j}(f_{\varepsilon})=\frac{ 1}{10}\int_{\mathbb{R}^{3}}\left(|v|^{2}-5\right)v_{j}M^{1/2}f_{\varepsilon}dv,\] one can then derive from (1.7) a fluid-type system of equations \[\begin{cases}\partial_{t}\rho_{\varepsilon}^{\pm}+\frac{1}{ \varepsilon}\nabla_{x}\cdot u_{\varepsilon}+\frac{1}{\varepsilon}\nabla_{x} \cdot\left\langle vM^{1/2},\{\mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{ \varepsilon}\right\rangle=\left\langle M^{1/2},g_{\pm}\right\rangle,\\ \partial_{t}\left(u_{\varepsilon,i}+\left\langle v_{i}M^{1/2},\{\mathbf{I}_{ \pm}-\mathbf{P}_{\pm}\}f_{\varepsilon}\right\rangle\right)+\frac{1}{ \varepsilon}\partial_{i}\left(\rho_{\varepsilon}^{\pm}+2\theta_{\varepsilon} \right)\mp\frac{1}{\varepsilon}E_{\varepsilon,i}\\ \qquad+\frac{1}{\varepsilon}\nabla_{x}\cdot\left\langle vv_{i}M^{1/2},\{ \mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{\varepsilon}\right\rangle=\left\langle v _{i}M^{1/2},g_{\pm}+\frac{1}{\varepsilon^{2}}\mathscr{L}_{\pm}f_{\varepsilon} \right\rangle,\\ \partial_{t}\left(\theta_{\varepsilon}+\frac{1}{6}\left\langle(|v|^{2}-3)M^{1/ 2},\{\mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{\varepsilon}\right\rangle\right)+ \frac{1}{3\varepsilon}\nabla_{x}\cdot u_{\varepsilon}\\ \qquad+\frac{1}{6\varepsilon}\nabla_{x}\left\langle(|v|^{2}-3)vM^{1/2},\{ \mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{\varepsilon}\right\rangle=\left\langle (|v|^{2}-3)M^{1/2},g_{\pm}-\frac{1}{\varepsilon^{2}}\mathscr{L}_{\pm}f_{ \varepsilon}\right\rangle,\end{cases}\] (A.15) and \[\begin{cases}\partial_{t}[\mathcal{A}_{ii}(\{\mathbf{I}_{\pm}- \mathbf{P}_{\pm}\}f_{\varepsilon})+2\theta_{\varepsilon}]+2\frac{1}{\varepsilon }\partial_{i}u_{\varepsilon,i}=\mathcal{A}_{ii}(r_{\pm}+g_{\pm}),\\ \partial_{t}\mathcal{A}_{ij}(\{\mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{ \varepsilon})+\frac{1}{\varepsilon}\partial_{j}u_{\varepsilon,i}+\frac{1}{ \varepsilon}\partial_{i}u_{\varepsilon,j}+\frac{1}{\varepsilon}\nabla_{x} \cdot\langle vM^{1/2},\{\mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{\varepsilon} \rangle=\mathcal{A}_{ij}(r_{\pm}+g_{\pm}),\\ \partial_{t}\mathcal{B}_{j}(\{\mathbf{I}_{\pm}-\mathbf{P}_{\pm}\}f_{ \varepsilon})+\frac{1}{\varepsilon}\partial_{j}\theta_{\varepsilon}=\mathcal{ B}_{j}(r_{\pm}+g_{\pm}),\end{cases}\] (A.16) where \[r_{\pm} =-\frac{1}{\varepsilon}v\cdot\nabla_{x}\{\mathbf{I}_{\pm}-\mathbf{ P}_{\pm}\}f_{\varepsilon}-\frac{1}{\varepsilon^{2}}\mathscr{L}_{\pm}f_{\varepsilon},\] (A.17) \[g_{\pm} =\frac{1}{2}v\cdot E_{\varepsilon}f_{\varepsilon,\pm}\mp(E_{ \varepsilon}+\frac{1}{\varepsilon}v\times B_{\varepsilon})\cdot\nabla_{v}f_{ \varepsilon,\pm}+\frac{1}{\varepsilon}\mathscr{F}_{\pm}(f_{\varepsilon},f_{ \varepsilon}).\] Setting \[G\equiv\left\langle vM^{1/2},\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1 }\right\rangle,\] we can get from (A.15)-(A.16) that \[\begin{cases}\partial_{t}\left(\frac{\rho_{\varepsilon}^{+}+\rho_{ \varepsilon}^{-}}{2}\right)+\frac{1}{\varepsilon}\nabla_{x}\cdot u_{ \varepsilon}=0\\ \partial_{t}u_{\varepsilon,i}+\frac{1}{\varepsilon}\partial_{i}\left(\frac{ \rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-}}{2}+2\theta_{\varepsilon}\right)+ \frac{1}{2\varepsilon}\sum\limits_{j=1}^{3}\partial_{j}\mathcal{A}_{ij}(\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])=\frac{\rho_{\varepsilon}^{+} 
-\rho_{\varepsilon}^{-}}{2}E_{i}+\frac{1}{\varepsilon}[G\times B]_{i},\\ \partial_{t}\theta_{\varepsilon}+\frac{1}{3\varepsilon}\nabla_{x}\cdot u_{ \varepsilon}+\frac{5}{6\varepsilon}\sum\limits_{i=1}^{3}\partial_{i} \mathcal{B}_{i}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])=\frac{1 }{6}G\cdot E_{\varepsilon},\end{cases}\] (A.18) and \[\begin{cases}\partial_{t}[\frac{1}{2}\mathcal{A}_{ij}(\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\cdot[1,1])+2\theta_{\varepsilon}\delta_{ij}]+\frac{1}{\varepsilon} \partial_{j}u_{\varepsilon,i}+\frac{1}{\varepsilon}\partial_{i}u_{\varepsilon,j}=\frac{1}{2}\mathcal{A}_{ij}(r_{+}+r_{-}+g_{+}+g_{-}),\\ \frac{1}{2}\partial_{t}\mathcal{B}_{j}(\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\cdot[1,1])+\frac{1}{\varepsilon}\partial_{j}\theta_{\varepsilon }=\frac{1}{2}\mathcal{B}_{j}(r_{+}+r_{-}+g_{+}+g_{-}).\end{cases}\] (A.19) Moreover, by using the third equation of (A.18) to replace \(\partial_{t}\theta\) in the first equation of (A.19), one has \[\begin{split}&\frac{1}{2}\partial_{t}\mathcal{A}_{ij}(\{\mathbf{ I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])+\frac{1}{\varepsilon}\partial_{j}u_{ \varepsilon,i}+\frac{1}{\varepsilon}\partial_{i}u_{\varepsilon,j}-\frac{2}{3 \varepsilon}\delta_{ij}\nabla_{x}\cdot u_{\varepsilon}\\ &-\frac{5}{3\varepsilon}\delta_{ij}\nabla_{x}\cdot\mathcal{B}(\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])=\frac{1}{2}\mathcal{A}_{ij} (r_{+}+r_{-}+g_{+}+g_{-})-\frac{1}{3}\delta_{ij}G\cdot E_{\varepsilon}.\end{split}\] (A.20) In order to further obtain the dissipation rate related to \(\rho_{\varepsilon}^{\pm}\) from the formula \[|\rho_{\varepsilon}^{+}|^{2}+|\rho_{\varepsilon}^{-}|^{2}=\frac{|\rho_{ \varepsilon}^{+}+\rho_{\varepsilon}^{-}|^{2}}{2}+\frac{|\rho_{\varepsilon}^{+ }-\rho_{\varepsilon}^{-}|^{2}}{2},\] we need to consider the dissipation of \(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}\). 
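Indeed, expanding the squares shows that the cross terms cancel:

\[\frac{|\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-}|^{2}}{2}+\frac{|\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}|^{2}}{2}=\frac{\left(|\rho_{\varepsilon}^{+}|^{2}+2\rho_{\varepsilon}^{+}\rho_{\varepsilon}^{-}+|\rho_{\varepsilon}^{-}|^{2}\right)+\left(|\rho_{\varepsilon}^{+}|^{2}-2\rho_{\varepsilon}^{+}\rho_{\varepsilon}^{-}+|\rho_{\varepsilon}^{-}|^{2}\right)}{2}=|\rho_{\varepsilon}^{+}|^{2}+|\rho_{\varepsilon}^{-}|^{2}.\]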
For that purpose, one can get from (A.18)\({}_{1}\) and (A.18)\({}_{2}\) that \[\begin{cases}\partial_{t}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})+ \frac{1}{\varepsilon}\nabla_{x}\cdot G=0,\\ \partial_{t}G+\frac{1}{\varepsilon}\nabla_{x}(\rho_{\varepsilon}^{+}-\rho_{ \varepsilon}^{-})-\frac{2}{\varepsilon}E_{\varepsilon}+\frac{1}{\varepsilon} \nabla_{x}\cdot\mathcal{A}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1}) \\ =E(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})+\frac{2}{ \varepsilon}u_{\varepsilon}\times B_{\varepsilon}+\left\langle[v,-v]M^{1/2}, \frac{1}{\varepsilon^{2}}\mathscr{L}f_{\varepsilon}+\frac{1}{\varepsilon} \mathscr{T}(f_{\varepsilon},f_{\varepsilon})\right\rangle.\end{cases}\] (A.21) Applying \(\partial^{\alpha}\) to (A.19)\({}_{2}\), and multiplying to the identity with \(\varepsilon\partial^{\alpha}\partial_{j}\theta\), and integrating the identity result over \(\mathbb{R}_{x}^{3}\), one has \[\|\partial^{\alpha}\nabla_{x}\theta_{\varepsilon}\|^{2}=\sum_{j=1 }^{3}\left(\frac{1}{\varepsilon}\partial^{\alpha}\partial_{j}\theta_{ \varepsilon},\varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon}\right)\] \[\lesssim -\sum_{j=1}^{3}\left(\frac{1}{2}\partial_{t}\partial^{\alpha} \mathcal{B}_{j}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]), \varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon}\right)+\sum_{j=1 }^{3}\left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j}(r_{+}+r_{-}+g_{+}+g_{- }),\varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon}\right)\] \[= -\frac{\mathrm{d}}{\mathrm{d}t}\sum_{j=1}^{3}\left(\frac{1}{2} \partial^{\alpha}\mathcal{B}_{j}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[ 1,1]), \varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon}\right)+\sum_{j=1}^{3 }\left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j}(\{\mathbf{I}-\mathbf{P}\} f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha}\partial_{j}\partial_{t}\theta_{ \varepsilon}\right)\] \[+\sum_{j=1}^{3}\left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j} (r_{+}+r_{-}+g_{+}+g_{-}),\varepsilon\partial^{\alpha}\partial_{j}\theta_{ \varepsilon}\right).\] (A.22) For the second term on the right hand of the second equality in (A.22), we have from (A.18)\({}_{3}\) that \[\sum_{j=1}^{3}\left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j} (\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{ \alpha}\partial_{j}\partial_{t}\theta_{\varepsilon}\right)\] \[= \left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j}(\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha}\partial_{j} \bigg{\{}-\frac{1}{3\varepsilon}\nabla_{x}\cdot u_{\varepsilon}-\frac{5}{6 \varepsilon}\sum_{i=1}^{3}\partial_{i}\mathcal{B}_{\varepsilon,i}(\{\mathbf{I} -\mathbf{P}\}f_{\varepsilon}\cdot[1,1])+\frac{1}{6}G\cdot E_{\varepsilon} \bigg{\}}\right)\] \[\lesssim \eta\left\|\partial^{\alpha}\nabla_{x}u_{\varepsilon}\right\|^{2}+ \|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{H^{1}_{x}L^{2}_{ \mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{ \mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}_{\mathrm{D}}}}}}}}}}}}}+ \mathcal{E}_{N}(t)\mathcal{D}_{N}(t),\] while for the last term on the right hand of (A.22), we can deduce that \[\sum_{j=1}^{3}\left(\frac{1}{2}\partial^{\alpha}\mathcal{B}_{j}(r_{ +}+r_{-}+g_{+}+g_{-}),\varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon}\right)\] \[\lesssim \eta\left\|\partial^{\alpha}\nabla_{x}u_{\varepsilon}\right\|^{2 
}+\|\partial^{\alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{H^{1}_{x}L^{ 2}_{D\vee\nu}}+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t).\] Thus we can get that \[\frac{\mathrm{d}}{\mathrm{d}t}G^{f_{\varepsilon}}_{\theta_{\varepsilon}}(t)+ \|\partial^{\alpha}\nabla_{x}\theta_{\varepsilon}\|^{2}\lesssim\eta\left\| \partial^{\alpha}\nabla_{x}u_{\varepsilon}\right\|^{2}+\|\partial^{\alpha}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{H^{1}_{x}L^{2}_{D\vee\nu}}+ \mathcal{E}_{N}(t)\mathcal{D}_{N}(t).\] (A.23) Here \[G^{f_{\varepsilon}}_{\theta_{\varepsilon}}(t)\equiv\sum_{j=1}^{3}\left(\frac{ 1}{2}\partial^{\alpha}\mathcal{B}_{j}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\cdot[1,1]),\varepsilon\partial^{\alpha}\partial_{j}\theta_{\varepsilon} \right).\] On the other hand, by using (A.20), one has \[-\left\{\frac{1}{2}\partial_{i}\partial_{t}\mathcal{A}_{ii}(\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])+\sum_{j}\partial_{j} \partial_{t}\mathcal{A}_{ji}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1 ])\right\}\] \[-\frac{2}{\varepsilon}\Delta u_{\varepsilon,i}-\frac{2}{ \varepsilon}\partial_{i}\partial_{i}u_{\varepsilon,i}+\frac{5}{\varepsilon} \partial_{i}\nabla_{x}\cdot\mathcal{B}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon }\cdot[1,1])\] \[=-\frac{1}{2}\partial_{i}\mathcal{A}_{ii}(r_{+}+r_{-}+g_{+}+g_{- })-\sum_{j}\partial_{j}\mathcal{A}_{ji}(r_{+}+r_{-}+g_{+}+g_{-})+\partial_{i} \left[G\cdot E_{\varepsilon}\right].\] (A.24) Applying \(\partial^{\alpha}\) to the above equality, multiplying it with \(\epsilon\partial^{\alpha}u_{\varepsilon,i}\), and integrating the identity result over \(\mathbb{R}^{3}_{x}\), then one has \[2\|\partial^{\alpha}\nabla_{x}u_{\varepsilon}\|^{2}+2\sum_{i}\| \partial^{\alpha}\partial_{i}u_{\varepsilon,i}\|^{2}\] \[=\sum_{i}\left(-\frac{2}{\varepsilon}\partial^{\alpha}\Delta u_{ \varepsilon,i},\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right)+\sum_{i} \left(-\frac{2}{\varepsilon}\partial^{\alpha}\partial_{i}\partial_{i}u_{ \varepsilon,i},\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right)\] \[=\sum_{i}\left(-\frac{2}{\varepsilon}\partial^{\alpha}\Delta u_{ \varepsilon,i}-\frac{2}{\varepsilon}\partial^{\alpha}\partial_{i}\partial_{i}u _{\varepsilon,i},\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right)\] \[=\sum_{i}\left(\frac{1}{2}\partial^{\alpha}\partial_{i} \partial_{t}\mathcal{A}_{ii}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1 ])+\sum_{j}\partial^{\alpha}\partial_{j}\partial_{t}\mathcal{A}_{ji}(\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha }u_{\varepsilon,i}\right)\] \[\quad+\sum_{i}\left(-\frac{1}{2}\partial^{\alpha}\partial_{i} \mathcal{A}_{ii}(r_{+}+r_{-}+g_{+}+g_{-})-\sum_{j}\partial^{\alpha}\partial_{j} \mathcal{A}_{ji}(r_{+}+r_{-}+g_{+}+g_{-}),\varepsilon\partial^{\alpha}u_{ \varepsilon,i}\right)\] \[\quad+\sum_{i}\left(\partial^{\alpha}\partial_{i}\left[G\cdot E_{ \varepsilon}\right]-\frac{5}{\varepsilon}\partial^{\alpha}\partial_{i}\nabla_{ x}\cdot\mathcal{B}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon \partial^{\alpha}u_{\varepsilon,i}\right)\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\sum_{i}\left(\frac{1}{2}\partial ^{\alpha}\partial_{i}\mathcal{A}_{ii}(\{\mathbf{I}-\mathbf{P}\}f_{ \varepsilon}\cdot[1,1])+\sum_{j}\partial^{\alpha}\partial_{j}\mathcal{A}_{ji}( \{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{ \alpha}u_{\varepsilon,i}\right)\] \[\quad-\sum_{i}\left(\frac{1}{2}\partial^{\alpha}\partial_{i} 
\mathcal{A}_{ii}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])+\sum_{j}\partial^{\alpha}\partial_{j}\mathcal{A}_{ji}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha}\partial_{t}u_{\varepsilon,i}\right)\]
\[\quad+\sum_{i}\left(-\frac{1}{2}\partial^{\alpha}\partial_{i}\mathcal{A}_{ii}(r_{+}+r_{-}+g_{+}+g_{-})-\sum_{j}\partial^{\alpha}\partial_{j}\mathcal{A}_{ji}(r_{+}+r_{-}+g_{+}+g_{-}),\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right)\]
\[\quad+\sum_{i}\left(\partial^{\alpha}\partial_{i}\left[G\cdot E_{\varepsilon}\right]-\frac{5}{\varepsilon}\partial^{\alpha}\partial_{i}\nabla_{x}\cdot\mathcal{B}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right)\]
(A.25)

The other terms can be bounded by

\[\lesssim\eta\|\partial^{\alpha}\nabla_{x}u_{\varepsilon}\|^{2}+\|\partial^{\alpha}\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t).\]
(A.26)

Consequently, one has

\[\frac{\mathrm{d}}{\mathrm{d}t}G_{u_{\varepsilon}}^{f_{\varepsilon}}(t)+\|\partial^{\alpha}\nabla_{x}u_{\varepsilon}\|^{2}+\sum_{i}\|\partial^{\alpha}\partial_{i}u_{\varepsilon,i}\|^{2}\]
\[\lesssim\eta\|\partial^{\alpha}\nabla_{x}\{\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-}\}\|^{2}+\eta\|\partial^{\alpha}\nabla_{x}\theta_{\varepsilon}\|^{2}+\|\partial^{\alpha}\nabla_{x}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t),\]
(A.27)

where \(G_{u_{\varepsilon}}^{f_{\varepsilon}}(t)\) denotes

\[G_{u_{\varepsilon}}^{f_{\varepsilon}}(t)\equiv\sum_{i}\left(\frac{1}{2}\partial^{\alpha}\partial_{i}\mathcal{A}_{ii}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1])+\sum_{j}\partial^{\alpha}\partial_{j}\mathcal{A}_{ji}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot[1,1]),\varepsilon\partial^{\alpha}u_{\varepsilon,i}\right).\]

Next, we estimate \(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-}\).
To this end, we have from (A.18) a corresponding estimate (A.28) for \(\|\partial^{\alpha}\nabla_{x}(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})\|^{2}\), obtained in the same way as (A.23) and (A.27). Then we can deduce from (A.23), (A.27), and (A.28) that

\[\frac{\mathrm{d}}{\mathrm{d}t}G^{f}_{\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-},u_{\varepsilon},\theta_{\varepsilon}}(t)+\left\|\partial^{\alpha}\nabla_{x}[\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-},u_{\varepsilon},\theta_{\varepsilon}]\right\|^{2}\lesssim\left\|\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\right\|^{2}_{H^{N}_{x}L^{2}_{D\vee\nu}}+\mathcal{E}_{N}(t)\mathcal{D}_{N}(t).\]
(A.29)

Finally, for the corresponding estimate on \(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-}\), we have from
(A.21) that

\[\frac{2}{\varepsilon}E_{\varepsilon}=\partial_{t}G+\frac{1}{\varepsilon}\nabla_{x}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})+\frac{1}{\varepsilon}\nabla_{x}\cdot\mathcal{A}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1})-E_{\varepsilon}(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})\]
\[-\frac{2}{\varepsilon}u_{\varepsilon}\times B_{\varepsilon}-\left\langle[v,-v]M^{1/2},\frac{1}{\varepsilon^{2}}\mathscr{L}f_{\varepsilon}-\frac{1}{\varepsilon}\mathscr{T}(f_{\varepsilon},f_{\varepsilon})\right\rangle.\]
(A.33)

Applying \(\partial^{\alpha}\) to the above equality and multiplying it with \(\varepsilon\partial^{\alpha}E_{\varepsilon}\), one has

\[2\|\partial^{\alpha}E_{\varepsilon}\|^{2}=\left(\partial^{\alpha}\left\{\frac{2}{\varepsilon}E_{\varepsilon}\right\},\varepsilon\partial^{\alpha}E_{\varepsilon}\right)\]
\[=\left(\partial^{\alpha}\left\{\partial_{t}G+\frac{1}{\varepsilon}\nabla_{x}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})+\frac{1}{\varepsilon}\nabla_{x}\cdot\mathcal{A}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1})-E_{\varepsilon}(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})\right\},\varepsilon\partial^{\alpha}E_{\varepsilon}\right)\]
\[\quad+\left(\partial^{\alpha}\left\{-\frac{2}{\varepsilon}u_{\varepsilon}\times B_{\varepsilon}-\left\langle[v,-v]M^{1/2},\frac{1}{\varepsilon^{2}}\mathscr{L}f_{\varepsilon}-\frac{1}{\varepsilon}\mathscr{T}(f_{\varepsilon},f_{\varepsilon})\right\rangle\right\},\varepsilon\partial^{\alpha}E_{\varepsilon}\right)\]
\[=\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}G,\varepsilon\partial^{\alpha}E_{\varepsilon}\right)-\left(\partial^{\alpha}G,\varepsilon\partial^{\alpha}\left\{\nabla_{x}\times B_{\varepsilon}-\frac{1}{\varepsilon}G\right\}\right)\]
\[\quad+\left(\partial^{\alpha}\left\{\frac{1}{\varepsilon}\nabla_{x}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})+\frac{1}{\varepsilon}\nabla_{x}\cdot\mathcal{A}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1})-E_{\varepsilon}(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})\right\},\varepsilon\partial^{\alpha}E_{\varepsilon}\right)\]
\[\quad+\left(\partial^{\alpha}\left\{-\frac{2}{\varepsilon}u_{\varepsilon}\times B_{\varepsilon}-\left\langle[v,-v]M^{1/2},\frac{1}{\varepsilon^{2}}\mathscr{L}f_{\varepsilon}-\frac{1}{\varepsilon}\mathscr{T}(f_{\varepsilon},f_{\varepsilon})\right\rangle\right\},\varepsilon\partial^{\alpha}E_{\varepsilon}\right)\]
\[=\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}G,
\varepsilon\partial^{\alpha}E_{\varepsilon}\right)+\|\partial^{\alpha}G\|^{2} -\left(\partial^{\alpha}G,\varepsilon\partial^{\alpha}\left\{\nabla_{x}\times B _{\varepsilon}\right\}\right)\] \[\quad+\left(\partial^{\alpha}\left\{\frac{1}{\varepsilon}\nabla_ {x}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon}^{-})+\frac{1}{\varepsilon} \nabla_{x}\cdot\mathcal{A}(\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\cdot q_{1} )-E_{\varepsilon}(\rho_{\varepsilon}^{+}+\rho_{\varepsilon}^{-})\right\}, \varepsilon\partial^{\alpha}E_{\varepsilon}\right)\] \[\quad+\left(\partial^{\alpha}\left\{-\frac{2}{\varepsilon}u_{ \varepsilon}\times B_{\varepsilon}-\left\langle[v,-v]M^{1/2},\frac{1}{ \varepsilon^{2}}\mathscr{L}f_{\varepsilon}-\frac{1}{\varepsilon}\mathscr{T}( f_{\varepsilon},f_{\varepsilon})\right\rangle\right\},\varepsilon\partial^{ \alpha}E_{\varepsilon}\right)\] \[\lesssim\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}G, \varepsilon\partial^{\alpha}E_{\varepsilon}\right)-\|\partial^{\alpha}(\rho_{ \varepsilon}^{+}-\rho_{\varepsilon}^{-})\|^{2}+\|\nabla_{x}G\|_{H_{x}^{N}}^{2} +\eta\|\nabla_{x}B_{\varepsilon}\|_{H_{x}^{N-2}}^{2}+\eta\|\partial^{\alpha}E_ {\varepsilon}\|^{2}\] \[\quad+\frac{1}{\varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\|\partial^{\alpha}\nabla_{x}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\mathcal{E}_{N}(t) \mathcal{D}_{N}(t)\] where \(|\alpha|\leq N-1\). Consequently \[-\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}G, \varepsilon\partial^{\alpha}E_{\varepsilon}\right)+2\|\partial^{\alpha}E_{ \varepsilon}\|^{2}+\|\partial^{\alpha}(\rho_{\varepsilon}^{+}-\rho_{\varepsilon }^{-})\|^{2}\] \[\lesssim\|\nabla_{x}G\|_{H_{x}^{N}}^{2}+\eta\|\nabla_{x}B_{ \varepsilon}\|_{H_{x}^{N-2}}^{2}\] \[\quad+\frac{1}{\varepsilon^{2}}\|\partial^{\alpha}\{\mathbf{I}- \mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\|\partial^{\alpha}\nabla_{x}\{ \mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\mathcal{E}_{N}(t) \mathcal{D}_{N}(t).\] (A.35) For \(B_{\varepsilon}\), it follows that for \(|\alpha|\leq N-2\) \[\|\partial^{\alpha}\nabla_{x}B_{\varepsilon}\|^{2} =\left(\partial^{\alpha}\nabla_{x}B_{\varepsilon},\partial^{\alpha} \nabla_{x}B_{\varepsilon}\right)\] \[=\left(\partial^{\alpha}\nabla_{x}\times B_{\varepsilon},\partial^{ \alpha}\nabla_{x}\times B_{\varepsilon}\right)\] \[=\left(\partial^{\alpha}\left\{\partial_{t}B_{\varepsilon}+\frac{1}{ \varepsilon}G\right\},\partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}E_{ \varepsilon},\partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)-\left( \partial^{\alpha}E_{\varepsilon},\partial^{\alpha}\nabla_{x}\times\partial_{t}B_{ \varepsilon}\right)+\left(\partial^{\alpha}\left\{\frac{1}{\varepsilon}G\right\}, \partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}E_{ \varepsilon},\partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)+\|\partial^{ \alpha}\nabla_{x}\times E_{\varepsilon}\|^{2}+\left(\partial^{\alpha}\left\{ \frac{1}{\varepsilon}G\right\},\partial^{\alpha}\nabla_{x}\times B_{ \varepsilon}\right)\] \[\lesssim\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}E_{ \varepsilon},\partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)+\|\partial^{ \alpha}\nabla_{x}\times E_{\varepsilon}\|^{2}+\eta\|\partial^{\alpha}\nabla_{x} \times B_{\varepsilon}\|^{2}+\frac{1}{\varepsilon^{2}}\|\partial^{\alpha}G\|^{2}\] That is 
\[-\frac{\mathrm{d}}{\mathrm{d}t}\left(\partial^{\alpha}E_{\varepsilon}, \partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\right)+\|\partial^{\alpha} \nabla_{x}B_{\varepsilon}\|^{2}\] \[\lesssim\|\partial^{\alpha}\nabla_{x}\times E_{\varepsilon}\|^{2 }+\eta\|\partial^{\alpha}\nabla_{x}\times B_{\varepsilon}\|^{2}+\frac{1}{ \varepsilon^{2}}\|\partial^{\alpha}G\|^{2}\] (A.36) for \(|\alpha|\leq N-2\). For sufficiently small \(\kappa>0\), (A.35)+\(\kappa\)(A.36) gives \[\frac{\mathrm{d}}{\mathrm{d}t}G_{E_{\varepsilon},B_{\varepsilon} }(t)+\left\|E_{\varepsilon}\right\|_{H_{x}^{N-1}}^{2}+\left\|\nabla_{x}B_{ \varepsilon}\right\|_{H_{x}^{N-2}}^{2}+\left\|\rho_{\varepsilon}^{+}-\rho_{ \varepsilon}^{-}\right\|_{H_{x}^{N-1}}^{2}\] (A.37) \[\lesssim \frac{1}{\varepsilon^{2}}\sum_{|\alpha|\leq N}\|\partial^{ \alpha}\{\mathbf{I}-\mathbf{P}\}f_{\varepsilon}\|_{D\vee\nu}^{2}+\mathcal{E} _{N}(t)\mathcal{D}_{N}(t).\] Here we set \[G_{E_{\varepsilon},B_{\varepsilon}}(t)=-\left\{\sum_{|\alpha|\leq N-1}( \partial^{\alpha}G,\varepsilon\partial^{\alpha}E_{\varepsilon})+\kappa\sum_{ |\alpha|\leq N-2}(\partial^{\alpha}E_{\varepsilon},\partial^{\alpha}\nabla_{x }\times B_{\varepsilon})\right\}.\] A proper combination of (A.32) and (A.37) gives (3.10). This completes the proof of Lemma 3.2. ## Acknowledgment The research of Ning Jiang was supported by grants from the National Natural Science Foundation of China under contract No.11471181 and No.11731008. This work is also supported by the Strategic Priority Research Program of Chinese Academy of Sciences, Grant No.XDA25010404. The research of Yuanjie Lei was supported by the National Natural Science Foundation of China under contract No.11971187 and No.12171176.
2303.10362
The H$α$ broadband photometric reverberation mapping of four Seyfert 1 galaxies
Broadband photometric reverberation mapping (PRM) has been investigated for AGNs in recent years, but mostly on accretion disk continuum RM. Due to the small fraction of broad emission lines in the broadband, PRM for emission lines is very challenging. Here we present an ICCF-Cut method for broadband PRM to obtain the H$\alpha$ broad line lag and apply it to four Seyfert 1 galaxies, MCG+08-11-011, NGC 2617, 3C 120 and NGC 5548. All of them have high quality broadband lightcurves with daily/sub-daily cadence, which enables us to extract H$\alpha$ lightcurves from the line band by subtracting the contributions from the continuum and host galaxy. Their extracted H$\alpha$ lightcurves are compared with the lagged continuum band lightcurves, as well as the lagged H$\beta$ lightcurves obtained by spectroscopic RM (SRM) at the same epochs. The consistency of these lightcurves and the comparison with the SRM H$\beta$ lags provide support for the H$\alpha$ lags of these AGNs, in a range from 9 to 19 days, obtained by the ICCF-Cut, JAVELIN and $\chi^2$ methods. The simulations to evaluate the reliability of H$\alpha$ lags and the comparisons between SRM H$\beta$ and PRM H$\alpha$ lags indicate that the consistency of the ICCF-Cut, JAVELIN and $\chi^2$ results can ensure the reliability of the derived H$\alpha$ lags. These methods may be used to estimate the broad line region sizes and black hole masses of a large sample of AGNs in large multi-epoch, high-cadence photometric surveys such as LSST in the future.
Qinchun Ma, Xue-Bing Wu, Huapeng Gu, Yuhan Wen, Yuming Fu
2023-03-18T08:36:57Z
http://arxiv.org/abs/2303.10362v1
# The H\(\alpha\) broadband photometric reverberation mapping of four Seyfert 1 galaxies ###### Abstract Broadband photometric reverberation mapping (PRM) has been investigated for AGNs in recent years, but mostly for accretion disk continuum RM. Due to the small fraction of broad emission lines in the broadband, PRM for emission lines is very challenging. Here we present an ICCF-Cut method for broadband PRM to obtain the H\(\alpha\) broad line lag and apply it to four Seyfert 1 galaxies, MCG+08-11-011, NGC 2617, 3C 120 and NGC 5548. All of them have high quality broadband lightcurves with daily/sub-daily cadence, which enables us to extract H\(\alpha\) lightcurves from the line band by subtracting the contributions from the continuum and host galaxy. The extracted H\(\alpha\) lightcurves are compared with the lagged continuum band lightcurves, as well as the lagged H\(\beta\) lightcurves obtained by spectroscopic RM (SRM) at the same epochs. The consistency of these lightcurves and the comparison with the SRM H\(\beta\) lags provide support for the H\(\alpha\) lags of these AGNs, in a range from 9 to 19 days, obtained by the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods. The simulations to evaluate the reliability of the H\(\alpha\) lags and the comparisons between the SRM H\(\beta\) and PRM H\(\alpha\) lags indicate that the consistency of the ICCF-Cut, JAVELIN and \(\chi^{2}\) results can ensure the reliability of the derived H\(\alpha\) lags. These methods may be used to estimate the broad line region sizes and black hole masses of a large sample of AGNs in large multi-epoch, high-cadence photometric surveys such as LSST in the future. galaxies: active - quasars: emission lines - quasars: supermassive black holes ## 1 Introduction Active Galactic Nuclei (AGNs) are powered by the accretion processes onto the central supermassive black holes (SMBHs) (Urry & Padovani, 1995). Surrounding the SMBH is the geometrically thin, optically thick accretion disk, which generates the UV/optical continuum emission. The continuum light from the central accretion disk travels across the broad line region (BLR) and produces broad emission lines through the photoionization process. AGNs often show large, aperiodic variability in all wavebands, but its origin is still not well understood. Several models, including accretion disk instabilities (Kawaguchi et al., 1998) and general Poisson process models (Cid Fernandes et al., 2000), have been proposed to characterize the optical variability of AGNs. Reverberation mapping (RM) (Blandford & McKee, 1982; Peterson, 1993) exploits the time delay \(\tau\) between the lightcurves of the optical continuum and broad emission lines to study the size and structure of the BLR. The average size of the BLR is \(R_{\rm BLR}=\tau\cdot c\), where \(c\) is the speed of light. This method has proved powerful for estimating the virial mass of the central supermassive black hole, \[M_{\rm BH}=f\frac{R_{\rm BLR}\cdot{\sigma_{v}}^{2}}{G}, \tag{1}\] where \(\sigma_{v}\) is the velocity dispersion of the broad emission line, which can be estimated from the spectrum, \(G\) is the gravitational constant and \(f\) is a dimensionless factor of order unity that depends on the geometry and kinematics of the BLR (Peterson & Wandel, 1999; Onken et al., 2004; Labita et al., 2006; Woo et al., 2010; Graham et al., 2011).
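To make Eq. (1) concrete, here is a minimal numerical sketch; the lag, line width, and virial factor below are illustrative placeholders, not measurements from this paper.

```python
# Sketch of Eq. (1): M_BH = f * R_BLR * sigma_v^2 / G, with R_BLR = c * tau.
# All numerical inputs are placeholders for illustration.
C_LIGHT = 2.998e10   # speed of light [cm/s]
G_GRAV = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33     # solar mass [g]
DAY = 86400.0        # seconds per day

def virial_black_hole_mass(tau_days, sigma_v_kms, f_factor=1.0):
    """Virial mass from a rest-frame lag (days) and a line velocity dispersion (km/s)."""
    r_blr = C_LIGHT * tau_days * DAY       # BLR radius [cm]
    sigma_v = sigma_v_kms * 1.0e5          # velocity dispersion [cm/s]
    return f_factor * r_blr * sigma_v**2 / G_GRAV / M_SUN   # mass in solar masses

# Example with placeholder inputs: a 17-day lag and sigma_v = 2000 km/s.
print(f"M_BH ~ {virial_black_hole_mass(17.0, 2000.0):.2e} M_sun")
```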
The factor \(f\) can be estimated for nearby AGNs whose \(M_{\rm BH}\) values have been obtained by both RM and the correlation between \(M_{\rm BH}\) and the stellar velocity dispersion (Collin et al., 2006; Woo et al., 2015). RM has established an empirical relationship between the BLR size and the AGN continuum luminosity (Kaspi et al., 1996, 2000; Bentz et al., 2006, 2009), \(R_{\rm BLR}\propto L^{\alpha}\). The theoretical prediction from the photoionization model gives \(\alpha=0.5\)(Netzer, 1990), which has been examined extensively by many observational campaigns (Koratkar and Gaskell, 1991; Kaspi et al., 1996; Wandel et al., 1999; McGill et al., 2008; Vestergaard et al., 2011; Shen et al., 2015; Grier et al., 2017). However, a large number of accurate measurements of \(R_{\rm BLR}\) are required to reduce the scatter of the current \(R-L\) relation. Because the sizes of BLRs usually range from a few to several hundred light days and the observed time lags are the product of the rest-frame time lags and the time dilation factor \(1+z\), monitoring the variability of AGNs can take months to years. Spectroscopic reverberation mapping (SRM) monitors the spectra of AGNs to get the time delay between the continuum and the broad emission line. However, SRM campaigns are expensive because they need a large amount of observational time from intermediate to large optical telescopes. In addition, extracting accurate lightcurves from spectroscopic observations is often difficult due to the uncertainties in the flux calibration process (Shapovalova et al., 2008; Stalin et al., 2011). Photometric reverberation mapping (PRM) employs broad bands to trace the AGN continuum and suitable narrow bands to trace the broad emission lines. With small optical telescopes, PRM can monitor AGNs with high efficiency. For example, Haas et al. (2011) used the PRM with a 15-cm telescope VYSOS-6 to obtain the BLR sizes of PG0003+199 and Ark 120, proving the feasibility of the PRM method. Because the narrow bands contain both the emission lines and the underlying continuum, the contribution from the continuum in the narrow bands must be considered. Pozo Nunez et al. (2012) computed the synthetic H\(\beta\) lightcurve by subtracting a scaled broadband lightcurve from the narrow band lightcurve and measured the BLR size of 3C 120, which is very close to the value obtained with SRM (Grier et al., 2012). In addition to the narrow-band PRM, Kim et al. (2019) used three intermediate band filters to get the time lags between the continuum and the H\(\alpha\) emission line of five AGNs, which are consistent with the SRM results. Jiang et al. (2016) combined the broad and intermediate bands and detect the H\(\alpha\) time lags for 13 AGNs at redshift \(0.2<z<0.4\). While the narrow and intermediate band PRM methods can increase the signal-to-noise ratio (S/N) of emission lines, they have to limit the range of AGN redshifts and require special filters. The broadband PRM uses a suitable broadband to trace the strong emission line, allowing it to cover the emission line over a wide range of redshifts. Some previous works (Edri et al., 2012; Zu et al., 2016) have investigated several AGNs with the broadband PRM. The most important advantage is that the broadband PRM can use the multi-epoch data of large photometric sky surveys such as the Zwicky Transient Facility (ZTF) (Masci et al., 2019) and the Legacy Survey of Space and Time (LSST of the Vera C. Rubin Observatory) (LSST Science Collaboration et al., 2017). 
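As a small aside, the two scalings used above, the \((1+z)\) time-dilation correction and the \(R_{\rm BLR}\propto L^{0.5}\) photoionization scaling, can be sketched as follows; the normalization of the R-L relation here is a placeholder for illustration, not a fit from this paper.

```python
def rest_frame_lag(tau_obs_days, z):
    """Remove the (1+z) time dilation from an observed lag."""
    return tau_obs_days / (1.0 + z)

def blr_radius_lt_days(l5100_erg_s, r0_lt_days=30.0, slope=0.5):
    """R-L scaling R_BLR = r0 * (L_5100 / 1e44 erg/s)^slope.
    r0 and slope are placeholder values for illustration only."""
    return r0_lt_days * (l5100_erg_s / 1.0e44) ** slope

print(rest_frame_lag(17.3, 0.0205))   # ~17.0 days in the AGN rest frame
print(blr_radius_lt_days(5.0e43))     # ~21 light days with the placeholder r0
```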
Such surveys are much more efficient than the narrow or intermediate band PRM campaigns, which are only applicable to a few targets within a narrow redshift range. The multi-epoch data of photometric sky surveys have been widely used for continuum reverberation mapping (Homayouni et al., 2019; Yu et al., 2020). They usually use the data of broad bands to calculate the continuum lags between two broad bands directly. An important issue for the broadband PRM of emission lines is that the emission line only contributes a small fraction of the total flux in the broad band, where the continuum is dominant. Therefore the difference between the lightcurves of the line band and the continuum band is usually too small to calculate the emission line time lag directly. We need some new methods to obtain the emission line time lags for the broadband PRM, and make comparisons with the SRM results to examine the methods and results. We select 4 Seyfert 1 galaxies MCG +8-11-011, NGC 2617, 3C 120 and NGC 5548, which have been widely studied from continuum RM to SRM in previous research. Fausnaugh et al. (2017) and Fausnaugh et al. (2018) presented the results of the simultaneous continuum RM and H\(\beta\) SRM for MCG +8-11-011 and NGC 2617. NGC 2617 is also known as a changing look AGN (Shappee et al., 2014) from other SRM campaign (Feng et al., 2021) and continuum RM campaign (Kammouni et al., 2021). 3C 120 has been widely studied with SRM (Peterson et al., 1998; Grier et al., 2012, 2013; Kollatschny et al., 2014; Hlabathe et al., 2020). Ramolla et al. (2018) used the narrow band to obtain the H\(\alpha\) narrow band PRM lag for 3C 120, which is much larger than the SRM H\(\alpha\) lag at a different time epoch, mainly due to the much higher luminosity in this period. NGC 5548 is one of the best-studied AGNs with many RM campaigns in past decades (Peterson, 1993; Peterson et al., 2002; Bentz et al., 2007; Denney et al., 2009; Bentz et al., 2010; Lu et al., 2016; Landt et al., 2019; Horne et al., 2021). The AGN Space Telescope and Optical Reverberation Mapping Project (AGN STORM: Fausnaugh et al., 2016; Pei et al., 2017) has done multiwavelength photometric and simultaneous spectroscopic observations of NGC 5548. This paper is arranged as follows. We describe the target selections in Section 2. The methods to calculate the time lags are presented in Section 3. The calculated H\(\alpha\) lag results for 4 Seyfert 1 galaxies are presented in Section 4. We use the simulations and other methods to evaluate the influences of the H\(\beta\) and inter-continuum lags in Section 5. A summary is given in Section 6. We adopt the standard \(\Lambda\)CDM cosmology with \(\Omega_{m}=0.32\), \(\Omega_{\Lambda}=0.68\) and \(H_{0}=67\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Planck Collaboration et al., 2014) in this paper. ## 2 Target Selections Broadband PRM requires at least two broadband lightcurves with high photometric accuracy and high observational cadence for AGNs. One band is the continuum band that does not contain strong broad emission lines and another band is the line band with a strong emission line. To examine our methods and results, we select AGNs with both multi-band photometric observations and simultaneous SRM results, so that we can compare the time lags and the shapes of lightcurves of the broadband PRM with the time lags and lightcurves of the SRM. Because the H\(\alpha\) line is much stronger than the H\(\beta\) line, obtaining the H\(\alpha\) time lag is more feasible for broadband PRM. 
The best way is to use the data of the H\(\alpha\) SRM campaigns to compare the broadband PRM H\(\alpha\) time lags and lightcurves with the SRM results. However, the number of H\(\alpha\) SRM campaigns is much smaller than H\(\beta\) SRM campaigns, and these H\(\alpha\) SRM campaigns do not have enough photometric broadband observations or their photometric lightcurves do not have high accuracy and high cadence for broadband PRM. Finally we select four Seyfert 1 galaxies with simultaneous high quality continuum RM and H\(\beta\) RM observations. We use the photometric data to obtain the time lags and lightcurves of H\(\alpha\) line and compare these results with their SRM H\(\beta\) time lags and lightcurves. Table 1 shows the properties including the photometric durations and the epochs of the line band for 4 Seyfert 1 galaxies. The data of MCG +8-11-011 and NGC 2617 are obtained from Fausnaugh et al. (2018) and Fausnaugh et al. (2017). The data of 3C 120 are obtained from Hlabathe et al. (2020). These three targets were observed with the Las Cumbres Observatory (LCO; Brown et al., 2013) global robotic telescope network. The data of NGC 5548 are obtained from Fausnaugh et al. (2016) and Pei et al. (2017). Their H\(\alpha\) ratios are calculated from the single-epoch spectra shown in Figure 1. The transfer functions of the continuum bands and the line bands used in the broadband PRM for 4 galaxies are also shown in Figure 1. Because the spectrum of the simultaneous H\(\beta\) SRM can not cover the whole wavelength range of the line band, we use the single-epoch spectra from other observations and campaigns as the substitutes. The spectrum of NGC 2617 is obtained from Feng et al. (2021). The spectrum of 3C 120 is obtained from Ramolla et al. (2018). The spectrum of NGC 5548 is obtained from the Sloan Digital Sky Survey (SDSS; York et al., 2000). We obtain the single-epoch spectrum of MCG +8-11-011 using the Beijing Faint Object Spectrograph and Camera (BFOSC) Figure 1: The spectrum of 4 Seyfert 1 galaxies and the transmission functions of the broad bands used for broadband PRM. The \(g\), \(g^{\prime}\) and \(B\) bands are used for the continuum bands and the \(r\), \(r^{\prime}\) and \(R\) bands are used to extract the H\(\alpha\) line. of the Xinglong 2.16-m telescope in China. We use the Grism 4 with a dispersion of 198 A/mm and the slit width of 1\({}^{\prime\prime}\).8. The spectra are reduced by the standard IRAF routine (Tody, 1986, 1993). We use the LCO \(g^{\prime}\) band as the continuum band and the \(r^{\prime}\) band as the line band for MCG +8-11-011, NGC 2617 and 3C 120. For NGC 5548, besides the SDSS \(g\) and \(r\) bands, we also use the Johnson/Cousins \(B\) and \(R\) bands to do the broadband PRM as the comparison. Table 1 and Figure 1 show that all of them have very high photometric cadences and strong H\(\alpha\) emission lines for the broadband PRM (we will not distinguish \(g^{\prime}\) and \(r^{\prime}\) from \(g\) and \(r\) afterwards). ## 3 Time Lag Calculations ### The ICCF-Cut Method The interpolated cross-correlation function (ICCF; Gaskell & Sparke, 1986) calculates the time lag between two different lightcurves directly and has been extensively used in SRM. To get the emission line time lag of the broadband PRM, we need to get rid of the continuum contribution in the line band. First we simply assume that the continuum flux in the line band equals to a fixed fraction \(\alpha\) of the flux in the continuum band for each AGN. 
Under this assumption, we use the line band flux to subtract the continuum band flux with the ratio \(\alpha\), and get the lightcurve of H\(\alpha\) line (hereafter ICCF-Cut), \[L_{\rm H\alpha}(t)=L_{\rm line}(t)-\alpha L_{\rm cont}(t). \tag{2}\] Here \(L_{\rm line}(t)\), \(L_{\rm cont}(t)\), and \(L_{\rm H\alpha}(t)\) are the lightcurves of the line band, continuum band, and H\(\alpha\) emission line respectively. We ignore the influence of H\(\beta\) variability in the continuum band because the H\(\beta\) emission line usually contributes less than 5% in the continuum band and is much weaker than the H\(\alpha\) emission line (which has a \(10\sim 30\)% contribution to the line band). According to the thin disk model, there is a small inter-continuum time lag between the continuum in the \(g\) and \(r\) bands. Because the inter-continuum lag is usually very small for lower redshift Seyfert 1 galaxies (Fausnaugh et al., 2016, 2018), we ignore the influence of the inter-continuum lag between the line band and continuum band at first. Further simulation and discussion on the influence of the H\(\beta\) emission line and the inter-continuum lag will be presented in Section 5. The value of \(\alpha\) is associated with the spectral slope of the AGN continuum and usually does not change much within several months. We use the spectral data of the LICK AGN MONITORING PROJECT (LAMP; Bentz et al., 2010) to examine the variability of \(\alpha\) and find that \(\alpha\) changes little during the campaign (e.g. \(\alpha=1.02\pm 0.07\) for NGC 4748, and \(\alpha=1.16\pm 0.09\) for Arp 151). The \(\alpha\) value can be obtained from the transmission functions of the line and continuum band filters and the single-epoch spectrum of an AGN. Because the spectra of simultaneous H\(\beta\) SRM for these targets can not cover the entire wavelength range of the line band, we use the single-epoch spectra in Figure 1 to calculate \(\alpha\): \[\alpha=(F_{\rm line}-F_{\rm H\alpha})/F_{\rm cont}. \tag{3}\] Here \(F_{\rm cont}\), \(F_{\rm line}\) and \(F_{\rm H\alpha}\) are the fluxes obtained from the integral of the single-epoch spectrum. However, there are two issues. One is that the flux calibration of the spectrum is usually not as accurate as the photometric flux calibration. Another one is that these spectroscopic and photometric observations have been conducted at different epochs (several years apart from each other), so the spectral index of the AGN continuum and the value of \(\alpha\) may change significantly. Here we adopt another method to calculate the value of \(\alpha\) for the broadband PRM. We assume the average contri \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & Redshift & H\(\alpha\) ratio(\%) & Band & Duration(days) & Epoch & Cadence(days) & \(F_{var}\)(\%) \\ \hline MCG +8-11-011 & 0.0205 & 26 & \(r^{\prime}\) & 93 & 42 & 1.07 & 4.8 \\ & & & \(g^{\prime}\) & 156 & 85 & 1.00 & 7.0 \\ NGC 2617 & 0.0142 & 16 & \(r^{\prime}\) & 102 & 127 & 0.62 & 3.7 \\ & & & \(g^{\prime}\) & 145 & 166 & 0.69 & 3.8 \\ 3C 120 & 0.0330 & 23 & \(r^{\prime}\) & 249 & 370 & 1.08 & 6.3 \\ & & & \(g^{\prime}\) & 246 & 392 & 1.06 & 9.1 \\ NGC 5548 & 0.0172 & 25 & \(r\) & 212 & 203 & 0.93 & 3.5 \\ & & & \(g\) & 212 & 204 & 0.98 & 6.2 \\ & & & 19 & \(R\) & 226 & 161 & 0.96 & 3.7 \\ & & & & \(B\) & 225 & 180 & 1.00 & 8.6 \\ \hline \end{tabular} \end{table} Table 1: The properties for 4 Seyfert 1 galaxies. 
bution of the H\(\alpha\) line in the line band does not change much between the spectroscopic epoch and the mean epoch of the photometric lightcurves. To make sure the extracted H\(\alpha\) lightcurve containing all the contributions of the H\(\alpha\) emission line, we use the minimum function: \[\alpha=\left(1-\frac{F_{\rm H\alpha}}{F_{\rm line}}\right)\min\left(\frac{L_{ \rm line,t}}{L_{\rm cont,t}}\right) \tag{4}\] to calculate the value of \(\alpha\). Here \(F_{\rm H\alpha}\) and \(F_{\rm line}\) are measured from the single-epoch spectrum, and \(L_{\rm line,t}\) and \(L_{\rm cont,t}\) are the fluxes at each point obtained from linear interpolated photometric lightcurves. For the single-epoch spectrum and two broadband lightcurves, we calculate the \(\alpha\) value using Eq. (4) and use it for the observational period. In this case, adopting the minimum as the \(\alpha\) value means that the contribution of H\(\alpha\) in the line band for each point of the photometric lightcurve will be kept and is larger than the contribution obtained directly from the single-epoch spectrum. However, we can still exclude a large fraction of continuum in the line band while keeping the H\(\alpha\) contribution as complete as possible. We will discuss the influence of the varying value of \(\alpha\) in Section 6. Here we investigate the difference in the results of applying Eqs. (3) and (4) to obtain the value of \(\alpha\). Figure 2 shows an example of the comparison between the results derived from two different \(\alpha\) values given by the two approaches above for MCG +8-11-011. By comparing the lightcurves of the extracted H\(\alpha\) and the SRM H\(\beta\) lines, we find that the extracted H\(\alpha\) lightcurve using Eq. (4) is more consistent with the SRM H\(\beta\) lightcurve. The value of the cross-correlation coefficient \(r\) of using Eq. (4) is higher and the lag distribution is smoother than using Eq. (3). This indicates that the result obtained from Eq. (4) is more reliable. When using Eq. (3), the extracted H\(\alpha\) lightcurves do not contain enough contribution of the real H\(\alpha\) line variability to obtain the time lag. This comparison indicates that using Eq. (4) to calculate the value of \(\alpha\) is more suitable to obtain the H\(\alpha\) lightcurves and time lags for the broadband PRM. We also consider the influence of the host galaxy. The luminosity of the host galaxy changes little for the RM campaign durations, so it can be regarded as constant for most SRM campaigns. The shape of the lightcurves does not change with the host galaxy contribution for SRM and narrow band PRM(Pozo Nunez et al., 2013; Ramolla et al., 2018). But for broadband PRM, the Figure 2: The SRM H\(\beta\) and extracted H\(\alpha\) lightcurves and the ICCF-Cut lag distributions of MCG +8-11-011 with the \(15.7^{+0.5}_{-0.5}\) days SRM H\(\beta\) lag. The left panel shows the H\(\beta\) and extracted H\(\alpha\) emission lines lightcurves with different methods. The grey points represent the H\(\beta\) lightcurves. The labels H\(\alpha\)(1) and H\(\alpha\)(2) represent the cases using Eqs. (3) and (4) respectively. All lightcurves have been normalized and the Y-axis has been set to the same scaling for comparison. The right panel shows the ICCF-Cut lag results obtained from the H\(\alpha\)(1) and H\(\alpha\)(2). 
The black lines represent the relations between the cross-correlation coefficient \(r\) with the time lag and the grey parts represent the 1000 FR/RSS simulations which represent the lag distributions. The time lag between the \(g\) band and the H\(\alpha\) line for H\(\alpha\)(1) and H\(\alpha\)(2) is \(23.8^{+8.2}_{-4.8}\) and \(17.3^{+6.2}_{-4.3}\) days respectively. contribution of the host galaxy can change the value of \(\alpha\) and then affect the shape of the extracted H\(\alpha\) lightcurves. Its contribution must be considered. We use the flux variation gradient (FVG; Choloniewski, 1981; Winkler et al., 1992; Pozo Nu\(\acute{\rm e}\)ez et al., 2012) to determine the contribution of the host galaxy. As shown in Figure 3, we obtain the host galaxy contributions in the \(B\) and \(V\) bands for NGC 5548. They are \(2.6\pm 0.5\) and \(5.6\pm 0.3\) in units of \(10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\) A\({}^{-1}\) for the \(B\) and \(V\) bands respectively, which are consistent with the result in Fausnaugh et al. (2016) (\(2.88\pm 0.05\) and \(5.25\pm 0.10\)). The host galaxy contributions to the \(g\), \(r\) and \(R\) bands can be estimated from the \(B\) and \(V\) band values by using the host galaxy template (Polletta et al., 2007). A similar method is also applied to other 3 AGNs to estimate the host contributions to the continuum and line bands. ### The JAVELIN Method To check the reliability of the time lag calculated by the ICCF-Cut method, we also use another method to calculate the time lags of AGNs as the comparison. The Just Another Vehicle for Estimating Lags In Nuclei (JAVELIN: Zu et al., 2011, 2013, 2016) program assumes that the lightcurve of AGNs can be modeled by a damped random walk (DRW) process and uses thousands of Markov Chain Monte Carlo (MCMC) DRW processes to get the distributions of the parameters including the time lag. Because the two-band photometry model (Pmap Model) of JAVELIN may be competitive with the SRM for strong (large equivalent width) lines such as H\(\alpha\) and H\(\beta\)(Zu et al., 2016), we can use JAVELIN to calculate the time lag of AGNs with strong H\(\alpha\) emission lines. From the time lag distribution of JAVELIN, we use the highest posterior density to identify the time lag. The \(1\sigma\) limits of the time lag that encompasses 68% of the time lag distribution are adopted to obtain the upper and lower limits of the most probable time lag. To make sure the results of JAVELIN are reliable, in addition to the lag distribution, the transfer function amplitude between the continuum and the H\(\alpha\) emission line in the DRW model is also examined for the broadband PRM. The transfer function amplitude can represent the statistic mean of the H\(\alpha\) line contribution in the line band. The lag with much higher transfer function amplitude than the real H\(\alpha\) line ratio in the spectrum is not physically possible. Combining the distribution of lags and the transfer function amplitude, we can evaluate the reliability of the lag results. Figure 4 shows the time lags and the transfer function amplitude distribution of the JAVELIN Pmap model results for MCG +8-11-011. The transfer function amplitude (on average \(\sim\) 23%) can represent the H\(\alpha\) line ratio in the line band. It is consistent with the real H\(\alpha\) line ratio (26%) calculated from the single-epoch spectrum. 
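For concreteness, the ICCF-Cut extraction of Eqs. (2) and (4) that underlies the lightcurves and lag distributions discussed here can be sketched in a few lines; the arrays, the loader, and the 26% spectral fraction below are hypothetical stand-ins, and the linear interpolation onto a common grid follows the description in Sec. 3.1.

```python
import numpy as np

def iccf_cut_halpha(t_line, f_line, t_cont, f_cont, halpha_frac_spec):
    """Extract an H-alpha lightcurve via Eqs. (2) and (4).

    halpha_frac_spec : F_Halpha / F_line measured from a single-epoch spectrum.
    Returns the line-band epochs and L_Halpha(t)."""
    # Interpolate the continuum band onto the line-band epochs (linear interpolation).
    f_cont_on_line = np.interp(t_line, t_cont, f_cont)
    # Eq. (4): alpha = (1 - F_Halpha/F_line) * min_t [ L_line(t) / L_cont(t) ]
    alpha = (1.0 - halpha_frac_spec) * np.min(f_line / f_cont_on_line)
    # Eq. (2): L_Halpha(t) = L_line(t) - alpha * L_cont(t)
    return t_line, f_line - alpha * f_cont_on_line

# Hypothetical usage with r-band (line) and g-band (continuum) lightcurves and a
# 26% H-alpha fraction read off a single-epoch spectrum:
# t_r, f_r, t_g, f_g = load_lightcurves(...)   # loader not defined here
# t_ha, f_ha = iccf_cut_halpha(t_r, f_r, t_g, f_g, 0.26)
```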
MCG +8-11-011 has the SRM H\(\beta\) lag of \(15.72^{+0.50}_{-0.52}\) days, while the JAVELIN H\(\alpha\) time lag of \(17.3^{+1.5}_{-5.2}\) days is consistent with the ICCF-cut H\(\alpha\) result of \(17.3^{+6.2}_{-4.3}\) days. The H\(\alpha\) time lag is slightly larger than the SRM H\(\beta\) time lag, which is consistent with the structure of BLR Figure 4: The time lags and the transfer function amplitude distribution of the JAVELIN Pmap model results for MCG +8-11-011 whose SRM H\(\beta\) lag is \(15.72^{+0.50}_{-0.52}\) days. The color represents the number density of the points. The JAVELIN H\(\alpha\) lag is \(17.3^{+1.5}_{-5.2}\) days. Figure 3: The \(B\) versus \(V\) band fluxes of NGC 5548. A linear least-squares fit to data points yields the AGN slope plotted by the orange line. The range of host slopes plotted by two grey lines is taken from Sakata et al. (2010). and the results of previous works (Kaspi et al., 2000; Bentz et al., 2010; Grier et al., 2012). We notice that the lags of a few points are close to zero with much higher transfer function amplitude (see Figure 4), which may be because the JAVELIN MCMC processes need higher line ratio as the time lag decreases to reproduce the given line band flux. To reduce such influence, we exclude the points with the transfer function amplitude larger than 0.4 and the lag results very close to zero. ### The \(\chi^{2}\) Method From the comparison of the lightcurves in panels (b) and (c) in Figure 5, it can be noticed that the errors of the extracted H\(\alpha\) are much larger than the errors of the photometric broad bands and SRM H\(\beta\) lightcurves. To evaluate the influence of errors and the feasibility of the broadband PRM with large errors, we also apply the \(\chi^{2}\) method (Czerny et al., 2013; Bao et al., 2022) which works better than using ICCF for AGNs with red-noise variability to obtain the H\(\alpha\) time lag from the broadband and extracted H\(\alpha\) lightcurves. The \(\chi^{2}\) method uses the uncertainties to weight the data points in lightcurves. The \(\chi^{2}\) is calculated by \[\chi^{2}(\Delta t)=\frac{1}{N}\sum_{i=1}^{n}\frac{(x_{i}-A_{\chi^{2}}y_{i, \Delta t})^{2}}{\delta x_{i}^{2}+A_{\chi^{2}}^{2}\delta y_{i,\Delta t}^{2}}, \tag{5}\] where \(x_{i}\) and \(y_{i,\Delta t}\) are the continuum band flux and the extracted H\(\alpha\) flux with shifted lag \(\Delta t\), \(\delta x_{i}\) and \(\delta y_{i,\Delta t}\) are their uncertainties. We interpolate and shift the ex Figure 5: The lightcurves and lag distributions for MCG +8-11-011. The panel (a) shows the lightcurves of the continuum band (\(g\)) and line band (\(r\)). The panels (b) and (c) show the extracted H\(\alpha\) lightcurves compared with the lagged continuum band and SRM H\(\beta\) broad line lightcurves. The panel (d) shows the lag distribution between the SRM H\(\beta\) line and extracted H\(\alpha\) line. The panels (e), (f) and (g) show the lag distributions of the continuum band and extracted H\(\alpha\) line with the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods respectively. The red line represents the median value of the lag distribution. For two ICCF results, the black lines represent the relations between the cross-correlation coefficient \(r\) and the time lag. For the \(\chi^{2}\) results, the black line represents the relation between the \(\chi^{2}\) value and the time lag. The grey parts of these three panels (d),(e) and (g) represent the 1000 FR/RSS simulations. 
The grey part of JAVELIN (panel (f)) represents the distributions of 10000 MCMC simulations. tracted H\(\alpha\) flux with the time \(\Delta t\). For each line flux with the shifted lag \(\Delta t\), we can obtain the value of \(\chi^{2}(\Delta t)\). It leads to a relation between the \(\chi^{2}(\Delta t)\) value and time lag \(\Delta t\). This process is similar to ICCF. \(A_{\chi^{2}}\) is a normalized factor formulated as \[A_{\chi^{2}}=\frac{S_{xy}+(S_{xy}^{2}+4S_{x3y}S_{xy3})^{1/2}}{2S_{xy3}}, \tag{6}\] where the coefficients are given by \[S_{xy} =\sum_{i=1}^{N}(x_{i}^{2}\delta y_{i,\Delta t}^{2}-y_{i,\Delta t} ^{2}\delta x_{i}^{2}),\] \[S_{xy3} =\sum_{i=1}^{N}x_{i}y_{i,\Delta t}\delta y_{i,\Delta t}^{2}, \tag{7}\] \[S_{x3y} =\sum_{i=1}^{N}x_{i}y_{i,\Delta t}\delta x_{i}^{2}.\] We take the minimum points in the \(\chi^{2}\) functions as the H\(\alpha\) time lag measurements. In order to estimate the uncertainties of the time lags of ICCF-Cut and the \(\chi^{2}\) method, we use the flux randomization (FR) and random subset selection (RSS) with Monte Carlo (MC) simulations (Peterson et al., 1998). The FR alters the flux with the errors. Each data point is modified by adding a random noise according to a Gaussian distribution around the measured value, with the standard deviation of the measurement uncertainty. The RSS is used to estimate the errors of the unevenly sampled data by randomly excluding data points from the simulated lightcurves. Each realization is based on a randomly chosen subset of the original lightcurve. The FR procedure examines the sensitivity of the flux accuracy, and the RSS checks the effect of the incomplete sampling. For each target, we perform 1000 MC simulations to get the lag uncertainty of ICCF-Cut and the \(\chi^{2}\) method (see panel (g) in Figure 5). The \(\chi^{2}\) centroid H\(\alpha\) time lag of \(17.2^{+4.8}_{-3.9}\) days for MCG +8-11-011 is consistent with the previous ICCF-Cut and JAVELIN results, which indicates that although the uncertainties Figure 6: Same as Figure 6 but for NGC 2617. The red dash line in the panel (g) represents the peak value of the \(\chi^{2}\) lag distribution. The red solid lines represent the median lags of the distributions. of the extracted H\(\alpha\) line flux are large compared with the variabilities of the lightcurves, the broadband PRM can still obtain the H\(\alpha\) time lag for these targets. The consistency of the H\(\alpha\) lag distributions from the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods can ensure the reliability of the H\(\alpha\) broadband PRM. ## 4 Results for 4 Seyfert 1 Galaxies We apply these methods to four Seyfert 1 galaxies. MCG +8-11-011 shows the best results, as shown in Figure 5. Besides the H\(\alpha\) lag calculated in the previous part, we also use the ICCF to calculate the time lag between the SRM H\(\beta\) and extracted H\(\alpha\) light curve. (panel (d) in Figure 5). The high value of coefficient \(r\) and a small lag between the SRM H\(\beta\) and subtracted H\(\alpha\) light curves confirm the reliability of the subtracted H\(\alpha\) light curves. Because the H\(\alpha\) ratio obtained from the single-epoch spectrum is the highest among 4 galaxies and the lightcurves have obvious variabilities, the results of MCG +8-11-011 are better than other targets. The extracted H\(\alpha\) lightcurve is well consistent with the lagged continuum and the lagged SRM H\(\beta\) lightcurves. 
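Stepping back to Sec. 3.3, the \(\chi^{2}\) scan of Eqs. (5)-(7) and one FR/RSS resampling step can be sketched as follows; the lag grid, interpolation choices, and bootstrap-style subset selection are illustrative rather than the exact implementation behind the figures.

```python
import numpy as np

def chi2_at_lag(t_x, x, dx, t_y, y, dy, lag):
    """Eqs. (5)-(7): error-weighted chi^2 between the continuum band x and the
    extracted H-alpha curve y shifted by `lag` (days)."""
    y_s = np.interp(t_x, t_y + lag, y)     # shift and interpolate onto the x epochs
    dy_s = np.interp(t_x, t_y + lag, dy)
    s_xy = np.sum(x**2 * dy_s**2 - y_s**2 * dx**2)   # coefficients of Eq. (7)
    s_xy3 = np.sum(x * y_s * dy_s**2)
    s_x3y = np.sum(x * y_s * dx**2)
    a = (s_xy + np.sqrt(s_xy**2 + 4.0 * s_x3y * s_xy3)) / (2.0 * s_xy3)   # Eq. (6)
    return np.mean((x - a * y_s) ** 2 / (dx**2 + a**2 * dy_s**2))         # Eq. (5)

def fr_rss_realization(t, f, df, rng):
    """One FR/RSS realization: bootstrap-style subset selection plus flux randomization."""
    idx = np.unique(rng.integers(0, len(t), len(t)))   # ~63% of the epochs retained
    return t[idx], f[idx] + df[idx] * rng.standard_normal(idx.size), df[idx]

# The lag estimate is the minimum of chi^2 over a lag grid, e.g.
# lags = np.arange(-20.0, 60.0, 0.25)
# chi2 = [chi2_at_lag(t_g, f_g, df_g, t_ha, f_ha, df_ha, lag) for lag in lags]
# tau_hat = lags[int(np.argmin(chi2))]
```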
All the methods, including the ICCF-Cut, JAVELIN and \(\chi^{2}\), show similar lag distributions for the continuum and extracted H\(\alpha\) lightcurves. The H\(\alpha\) lag is around 17 days. For NGC 2617 shown in Figure 6, the extracted H\(\alpha\) lightcurve is consistent with the lagged continuum and the lagged SRM H\(\beta\) line lightcurves. The H\(\alpha\) lag distributions of the ICCF-Cut and JAVELIN are very close. Because the variabilities are smaller than MCG +8-11-011 compared with uncertainties, the \(\chi^{2}\) result is worse than those of ICCF-Cut and JAVELIN. Although the median lag of the \(\chi^{2}\) method is much larger than the results of ICCF-Cut and JAVELIN, the peak value \(10.0^{+15.4}_{-0.2}\) days of the \(\chi^{2}\) method is very close to the results of ICCF-Cut and JAVELIN. Because NGC 2617 was discovered by Shappee et al. (2014) to be a changing look AGN, we abandon the SRM H\(\beta\) line lightcurve with MJD\(>56735\)(Fausnaugh et al., 2017) for the comparisons. The latter part of the lightcurves with lower Figure 7: Same as Figure 6 but for 3C 120 without lightcurve segmentation. fluxes may also be one of the reasons for the bad \(\chi^{2}\) result. We apply the same methods to 3C 120, as shown in Figure 7. But the extracted H\(\alpha\) line lightcurve is not well consistent with the continuum band (panel (b) in Figure 7). The left part of the extracted H\(\alpha\) line lightcurve is obviously lower than the continuum band lightcurve and the right part is higher. The correlation coefficient value of ICCF-Cut is also very low. We noticed that even for the simultaneous H\(\beta\) SRM, the correlation coefficient value of ICCF between the continuum and H\(\beta\) line is also not high, only about 0.4 (Hlabathe et al., 2020). Another issue is that the observational duration of 3C 120 is about 250 days, twice of the durations of MCG +8-11-011 and NGC 2617. The spectral index of the AGN continuum usually changes little within several months as in the cases of MCG +8-11-011 and NGC 2617, but for 3C 120, the spectral index of the continuum and the value of \(\alpha\) may change during the longer observation durations. To understand the deviation between the extracted H\(\alpha\) line and the continuum, we adjust Equation (4) to calculate the value of \(\alpha\). We divide the lightcurve into two parts so that each part of the lightcurve has similar duration time as MCG +8-11-011 and NGC 2617, and for each part the value of \(\alpha\) is adjusted according to its average line band and continuum band fluxes. For each part of the lightcurves, the \(\alpha_{i}\) is calculated by \[\alpha_{i}=\alpha\frac{\overline{L_{\rm cont}}}{\overline{L_{\rm line}}}\frac {\overline{L_{\rm line,i}}}{\overline{L_{\rm cont,i}}}, \tag{8}\] where \(\overline{L_{\rm cont}}\) and \(\overline{L_{\rm line}}\) are the average fluxes in the the continuum and line bands for the whole period lightcurves, while \(\overline{L_{\rm cont,i}}\) and \(\overline{L_{\rm line,i}}\) are the average fluxes for each part of lightcurves. Figure 8 shows that for 3C 120 the subtracted H\(\alpha\) lightcurve with the varying \(\alpha\) value in two duration parts (separated at MJD 58092) is more consistent with the lagged continuum band lightcurve. Similar to the ICCF-Cut, we also divide the initial lightcurves into two parts to calculate the JAVELIN lag respectively, then combine the two part results into the final one. 
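The per-segment rescaling of Eq. (8) amounts to one line of arithmetic; a hedged sketch (segment boundaries and arrays are hypothetical) follows.

```python
import numpy as np

def alpha_for_segment(alpha_global, f_cont_all, f_line_all, f_cont_seg, f_line_seg):
    """Eq. (8): rescale the global alpha by the segment's mean line/continuum ratio."""
    return (alpha_global
            * np.mean(f_cont_all) / np.mean(f_line_all)
            * np.mean(f_line_seg) / np.mean(f_cont_seg))

# Hypothetical usage: split at some MJD (e.g. 58092 for 3C 120) and extract per segment.
# for seg in (mjd < 58092, mjd >= 58092):
#     a_i = alpha_for_segment(alpha, f_g, f_r, f_g[seg], f_r[seg])
```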
All three Figure 8: Same as Figure 6 but for 3C 120 with lightcurve segmentation into two parts at MJD 58092. methods show similar lag distributions in Figure 8. Although the ICCF-Cut H\(\alpha\) lag of \(18.6^{+2.2}_{-2.7}\) days is slightly shorter than the SRM H\(\beta\) lag of \(21.2^{+1.6}_{-1.0}\) days, the time lag between the SRM H\(\beta\) line and the extracted H\(\alpha\) line calculated by ICCF still shows that the extracted H\(\alpha\) lightcurve is possibly lagged behind the SRM H\(\beta\) lightcurve with \(2.5^{+2.3}_{-3.0}\) days (panel (d) in Figure 8). This contradiction may be due to the lower accuracy of the SRM H\(\beta\) lightcurve compared with other targets. This contradiction may also be ignored because the lag uncertainties are larger than the difference between the H\(\alpha\) and H\(\beta\) lags. The extracted H\(\alpha\) lag is consistent with the SRM H\(\beta\) lag in general, which has also been found by the previous SRM research of 3C 120 showing that its H\(\alpha\) lag (\(28.5^{+8.5}_{+9.0}\) days) and H\(\beta\) lag (\(27.9^{+5.9}_{-7.1}\) days) are very close (Kollatschny et al., 2014). The consistency of H\(\alpha\) lightcurve with the lagged continuum and SRM H\(\beta\) lightcurves as well as the similar lag distributions for three methods indicate that this \(\alpha\) value adjustment method is effective for the broadband PRM with longer duration time. For NGC 5548 with more than 200 days observational duration, we also divide the lightcurves into two parts (separated at MJD 56772 in Figure 9). The extracted H\(\alpha\) lightcurve is consistent with the lagged continuum band and the lagged SRM H\(\beta\) line lightcurves in general. Because the flux uncertainties of the continuum and line bands are larger than other targets, the lag distributions of three methods are not very consistent with each other. The lag value of JAVELIN is much smaller than others. We will use the simulations to explain such a difference in Section 5. To determine the H\(\alpha\) lag and examine the reliability of the broadband PRM, besides the \(g\) and \(r\) bands, we also used the \(B\) band as the continuum band and the \(R\) band as the line band (separated at MJD 56766 in Figure 10). The results of the \(B\) and \(R\) bands are similar to the results of the \(g\) and \(r\) bands, especially for the lag distributions of ICCF-Cut. Although the Figure 9: Same as Figure 6 but for NGC 5548 with the \(g\) and \(r\) band lightcurves with lightcurve segmentation into two parts at MJD 56772. result of the \(\chi^{2}\) method is worse, the lag distributions of the ICCF-Cut and JAVELIN are still similar. Considering the simultaneous SRM H\(\beta\) lag of \(4.17^{+0.36}_{-0.36}\) days (Pei et al., 2017) as the broadband PRM and the SRM H\(\alpha\) lag of \(11.02^{+1.27}_{-1.15}\) days in other period (Bentz et al., 2010) for NGC 5548, the lag distributions in Figure 9 and Figure 10 are probably reasonable. Especially the lag distributions of the ICCF-Cut and \(\chi^{2}\) method for the \(g\) and \(r\) bands and the lag distributions of the ICCF-Cut and JAVELIN for the \(B\) and \(R\) bands are more reliable and consistent with each other. All H\(\alpha\) lag results (median values) for 4 Seyfert 1 galaxies are listed in Table 2. We try to plot the lag distribution of three methods with the same weight in one figure (Figure 11) and use the highest posterior density to obtain the lags as the comparison. We find that these combined lags are similar to the ICCF-Cut results. 
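One simple way to quote a lag and a 68% interval from a pooled (or single-method) lag distribution is a highest-posterior-density-style shortest interval on the samples; the sketch below is a generic estimator, not necessarily the exact one used for Table 2.

```python
import numpy as np

def shortest_interval(samples, frac=0.68):
    """Shortest interval containing `frac` of the lag samples (an HPD-style summary)."""
    s = np.sort(np.asarray(samples, dtype=float))
    n_in = int(np.ceil(frac * len(s)))
    widths = s[n_in - 1:] - s[:len(s) - n_in + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + n_in - 1]

# Hypothetical usage: pool FR/RSS and MCMC lag samples with equal weight.
# pooled = np.concatenate([lags_iccf_cut, lags_javelin, lags_chi2])
# lo, hi = shortest_interval(pooled)
# print(np.median(pooled), lo, hi)
```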
Because the cross-correlation function (CCF; Blandford and McKee, 1982) method has been widely used and examined for decades in many RM projects, we chose the results of ICCF-cut as the final lags (using the combined lag results has no significant changes). It is also more convenient to use these lags to compare with other SRM results which are mainly obtained from the CCF and its variants. We compare our broadband PRM H\(\alpha\) lags with the SRM time lags in the \(R-L\) relation (see Figure 12). Our results from the H\(\alpha\) PRM are consistent with the commonly adopted \(R_{\rm{BLR}}\propto L^{\alpha}\) relationship (Panda et al., 2019). We also compare our H\(\alpha\) lag results with those of SRM H\(\beta\) lags. Figure 13 shows that on average the H\(\alpha\) time lags are slightly larger than the SRM H\(\beta\) time lags, which is consistent with the standard model of AGNs, where the BLR size of H\(\alpha\) line is usually larger than that of H\(\beta\) line (Kaspi et al., 2000; Bentz et al., 2010; Grier et al., 2012, 2017). ## 5 Discussion Figure 10: Same as Figure 6 but for NGC 5548 with the \(B\) and \(R\) band lightcurves with lightcurve segmentation into two parts at MJD 56766. To examine the reliability of the time lags and the influence of H\(\beta\) emission lines in the continuum band, we use the DRW model to produce the mock lightcurves of AGNs. The DRW process can be described by a stochastic differential equation (Kelly et al., 2009), \[dc(t)=-\frac{1}{\tau}c(t)dt+\sigma\sqrt{dt}\epsilon(t)+b\tau, \tag{9}\] where \(c(t)\) is the continuum flux, \(\tau\) is the relaxation time of the continuum, \(\sigma\) is the standard deviation of the continuum, and \(\epsilon(t)\) is a white Gaussian noise process with zero mean and the variance equal to one. The mean value of the continuum is \(b\tau\) and the variance is \(\sigma\tau^{2}/2\). The variability of the broad emission line relative to the continuum can be described as \[l(t)=\int\Psi(t-t^{\prime})c(t)dt, \tag{10}\] where \(\Psi(t)\) is the transfer function between the continuum and the broad emission line. We use a top hat \begin{table} \begin{tabular}{c c c c c c} \hline \hline Name & ICCF-Cut & JAVELIN & \(\chi^{2}\) & Combined & SRM H\(\beta\) & H\(\beta\) vs H\(\alpha\) \\ \hline MCG +8-11-011 & \(17.3^{+6.2}_{-4.3}\) & \(17.3^{+1.5}_{-5.2}\) & \(17.2^{+4.8}_{-3.9}\) & \(17.4^{+4.8}_{-3.9}\) & \(15.72^{+0.50}_{-0.52}\) & \(0.8^{+5.4}_{-4.1}\) \\ NGC 2617 & \(9.5^{+9.1}_{-7.0}\) & \(9.6^{+0.7}_{-0.6}\) & \(17.5^{+7.9}_{-7.7}\) & \(9.0^{+7.4}_{-12.3}\) & \(4.32^{+1.0}_{-1.35}\) & \(4.7^{+3.3}_{-6.3}\) \\ 3C 120 & \(18.6^{+2.2}_{-2.7}\) & \(16.6^{+2.6}_{-3.9}\) & \(18.1^{+2.0}_{-1.5}\) & \(17.8^{+2.5}_{-1.6}\) & \(21.2^{+1.6}_{-1.0}\) & \(2.5^{+2.3}_{-3.0}\) \\ NGC 5548 (\(gr\)) & \(12.8^{+14.7}_{-4.8}\) & \(4.5^{+0.1}_{-0.2}\) & \(18.0^{+7.0}_{-9.3}\) & \(14.3^{+16.9}_{-5.8}\) & \(4.17^{+0.36}_{-0.36}\) & \(5.0^{+6.5}_{-2.5}\) \\ NGC 5548 (\(BR\)) & \(15.0^{+14.2}_{-5.1}\) & \(12.5^{+7.2}_{-6.7}\) & \(35.7^{+7.8}_{-8.8}\) & - & \(5.0^{+4.0}_{-2.5}\) \\ \hline \end{tabular} Note. – The H\(\beta\) vs H\(\alpha\) lags are obtained from the SRM H\(\beta\) and the extracted H\(\alpha\) lightcurves with ICCF. For NGC 5548, the combined lag is obtained from the lag distributions of both \(g,r\) and \(B,R\) bands. \end{table} Table 2: The H\(\alpha\) lag results (in days) of 4 Seyfert 1 galaxies. Figure 11: The combined lag distributions for three methods with the same weight. The red lines represent the median lags. 
For NGC 5548, it contains the lag distributions of both \(g,r\) and \(B,R\) bands. (rectangular function) for the transfer function centered at a time lag \(\tau_{d}\) with a width \(w\) and an amplitude \(A\), \[\Psi(t)=\frac{A}{w}\quad\text{for}\quad\tau_{d}-\frac{w}{2}\leqslant t\leqslant \tau_{d}+\frac{w}{2}. \tag{11}\] Combining the transmission functions of the continuum and line bands and the H\(\alpha\) and H\(\beta\) line strengths, as well as the parameters of the AGN lightcurve variability, we can produce the mock lightcurves of the continuum and line bands. We use JAVELIN to obtain the DRW parameters from the observational data of the four Seyfert 1 galaxies, and use these parameters to reproduce the mock lightcurves. We set the H\(\alpha\) and H\(\beta\) line strengths obtained from the spectra as the transfer function amplitude. To simulate the observational errors, we use the skewed normal distribution to fit the error distributions for 4 sources (see Figure 14) and use the same skewed normal distribution errors to reproduce the mock lightcurves. To simulate the small inter-continuum time lag between the continuum emissions in the continuum and line bands, the mock line band consists of the 1 day lagged continuum and 20 day lagged H\(\alpha\) line. To make sure the variability of mock lightcurves is similar to that of the sources, we use the Welch-Stetson J Variability Index (Welch & Stetson, 1993) to evaluate the variability. The J index is composed of the relative error (\(\delta\)), the normalized residuals of a pair of observations (\(P_{k}\)), and a weighting factor (\(w_{k}\)). The relative error is defined by Stetson (1996) as \[\delta_{i}=\frac{f_{i}-\overline{f}}{\sigma_{f,i}}\sqrt{\frac{n}{n-1}}. \tag{12}\] Here \(n\) is the number of observations, \(\sigma_{f,i}\) is the measurement error and \(\overline{f}\) is the mean flux of the light curve. To reduce the influence of very large flux change within few data points, the weight factor is defined as \[w_{i}=\left[1+(\frac{\delta_{i}}{2})^{2}\right]^{-1}. \tag{13}\] The J index is defined as \[\mathrm{J}=\frac{\sum\mathrm{w_{k}sgn}(\delta_{i}^{2}-1)\sqrt{|\delta_{i}^{2}- 1|}}{\sum\mathrm{w_{k}}}. \tag{14}\] Here sgn simply returns the sign of the value. \(\mathrm{J}<0\) means that the variability is dominated by the uncertainties of the observation. After reproducing the mock lightcurves with the DRW model, we select the mock lightcurves which have similar J index with the real observational data. For each set of parameters, we use the DRW model to reproduce four mock lightcurves in one simulation. One Figure 12: The \(R_{BLR}-L_{5100}\) relationship of the broadband PRM, SRM for H\(\alpha\) line (blue points) and SRM for H\(\beta\) line (black points) (Du and Wang, 2019). The red points represent the H\(\alpha\) time lags obtained with the ICCF-Cut for the \(g\) and \(r\) bands. The dash line is the \(R-L\) relation given by Panda et al. (2019). Figure 13: A comparison of the H\(\alpha\) and H\(\beta\) time lags for 8 AGNs with SRM results. The red points are from this work and the blue points are from SRM (Kaspi et al., 2000; Bentz et al., 2010; Grier et al., 2017). The solid line represents the one-to-one ratio. pair of lightcurves are those of the (pure) continuum band continuum and the line band continuum with the H\(\alpha\) emission line. This pair of lightcurves represent the ideal data to calculate the time lag with the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods. 
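The mock-lightcurve construction of Eqs. (9)-(11) can be sketched with a discrete-time damped random walk convolved with a top-hat transfer function. In the sketch below, `sigma` is parameterized as the asymptotic standard deviation of the process (a common reparametrization of the \(\sigma\) in Eq. (9)), the kernel is evaluated by direct averaging, and all parameter values are placeholders rather than the fitted DRW parameters of the four targets; the J-index selection step is omitted.

```python
import numpy as np

def drw_lightcurve(t, tau, sigma, mean, rng):
    """Damped random walk (Eq. 9) sampled exactly at the epochs t.
    Here sigma is the long-term standard deviation of the process."""
    c = np.empty_like(t)
    c[0] = mean + sigma * rng.standard_normal()
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)
        c[i] = mean + a * (c[i - 1] - mean) + sigma * np.sqrt(1.0 - a**2) * rng.standard_normal()
    return c

def lagged_line(t, cont, lag, width, amplitude):
    """Eq. (10) with the top-hat kernel of Eq. (11): amplitude times the mean of the
    continuum over [t - lag - width/2, t - lag + width/2] (edges clamped by np.interp)."""
    line = np.empty_like(cont)
    for i, ti in enumerate(t):
        window = np.linspace(ti - lag - width / 2.0, ti - lag + width / 2.0, 25)
        line[i] = amplitude * np.mean(np.interp(window, t, cont))
    return line

rng = np.random.default_rng(0)
t = np.arange(0.0, 150.0, 1.0)                       # ~daily cadence (placeholder)
cont = drw_lightcurve(t, tau=100.0, sigma=0.1, mean=1.0, rng=rng)
halpha = lagged_line(t, cont, lag=20.0, width=10.0, amplitude=0.25)
line_band = np.interp(t - 1.0, t, cont) + halpha     # 1-day lagged continuum + H-alpha
```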
To simulate the small inter-continuum time lag between the continuum emissions in the continuum and line bands and evaluate the influence of the H\(\beta\) line in the continuum band, the mock continuum band consists of the continuum and the 15 day lagged H\(\beta\) line, the mock line band consists of the 1 day lagged continuum and 20 day lagged H\(\alpha\) line for MCG +8-11-011. We simulate 200 pairs of the mock lightcurves which have similar parameters with MCG +8-11-011 such as the cadence, variabilities, H\(\alpha\) line strength in the line band, and H\(\beta\) line strength in the continuum band. Then we use three methods to calculate the time lags for each pair of the mock lightcurves and get the distributions of H\(\alpha\) time lag (see Figure 15). By comparing the left and right panels of Figure 15, we find that although the H\(\beta\) line and inter-continuum lag can slightly influence the lag distributions, most of the lags estimated by three methods are clustered around the true lag of 20 days, indicating that three methods are efficient for the broadband PRM and the influence of the H\(\beta\) emission line and the inter-continuum lag can be ignored for the H\(\alpha\) lag calculations. From the top panels of Figure 15, it can be noticed that the results of the ICCF-Cut and \(\chi^{2}\) methods have positive correlation, which means that we may obtain the similar but not independent lag distributions with the ICCF-Cut and \(\chi^{2}\) methods. Only relying on the consistency of the results with the ICCF-Cut and \(\chi^{2}\) methods we may obtain biased results. We still need other methods to confirm the results. We also apply the simulations with the H\(\beta\) emission line and the inter-continuum lag to other three Seyferts. To investigate the influences of the line lag and cadence, the initial H\(\alpha\) lag is set as 10 days for NGC 2617 as the comparison. From Figure 16, we find that for the sources with small lags, like NGC 2617, the dispersion and uncertainty are large. We need higher cadence to obtain the reliable results for AGNs with smaller BLR sizes. From Figure 17 we also find that for NGC 5548, some results of JAVELIN are much smaller than the setting lags. The reason may be because the J index of NGC 5548 (around 1.0) is smaller than other 3 Seyferts Figure 14: The flux error distributions of the line band for 4 Seyferts. The blue histograms represent the error distributions of the \(r\) band and the red lines represent their fitting skewed normal distribution. For NGC 5548, the grey histogram and red dash line represent the data of \(R\) band. (2.0\(\sim\)3.0), which means that the variability is smaller than other 3 Seyferts. It may explain the small H\(\alpha\) lag results of JAVELIN for NGC 5548(panel (f) in Figure 9). It also means that although the dispersion of the simulative lag distributions for JAVELIN is smaller than those of the ICCF-Cut and \(\chi^{2}\) methods, only relying on the result of JAVELIN may have problems. Based on the above simulations, we find that using a single method for the broadband PRM may not be very convincing in some cases. We need to use multiple methods to obtain the time lags. The consistency of the lag distributions from different methods can ensure the reliabilities of the results. JAVELIN also provides the DPmap model which can be used to calculate the continuum time lag between the continuum and line bands as well as the H\(\alpha\) line lag. 
We can compare the H\(\alpha\) time lag and continuum lag obtained from the DPmap with the results in Section 4 and the continuum lag in Fausnaugh et al. (2018). The DPmap model assumes that the \(r\) band has two compo Figure 15: The results of three methods for 200 mock lightcurves with the initial H\(\alpha\) time lag set as 20 days for MCG +8-11-011. The left panels show the simulations without H\(\beta\) line contribution in the continuum band, and the right panels show the simulations with H\(\beta\) line contribution in the continuum band and 1 day continuum lag in the line band. The colors of points represent the peak values of the cross-correlation coefficient \(r\) for ICCF-Cut. nents with different time lags. In the MCMC processes, we only request one component to have a \(-10\sim 10\) day time lag which can be regarded as the inter-continuum lag between the \(g\) and \(r\) bands. The results for MCG +8-11-011 are shown in Figure 18 as an example. The DPmap model for MCG +8-11-011 shows a similar H\(\alpha\) time lag distribution at \(18.4^{+2.4}_{-6.0}\) days as shown in Figure 6 and Table 2. The ratios between the line and continuum transfer function amplitude given by the DPmap model are close to the real H\(\alpha\) emission line to the continuum ratio observed in the line band. The continuum lag distribution of \(0.5^{+0.2}_{-0.3}\) days is slightly shorter than the inter-continuum lag between the \(g\) and \(r\) band for ICCF (about 1.7 days) in Fausnaugh et al. (2018). This may be because the DPmap model includes more parameters than the Pmap model for the same initial data, and the DPmap model is more sensitive to the data quality and tends to yield worse results than the Pmap model. Another reason is that the continuum lags for these local Seyfert 1 AGNs are very small, even smaller than the observational cadence. It needs very high cadence and accuracy to obtain the continuum lag and H\(\alpha\) lag Figure 16: The results of three methods for NGC 2617 and 3C 120. The left panels show the results of the ICCF-Cut and \(\chi^{2}\) methods, and the right panels show the results of the ICCF-Cut and JAVELIN. The colors of points represent the peak value of the cross-correlation coefficient \(r\) for ICCF-Cut. simultaneously from the line band. The consistency of H\(\alpha\) lag results of DPmap model with the results shown in Figure 6 and Table 2 indicates that the influence of the continuum lag in the line band is insignificant and can be ignored for these Seyfert 1 galaxies. According to the DRW model, the transfer function amplitude (which can be represented by the statistic mean of the H\(\alpha\) line contribution in the line band) changes very little, but there is a time lag between the H\(\alpha\) and the continuum, so the contribution of the H\(\alpha\) in the line band is not constant. Although we use the minimum function in Equation (4) to make sure the calculated H\(\alpha\) lightcurve contains all the contributions of the H\(\alpha\) emission line, it still needs to evaluate the change of H\(\alpha\) ratio for the broadband PRM. According to the H\(\alpha\) ratio in the line band obtained from the single-epoch spectrum (as shown in Table 1), we change the H\(\alpha\) ratio from 0.2 to 0.3 with a bin size of 0.01 for MCG +8-11-011, 3C 120 and NGC 5548, and from 0.1 to 0.2 with a bin size of 0.01 for NGC 2617. 
For each H\(\alpha\) ratio value, we use the ICCF-Cut to calculate the lag distribution with FR/RSS, then combine all the lag distributions to Figure 17: Same as Figure 16 but for NGC 5548. The two top panels represent the results of the \(g\) and \(r\) bands. The two bottom panels represent the results of the \(B\) and \(R\) bands. calculate the median value of the lag with highest posterior density. In Figure 19, MCG +8-11-011, NGC 2617 and 3C 120 show consistent lag distributions with the results presented in Section 4. For NGC 5548, because of the larger uncertainties in the initial photometric data, the median lag is much larger than the results shown in Table 2, but the peak value \(13.0^{+48.0}_{-3.1}\) days of the lag distribution for NGC 5548 is very close to the previous results. These consistencies indicate that using the minimum function in Equation (4) is efficient for the H\(\alpha\) broadband PRM. ## 6 Summary By assuming that the continuum flux in the line band equals to a fraction of that in the continuum band, we use the modified method of ICCF (ICCF-Cut) to calculate the H\(\alpha\) emission line time lags from the lightcurves in the continuum and line broadbands. We also consider the host galaxy contribution to the broadbands and the change of \(\alpha\) value for AGNs with longer observational duration to improve the lag results. The lightcurves of extracted H\(\alpha\) are similar with the lagged continuum band lightcurves and the lagged simultaneous SRM H\(\beta\) lightcurves. To evaluate the influence of the errors of extracted H\(\alpha\) lightcurves and the feasibility of the H\(\alpha\) broadband PRM with large uncertainties, we apply the \(\chi^{2}\) method to weigh of the points in lightcurves by uncertainties. By combining the results of the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods, we find that the derived H\(\alpha\) time lags for 4 Seyfert 1 galaxies are consistent with the \(R-L\) relationship obtained from the SRM. These AGNs show slightly larger H\(\alpha\) lags from the broadband PRM than the H\(\beta\) lags from SRM, which is consistent with previous works and theoretical predictions that the BLR size of the H\(\alpha\) line is usually larger than that of the H\(\beta\) emission line. To confirm our results further, we use the DRW model to simulate the mock lightcurves which have similar parameters with our selected AGNs. By calculating the time lags of these mock lightcurves, we evaluate the reli Figure 18: The JAVELIN DPmap model results for MCG +08-11-011. Left panels show the lag distributions and right panels show the line or continuum strength ratio distributions. The upper left panel represents the H\(\alpha\) lag and the bottom left panel represents the inter-continuum lag between the \(g\) and \(r\) bands. The initial lag limits are set to \(-50\sim 50\) days for the broad emission line and \(-10\sim 10\) days for the inter-continuum lag. The DPmap model lag is \(18.4^{+2.4}_{-6.0}\) days and the inter-continuum lag is \(0.5^{+0.2}_{-0.3}\) days. ability of the time lags obtained from the H\(\alpha\) broadband PRM and the influences of the H\(\beta\) line in the continuum band. By comparing the results of JAVELIN DPmap model with previous results, we find that the continuum lag in the line band can be ignored in broadband PRM for these local Seyfert 1 galaxies whose continuum lags are very small. 
By calculating the H\(\alpha\) lags with different H\(\alpha\) ratios, we find that using the minimum function in Equation (4) is efficient for the H\(\alpha\) broadband PRM. From the comparisons of the results from the H\(\alpha\) broadband PRM and SRM and the results of simulations, we find that the consistency of the ICCF-Cut, JAVELIN and \(\chi^{2}\) methods can ensure the reliability of the H\(\alpha\) line lags obtained from the broadband PRM. However, we must admit that all 4 Seyfert 1 galaxies have high quality broadband lightcurves with daily/sub-daily cadences, which enables us to get reliable H\(\alpha\) lags. It is difficult to do so for other AGNs with poor quality broadband data. We expect that these broadband PRM methods can be used to study the BLR sizes and BH masses of a large sample of AGNs in the era of large multi-epoch and high cadence photometric sky surveys such as ZTF (Masci et al., 2019) and LSST (LSST Science Collaboration et al., 2017) in the near future. _Acknowledgements_. We thank the anonymous referee for helpful suggestions. We are thankful for the support of the National Science Foundation of China (11721303, 11927804, and 12133001). We acknowledge the science research grant from the China Manned Space Project with No. CMS-CSST-2021-A06. We acknowledge the support of the staff of the Xinglong 2.16m telescope. This work was partially Supported by the Open Project Program of the CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences. This work makes use of observations from the Las Cumbres Observatory global telescope network. This work makes use of on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. This work is based partly on observations obtained with the Apache Point Observatory 3.5 m telescope, which is owned and operated by the Astrophysical Research Consortium. This research has made use of the NASA/IPAC Extra- galactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. Figure 19: The ICCF-Cut lag distributions with different values of \(\alpha\) for 4 Seyfert 1 galaxies. The red line represents the median value of the lag distribution. The H\(\alpha\) lags are \(18.0^{+6.5}_{-4.5}\) days for MCG +8-11-011, \(7.8^{+4.2}_{-5.7}\) days for NGC 2617, \(17.0^{+3.8}_{-3.1}\) days for 3C 120 and \(26.0^{+35.2}_{-16.1}\) days for NGC 5548 respectively. For NGC 5548, the red dash line represents the peak value of the lag distribution, which is \(13.0^{+48.0}_{-3.1}\) days.
2307.02291
Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising
Recent one-stage transformer-based methods achieve notable gains in the Human-object Interaction Detection (HOI) task by leveraging the detection of DETR. However, the current methods redirect the detection target of the object decoder, and the box target is not explicitly separated from the query embeddings, which leads to long and hard training. Furthermore, matching the predicted HOI instances with the ground-truth is more challenging than object detection, simply adapting training strategies from the object detection makes the training more difficult. To clear the ambiguity between human and object detection and share the prediction burden, we propose a novel one-stage framework (SOV), which consists of a subject decoder, an object decoder, and a verb decoder. Moreover, we propose a novel Specific Target Guided (STG) DeNoising training strategy, which leverages learnable object and verb label embeddings to guide the training and accelerate the training convergence. In addition, for the inference part, the label-specific information is directly fed into the decoders by initializing the query embeddings from the learnable label embeddings. Without additional features or prior language knowledge, our method (SOV-STG) achieves higher accuracy than the state-of-the-art method in one-third of training epochs. The code is available at this https://github.com/cjw2021/SOV-STG.
Junwen Chen, Yingcheng Wang, Keiji Yanai
2023-07-05T13:42:31Z
http://arxiv.org/abs/2307.02291v2
Focusing on what to decode and what to train: Efficient Training with HOI Split Decoders and Specific Target Guided DeNoising ###### Abstract Recent one-stage transformer-based methods achieve notable gains in the Human-object Interaction Detection (HOI) task by leveraging the detection of DETR. However, the current methods redirect the detection target of the object decoder, and the box target is not explicitly separated from the query embeddings, which leads to long and hard training. Furthermore, matching the predicted HOI instances with the ground-truth is more challenging than object detection, simply adapting training strategies from the object detection makes the training more difficult. To clear the ambiguity between human and object detection and share the prediction burden, we propose a novel one-stage framework (SOV), which consists of a subject decoder, an object decoder, and a verb decoder. Moreover, we propose a novel Specific Target Guided (STG) DeNoising training strategy, which leverages learnable object and verb label embeddings to guide the training and accelerate the training convergence. In addition, for the inference part, the label-specific information is directly fed into the decoders by initializing the query embeddings from the learnable label embeddings. Without additional features or prior language knowledge, our method (SOV-STG) achieves higher accuracy than the state-of-the-art method in one-third of training epochs. ## 1 Introduction Recent Human-Object Interaction (HOI) detection studies are mainly built on the object detection framework. The most widely used datasets, HICO-DET [2] and V-COCO [10], share the same object categories as the MS-COCO dataset [22]. Following the definition of the HOI instance \(\{B_{s},(B_{o},O),V\}\), which is a tuple of the subject (human) box \(B_{s}\), the object box \(B_{o}\) with class \(O\), and the verb class \(V\), detecting methods are split into one-stage and two-stage methods. In the beginning, a multi-stream architecture built on top of a CNN-based object detector is commonly adopted in the two-stage methods [28, 2, 8, 9]. Multi-stream methods resolve the HOI detection problem in split parts and have a great potential to improve. By introducing the human pose information [16, 41, 19], the language priors [7, 41], or graph structure [37, 32, 36], CNN-based two-stage methods achieve considerable accuracy. On the other hand, CNN-based one-stage methods [20, 33, 42] leverage interaction points to detect possible interaction between the subject and object and achieve promising performance. The attention mechanism of the transformer is more flexible than the CNN architecture in handling the relationships of features at different locations in the feature map and extracting global context information [6]. At first, the transformer-based methods [4, 14, 31, 45] show the advantage of the atten Figure 1: **End-to-end training pipeline of our SOV-STG.** Our SOV framework splits the decoding process into three parts for each element of the HOI instance. Our STG training strategy efficiently transfers the ground-truth information to label embeddings through additional denoising queries. tion mechanism by adopting DETR [1] in the HOI detection task. QPIC [31] and HOITrans [45] follow the same training pipeline as the DETR by viewing the HOI detection problem as a set prediction problem. 
Without the matching process in one-stage and two-stage CNN-based methods, QPIC and HOITrans adopt a compact encoder-decoder architecture to predict the HOI instances directly. However, the compact architecture with a single decoder binds the feature of the subject and object localization and verb recognition together. As a result, even leveraging the DETR model pretrained on the COCO dataset, the finetuning for QPIC and HOITrans still needs 150 and 250 epochs. Following one-stage methods [12, 21, 34, 35, 36, 43] improve the single decoder design by disentangling the object localization and the verb recognition in a cascade manner. Specifically, GEN-VLKT [21] improves the cascade decoder design of CDN [36] by introducing two isolated queries of humans and objects in an instance decoder and fusing the human and object features in an interaction decoder. However, the subject and object detection are still tangled in the instance decoder, and the spatial information is implicitly represented by the query embeddings. Consequently, the training of GEN-VLKT is still hard and slow, which needs 90 epochs. On the other hand, the two-stage transformer-based methods [24, 38, 39] stack additional interaction pair detection modules on top of the object decoder without modifying the subject and object detection part. Thus, compared with one-stage methods, two-stage methods can focus on filtering the interaction pairs and achieve higher accuracy than the one-stage transformer-based methods with fewer training epochs. To relieve the training burden caused by the ambiguity of the query and decoding process in recent one-stage methods [21, 31, 36], our motivation can be summarized in two aspects: 1) how to focus on decoding specific targets and 2) how to guide the training with specific priors. For the first aspect, we revisit the decoding pipeline of the transformer-based method. Recent one-stage methods [21, 31, 36] redirect the decoding target of the decoder pretrained from the object detection task, which leads to slow training convergence. To this end, as shown in Figure 1, according to the definition of the HOI instance, we propose a new framework (SOV), which fully splits the decoding process into three parts: **S**ubject detection, **O**bject detection, and **V**erb recognition. Specifically, the object decoder, subject decoder, and verb decoder are assigned to decode the object, subject, and verb class, respectively. Furthermore, the spatial information (anchor boxes) and label information (label queries) are explicitly disentangled and fed into the decoders to guide the feature extraction. By doing so, each decoder can focus on specific targets and share the training burden. In Figure 2, we compare the training convergence with recent SOTA methods. From the results, SOV takes advantage of the balanced decoding pipeline and the training converges faster than the current SOTA methods. Moreover, the object detection part of SOV is the same as the pretrained detection model, which makes the training more stable and achieves a notable high accuracy at the early stage of the training. For the second aspect, we focus on how to obtain specific label priors to initialize the label queries for HOI detection with an effective training strategy. As shown in Figure 1, we introduce a novel **S**pecific **T**arget **G**uided (STG) denoising training strategy for HOI detection, which constructs a connection between the ground-truth label information and predefined label priors (embeddings) to guide the training. 
With the specific priors, the queries of the inference part are able to be represented as the weighted sum of the label embeddings by the learnable coefficient matrices. Moreover, we leverage the verb label embeddings to guide the verb recognition in the verb recognition part to improve the verb representation learning capabilities. In Figure 2, we illustrate the training convergence of SOV and QPIC with STG, and the results show that our STG strategy effectively accelerates the training convergence before the learning rate drops and finally improves the performance. In summary, our contributions are mainly in two aspects: (1) we propose a novel one-stage framework (SOV) to enable the model to concentrate on what to detect and what to recognize; (2) we propose a novel training strategy (STG) to allow the model to learn label-specific information from the ground-truth. With the SOV framework design and the STG training strategy, we achieve a new state-of-the-art performance on the HOI detection benchmark with 3x fewer training epochs (30 epochs on HICO-DET) than the current state-of-the-art method. ## 2 Related Work Predicting interactions with specific priors.For one-stage transformer-baesd methods, how to extract the interaction information under a predefined representation of the interaction region is a key issue. Recent studies [3, 27, 15] Figure 2: Comparison of the training convergence curves of the state-of-the-art methods on the HICO-DET dataset. The mAP is evaluated under the default setting of HICO-DET. attempt to leverage the deformable attention mechanism [44] to guide the decoding by reference points. QAHOI [3] and FGAHOI [27] view the deformable transformer decoder's reference point as the HOI instance's anchor and use the anchor to guide the subject and object detection. However, QAHOI and FGAHOI still use the HOI query embeddings to predict all the elements of the HOI instance. MSTR [15] proposes to use the subject, object, and context reference points to represent the HOI instance and predict the subject, object, and verb based on the reference points. The context reference point is defined as the center of the subject and object reference point, which follows the idea of the interaction point [20, 33, 42]. Nevertheless, the query embedding in MSTR is used to predict the final boxes and labels of the HOI instance and still suffers from ambiguous representations. Besides, QAHOI, FGAHOI, and MSTR use x-y coordinates as the spatial priors to guide the decoding, while the box size priors are not considered. In contrast, our SOV explicitly defines the subject and object anchor boxes as the spatial priors and refines the anchor boxes layer by layer. Moreover, for the verb recognition part of our SOV, we introduce a verb box, which is directly generated from the subject and object boxes, to guide the verb feature extraction. Effective learning with ground-truth guided.For the object detection methods of the DETR family [1, 23, 44], DN-DETR [18] shows that using the ground-truth information to guide the training can accelerate the training convergence and improve the performance. In the HOI detection task, HQM [40] encodes the shifted ground-truth boxes as hard-positive queries to guide the training. However, the ground-truth label information is not considered in HQM. DOQ [29] introduces the oracle queries to implicitly encode the ground-truth boxes of human-object pairs and the object labels, and guide the decoder to learn to reconstruct the ground-truth HOI instances. 
However, DOQ implicitly encodes the same number of oracle queries as the ground-truth with learnable weights and only uses the learned weights during training. Without a complete and clear usage of ground-truth information, both HQM and DOQ still need 80 epochs to converge. Different from DOQ and HQM, we introduce denoising queries to encode the ground-truth information and guide the training. Moreover, our STG is used to learn the label priors for our model, and we intuitively use a "select" and a "weighted sum" approach to transfer the ground-truth label information to the denoising queries and inference queries, respectively. ## 3 HOI Efficient Decoding and Training Figure 3 shows the overall architecture of our framework. First, a feature extractor is used to extract the multi-scale global features. Then, the global features are fed into the object and subject decoder with learnable anchor boxes and label queries to predict pairs of subjects and objects. The label queries are initialized by the label embeddings and learnable coefficient matrices. The STG training strategy (In Sec 3.2) is used to learn the label embeddings with the ground-truth information. Finally, pairs of subject and object embeddings and boxes are fed into the verb recognition part to predict the verb classes. The **S**ubject-**O**bject (S-O) attention module and **A**daptive **S**hifted **M**inimum **B**ounding **R**ectangle (ASMBR) used to fuse the subject and object embeddings and generate the verb box for the verb decoder are introduced in Sec 3.1. Figure 3: **The overall framework of our SOV-STG.** SOV is composed of the feature extractor and SOV decoders. The label embeddings \(\mathbf{t}_{o}\) and \(\mathbf{t}_{v}\) learned by our STG training strategy are used to initialize the label queries \(\mathbf{Q}_{ov}\) with learned coefficient matrices \(\mathbf{A}_{o}\) and \(\mathbf{A}_{v}\). The subject and object decoder leverage the learnable anchor boxes \(B_{s}\) and \(B_{v}\) to predict the subject and object boxes, and the verb boxes \(B_{v}\) are generated by the adaptive shifted MBR according to the subject and object boxes. ### HOI split decoders To clarify the decoding target, the design of the split decoders is crucial for our framework. Different from recent one-stage transformer-based methods [21, 31, 36], which use a single decoder to detect objects and subjects, We split the detection part into two decoders, the subject decoder, and the object decoder, and share the prediction burden of the verb decoder. Moreover, we explore the design of the multi-branch feature fusion module and a new way to represent the interaction region for the verb decoder. **Subject Decoder and Object Decoder.** The same as recent deformable transformer based methods [3, 15, 27], we leverage a hierarchical backbone and deformable transformer encoder [44] as the feature extractor to extract the multi-scale global features \(\mathbf{f}_{g}\in\mathbb{R}^{N_{g}\times D}\), where \(N_{g}\) is the number of the total pixels of the multi-scale feature maps and \(D\) is the hidden dimension of the embeddings in the whole transformer architecture. For the decoders, we adopt an improved deformable transformer decoder proposed in the recent object detection method [23], which is able to process the label queries with the constraint of anchor boxes. As shown in Figure 3, the global features are fed into the subject and object decoder with the learnable anchor boxes. 
To maintain the detection capability of the object detector, the object decoder with the feed-forward heads is the same as the one trained in the detection task. Furthermore, we clone the object decoder to initialize the subject decoder and alleviate the learning burden of the subject decoder. The subject and object decoders both use the label queries \(\mathbf{Q}_{ov}\in\mathbb{R}^{N_{q}\times D}\) as the input queries, where \(N_{q}\) is the number of queries and \(D\) is the hidden dimension of the embeddings. With the same query input, the pair of subject embeddings \(\mathbf{E}_{s}\in\mathbb{R}^{N_{q}\times D}\) and object embeddings \(\mathbf{E}_{o}\in\mathbb{R}^{N_{q}\times D}\) can share the same prior label information. Besides, the subject and object embeddings with the corresponding learnable anchor boxes \(\mathbf{B}_{s}\) and \(\mathbf{B}_{o}\) are updated layer by layer during decoding. Then, the object embeddings from the object decoder are used to predict the object classes, and the subject and object boxes are used to generate the verb boxes \(\mathbf{B}_{v}\). Next, the object and subject embeddings are fed into the S-O attention module to fuse the verb embeddings. Finally, the verb boxes generated from the subject and object boxes, together with the verb embeddings, are fed into the verb recognition part to predict the verb classes.

**Verb Decoder with S-O attention module.** As shown in Figure 3, the verb decoder receives its input queries from the outputs of the subject and object decoders. For the label queries used in the verb decoder, we introduce an S-O attention module to fuse the subject and object embeddings in a multi-layer manner. In Figure 4, we illustrate the fusion process of our S-O attention module. Given the subject embedding \(\mathbf{e}_{s_{i}}\in\mathbb{R}^{N_{q}\times D}\) and object embedding \(\mathbf{e}_{o_{i}}\in\mathbb{R}^{N_{q}\times D}\) from the \(i\)-th layer (\(i>1\)), first, we sum the subject and object embeddings to obtain an intermediate embedding \(\mathbf{e}_{so_{i}}=(\mathbf{e}_{o_{i}}+\mathbf{e}_{s_{i}})/2\). Then, to guide the verb recognition with our predefined label priors, the intermediate embeddings \(\mathbf{e}_{so_{i}}\) are used to absorb the prior knowledge from the learnable verb label embeddings \(\mathbf{t}_{v}\) (in Sec 3.2). Specifically, we use \(\mathbf{e}_{so_{i}}\) as the query and \(\mathbf{t}_{v}\) as the key and value to perform a cross-attention operation. Furthermore, we introduce a bottom-up path to amplify the information from the bottom to the top layer. Finally, the verb embedding \(\mathbf{e}_{v_{i}}\) after the bottom-up path can be defined as:

\[\begin{split}\mathbf{e}_{v_{i}}=&((\text{CrossAttn}(\mathbf{e}_{so_{i-1}},\mathbf{t}_{v})+\mathbf{e}_{so_{i-1}})\\ &+(\text{CrossAttn}(\mathbf{e}_{so_{i}},\mathbf{t}_{v})+\mathbf{e}_{so_{i}}))/2\end{split} \tag{1}\]

Then, the verb embeddings from the last layer are fed into the verb decoder to further extract global semantic information based on the global feature \(\mathbf{f}_{g}\) and the verb boxes.

Figure 4: **Illustration of S-O attention module.** The S-O attention consists of three parts: a sum operation to fuse the subject and object embeddings, a cross-attention layer to integrate the verb priors, and a bottom-up path to amplify the intermediate embeddings.
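To make the fusion in Equation (1) concrete, the following PyTorch-style sketch shows one possible implementation of the S-O attention; the class name, tensor shapes, and the choice of `nn.MultiheadAttention` are illustrative assumptions on our part rather than the released SOV-STG code.

```python
# Hedged sketch of the S-O attention fusion in Eq. (1); not the official SOV-STG code.
import torch
import torch.nn as nn

class SOAttention(nn.Module):
    """Fuses per-layer subject/object embeddings with the verb label priors t_v."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def _fuse(self, e_s, e_o, t_v):
        # Intermediate embedding e_so = (e_o + e_s) / 2, enriched with the verb
        # label embeddings via cross-attention (query = e_so, key = value = t_v).
        e_so = 0.5 * (e_s + e_o)
        attn_out, _ = self.cross_attn(query=e_so, key=t_v, value=t_v)
        return attn_out + e_so  # residual term CrossAttn(e_so, t_v) + e_so

    def forward(self, subj_layers, obj_layers, t_v):
        # subj_layers, obj_layers: lists of (B, N_q, D) embeddings, one per decoder layer;
        # t_v: (B, C_v, D) verb label embeddings expanded over the batch.
        fused = [self._fuse(e_s, e_o, t_v) for e_s, e_o in zip(subj_layers, obj_layers)]
        # Bottom-up path of Eq. (1): average each layer with the one below it.
        e_v = fused[0]
        for i in range(1, len(fused)):
            e_v = 0.5 * (fused[i - 1] + fused[i])
        return e_v  # last-layer verb embeddings, fed to the verb decoder
```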
**Verb box represented by ASMBR.** To constrain the verb feature extraction with positional information in the verb decoder, as shown in Figure 5, we introduce a novel representation, the adaptive shifted minimum bounding rectangle (ASMBR), for initializing the verb box. Unlike the previous CNN-based method UnionDet [13], which learns a union box to guide the verb recognition, the verb box of SOV is directly initialized from the last-layer subject and object boxes of the subject and object decoders. To balance the attention between the subject and object, we shift the center of the MBR to the center of the subject and object boxes. Considering that the boxes may overlap with each other, we shrink the width and height of the MBR according to the spatial relationship between the two boxes. With the shift and adapt operations, the verb box can constrain the interaction region for the sampling points of the deformable attention and extract interaction information from specific subject and object pairs. Finally, given the last-layer subject box \(\mathbf{B}_{s}=(x_{s},y_{s},w_{s},h_{s})\) and object box \(\mathbf{B}_{o}=(x_{o},y_{o},w_{o},h_{o})\), where \((x,y)\) indicates the box center, the verb box is defined as:

\[\mathbf{B}_{v}=\left(\frac{x_{s}+x_{o}}{2},\frac{y_{s}+y_{o}}{2},w_{v},h_{v}\right) \tag{2}\]

\[w_{v}=\frac{w_{s}+w_{o}}{2}+|x_{s}-x_{o}|,\quad h_{v}=\frac{h_{s}+h_{o}}{2}+|y_{s}-y_{o}| \tag{3}\]

Figure 5: **Illustration of ASMBR.** The Shifted MBR is generated by shifting the center of the MBR. The Adaptive Shifted MBR is generated by shrinking the width and height of the Shifted MBR.

### Specific Target Guided DeNoising Training

The object and verb labels are the targets of HOI detection, so the two label embeddings, which are used to initialize the label queries, can be viewed as the specific target priors. Since the denoising queries are generated from the specific target priors and learned during the denoising training, we call our training strategy **S**pecific **T**arget **G**uided (STG) denoising. In this subsection, we first introduce the definition and usage of the label priors, then we introduce the STG denoising training strategy.

**Label-specific Priors.** To explicitly equip the decoders with prior label knowledge and disentangle the training and decoding targets, as shown in Figure 1, two kinds of learnable label embeddings are used to initialize the query embeddings for the SOV decoders. Specifically, in Figure 3, we define the object label embeddings \(\mathbf{t}_{o}\in\mathbb{R}^{C_{o}\times D}\) as the object label priors, which consist of \(C_{o}\) vectors with \(D\) dimensions, where \(C_{o}\) is the number of object classes. Similarly, the verb label embeddings \(\mathbf{t}_{v}\in\mathbb{R}^{C_{v}\times D}\) are defined as the verb label priors. With the object label and verb label priors, we first initialize the object label query embeddings \(\mathbf{q}_{o}\in\mathbb{R}^{N_{q}\times D}\) and the verb label query embeddings \(\mathbf{q}_{v}\in\mathbb{R}^{N_{q}\times D}\) by linearly combining the object label and verb label embeddings with two learnable coefficient matrices \(\mathbf{A}_{o}\in\mathbb{R}^{N_{q}\times C_{o}}\) and \(\mathbf{A}_{v}\in\mathbb{R}^{N_{q}\times C_{v}}\), respectively. Then, we add the object and verb label queries to obtain the inference query embeddings \(\mathbf{q}_{ov}\in\mathbb{R}^{N_{q}\times D}\).
The initialization of \(\mathbf{q}_{o}\), \(\mathbf{q}_{v}\), and \(\mathbf{q}_{ov}\) is defined as follows: \[\mathbf{q}_{o}=\mathbf{A}_{o}\mathbf{t}_{o},\quad\mathbf{q}_{v}=\mathbf{A}_{v}\mathbf{t}_{v} \tag{4}\] \[\mathbf{q}_{ov}=\mathbf{q}_{o}+\mathbf{q}_{v} \tag{5}\] Different from DN-DETR [18] and DOQ [29], which learn an encoding weight to generate queries only used in training, we use the label embeddings both in the denoising and inference parts and enable the inference part to obtain the input query with label-specific information from the beginning. Learning Priors with DeNoising TrainingIn Figure 6, we show the initialization of the DN (DeNoising) query embeddings and visualize the process of adding noise to a ground-truth HOI instance. Given the ground-truth object label set \(\mathbf{O}_{gt}=\{\mathbf{o}_{i}\}_{i=1}^{K}\) and verb label set \(\mathbf{V}_{gt}=\{\mathbf{v}_{i}\}_{i=1}^{K}\) of an image, where \(\mathbf{o}_{i}\) and \(\mathbf{v}_{i}\) are the labels of the object and verb classes, \(K\) is the number of ground-truth HOI instances, we generate \(N_{p}\) groups of noised labels for each of the ground-truth HOI instances. For the \(k\)-th ground-truth HOI instance, the noised object labels are obtained by randomly flipping the ground-truth index of the object label \(\mathbf{o}_{k}\) to another object class index. Because the verb label \(\mathbf{v}_{k}\) consists of co-occurrence ground-truth classes, to keep the co-occurrence ground-truth indices appearing in the noised verb labels, we randomly flip the other indices of the ground-truth verb label to generate the noised verb labels. Two flipping rate hyper-parameters \(\eta_{o}\in(0,1)\) and \(\eta_{v}\in(0,1)\) are used to control the percentage of the noised object labels and verb labels, respectively. Besides, a verb class flipping rate hyper-parameter \(\lambda_{v}\in(0,1)\) is used to control the class-wise flipping rate in the verb labels. Next, we introduce a "select" approach to "encode" the noised labels to DN query embeddings. Specifically, we directly compose the object DN query Figure 6: **Illustration of adding noise to a ground-truth HOI instance.** The initialization consists of two parts, the object label and the verb label DN queries initialization. The final DN query embeddings \(\mathbf{q}_{k}^{dn}\) are concatenated with the object label DN queries \(\mathbf{q}_{k}^{\bar{b}}\) and the verb label DN queries \(\mathbf{q}_{k}^{\bar{b}}\). embeddings \(\mathbf{q}_{k}^{\tilde{\phi}}\in\mathbb{R}^{N_{p}\times D}\) by selecting class-specific vectors \(\{\tilde{\mathbf{\sigma}}_{k}^{j}\}_{j=1}^{N_{p}}\) from the object label embeddings \(\mathbf{t}_{o}\) according to the indices of the noised object labels. For encoding of the noised verb labels, we select and sum the class-specific vectors to construct multi-class vectors \(\{\mathbf{\tilde{v}}_{k}^{j}\}_{j=1}^{N_{p}}\), and compose the verb DN query embeddings \(\mathbf{q}_{k}^{\tilde{\psi}}\in\mathbb{R}^{N_{p}\times D}\). Finally, we concatenate the object DN query embeddings \(\mathbf{q}_{k}^{\tilde{\phi}}\) and verb DN query embeddings \(\mathbf{q}_{k}^{\tilde{\psi}}\) to form the DN query embeddings \(\mathbf{q}_{k}^{dn}\in\mathbb{R}^{2N_{p}\times D}\) for the denoising training. Since the specific target priors learned by the denoising training are also used to guide the inference during end-to-end training, our STG can accelerate the training convergence and improve the inference performance at the same time. 
In addition, motivated by the box denoising strategy of DN-DETR [18], we scale and shift pairs of ground-truth subject and object boxes to generate \(2N_{p}\) groups of noised anchor boxes for corresponding DN query embeddings. ### Training and Inference As shown in Figure 1, our proposed method SOV-STG is trained in an end-to-end manner. For the inference queries \(\mathbf{Q}_{ov}\), the Hungarian algorithm [17] is used to match the ground-truth HOI instances with the predicted HOI instances, and the matching cost and the training loss of predicted HOI instances follow the previous deformable transformer based method [3]. For the DN queries \(\mathbf{Q}_{dn}\), the ground-truth indices used in query initialization are used to match the predicted HOI instances, and the loss function is the same as the inference queries. With the basic concept that the same ground-truth label flipping rate is difficult for the model to denoise at the beginning of the training but becomes acceptable during the training, we further improve the denoising strategy by introducing a dynamic DN scale factor \(\gamma\in(0,1)\) to control the object label flipping rate \(\eta_{o}\) and the verb label denoising rate \(\eta_{v}\) according to the training epochs. With the dynamic DN scale strategy, the label flipping rate \(\eta\) will be set to \(\gamma\cdot\eta\) at the beginning of the training and linearly increase to \(\eta\) during the training. As the label embeddings used in the denoising training part are also the specific target priors of the inference part, SOV-STG uses all of the parameters in training and inference. ## 4 Experiments We evaluate our proposed SOV-STG on the HICO-DET [2] and V-COCO [10] datasets to compare with current SOTA methods and conduct extensive ablation studies to analyze the contributions of each component and show the effectiveness of our proposed method. ### Experimental Settings Dataset and Metric.The HICO-DET [2] dataset contains 38,118 images for training and 9,658 images for the test. The 117 verb classes and 80 object classes in HICO-DET form 600 HOI classes. According to the number of HOI instances appearing in the dataset, the HOI classes are divided into three categories: _Full_, _Rare_, and _Non-Rare_. Moreover, considering HOI instances including or not including the unknown objects, the evaluation of HICO-DET is divided into two settings: Default and Known Object. The V-COCO [10] dataset contains 5,400 images for training and 4,946 images for the test. In V-COCO, 80 object classes and 29 verb classes are annotated, and two scenarios are considered: scenario 1 with 29 verb classes and scenario 2 with 25 verb classes. We follow the standard evaluation [2] and report the mAP scores. Implementation Details.We use the DAB-Deformable-DETR trained on COCO [22] to initialize the weight of the feature extractor, the subject decoder, and the object decoder. The feature extractor consists of a ResNet-50 [11] backbone and a 6-layer deformable transformer encoder. Similar to GEN-VLKT [21], we implement three variants of SOV-STG by adjusting the backbone and the number of layers in all the decoders, which are denoted as **SOV-STG-S** with ResNet-50 and 3-layer decoders, **SOV-STG-M** with ResNet-101 and 3-layer decoders, and **SOV-STG-L** with ResNet-101 and 6-layer decoders. The hidden dimension of the transformer is \(D=256\), and the number of the query is set to \(N_{q}=64\). For the DN part, \(2N_{p}=6\) groups of noised labels are generated for each ground-truth HOI instance. 
The dynamic DN scale is set to \(\gamma=\frac{2}{3}\), and we define the maximum denoising level by setting the noising rate of the box to \(\delta_{b}=0.4\), the object label flipping rate to \(\eta_{o}=0.3\), the verb denoising rate to \(\eta_{v}=0.6\), and the verb label flipping rate to \(\lambda_{v}=0.6\). We train the model with the AdamW optimizer [26] with a learning rate of 2e-4 (except for the backbone, which is 1e-5 for HICO-DET, 2e-6 for V-COCO) and a weight decay of 1e-4. The batch size is set to 32 (4 images per GPU), and the training epochs are 30 (learning rate drops at the 20th epoch), which is one-third of the GEN-VLKT [21], and one-fifth of the QPIC [31] and QAHOI [3]. All of the experiments are conducted on 8 NVIDIA A6000 GPUs. ### Comparison to State-of-the-Arts In Table 1, we compare our proposed SOV-STG with the recent SOTA methods on the HICO-DET dataset. Our SOV-STG-S with ResNet-50 backbone achieves 33.80 mAP on the _Full_ category of the Default setting. Compared with the transformer-based one-stage methods, QAHOI and MSTR, which are based on the reference point, SOV-STG benefits from the anchor box priors and label priors and achieves 7.62 (29.11%) and 2.63 (8.44%) mAP improvements, respectively. Note that, without any extra language prior knowledge [30], SOV-STG-M outperforms GEN-VLKT-M by 0.26% in one-third of the training epochs. Our proposed framework and learning strategy close the gap in training efficiency between transformer-based one-stage and two-stage methods. As a result, compared with UPT, SOV-STG-L achieves 3.42 (10.56%) mAP improvements with only 10 more training epochs. Since our SOV-STG explicitly makes full use of the ground-truth information, compared with DOQ, which also uses ground-truth to guide the training, SOV-STG-S achieves 1.05% mAP improvement with less than half of the training epochs of DOQ. Furthermore, our best model SOV-STG-Swin-L with Swin-Large [25] backbone achieves a new SOTA performance of 43.35 mAP, which outperforms FGAHOI-Swin-L by 14.23%. Similarly, in Table 2, SOV-STG-M achieves 63.7 mAP on \(AP_{role}^{\mathrm{S1}}\) and surpasses UPT and GEN-VLKT-L by 3.92% and 0.63%, respectively. ### Ablation Study We conduct all the ablation experiments on the HICODET dataset with the SOV-STG-S model, and if not explicitly noticed, the same training setting is used as the training of our SOTA model. **Contributions of proposed modules.** SOV-STG is composed of flexible decoding architecture and training strategies. To clarify the contributions of each proposed module, in Table 3, we remove the proposed modules one by one and conduct ablation studies on the HICO-DET dataset. The row of (5) indicates the experiment removing the STG strategy and the S-O attention module is degraded to a sum fusion module which is similar to GEN-VLKT [21]. From the result, the STG strategy and S-O attention improve the performance by 5.96% on the _Full_ category. Moreover, without the STG strategy, our framework also achieves a significant improvement over QPIC (ResNet-50) by 9.74% with one-fifth of the training epochs. Next, in (4), we remove the verb decoder in (5). As the result, comparing (4) with (5), without the verb decoder, the performance drops by 4.01%. 
\begin{table} \begin{tabular}{c|c c c|c c} \hline \multicolumn{3}{c|}{Denoising Strategies} & \multicolumn{3}{c}{Default} \\ \hline \# & Box & Obj & Verb & _Full_ & _Rare_ & _Non-Rare_ \\ \hline \hline (1) & & & 33.29 & 28.28 & 34.40 \\ (2) & ✓ & & & 33.27 & 29.07 & 34.53 \\ (3) & ✓ & ✓ & & 33.28 & 28.57 & 34.69 \\ (4) & ✓ & ✓ & & 33.93 & 28.82 & 34.76 \\ (5) & ✓ & ✓ & & 33.51 & 29.05 & 34.84 \\ (6) & ✓ & ✓ & ✓ & **33.80** & **29.28** & **35.15** \\ \hline \end{tabular} \end{table} Table 6: Ablation studies for denoising strategies. The symbol of \(\bigvee\) means adding noise to the ground-truth. \begin{table} \begin{tabular}{c|c|c|c c c|c c c} \hline \multicolumn{3}{c|}{Method} & Fea. & Backbone & \(AP_{role}^{\mathrm{S1}}\) & \(AP_{role}^{\mathrm{S2}}\) & \(AP_{role}^{\mathrm{S3}}\) \\ \hline \hline HID-Parse [4] & L & ResNet-50 & 61.9 & 64.2 & 61.9 & 64.2 \\ MSTR [15] & A & ResNet-50 & 62.0 & 65.2 & 64.8 \\ PartS [36] & A & ResNet-50 & 62.5 & 64.8 \\ GEN-VLKT-M [21] & L & ResNet-101 & 63.3 & 65.6 \\ GEN-VLKT-M [21] & L & ResNet-101 & 63.3 & 65.9 \\ GEN-VLKT [21] & A & ResNet-101 & 65.0 & 67.1 \\ \hline **SOV-STG-M** & A & ResNet-101 & 63.7 & 65.2 \\ **SOV-STG-L** & A & ResNet-101 & 63.9 & 65.4 \\ \hline \end{tabular} \end{table} Table 2: Comparison on V-COCO. 'A’ and ‘L’ indicate the appearance and language features, respectively. \begin{table} \begin{tabular}{c|c|c|c c c|c c c} \hline \multicolumn{3}{c|}{Method} & \multirow{2}{*}{Epoch} & \multicolumn{3}{c|}{Default} & \multicolumn{3}{c}{Known Object} \\ \cline{3-10} \multicolumn{3}{c|}{Method} & & \multirow{2}{*}{Exoch} & \multirow{2}{*}{Exochone} & \multirow{2}{*}{\(Full\)} & \multirow{2}{*}{\(Bar\)} & \multirow{2}{*}{\(Non\)-\(Bar\)} & \multirow{2}{*}{\(Full\)} & \multirow{2}{*}{\(Bar\)} & \multirow{2}{*}{\(Non\)-\(Bar\)} \\ \cline{3-10} \multicolumn{3}{c|}{} & & & & & & \\ Method & Epoch & Backbone & \(Full\) & _Full_ & _Rare_ & _Non-Rare_ \\ \hline \hline **Two-stage** & & & & & & & & \\ \hline CATN [5] & 12 & ResNet-50 & 31.86 & 25.15 & 33.84 & 34.44 & 27.69 & 36.45 \\ STIP [39] & 30 & ResNet-50 & 32.22 & 28.15 & 33.43 & 35.29 & 31.43 & 36.45 \\ UPT [38] & 20 & ResNet-101-DCS & 32.62 & 28.62 & 33.81 & 36.08 & 31.41 & 37.47 \\ Lin _et al._[24] & 129 & ResNet-50 & 33.51 & 30.30 & 34.46 & 36.28 & 33.16 & 37.21 \\ \hline \hline **One-stage** & & & & & & & & \\ \hline QMH [3] & 150 & ResNet-50 & 26.18 & 18.06 & 28.61 & - & - & - \\ QPIC [31] & 150 & ResNet-50 & 29.07 & 21.85 & 31.23 & 31.68 & 24.14 & 33.93 \\ MSTR [15] & 50 & ResNet-50 & 31.17 & 25.31 & 32.92 & 34.02 & 28.83 & 35.57 \\ SSRT [12] & 150 & ResNet-101 & 31.34 & 24.31 & 33.32 & - & - & - \\ CDN-S [36] & 100 & ResNet-50 & 31.44 & 27.39 & 32.64 & 34.09 & 29.63 & 35.42 \\ Zhou _et al._[43] & 80 & ResNet-50 & 31.75 & 27.45 & 33.03 & 34.50 & 30.13 & 35.81 \\ CDN-L [36] & 100 & ResNet-101 & 32.07 & 27.19 & 33.53 & 34.79 & 29.48 & 36.38 \\ HOM (CDN-S) [40] & 80 & ResNet-50 & 32.47 & 28.15 & 33.76 & 35.17 & 30.73 & 36.50 \\ RLP-Parse [34] & 90 & ResNet-50 & 32.84 & **34.63** & 26.85 & - & - & - \\ DQO (CDN-S) [29] & 80 & ResNet-50 & 33.28 & 29.19 & 34.50 & - & - & - \\ GEN-VLKT-S [21] & 90 & ResNet-50 & 33.75 & 29.25 & 35.10 & 36.78 & 32.75 & 37.99 \\ GEN-VLKT-M [21] & 90 & ResNet-101 & 34.78 & 31.50 & 35.77 & 38.07 & 34.94 & 39.01 \\ GEN-VLKT-L [21] & 90 & ResNet-101 & 34.95 & 31.18 & 36.08 & **38.22** & **34.36** & **39.37** \\ \hline QAHOI-Swin-L [3] & 150 & Swin-Large-22K & 35.78 & 29.80 & 37.56 & 37.59 & 31.36 & 39.36 \\ FGAHOI-Swin-L [27] & 190 & Swin-Large-22K & 37.18 & 
30.71 & 39.11 & 38.93 & 31.93 & 41.02 \\ \hline **SOV-STG-S** & 30 & ResNet-50 & 33.80 & 29.28 & 35.15 & 36.22 & 39.99 & 37.78 \\ **SOV-STG-M** & 30 & ResNet-101 & 34.87 & 30.41 & 36.20 & 37.35 & 32.46 & 38.81 \\ **SOV-STG-L** & 30 & ResNet-101 & **35.01** & 30.63 & **36.32** & 37.60 & 32.77 & 39.05 \\ \hline **SOV-STG-Swin-L** & 30 & Swin-Large-22K & **43.35** & **42.25** & **43.69** & **45.83** & **45.62** & **46.11** \\ \hline \end{tabular} \end{table} Table 1: Comparison to state-of-the-arts on the HICO-DET. \begin{table} \begin{tabular}{c|c c c c|c c c} \hline \multicolumn{3}{c|}{Denoising Strategies} & \multicolumn{3}{c|}{Default} \\ \ Then, in (3), we remove the subject decoder and the sum fusion module, and update both the subject and object boxes by the object decoder. Without balancing the decoding burden of the detection, compared with (4), the performance drops by 1.57%. Furthermore, in (1) and (2), we conduct drop-one-out experiments on the subject and verb decoder, respectively. Compared with (1) and (2), the model without the verb decoder is worse than the model without the subject decoder, which indicates that the verb decoder plays a more critical role. S-O attention module.The S-O attention module is the core module of the SOV model, which is responsible for the fusion of the object and subject features. To explore the strength of the S-O attention mechanism, different variants of designs we have attempted are shown in Table 5. The result of (1) indicates the S-O attention module used in SOV-STG-S. In (2), we remove the bottom-up path in S-O attention. Since the bottom-up path strengthens the feature fusion, without the information flow from lower layers, the performance drops by 1.18%. Our verb decoder uses the fused embeddings from the last layer of the S-O attention module as the input query, and updates the embeddings layer by layer. In (3), we attempt to feed all layers' fused embeddings as query positional embeddings for the verb decoder. However, compared with our design in (1), the accuracy drops by 3.05%. The cross-attention enables the fused embeddings to be enriched by the verb label embeddings. In (4), we remove the cross-attention of S-O attention, and the attention module is degraded to a sum fusion module. From the result, the performance drops by 2.34% compared with (1). Similarly, in (5), we also attempt the multi-layer design of the (4), which is similar to GEN-VLKT [21], and the performance also drops. We consider that the multi-layer design is not suitable for the verb prediction as the deformable transformer attention mechanism is a local attention mechanism, which focuses on different parts of the source feature in different layers. Specifically, the sampling points in the verb decoder are not related to the sampling points in the object and subject decoders, which focuses on different positions of the global semantic feature. Consequently, the multi-layer design forces the verb decoder to match the attention of the object and subject decoders, which leads to the performance drop.
2303.06500
Diffusion-Based Hierarchical Multi-Label Object Detection to Analyze Panoramic Dental X-rays
Due to the necessity for precise treatment planning, the use of panoramic X-rays to identify different dental diseases has tremendously increased. Although numerous ML models have been developed for the interpretation of panoramic X-rays, there has not been an end-to-end model developed that can identify problematic teeth with dental enumeration and associated diagnoses at the same time. To develop such a model, we structure the three distinct types of annotated data hierarchically following the FDI system, the first labeled with only quadrant, the second labeled with quadrant-enumeration, and the third fully labeled with quadrant-enumeration-diagnosis. To learn from all three hierarchies jointly, we introduce a novel diffusion-based hierarchical multi-label object detection framework by adapting a diffusion-based method that formulates object detection as a denoising diffusion process from noisy boxes to object boxes. Specifically, to take advantage of the hierarchically annotated data, our method utilizes a novel noisy box manipulation technique by adapting the denoising process in the diffusion network with the inference from the previously trained model in hierarchical order. We also utilize a multi-label object detection method to learn efficiently from partial annotations and to give all the needed information about each abnormal tooth for treatment planning. Experimental results show that our method significantly outperforms state-of-the-art object detection methods, including RetinaNet, Faster R-CNN, DETR, and DiffusionDet for the analysis of panoramic X-rays, demonstrating the great potential of our method for hierarchically and partially annotated datasets. The code and the data are available at: https://github.com/ibrahimethemhamamci/HierarchicalDet.
Ibrahim Ethem Hamamci, Sezgin Er, Enis Simsar, Anjany Sekuboyina, Mustafa Gundogar, Bernd Stadlinger, Albert Mehl, Bjoern Menze
2023-03-11T21:31:54Z
http://arxiv.org/abs/2303.06500v3
# Diffusion-Based Hierarchical Multi-Label Object Detection to Analyze Panoramic Dental X-rays ###### Abstract Due to the necessity for precise treatment planning, the use of panoramic X-rays to identify different dental diseases has tremendously increased. Although numerous ML models have been developed for the interpretation of panoramic X-rays, there has not been an end-to-end model developed that can identify problematic teeth with dental enumeration and associated diagnoses at the same time. To develop such a model, we structure the three distinct types of annotated data hierarchically following the FDI system, the first labeled with only quadrant, the second labeled with quadrant-enumeration, and the third fully labeled with quadrant-enumeration-diagnosis. To learn from all three hierarchies jointly, we introduce a novel diffusion-based hierarchical multi-label object detection framework by adapting a diffusion-based method that formulates object detection as a denoising diffusion process from noisy boxes to object boxes. Specifically, to take advantage of the hierarchically annotated data, our method utilizes a novel noisy box manipulation technique by adapting the denoising process in the diffusion network with the inference from the previously trained model in hierarchical order. We also utilize a multi-label object detection method to learn efficiently from partial annotations and to give all the needed information about each abnormal tooth for treatment planning. Experimental results show that our method significantly outperforms state-of-the-art object detection methods, including RetinaNet, Faster R-CNN, DETR, and DiffusionDet for the analysis of panoramic X-rays, demonstrating the great potential of our method for hierarchically and partially annotated datasets. The code and the datasets are available at [https://github.com/ibrahimethemhamamci/HierarchicalDet](https://github.com/ibrahimethemhamamci/HierarchicalDet). Keywords:Diffusion Network, Hierarchical Learning, Multi-Label Object Detection, Panoramic Dental X-ray, Transformers Introduction The use of panoramic X-rays to diagnose numerous dental diseases has increased exponentially due to the demand for precise treatment planning [11]. However, visual interpretation of panoramic X-rays may consume a significant amount of essential clinical time [2] and interpreters may not always have dedicated training in reading scans as specialized radiologists have [13]. Thus, the diagnostic process can be automatized and enhanced by getting the help of Machine Learning (ML) models. For instance, an ML model that automatically detects abnormal teeth with dental enumeration and associated diagnoses would provide a tremendous advantage for dentists in making decisions quickly and saving their time. Many ML models to interpret panoramic X-rays have been developed specifically for individual tasks such as quadrant segmentation [19, 29], tooth detection [6], dental enumeration [14, 23], diagnosis of some abnormalities [12, 30], as well as treatment planning [27]. Although many of these studies have achieved good results, three main issues still remain. _(1) Multi-label detection:_ there has not been an end-to-end model developed that gives all the necessary information for treatment planning by detecting abnormal teeth with dental enumeration and multiple diagnoses simultaneously [1]. _(2) Data availability:_ to train a model that performs this task with high accuracy, a large set of fully annotated data is needed [13]. 
Because labeling every tooth with all required classes may require expertise and take a long time, such kind of fully labeled large datasets do not always exist [24]. For instance, we structure three different available annotated data hierarchically shown in Fig. 1, using the Federation Dentaire Internationale (FDI) system. The first data is partially labeled because it only included quadrant information. The second data is also partially labeled but contains additional enumeration information along with the quadrant. The third data is fully labeled because it includes all quadrant-enumeration-diagnosis information for each abnormal tooth. Thus, conventional object detection algorithms would Figure 1: The annotated datasets are organized hierarchically as (a) quadrant-only, (b) quadrant-enumeration, and (c) quadrant-enumeration-diagnosis respectively. not be well applicable to this kind of hierarchically and partially annotated data [21]. _(3) Model performance:_ to the best of our knowledge, models designed to detect multiple diagnoses on panoramic X-rays have not achieved the same high level of accuracy as those specifically designed for individual tasks, such as tooth detection, dental enumeration, or detecting single abnormalities [18]. To circumvent the limitations of the existing methods, we propose a novel diffusion-based hierarchical multi-label object detection method to point out each abnormal tooth with dental enumeration and associated diagnosis concurrently on panoramic X-rays, see Fig. 2. Due to the partial annotated and hierarchical characteristics of our data, we adapt a diffusion-based method [5] that formulates object detection as a denoising diffusion process from noisy boxes to object boxes. Compared to the previous object detection methods that utilize conventional weight transfer [3] or cropping strategies [22] for hierarchical learning, the denoising process enables us to propose a novel hierarchical diffusion network by utilizing the inference from the previously trained model in hierarchical order to manipulate the noisy bounding boxes as in Fig. 2. Besides, instead of pseudo labeling techniques [28] for partially annotated data, we develop a multi-label object detection method to learn efficiently from partial annotations and to give all the needed information about each abnormal tooth for treatment planning. Finally, we demonstrate the effectiveness of our multi-label detection method on partially annotated data and the efficacy of our proposed bounding box manipulation technique in diffusion networks for hierarchical data. The contributions of our work are three-fold. (1) We propose a multi-label detector to learn efficiently from partial annotations and to detect the abnormal tooth with all three necessary classes, as shown in Fig 3 for treatment planning. (2) We rely on the denoising process of diffusion models [5] and frame the detection problem as a hierarchical learning task by proposing a novel bounding box manipulation technique that outperforms conventional weight transfer as shown in Fig. 4. (3) Experimental results show that our model with bounding box manipulation and multi-label detection significantly outperforms state-of-the-art object detection methods on panoramic X-ray analysis, as shown in Tab. 1. We have designed our approach to serve as a foundational baseline for the Dental Enumeration and Diagnosis on Panoramic X-rays Challenge (DENTEX), set to take place at MICCAI 2023. 
Remarkably, the data set and annotations we utilized for our method mirror exactly those employed for DENTEX [9]. ## 2 Methods Figure 2 illustrates our proposed framework. We utilize the DiffusionDet [5] model, which formulates object detection as a denoising diffusion process from noisy boxes to object boxes. Unlike other state-of-the-art detection models, the denoising property of the model enables us to propose a novel manipulation technique to utilize a hierarchical learning architecture by using previously inferred boxes. Besides, to learn efficiently from partial annotations, we design a multi-label detector with adaptable classification layers based on available labels. Figure 2: Our method relies on a hierarchical learning approach utilizing a combination of multi-label detection, bounding box manipulation, and weight transfer. ### Base Model Our method employs the DiffusionDet [5] that comprises two essential components, an image encoder that extracts high-level features from the raw image and a detection decoder that refines the box predictions from the noisy boxes using those features. The set of initial noisy bounding boxes is defined as: \[q(z_{t}|z_{0})=\mathcal{N}(z_{t}|\sqrt{\bar{\alpha}_{t}}z_{0},(1-\bar{\alpha}_{t} )I) \tag{1}\] where \(z_{0}\) represents the input bounding box \(b\), and \(b\in\mathbb{R}^{N\times 4}\) is a set of bounding boxes, \(z_{t}\) represents the latent noisy boxes, and \(\bar{\alpha}_{t}\) represents the noise variance schedule. The DiffusionDet model [5]\(f_{\theta}(z_{t},t,x)\), is trained to predict the final bounding boxes defined as \(b^{i}=(c_{x}^{i},c_{y}^{i},w^{i},h^{i})\) where \((c_{x}^{i},c_{y}^{i})\) are the center coordinates of the bounding box and \((w^{i},h^{i})\) are the width and height of the bounding boxes and category labels defined as \(y^{i}\) for objects. ### Proposed Framework To improve computational efficiency during the denoising process, DiffusionDet [5] is divided into two parts: an image encoder and a detection decoder. Iterative denoising is applied only for the detection decoder, using the outputs of the image encoder as a condition. Our method employs this approach with several adjustments, including multi-label detection and bounding box manipulation. Finally, we utilize conventional transfer learning for comparison. **Image Encoder.** Our method utilizes a Swin-transformer [17] backbone pre-trained on the ImageNet-22k [7] with a Feature Pyramid Network (FPN) architecture [15] as it was shown to outperform convolutional neural network-based models such as ResNet50 [10]. We also apply pre-training to the image encoder using our unlabeled data, as it is not trained during the training process. We utilize SimMIM [26] that uses masked image modeling to finetune the encoder. **Detection Decoder.** Our method employs a detection decoder that inputs noisy initial boxes to extract Region of Interest (RoI) features from the encoder-generated feature map and predicts box coordinates and classifications using a detection head. However, our detection decoder has several differences from DiffusionDet [5]. Our proposed detection decoder (1) has three classification heads instead of one, which allows us to train the same model with partially annotated data by freezing the heads according to the unlabeled classes, (2) employs manipulated bounding boxes to extract RoI features, and (3) leverages transfer learning from previous training steps. 
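To make the two mechanisms just described concrete, a minimal Python sketch of the forward box corruption in Eq. (1) and of the hierarchical replacement of noisy boxes with previously inferred boxes follows. The function names, the fixed proposal count, and the simplified padding strategy are illustrative assumptions, not the HierarchicalDet release; the 0.5 score threshold follows the value quoted in the next paragraph.

```python
# Hedged sketch of box corruption (Eq. 1) and hierarchical box manipulation.
import torch

def corrupt_boxes(gt_boxes: torch.Tensor, alpha_bar_t: float, n_boxes: int = 300) -> torch.Tensor:
    """gt_boxes: (K, 4) normalized (cx, cy, w, h) boxes; returns (n_boxes, 4) noisy boxes z_t."""
    # Pad to the fixed proposal count by re-sampling ground-truth boxes (simplified padding).
    idx = torch.randint(0, gt_boxes.shape[0], (n_boxes,))
    z0 = gt_boxes[idx]
    # Sample from q(z_t | z_0) = N(sqrt(alpha_bar_t) * z_0, (1 - alpha_bar_t) * I).
    z_t = (alpha_bar_t ** 0.5) * z0 + ((1.0 - alpha_bar_t) ** 0.5) * torch.randn_like(z0)
    return z_t.clamp(0.0, 1.0)

def manipulate_boxes(noisy: torch.Tensor, inferred: torch.Tensor, scores: torch.Tensor,
                     thresh: float = 0.5) -> torch.Tensor:
    """Replace part of the noisy boxes with confident boxes inferred by the model trained
    on the previous hierarchy level, keeping the total proposal count fixed."""
    keep = inferred[scores > thresh]
    k = keep.shape[0]
    if k == 0:
        return noisy
    return torch.cat([noisy[:-k], keep], dim=0)
```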
**Multi-Label Detection.** We utilize three classification heads as quadrant-enumeration-diagnosis for each bounding box and freeze the heads for the unlabeled classes, shown in Fig. 2. Our model denoted by \(f_{\theta}\) is trained to predict: \[f_{\theta}(z_{t},t,x,h_{q},h_{e},h_{d})=\begin{cases}\quad(y^{i}_{q},b^{i}),&h_ {q}=1,h_{e}=0,h_{d}=0\quad(a)\\ \quad(y^{i}_{q},y^{i}_{e},b^{i}),&h_{q}=1,h_{e}=1,h_{d}=0\quad(b)\\ (y^{i}_{q},y^{i}_{q},y^{i}_{d},b^{i}),&h_{q}=1,h_{e}=1,h_{d}=1\quad(c)\end{cases} \tag{2}\] where \(y_{q}^{i}\), \(y_{e}^{i}\), and \(y_{d}^{i}\) represent the bounding box classifications for quadrant, enumeration, and diagnosis, respectively, and \(h_{q}\), \(h_{e}\), and \(h_{d}\) represent binary indicators of whether the labels are present in the training dataset. By adapting this approach, we leverage the full range of available information and improve our ability to handle partially labeled data. This stands in contrast to conventional object detection methods, which rely on a single classification head for each bounding box [25] and may not capture the full complexity of the underlying data. Besides, this approach enables the model to detect abnormal teeth with all three necessary classes for clinicians to plan the treatment, as seen in Fig. 3. **Bounding Box Manipulation.** Instead of completely noisy boxes, we use manipulated bounding boxes to extract RoI features from the encoder-generated feature map and to learn efficiently from hierarchical annotations as shown in Fig. 2. Specifically, to train the model (b) in Eq. (2), we concatenate the noisy boxes described in Eq. (1) with the boxes inferred from the model (a) in Eq. (2) with a score greater than 0.5. Similarly, we manipulate the denoising process during the training of the model (c) in Eq. (2) by concatenating the noisy boxes with boxes inferred from the model (b) in Eq. (2) with a score greater than 0.5. The set of manipulated boxes \(b_{m}\), and \(b_{m}\in\mathbb{R}^{N\times 4}\), can be defined as \(b_{m}=[b_{n}[:-k],b_{i}]\), where \(b_{n}\), and \(b_{n}\in\mathbb{R}^{N\times 4}\), represents the set of noisy boxes and, \(b_{i}\), and \(b_{i}\in\mathbb{R}^{k\times 4}\), represents the set of inferred boxes from the previous training. Our framework utilizes completely noisy boxes during the inference. Figure 3: Output from our final model showing well-defined boxes for diseased teeth with corresponding quadrant (Q), enumeration (N), and diagnosis (D) labels. ## 3 Experiments and Results We evaluate models' performances using a combination of Average Recall (AR) and Average Precision (AP) scores with various Intersection over Union (IoU) thresholds. This included AP\({}_{[0.5,0.95]}\), AP\({}_{50}\), AP\({}_{75}\), and separate AP scores for large objects (AP\({}_{1}\)), and medium objects (AP\({}_{\text{m}}\)). **Data.** All panoramic X-rays were acquired from patients above 12 years of age using the VistaPano S X-ray unit (Durr Dental, Germany). To ensure patient privacy and confidentiality, panoramic X-rays were randomly selected from the hospital's database without considering any personal information. To effectively utilize FDI system [8], three distinct types of data are organized hierarchically as in Fig. 
1 (a) 693 X-rays labeled only for quadrant detection, (b) 634 X-rays labeled for tooth detection with both quadrant and tooth enumera \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & AR & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{\text{m}}\) & AP\({}_{1}\) \\ \hline \hline \multicolumn{6}{c}{Quadrant} \\ RetinaNet [16] & 0.604 & 25.1 & 41.7 & 28.8 & 32.9 & 25.1 \\ Faster R-CNN [20] & 0.588 & 29.5 & 48.6 & 33.0 & 39.9 & 29.5 \\ DETR [4] & 0.659 & 39.1 & 60.5 & 47.6 & 55.0 & 39.1 \\ Base (DiffusionDet) [5] & 0.677 & 38.8 & 60.7 & 46.1 & 39.1 & 39.0 \\ Ours w/o Transfer & 0.699 & 42.7 & 64.7 & **52.4** & 50.5 & 42.8 \\ Ours w/o Manipulation & **0.727** & 40.0 & 60.7 & 48.2 & 59.3 & 40.0 \\ Ours w/o Manipulation and Transfer & 0.658 & 38.1 & 60.1 & 45.3 & 45.1 & 38.1 \\ Ours (Manipulation+Transfer+Multilabel) & 0.717 & **43.2** & **65.1** & 51.0 & **68.3** & **43.1** \\ \hline \multicolumn{6}{c}{Enumeration} \\ RetinaNet [16] & 0.560 & 25.4 & 41.5 & 28.5 & 55.1 & 25.2 \\ Faster R-CNN [20] & 0.496 & 25.6 & 43.7 & 27.0 & 53.3 & 25.2 \\ DETR [4] & 0.440 & 23.1 & 37.3 & 26.6 & 43.4 & 23.0 \\ Base (DiffusionDet) [5] & 0.617 & 29.9 & 47.4 & 34.2 & 48.6 & 29.7 \\ Ours w/o Transfer & 0.648 & **32.8** & **49.4** & **39.4** & **60.1** & **32.9** \\ Ours w/o Manipulation & 0.662 & 30.4 & 46.5 & 36.6 & 58.4 & 30.5 \\ Ours w/o Manipulation and Transfer & 0.557 & 26.8 & 42.4 & 29.5 & 51.4 & 26.5 \\ Ours (Manipulation+Transfer+Multilabel) & **0.668** & 30.5 & 47.6 & 37.1 & 51.8 & 30.4 \\ \hline \hline \multicolumn{6}{c}{Diagnosis} \\ RetinaNet [16] & 0.587 & 32.5 & 54.2 & 35.6 & 41.7 & 32.5 \\ Faster R-CNN [20] & 0.533 & 33.2 & 54.3 & 38.0 & 24.2 & 33.3 \\ DETR [4] & 0.514 & 33.4 & 52.8 & 41.7 & 48.3 & 33.4 \\ Base (DiffusionDet) [5] & 0.644 & 37.0 & 58.1 & 42.6 & 31.8 & 37.2 \\ Ours w/o Transfer & 0.669 & **39.4** & **61.3** & **47.9** & **49.7** & **39.5** \\ Ours w/o Manipulation & 0.688 & 36.3 & 55.5 & 43.1 & 45.6 & 37.4 \\ Ours w/o Manipulation and Transfer & 0.648 & 37.3 & 59.5 & 42.8 & 33.6 & 36.4 \\ Ours (Manipulation+Transfer+Multilabel) & **0.691** & 37.6 & 60.2 & 44.0 & 36.0 & 37.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Our method outperforms state-of-the-art methods, and our bounding box manipulation approach outperforms the weight transfer. Results shown here indicate the different tasks in the test set which is multi-labeled (quadrant-enumeration-diagnosis) for abnormal tooth detection. tion classifications, and (c) 1005 X-rays fully labeled for diseased tooth detection with quadrant, tooth enumeration, and diagnosis classifications. In the diagnosis, there are four specific classes corresponding to four different diagnoses: caries, deep caries, periapical lesions, and impacted teeth. The remaining 1571 unlabeled X-rays are used for pre-training. All necessary permissions were obtained from the ethics committee. **Experimental Design.** To evaluate our proposed method, we conduct two experiments: (1) Comparison with state-of-the-art object detection models, including DETR [4], Faster R-CNN [20], RetinaNet [16], and DiffusionDet [5] in Tab. 1. (2) A comprehensive ablation study to assess the effect of our modifications to DiffusionDet in hierarchical detection performance in Fig. 4. **Evaluation.** Figure 3 presents the output prediction of the final trained model. As depicted in the figure, the model effectively assigns three distinct classes to each well-defined bounding box. 
Our approach that utilizes novel box manipulation and multi-label detection, significantly outperforms state-of-the-art methods. The box manipulation approach specifically leads to significantly higher AP and AR scores compared to other state-of-the-art methods, including RetinaNet, Faster-R-CNN, DETR, and DiffusionDet. Although the impact of conventional transfer learning on these scores can vary depending on the data, our bounding box manipulation outperforms it. Specifically, the bounding box manipulation approach is the sole factor that improves the accuracy of the model, while weight transfer does not improve the overall accuracy, as shown in Fig. 4. **Ablation Study.** Our ablation study results, shown in Fig. 4 and Tab. 1, indicate that our approaches have a synergistic impact on the detection model's accuracy, with the highest increase seen through bounding box manipulation. We systematically remove every combination of bounding box manipulation and weight transfer, to demonstrate the efficacy of our methodology. Conventional transfer learning does not positively affect the models' performances compared to the bounding box manipulation, especially for enumeration and diagnosis. Figure 4: The results of the ablation study reveals that our bounding box manipulation method outperforms conventional weight transfer. ## 4 Discussion and Conclusion In this paper, we introduce a novel diffusion-based multi-label object detection framework to overcome one of the significant obstacles to the clinical application of ML models for medical and dental diagnosis, which is the difficulty in getting a large volume of fully labeled data. Specifically, we propose a novel bounding box manipulation technique during the denoising process of the diffusion networks with the inference from the previously trained model to take advantage of hierarchical data. Moreover, we utilize a multi-label detector to learn efficiently from partial annotations and to assign all necessary classes to each box for treatment planning. Our framework outperforms state-of-the-art object detection models for training with hierarchical and partially annotated panoramic X-ray data. From the clinical perspective, we develop a novel framework that simultaneously points out abnormal teeth with dental enumeration and associated diagnosis on panoramic dental X-rays with the help of our novel diffusion-based hierarchical multi-label object detection method. With some limits due to partially annotated and limited amount of data, our model that provides three necessary classes for treatment planning has a wide range of applications in the real world, from being a clinical decision support system to being a guide for dentistry students. **Supplementary Material of Diffusion-Based Hierarchical Multi-Label Object Detection to Analyze Panoramic Dental X-rays** \begin{table} \begin{tabular}{l l l l} \hline \hline Detection Model & Image Encoder Backbone & Iterations & Learning Rate \\ \hline Ours & FPN-Swin Transformer & 40000 & 0.000025 \\ DiffusionDet & FPN-Swin Transformer & 40000 & 0.000025 \\ Faster R-CNN & ResNet101 & 40000 & 0.02 \\ RetinaNet & ResNet101 & 40000 & 0.01 \\ DETR & ResNet50 & 300(epochs) & 0.0001 \\ \hline \hline \end{tabular} \end{table} Table 2: Different detection models are utilized for comparison with our method. The best test metrics for each model are selected for the results. All models are trained with randomly cropped and resized panoramic X-rays with a batch size of 16. 
All training is done on a single NVIDIA RTX A6000 48 GB GPU. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Training & Validation & Testing \\ \hline Quadrant & 590 & 103 & N/A \\ Quadrant-Enumeration & 539 & 95 & N/A \\ Quadrant-Enumeration-Diagnosis & 705 & 50 & 250 \\ \hline \hline \end{tabular} \end{table} Table 3: To ensure accurate testing of all models, we only use fully labeled data with quadrant-enumeration-diagnosis for abnormal tooth detection. We do not utilize quadrant or quadrant-enumeration data for testing. Our diagnosis labels have four specific classes: caries, deep caries, periapical lesions, and impacted. Figure 6: Example inferences during hierarchical training. (a) is used to manipulate noisy boxes during the training for (b). (b) is used to manipulate noisy boxes during the training for (c). (c) is the output of the final model.
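For reference, the quadrant and enumeration labels used throughout combine into the standard two-digit FDI tooth number (code \(=10\times\text{quadrant}+\text{position}\)). The small helper below is our illustration of that mapping, not part of any released code.

```python
def fdi_code(quadrant: int, enumeration: int) -> int:
    """Combine the quadrant (1-4) and tooth position within the quadrant (1-8)
    into the two-digit FDI tooth number, e.g. (3, 6) -> 36."""
    if not (1 <= quadrant <= 4 and 1 <= enumeration <= 8):
        raise ValueError("quadrant must be 1-4 and enumeration 1-8 for permanent teeth")
    return 10 * quadrant + enumeration

def split_fdi(code: int) -> tuple[int, int]:
    """Inverse mapping: FDI number -> (quadrant, enumeration)."""
    return code // 10, code % 10

# a detection labeled quadrant=4, enumeration=8, diagnosis='impacted'
# corresponds to FDI tooth 48 (the third molar of quadrant 4)
print(fdi_code(4, 8), split_fdi(48))   # 48 (4, 8)
```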
2302.11355
Measuring Electroweak Quantum Numbers of Color Sextet Resonances at the LHC
We study the prospect of measuring the electroweak quantum numbers of beyond the Standard Model (SM) color sextet particles that decay into same-sign top quark pairs. Among these particles, the color sextet scalars give rise to top quarks with the same chirality, while the top quarks coming from the color sextet vector would have opposite chirality. This difference gets encoded in the angular distributions of the bottom quarks and leptons originating from the decays of the top quarks. We utilize this feature and the energy distributions of the final state jets and leptons to distinguish among the three possible color sextet resonances, taking into account various SM background processes at the $13$ TeV LHC.
Soubhik Kumar, Rafiqul Rahaman, Ritesh K. Singh
2023-02-22T13:02:44Z
http://arxiv.org/abs/2302.11355v2
# Measuring Electroweak Quantum Numbers of Color Sextet Resonances at the LHC ###### Abstract We study the prospect of measuring the electroweak quantum numbers of beyond the Standard Model (SM) color sextet particles that decay into same-sign top quark pairs. Among these particles, the color sextet scalars give rise to top quarks with the same chirality, while the top quarks coming from the color sextet vector would have opposite chirality. This difference gets encoded in the angular distributions of the bottom quarks and leptons originating from the decays of the top quarks. We utilize this feature and the energy distributions of the final state jets and leptons to distinguish among the three possible color sextet resonances, taking into account various SM background processes. + Footnote †: preprint: HRI-RECAPP-2023-01 ## I Introduction A variety of beyond the Standard Model (BSM) scenarios, especially those addressing the Higgs hierarchy problem, e.g., supersymmetry or composite Higgs, predict new physics around the TeV scale (see Ref. [1] for reviews). The search for such BSM states has been actively going on at the Large Hadron Collider (LHC) and will continue through its high-luminosity (HL-LHC) phase, in conjunction with other indirect probes. Given the absence of new physics at the LHC so far, we can ask a bottom-up and purely group theoretic question as follows. Noting that the SM is based on the gauge group \(SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\), we can ask which possible BSM scalar or vector particles can have direct, tree-level couplings to SM fermions. A priori, there are a number of such BSM states [2; 3]. However, a subset of them would couple to both leptons and quarks so as to mediate proton decay at tree level, unless that is forbidden by some other global symmetry, and therefore are ruled out [4] for TeV-scale masses. Among the remaining states, those coupling to two top quarks are particularly interesting from an experimental perspective. As we will discuss in the following, two such scalar and one such vector resonance have the SM quantum numbers, \[\Phi_{1}\sim(\mathbf{6},\mathbf{1},4/3),\;\Phi_{3}\sim(\mathbf{6},\mathbf{3}, 1/3),\;\Phi_{2}^{\mu}\sim(\mathbf{6},\mathbf{2},5/6). \tag{1}\] At the LHC, these states can be produced via their couplings to the first-generation quarks, and subsequently, they can decay into a _like_-sign top quark pair. The top quarks can then decay into a pair of \(b\)-jets, a pair of like-sign leptons, and neutrinos when the intermediate \(W\) bosons decay leptonically. The phenomenology and ultraviolet origin of such color sextet diquarks have been discussed extensively in the literature, see, e.g., [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. The question we would like to ask in the present work is the following: suppose a discovery of a color sextet particle is made at the (HL-)LHC. Then purely based on the lab-frame observables, can we extract the quantum numbers of the discovered sextet state and distinguish among the three possible sets of quantum numbers of a generic color sextet particle, as mentioned in Eq. (1)? To answer this, we first note that the fermionic couplings of interest are inherently chiral in nature. Therefore, the top quarks coming from the sextet decay would carry definite polarization and this plays a crucial role in extracting the quantum numbers of the sextet states [14; 12]. 
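As an aside on Eq. (1): the hypercharge assignments follow from the fermion pairs each sextet couples to (\(u_{R}u_{R}\), \(q_{L}q_{L}\), and \(u_{R}q_{L}\); see Sec. II), since gauge invariance fixes \(Y(\Phi)\) to the sum of the two quark hypercharges. A quick arithmetic check (ours), using the convention \(Q=T_{3}+Y\):

```python
from fractions import Fraction as F

# SM hypercharges of the quark fields that the sextets couple to
Y_qL, Y_uR = F(1, 6), F(2, 3)

# gauge invariance of the diquark couplings fixes Y(Phi) to the sum of the
# two quark hypercharges
checks = {
    "Phi_1  (u_R u_R)": Y_uR + Y_uR,   # -> 4/3, the (6, 1, 4/3) assignment
    "Phi_3  (q_L q_L)": Y_qL + Y_qL,   # -> 1/3, the (6, 3, 1/3) assignment
    "Phi_2  (u_R q_L)": Y_uR + Y_qL,   # -> 5/6, the (6, 2, 5/6) assignment
}
for name, hyper in checks.items():
    print(f"{name}: Y = {hyper}")

# the component of Phi_3 that couples to two up-type quarks has T3 = +1, so
# Q = T3 + Y = 1 + 1/3 = 4/3, matching a decay into a same-sign top pair
print("charge of the tt component of Phi_3:", 1 + F(1, 3))
```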
For example, the rest-frame angular distributions of leptons and \(b\)-jets are different depending on the polarization of the parent top quarks, as we describe explicitly below. However, since in such processes, the top quarks decay leptonically, there is missing energy in the form of neutrinos in the final state, and reconstructing the rest frame of a top quark is not immediate. While it is possible to use the \(M_{T2}\) observable [17; 18] to address this issue [19; 20], it is still useful to construct observables that do not rely on any such rest frame reconstruction. To this end, we construct some new lab-frame observables that can be used to investigate and isolate the quantum numbers of the various sextet states. Some lab-frame observables for processes involving missing energy have been discussed in the literature. One example is the visible energy fraction of the leptons [21; 22] \[z_{l}=\frac{E_{l_{i}}}{E_{l_{i}}+E_{b}}. \tag{2}\] Here \(E_{l_{i}}\) and \(E_{b}\) are respectively the energies of the lepton and the associated \(b\)-quark originating from the decay of the same top quark. This observable is sensitive to the polarization of the top quark. However, in the signal of our interest, we have two \(b\)-jets and two leptons, and the effectiveness of \(z_{l}\) inherently depends on correctly pairing a \(b\)-quark with the associated lepton. This motivates us to look for additional lab-frame observables which are independent of such pairing. We will show that such observables can be constructed based on the azimuthal distribution of final state visible particles. The rest of this work is organized as follows. In Sec. II, we describe the couplings of the sextet states to the SM fermions and the existing experimental constraints on such states. In Sec. III, we construct the lab-frame observables of interest and explain how top quark polarization and spin correlation plays an important role in this context through a parton-level analysis. In Sec. IV, we present our results to distinguish among the color sextet states through a detector-level simulation, taking into account possible SM backgrounds. We conclude in Sec. V. ## II Model In this work, we focus on color sextet BSM resonances that couple to two top quarks. Given the quantum numbers of the SM quark doublet \(q_{L}=\left(\begin{array}{c}u_{L}\\ d_{L}\end{array}\right)\sim(3,\mathbf{2},+1/6)\), and the right-handed up-type quark \(u_{R}\sim(3,\mathbf{1},+2/3)\), the quantum numbers of the BSM scalar and vector resonances are fixed as in (1). We first consider the two scalars, \(\Phi_{1}\) and \(\Phi_{3}\). We can write the Yukawa coupling of \(\Phi_{1}\) as (see, e.g., [10]) \[\lambda_{1}\bar{K}_{ij}^{a}\Phi_{1a}\bar{u}_{Ri}\bar{u}_{Rj}^{c}+\text{h.c.}, \tag{3}\] where the superscript \(c\) denotes charge conjugation operation. The matrices \(\bar{K}_{ij}^{a}\) are determined by the Clebsch-Gordon coefficients for the sextet representation of \(SU(3)\). In both Eq. (3) and Eq. (4) below, the indices \(a\) and \(i,j\) correspond to color indices and they run over \(1\cdots 6\) and \(1\cdots 3\), respectively. The Yukawa coupling of \(\Phi_{3}\) is given in an analogous manner, except that it couples to the symmetric combination of two copies of \(q_{L}\). The coupling to \(u_{L}\) is given by \[\lambda_{3}\bar{K}_{ij}^{a}\Phi_{3a}\bar{u}_{Li}\bar{u}_{Lj}^{c}+\text{h.c.}. 
\tag{4}\] Here we have focused on the part of the isospin triplet field that couples only to the up-type quarks since that can decay into a pair of top quarks, which is the signal of our interest. Both in (3) and (4), we have suppressed the generation indices on the Yukawa couplings, \(\lambda_{1}\) and \(\lambda_{3}\). Finally, the coupling of the vector resonance is described by, \[\lambda_{2}\bar{K}_{ij}^{a}\Phi_{2,a}^{\mu}\bar{u}_{Ri}\gamma_{\mu}\bar{u}_{Li }+\text{h.c.}. \tag{5}\] Here we have focused on the up-quark coupling for the same reason as above. We will be agnostic about how \(\Phi_{2}^{\mu}\) gets its mass. For a general Yukawa coupling, there would be large flavor changing neutral current processes mediated by the above sextet states. To avoid the stringent experimental constraints from those, we assume that the matrices \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are flavor diagonal. With this assumption, the next stringent constraints on \(\lambda_{1}\) come from the measurements of \(D^{0}-\bar{D}^{0}\) mixing [23], \[\frac{|\text{Re}(\lambda_{1,cc}\lambda_{1,am}^{*})|}{m_{\phi}^{2}}<x\times 7.2 \times 10^{-11}/\text{GeV}^{2}, \tag{6}\] where \(x=\Delta m_{D}/\Gamma_{D}\) is a \(D^{0}-\bar{D}^{0}\) mixing parameter. Assuming \(CP\) conservation and taking \(x\approx 4\times 10^{-3}\)[24], we get \(\text{Re}(\lambda_{1,cc}\lambda_{1,am}^{*})<3\times 10^{-7}\) for \(m_{\phi}=1\) TeV. In the following, we will assume a hierarchy between \(\lambda_{1,cc}\) and \(\lambda_{1,am}\) with \(|\lambda_{1,cc}|\ll|\lambda_{1,am}|\) so as to satisfy the above bound. The dominant constraints on \(\lambda_{2}\) and \(\lambda_{3}\) also come from \(D^{0}-\bar{D}^{0}\) mixing [12], which can again be suppressed by taking the couplings to second generation quarks to be small as we mentioned above. We note that \(\Phi_{1}\) and \(\Phi_{3}\) decay to a pair of right-handed and left-handed top quarks, respectively. This implies that we would be able to distinguish between the quantum numbers of these two sextet scalars using the polarization properties of the daughter top quarks. In particular, in its rest frame, a right-handed top quark would decay into leptons whose average distribution would peak in the same direction as the top quark spin, while the associated \(b\) quark distribution would be peaked in the opposite direction. Thus after boosting to the lab frame, the angle between the two leptons, each coming from the two daughter top quarks from \(\Phi_{1}\) decay, will be peaked around \(\Delta\phi=\pi\), while the angle between the two \(b\) quarks will be more broadly distributed around \(\Delta\phi=\pi\). The situation with \(\Phi_{3}\) is exactly the opposite of this. While the vector BSM resonance \(\Phi_{2}^{\mu}\) decays into a same-sign top pair as well, the top quarks would have opposite chirality. These features can then be used to distinguish among \(\Phi_{1}\), \(\Phi_{2}^{\mu}\), and \(\Phi_{3}\), as we will see below. We will refer to these particles as Singlet, Doublet, and Triplet, respectively, based on their \(SU(2)_{L}\) quantum numbers. For numerical simulations, we choose the benchmark: \(m_{\phi}=1\) TeV, \(\lambda_{am}=\lambda_{tr}=0.003\) for all the three sextet particles. ## III Observable The polarization of a top quark is preserved in its decay products and this plays an important role in our analysis. 
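Before moving on, a quick numerical check (ours) of the flavor constraint and of the benchmark point quoted above; we read the second coupling appearing in Eq. (6) as the first-generation (up-quark) entry, which is how the hierarchy with the charm coupling is used in the text.

```python
# Check of the bound quoted below Eq. (6):
# |Re(lambda_cc * lambda_uu^*)| < x * 7.2e-11 GeV^-2 * m_phi^2,
# with x = Delta m_D / Gamma_D ~ 4e-3 and m_phi = 1 TeV.
x = 4e-3                    # D0-D0bar mixing parameter
prefactor = 7.2e-11         # GeV^-2
m_phi = 1000.0              # GeV (1 TeV benchmark)

bound = x * prefactor * m_phi**2
print(f"Re(lambda_cc * lambda_uu*) < {bound:.1e}")   # ~2.9e-7, i.e. the quoted 3e-7

# the benchmark coupling lambda = 0.003 used for the collider study is easily
# compatible with this, provided the charm-charm coupling is suppressed:
lam_uu = 0.003
print("max allowed lambda_cc:", bound / lam_uu)      # ~1e-4
```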
The double angular distribution of the decay products (fermion) of two top quarks can be expressed in the rest frame of top quarks as [25] \[\frac{1}{\sigma}\frac{d^{2}\sigma}{(d\cos\theta_{1})(d\cos\theta _{2})}=\] \[\frac{1}{4\pi}\left(1+\alpha_{1}p_{t_{1}}\cos\theta_{1}+\alpha_{2 }p_{t_{2}}\cos\theta_{2}\right.\] \[\left.+\alpha_{1}\alpha_{2}pp_{t}\cos\theta_{1}\cos\theta_{2} \right), \tag{7}\] in terms of the polarization of two top quarks (\(p_{t_{1}},p_{t_{2}}\)) and their spin correlation (\(pp_{t_{t}}\)). Here, \(\alpha_{i}\) is called the analyzing power which depends on the daughter fermion. For example, for a lepton, \(\alpha_{t}=1\), whereas for a \(b\)-quark, \(\alpha_{b}=-0.4\)[26]. The polar angle \(\theta_{i}\) is the angle between the momentum of the daughter fermion and the spin axis of the top quark (\(t_{i}\)) in its rest frame. The polarization \(p_{t}\) is positive (negative) for the right (left)-polarized top quark. The above formula in Eq. (II) is valid only in the rest frame of the top quark which requires a complete reconstruction of the missing neutrinos [27; 28; 29; 30]. The polarization of the top quark can also be obtained from the distribution of the ratio of energies of the daughter \(b\)-quark (\(E_{b}\)) and the top quark (\(E_{t}\)), \(E_{b}/E_{t}\). This again requires a complete reconstruction of the top quark momenta and hence that of the neutrinos. The visible energy fractions (\(z_{i}\)), as defined in Eq. (2), are correlated with the polarization of the top quarks, and can let us distinguish between left and right chiral top quarks, without completely reconstructing their momenta. However, this requires a correct pairing of the two \(b\)-quarks with the corresponding leptons. This can be achieved using the fact that in the lab frame, a sibling lepton and a \(b\)-quark have a smaller \(\Delta R\) than the \(\Delta R\) between Figure 1: Normalized distributions for the visible energy fractions \(z_{1}\) and \(z_{2}\) for the signals with parton level events. See text for further discussions. Figure 2: Normalized distributions for the two-dimensional correlations between \(\Delta\phi(l_{1},l_{2})\) vs. \(\Delta\phi(b_{1},b_{2})\) (left-column), \(\Delta\phi(b_{1},p_{\rm miss})\) vs. \(\Delta\phi(b_{2},p_{\rm miss})\) (middle-column), and \(\Delta\phi(l_{1},p_{\rm miss})\) vs. \(\Delta\phi(l_{2},p_{\rm miss})\) (right-column) for the signals with parton level events. The red points represent maximum density, and light blue represents minimum density for the number of events. a cousin lepton and a \(b\)-quark. Here, \(\Delta R=\sqrt{\Delta\phi^{2}+\Delta\eta^{2}}\) where \(\phi\) is the azimuthal angle and \(\eta\) is the pseudorapidity of a particle. We label the two leptons as \(l_{1}\) and \(l_{2}\), ordered according to their transverse momenta (\(p_{T}\)), i.e., with \(p_{T}(l_{1})>p_{T}(l_{2})\). The two \(b\)-quarks can pair with the leptons in two ways. If the correct pairs are such that \(l_{1}\) and \(b_{1}\) come from one top quark, while \(l_{2}\) and \(b_{2}\) come from another top quark, then we expect to have \(\Delta R(l_{1},b_{1})+\Delta R(l_{2},b_{2})<\Delta R(l_{1},b_{2})+\Delta R(l_{ 2},b_{1})\). This criterion is satisfied for more than 96 % of times for the chosen mass of 1 TeV. For a heavier scalar, the top quarks will be more boosted which will lead their decay products to collimate further, and this increases the efficiency of pairing the lepton and the \(b\)-quark correctly. 
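The \(\Delta R\) pairing criterion and the visible energy fractions of Eq. (2) are straightforward to compute per event. A minimal sketch (ours) on a toy event; in practice the inputs would be the reconstructed leptons and \(b\)-jets after the selection cuts.

```python
import numpy as np

def delta_r(a, b):
    """Delta R between two objects given as (eta, phi) pairs, with phi wrapped."""
    dphi = (a[1] - b[1] + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(a[0] - b[0], dphi)

def pair_and_fractions(leptons, bjets, energies_l, energies_b):
    """Pair each pT-ordered lepton with a b-jet by minimizing the summed Delta R,
    then return the visible energy fractions z_i = E_l / (E_l + E_b) of Eq. (2)."""
    same = delta_r(leptons[0], bjets[0]) + delta_r(leptons[1], bjets[1])
    swap = delta_r(leptons[0], bjets[1]) + delta_r(leptons[1], bjets[0])
    order = (0, 1) if same < swap else (1, 0)      # sibling objects are closer in Delta R
    z = [energies_l[i] / (energies_l[i] + energies_b[order[i]]) for i in (0, 1)]
    return order, z

# toy event: lepton 1 sits near b-jet 2 and lepton 2 near b-jet 1 in (eta, phi)
leptons = [(0.4, 1.0), (-0.8, -2.0)]
bjets   = [(-0.6, -1.7), (0.6, 1.3)]
order, z = pair_and_fractions(leptons, bjets, energies_l=[120.0, 80.0],
                              energies_b=[150.0, 90.0])
print(order, [round(v, 2) for v in z])   # (1, 0): swapped pairing; z_1, z_2 from matched pairs
```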
We study the distributions for the energy fraction variables \(z_{1}\) and \(z_{2}\) defined in Eq. (2) using parton-level events to see how these variables can separate the three signals, namely Singlet, Doublet, and Triplet. The normalized distributions for \(z_{1}\) and \(z_{2}\) are shown in Fig. 1. Both \(z_{1}\) and \(z_{2}\) peak on the lower side for Triplet (red lines) as the produced top quarks are left-handed in this case. This means the final state leptons are less boosted as we go from the top rest frame to the lab frame, and thus \(z_{i}\)'s peak at smaller values. For the Singlet (blue lines), the variables \(z_{i}\) sharply drop near 0.2, having more fractional events with \(z_{i}>0.5\), compared to the Triplet. For the Doublet (green lines), however, the variables peak below \(z_{i}=0.5\), but the asymmetry with respect to \(z_{i}=0.5\) is smaller than the Triplet since the Doublet decay gives rise to both left-handed as well as right-handed top quarks. With realistic detector effects the strengths of these variables might diminish, but the qualitative result would remain the same, as we will see below. We define net asymmetries for the visible energy fractions (\(z_{i}\) ) as \[\mathcal{A}_{z_{i}}=\frac{N(z_{i}>c_{z_{i}})-N(z_{i}<c_{z_{i}})}{N_{\rm tot}}, \tag{8}\] with \(N(x>c)\) being the number of events with \(x>c\), and \(N_{\rm tot}\) is the total number of events. Similar to the polarization parameters, one can construct variables to capture the spin correlation between the two top quarks using the lab frame angular separation between the final state particles in the transverse plane. We study two-dimensional (2D) correlations between \(\Delta\phi(l_{1},l_{2})\) vs. \(\Delta\phi(b_{1},b_{2})\), \(\Delta\phi(b_{1},p_{\rm miss})\) vs. \(\Delta\phi(b_{2},p_{\rm miss})\), and \(\Delta\phi(l_{1},p_{\rm miss})\) vs. \(\Delta\phi(l_{2},p_{\rm miss})\). The normalized 2D correlations are shown in Fig. 2 with parton-level events. The correlations are not symmetric with respect to the diagonal axes, and they are different for different signals. In the first column in Fig. 2, the distributions are more peaked in \(\Delta\phi(l_{1},l_{2})\) compared to \(\Delta\phi(b_{1},b_{2})\) for the Singlet, while the opposite is true for the Triplet. For the Doublet, however, the distributions are less peaked and more symmetric around the \(\Delta\phi(l_{1},l_{2})=\Delta\phi(b_{1},b_{2})\) line. This can be explained in the following way. As the two top quarks are produced from a heavy resonance, \(\Delta\phi(l_{1},l_{2})\) and \(\Delta\phi(b_{1},b_{2})\) are expected to peak around \(\pi\) in the lab frame due to relativistic focusing. But due to the positive polarization of top quarks from the Singlet, the leptons (\(b\)-quarks) are emitted primarily in the same (opposite) direction as the top quark spin in the top rest frame. As a result, after boosting to the lab frame, the peak of \(\Delta\phi(l_{1},l_{2})\) becomes sharper, while the peak of \(\Delta\phi(b_{1},b_{2})\) becomes broader. For the Triplet with negatively polarized top quarks, the opposite happens, i.e., the peak of \(\Delta\phi(b_{1},b_{2})\) becomes sharper, while the peak of \(\Delta\phi(l_{1},l_{2})\) becomes broader. Finally for the Doublet, with a mix of positively and negatively polarized top quarks, the peaks of \(\Delta\phi(l_{1},l_{2})\) and \(\Delta\phi(b_{1},b_{2})\) have similar broadening. The \(\Delta\phi(l/b,p_{\rm miss})\) peak near zero and \(\pi\), as shown in the second and third column of Fig. 2. 
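The net asymmetry of Eq. (8) is a simple counting observable whose sign tracks which side of the cut the \(z_{i}\) distribution favors. A toy sketch (ours), using Beta-distributed stand-ins for the \(z\) distributions rather than simulated events, to show how left- and right-handed-like samples give opposite-sign \(\mathcal{A}_{z}\); the working point \(c_{z}\simeq 0.38\) is the one chosen later in the text.

```python
import numpy as np

def counting_asymmetry(values, cut):
    """Eq. (8): (N(v > cut) - N(v < cut)) / N_tot."""
    values = np.asarray(values)
    return ((values > cut).sum() - (values < cut).sum()) / len(values)

rng = np.random.default_rng(0)
n = 50_000
# toy z-distributions: left-handed-like tops push z low, right-handed-like push z high
z_left  = rng.beta(2.0, 4.0, n)    # peaks below 0.5, mimicking the Triplet
z_right = rng.beta(4.0, 2.0, n)    # peaks above 0.5, mimicking the Singlet
for label, z in [("Triplet-like", z_left), ("Singlet-like", z_right)]:
    print(label, round(counting_asymmetry(z, cut=0.38), 2))
# opposite signs, the qualitative behaviour of A_z1, A_z2 in Table 1
```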
The distribution of \(\Delta\phi(l/b,p_{\rm miss})\) can be explained in a similar way as that of \(\Delta\phi(l_{1}/b_{1},l_{2}/b_{2})\), after noting that \(\bar{\nu}\) is distributed the same way as \(b\) quarks around the top quark spin in the top rest frame. The peak of \(\Delta\phi(l,p_{\rm miss})\) is sharper than that of \(\Delta\phi(b,p_{\rm miss})\) for the Singlet, and the opposite is true for the Triplet. For the Doublet, \(\Delta\phi(l,p_{\rm miss})\) and \(\Delta\phi(b,p_{\rm miss})\) behave roughly in the same way. Given this, we propose the following variables based on these 2D correlations: \[x_{lb} = \Delta\phi(l_{1},l_{2})/\pi-\Delta\phi(b_{1},b_{2})/\pi,\] \[x_{ll} = \Delta\phi(l_{1},p_{\rm miss})/\pi+\Delta\phi(l_{2},p_{\rm miss})/\pi,\] \[x_{bb} = \Delta\phi(b_{1},p_{\rm miss})/\pi+\Delta\phi(b_{2},p_{\rm miss})/\pi, \tag{9}\] to extract the spin correlation of the two top quarks. The asymmetries for the correlations are defined as, \[\mathcal{A}_{lb}=\frac{N(x_{lb}>0)-N(x_{lb}<0)}{N_{\rm tot}}, \tag{10}\] \[\mathcal{A}_{lb}=\frac{N(|x_{bb}-1|<c_{bb})-N(|x_{bb}-1|>c_{bb})}{N_{\rm tot}}, \tag{11}\] \[\mathcal{A}_{ll}=\frac{N(|x_{ll}-1|<c_{ll})-N(|x_{ll}-1|>c_{ll})}{N_{\rm tot}}. \tag{12}\] Note that the pairing of \(b\)-quark with leptons is required only for the variable \(x_{bb}\); the rest two variables \(x_{lb}\) and \(x_{ll}\) are independent of pairing. To maximize the asymmetries defined in Eqs. (8), (10), (11), and (12), we choose \(c_{z_{1}}=c_{z_{2}}\simeq 0.38\) and \(c_{bb}=c_{ll}\simeq 0.14\) by observing the parton level distributions in Figs. 1 and 2. The values of the asymmetries at parton level are listed in Table 1 for the three signals. It is clear that with the magnitudes and signs of the asymmetries, one can easily identify and separate the signals. However, we need to extract these asymmetries with realistic detector effects, including possible backgrounds. Therefore, in the next section, we analyze the signals with detector-level events in the presence of SM backgrounds. ## IV Results In this section, we discuss how to distinguish the three types of signals in the presence of background events, \begin{table} \begin{tabular}{l c c c c c} \hline & \(\mathcal{A}_{z_{1}}\) & \(\mathcal{A}_{z_{2}}\) & \(\mathcal{A}_{lb}\) & \(\mathcal{A}_{ll}\) & \(\mathcal{A}_{bb}\) \\ \hline Singlet & 0.31 & 0.31 & 0.33 & 0.58 & 0.05 \\ \hline Triplet & \(-0.36\) & \(-0.36\) & \(-0.31\) & 0.0 & 0.28 \\ \hline Doublet & \(-0.03\) & \(-0.03\) & \(-0.10\) & 0.07 & 0.11 \\ \hline \end{tabular} \end{table} Table 1: Values of asymmetries for the signals with parton level events. For all the signals, we choose \(\lambda=0.003\) and \(m_{\phi}=1\) TeV. with a simplified detector-level simulation. The backgrounds that can mimic our signal topology of two positively charged leptons (\(2l^{+}\)), two \(b\)-jets, and missing energy are [31; 32; 33; 34]\(t\bar{t}W^{+}\), \(t\bar{t}Z\), \(t\bar{t}h\), \(t\bar{t}W^{+}W^{-}\), \(W^{+}W^{+}jj\), \(ZZjj\), \(ZZW^{+}\), and \(W^{+}W^{-}Z\). In addition, \(t\bar{t}\)+jets process, which has a large cross-section, can fake our signal if the lepton charges get misidentified [35; 36; 37]. We generate the signal (Singlet, Doublet, and Triplet) and background events in [email protected] [38] at leading order (LO) in QCD, without cuts on the final state particles, and with a dynamic choice of factorization scale given by \(\sum_{i}M_{i}^{T}/2\), where \(M_{i}^{T}\) is the transverse mass of the \(i\)-th final state particle. 
We use nn23lo1[39] for the parton distribution functions (PDFs). Background events are generated in MadGraph5 with the final states that can give rise to \(2l^{+}2b+\mathcal{E}_{T}\) at the detector level. Events are then passed to PYTHIA8.2 [40] for showering and hadronization, followed by fast detector simulation in Delphes v3.4.2 [41]. Events are selected at detector level with at least two \(b\)-tagged jets, two positively charged leptons (\(2l^{+}\)), and missing transverse energy with the following selection level cuts \[p_{T}(l)>10\text{ GeV},\ p_{T}(b)>20\text{ GeV},\mathcal{E}_{T} ^{\prime}>10\text{ GeV},\] \[\Delta R(b,b)>0.5,\ \Delta R(b,l)>0.4,\ \Delta R(l,l)>0.4,\] \[|\eta_{b}|<2.5,\ |\eta_{l}|<2.5. \tag{13}\] With the selected background (BKG) and signal events, we study the one-dimensional and two-dimensional normalized distributions for the variables (\(z_{i}\) and \(\Delta\phi\) correlation, respectively) that we introduced in the previous section. These are shown in Figs. 3 (\(z_{i}\)) and 4 (\(\Delta\phi\)). The qualitative features of the distributions at the detector level remain the same as the parton level distributions studied in the previous section. The \(z_{i}\) distributions make clear distinctions between the Singlet and Triplet peaking in the right and left, respectively. Distributions are roughly symmetric around \(z_{i}=0.5\) for the Doublet. For the BKG, the \(z_{1}\) distribution is symmetric around \(z_{1}=0.5\) as well, while the \(z_{2}\) distribution is asymmetric. In the 2D \(\Delta\phi\) distributions, the background events are distributed quite differently than the signals. Therefore, it is possible to distinguish the signals based on these distributions, even with the detector-level events. However, the total cross-section for the background is larger than that for the signal, and therefore we now discuss some kinematic cuts that are needed to reduce backgrounds and increase the signal sensitivity. We study various kinematic distributions, such as the transverse momenta (\(p_{T}\)) of the final state particles, total hadronic energy (\(H_{T}\)), and visible mass (\(m_{l_{1}^{+}l_{2}^{+}b_{1}b_{2}}\)). We find that visible mass, the normalized distribution shown in Fig. 5, can be used to suppress the background and enhance the significance of all three signals. In particular, a cut on the visible mass of \[m_{l_{1}^{+}l_{2}^{+}b_{1}b_{2}}>430\text{ GeV} \tag{14}\] maximizes the signal significance. The signal and background processes are summarized in Table 2, showing their cross sections at the generation level and the expected number of events for an integrated luminosity of \(\mathcal{L}=300\) fb\({}^{-1}\) after the visible mass cut. The total number of background events is expected to be 208 in the \(2l^{+}2b\mathcal{E}_{T}\) final state. The signal significance expected for the three signals are given in Table 3 for integrated luminosities of \(\mathcal{L}=150\) fb\({}^{-1}\) (currently available), 300 fb\({}^{-1}\) (next phase of LHC), and 3000 fb\({}^{-1}\) (high luminosity phase). We calculate the signal significance [42] using \[\mathcal{S}=\sqrt{2\left[(s+b)\log\left(1+\frac{s}{b}\right)-s\right]}, \tag{15}\] where \(s\) and \(b\) stand for the total number of signal and background events surviving after cuts. For the chosen coupling of \(\lambda=0.003\) and \(m_{\phi}=1\) TeV, the significance for the Triplet and Doublet signals are higher than the Singlet signal. 
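Eq. (15) together with the expected event counts of Table 2 reproduces the significances in Table 3, and since both \(s\) and \(b\) scale linearly with luminosity, \(\mathcal{S}\propto\sqrt{\mathcal{L}}\) gives the quoted \(5\sigma\) discovery luminosities. A short numerical check (ours):

```python
import numpy as np

def significance(s, b):
    """Discovery significance of Eq. (15)."""
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

# expected events at 300 fb^-1 after the selection and visible-mass cuts (Table 2)
background = 207.8
signals = {"Singlet": 35.7, "Doublet": 46.1, "Triplet": 42.2}

for name, s300 in signals.items():
    for lumi in (150.0, 300.0, 3000.0):
        scale = lumi / 300.0
        S = significance(s300 * scale, background * scale)
        print(f"{name:8s} {lumi:6.0f} fb^-1: S = {S:.2f}")
    # luminosity where S reaches 5 sigma, using the exact S ~ sqrt(L) scaling of Eq. (15)
    L5 = 300.0 * (5.0 / significance(s300, background)) ** 2
    print(f"{name:8s} 5-sigma discovery at roughly {L5:.0f} fb^-1")
# closely reproduces Table 3, e.g. Singlet: S = 2.40 at 300 fb^-1 and ~1300 fb^-1 for 5 sigma
```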
A luminosity of about 1300 fb\({}^{-1}\) is required for discovery of Singlet, whereas the Doublet and Triplet can be discovered with about 790 fb\({}^{-1}\) and 930 fb\({}^{-1}\) of luminosity, respectively with 5\(\sigma\) significance. Having discussed the discovery potential for the signals, we now turn to our primary goal of this analysis, which is to distinguish among the three signals with the help of the observables discussed in Sec. III. To this end, we first calculate the asymmetries for all the variables and summarize them in Table 4 for all three signals and backgrounds separately, as well as, signals in the presence of backgrounds. The numbers in the first three rows show that all three signals can be identified by looking at the value and the sign Figure 3: Normalized distributions for the visible energy fractions \(z_{1}\) and \(z_{2}\) for the signals and background at Delphes level with selection cuts as in Eq. (13). of the five asymmetries. However, these asymmetries are affected by the background events, as shown in the last three rows. We calculate the differences between any two signals in the presence of backgrounds using all five asymmetries, given in Table 4, in terms of the \(\chi^{2}\) function \[\chi^{2}=\sum_{i}\left|\frac{\mathcal{A}_{i}(\text{Signal\_1+BKG})-\mathcal{A}_ {i}(\text{Signal\_2+BKG})}{\delta\mathcal{A}(\text{BKG})}\right|^{2}. \tag{16}\] Here \(\delta\mathcal{A}_{i}(\text{BKG})=\sqrt{\frac{1-\mathcal{A}_{i}^{2}(\text{ BKG})}{\mathcal{A}\sigma(\text{BKG})}}\) is the statistical uncertainty due to the SM backgrounds with \(\sigma(\text{BKG})\) being the total background cross section. Signal_j (\(j=1,2\)) denote any two among the Singlet, Doublet, and Triplet signals. We estimate the luminosity required for \(2\sigma\), \(3\sigma\), and \(5\sigma\) C.L. separability among the signals using the above \(\chi^{2}\) functions and quote them in Table 5. The Singlet and the Triplet are quite different since the former (latter) decays only into right-(left-) chiral top quarks; a luminosity of about 628 fb\({}^{-1}\) is required to achieve separability at \(5\sigma\) C.L., although neither of them can be discovered Figure 4: Normalized distributions for the two-dimensional correlations between \(\Delta\phi(l_{1},l_{2})\) vs. \(\Delta\phi(b_{1},b_{2})\) (left-column), \(\Delta\phi(b_{1},p_{\text{miss}})\) vs. \(\Delta\phi(b_{2},p_{\text{miss}})\) (middle-column), and \(\Delta\phi(l_{1},p_{\text{miss}})\) vs. \(\Delta\phi(l_{2},p_{\text{miss}})\) (right-column) for the signals and background at Delphes level with selection cuts in Eq. (13). The color description is the same as in Fig. 2. for that luminosity with the expected number of events, see Table 3. On the other hand, we require about 2842 fb\({}^{-1}\) and 2061 fb\({}^{-1}\) of luminosity for 5\(\sigma\) separation between the Singlet versus Doublet and Doublet versus Triplet, respectively, as the two top quarks have both chiralities originating from the Doublet. Thus, to make distinctions between the signals involving the Doublet, we need higher luminosity than what is required to discover them. ## V Conclusion In this work, we have discussed how to measure the electroweak quantum numbers of BSM color sextet scalar and vector particles. 
While all the sextet particles that we consider decay into a _like_-sign top quark pair, the top \begin{table} \begin{tabular}{l c c c} \hline \hline Process (generated up-to) & Cross section (fb) & Efficiency (\(\epsilon\)) & Expected Events \\ & & & (\(2l^{+}2hE_{T}\)) \\ \hline Singlet: \(tr\) (\(2l^{+}2hE_{T}\)) & 1.121 & 10.62 \% & 35.7 \\ \hline Doublet: \(tr\) (\(2l^{+}2hE_{T}\)) & 1.348 & 11.40 \% & 46.1 \\ \hline Triplet: \(tr\) (\(2l^{+}2hE_{T}\)) & 1.166 & 12.06 \% & 42.2 \\ \hline B1: \(i\bar{h}^{+}\) (\(2l^{+}2hE_{T}\)) & 9.2 & 3.75 \% & 103.5 \\ \hline B2: \(i\bar{t}Z\) (\(t\to l^{+}l\bar{b}E_{T},l\to\bar{b}jj+\bar{b}l^{-}E_{T},Z\to 2l\)) & 8.056 & 1.76 \% & 42.6 \\ \hline B3: \(i\bar{t}+jj\) (\(t/\bar{t}\to l^{+}l^{-}\)) & 29482.0 & \(4.7\times 10^{-5}\) \% & 41.6 \\ \hline B4: \(i\bar{t}h+jj\) (\(t/\bar{t}\to l^{+}l^{-}/\),\(h\to\) all) & 23.68 & 0.014 \% & 10.3 \\ \hline B5: \(i\bar{t}h^{+}W^{-}-(t/W^{+}\to l^{+}\), \(i/W^{-}\to all\)) & 0.398 & 4.94 \% & 5.9 \\ \hline B6: \(W^{+}W^{+}jj\) (\(2l^{+}2jE_{T}\)) & 8.967 & 0.11 \% & 2.9 \\ \hline B7: \(ZZjj\) (\(4l2j\)) & 12.69 & 0.02 \% & 0.8 \\ \hline B8: \(ZZW^{+}\) (\(4ljj+3ljjE_{T}\)) & 0.4 & 0.17 \% & 0.2 \\ \hline B9: \(W^{+}W^{-}Z\) (\(3l2jE_{T}\)) & 1.61 & \(\leq 10^{-3}\) \% & 0 \\ \hline Total Background & & & 207.8 \\ \hline \hline \end{tabular} \end{table} Table 2: The signal and the background cross sections and the expected number of events for an integrated luminosity of \(\mathcal{L}=300\) fb\({}^{-1}\) with selection cuts in Eq. (13) as well as the visible mass cut in Eq. (14). The contents in the parenthesis of the first column correspond to the final states contributing to the \(2l^{+}2hE_{T}\) final state. For all signals, we choose \(\lambda=0.003\) and \(m_{\phi}=1\) TeV. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \(\mathcal{A}_{t_{1}}\) & \(\mathcal{A}_{t_{2}}\) & \(\mathcal{A}_{tb}\) & \(\mathcal{A}_{ll}\) & \(\mathcal{A}_{bb}\) \\ \hline Singlet & 0.52 & 0.49 & 0.28 & 0.35 & \(-\)0.04 \\ \hline Doublet & 0.13 & 0.09 & \(-\)0.12 & \(-\)0.03 & 0.08 \\ \hline Triplet & \(-\)0.27 & \(-\)0.32 & \(-\)0.36 & \(-\)0.09 & 0.32 \\ \hline BKG & 0.14 & \(-\)0.28 & \(-\)0.31 & \(-\)0.50 & \(-\)0.25 \\ \hline Singlet + BKG & 0.19 & \(-\)0.16 & \(-\)0.22 & \(-\)0.38 & \(-\)0.21 \\ \hline Doublet + BKG & 0.14 & \(-\)0.21 & \(-\)0.27 & \(-\)0.42 & \(-\)0.19 \\ \hline Triplet + BKG & 0.07 & \(-\)0.28 & \(-\)0.31 & \(-\)0.43 & \(-\)0.15 \\ \hline \hline \end{tabular} \end{table} Table 4: Values of asymmetries for the signals along with the backgrounds with selection cuts in Eq. (13) as well as the visible mass cut in Eq. (14). For all the signals, we choose \(\lambda=0.003\) and \(m_{\phi}=1\) TeV. Figure 5: Normalized distributions for the visible mass with selection cuts in Eq. (13). For all signals, we choose \(\lambda=0.003\) and \(m_{\phi}=1\) TeV. \begin{table} \begin{tabular}{l c c c} \hline \hline Signal & 150 fb\({}^{-1}\) & 300 fb\({}^{-1}\) & 3000 fb\({}^{-1}\) & \(\mathcal{L}\) for 5\(\sigma\) C.L. \\ \hline Singlet & 1.70 & 2.40 & 7.59 & \(\simeq 1300\) fb\({}^{-1}\) \\ \hline Doublet & 2.18 & 3.09 & 9.77 & \(\simeq 790\) fb\({}^{-1}\) \\ \hline Triplet & 2.01 & 2.84 & 8.97 & \(\simeq 930\) fb\({}^{-1}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Signal significance using only the total number of events with integrated luminosities of \(\mathcal{L}=150\) fb\({}^{-1}\), 300 fb\({}^{-1}\), and 3000 fb\({}^{-1}\) with selection cuts in Eq. (13) as well as the visible mass cut in Eq. 
(14). For all the signals, we choose \(\lambda=0.003\) and \(m_{\phi}=1\) TeV. quarks have identical chirality for the two sextet scalars and opposite chirality for the sextet vector. Furthermore, one of the scalars gives rise to left-handed top quarks while the other decays to right-handed ones. These features can be captured by several kinematic variables that rely only on visible final states. One such variable is the well-known visible energy fraction (\(z_{i}\)) of final state leptons, Eq. (2). Here we construct three additional variables, defined in Eq. (9), that depend on the angular correlation between the final state leptons and \(b\)-jets. All of these variables are sensitive to the polarization of the top quarks and, taken together, can distinguish among the three possible sextet states. Through a parton-level analysis, we first demonstrate the utility of the visible energy fraction variables and the angular correlation variables. We study their distributions in Figs. 1 and 2, and compute a set of asymmetries in Table 1 to identify the differences among the three signals. These show that, in principle, the asymmetries can fully distinguish, as well as identify, the quantum numbers of the sextet states. We then implement a simplified detector-level simulation, taking into account possible SM backgrounds, to verify how well we can differentiate among the three types of signals. We find that while the three signals can be distinguished among themselves even at the detector level, the inclusion of SM background reduces the difference among the signals. Nonetheless, with sufficient statistics within the reach of the high-luminosity phase of the LHC, the three types of signals can still be distinguished, as demonstrated in Table 5. We find that higher luminosities are required to make all these distinctions than to discover the particles themselves. To summarize, if discovered at the LHC, it is possible to measure the electroweak quantum numbers of BSM color sextet particles using top quark polarization and spin correlation observables. ## Acknowledgements S.K. is supported in part by the U.S. National Science Foundation (NSF) grant PHY-1915314 and the DOE contract DE-AC02-05CH11231. S.K. thanks IISER Kolkata for hospitality during various stages of this work. R.R. would like to acknowledge support from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish Chandra Research Institute.
2301.07067
Transformers as Algorithms: Generalization and Stability in In-context Learning
In-context learning (ICL) is a type of prompting where a transformer model operates on a sequence of (input, output) examples and performs inference on-the-fly. In this work, we formalize in-context learning as an algorithm learning problem where a transformer model implicitly constructs a hypothesis function at inference-time. We first explore the statistical aspects of this abstraction through the lens of multitask learning: We obtain generalization bounds for ICL when the input prompt is (1) a sequence of i.i.d. (input, label) pairs or (2) a trajectory arising from a dynamical system. The crux of our analysis is relating the excess risk to the stability of the algorithm implemented by the transformer. We characterize when transformer/attention architecture provably obeys the stability condition and also provide empirical verification. For generalization on unseen tasks, we identify an inductive bias phenomenon in which the transfer learning risk is governed by the task complexity and the number of MTL tasks in a highly predictable manner. Finally, we provide numerical evaluations that (1) demonstrate transformers can indeed implement near-optimal algorithms on classical regression problems with i.i.d. and dynamic data, (2) provide insights on stability, and (3) verify our theoretical predictions.
Yingcong Li, M. Emrullah Ildiz, Dimitris Papailiopoulos, Samet Oymak
2023-01-17T18:31:12Z
http://arxiv.org/abs/2301.07067v2
# Transformers as Algorithms: ###### Abstract In-context learning (ICL) is a type of prompting where a transformer model operates on a sequence of (input, output) examples and performs inference on-the-fly. In this work, we formalize in-context learning as an _algorithm learning_ problem where a transformer model implicitly constructs a hypothesis function at inference-time. We first explore the statistical aspects of this abstraction through the lens of multitask learning: We obtain generalization bounds for ICL when the input prompt is (1) a sequence of i.i.d. (input, label) pairs or (2) a trajectory arising from a dynamical system. The crux of our analysis is relating the excess risk to the stability of the algorithm implemented by the transformer. We characterize when transformer/attention architecture provably obeys the stability condition and also provide empirical verification. For generalization on unseen tasks, we identify an inductive bias phenomenon in which the transfer learning risk is governed by the task complexity and the number of MTL tasks in a highly predictable manner. Finally, we provide numerical evaluations that (1) demonstrate transformers can indeed implement near-optimal algorithms on classical regression problems with i.i.d. and dynamic data, (2) provide insights on stability, and (3) verify our theoretical predictions. ## 1 Introduction Transformer (TF) models were originally developed for NLP problems to address long-range dependencies through the attention mechanism. In recent years, language models have become increasingly large, with some boasting billions of parameters (e.g., GPT-3 has 175B, and PaLM has 540B parameters [6, 9]). It is perhaps not surprising that these large language models (LLMs) have achieved state-of-the-art performance on a wide range of natural language processing tasks. What is surprising is the ability of some of these LLMs to perform _in-context learning_ (ICL), i.e., to adapt and perform a specific task given a short prompt, in the form of instructions, and a small number of examples [6]. These models' ability to learn in-context without explicit training allows them to efficiently perform new tasks without a need for updating model weights. Figure 1 illustrates examples of ICL where a transformer makes a prediction on an example based on a few (input, output) examples provided within its prompt. For NLP, the examples may correspond to pairs of (question, answer)'s or translations. Recent works [17, 24] demonstrate that ICL can also be used to infer general functional relationships. For instance, [19, 17] aims to solve certain supervised learning problems where they feed an entire training dataset \((\mathbf{x}_{i},f(\mathbf{x}_{i}))_{i=1}^{n-1}\) as the input prompt, expecting that conditioning Figure 1: Examples of ICL. We focus on the lower two settings where a transformer admits a supervised dataset or a dynamical system trajectory as a prompt. Then, it auto-regressively predicts the output following an input example \(\mathbf{x}_{i}\) based on the prompt \((\mathbf{x}_{1},\dots,\mathbf{x}_{i})\). the TF model on this prompt would allow it to make an accurate prediction on a new input point \(\mathbf{x}_{n}\). As discussed in [17, 1], this provides an implicit optimization flavor to ICL, where the model _implicitly trains_ on the data provided within the prompt, and performs inference on test points. 
Our work formalizes in-context learning from a statistical lens, abstracting the transformer as a learning algorithm where the goal is inferring the correct (input, ouput) functional relationship from prompts. We focus on a meta-learning setting where the model is trained on many tasks, allowing ICL to generalize to both new and previously-seen tasks. Our main contributions are: * **Generalization bounds (Sec 3 & 5):** Suppose the model is trained on \(T\) tasks each with a data-sequence containing \(n\) examples. During training, each sequence is fed to the model auto-regressively as depicted in Figure 1. By abstracting ICL as an _algorithm learning_ problem, we establish a multitask (MTL) generalization rate of \(1/\sqrt{nT}\) for i.i.d. as well as dynamic data. In order to achieve the proper dependence on the sequence length (\(1/\sqrt{n}\) factor), we overcome temporal dependencies by relating generalization to algorithmic stability [4]. Experiments demonstrate that (1) ICL can select near-optimal algorithms for flagship regression problems as illustrated in Figure 2 and (2) ICL indeed benefits from learning across the full task sequence in line with theory. * **Stability of transformer architectures (Sec 3.1&7):** We verify our stability assumptions that facilitate favorable generalization rates. Theoretically we identify when self-attention enjoys favorable stability properties through a tight analysis that quantify the influence of one token on another. Empirically, we show that ICL predictions become more stable to input perturbations as the prompt length increases. We also find that training with noisy data helps promote stability. * **From multitask to meta-learning (Sec 4):** We provide insights into how our MTL bounds can inform generalization ability of ICL on previously unseen tasks (i.e. transfer learning). Our experiments also reveal an intriguing _inductive bias phenomenon_: The transfer risk is governed by the _task complexity_ (i.e. functions \(f\) in Fig 1) and the number of MTL tasks \(T\) in a highly predictable fashion and exhibits little dependence on the complexity of the TF architecture. The remainder of the paper is organized as follows. The next section discusses connections to prior art and Section 2 introduces the problem setup. Section 3 provides our main theoretical guarantees for ICL and stability of transformers. Section 4 extends our arguments and experiments to the transfer learning setting. Section 5 extends our results to learning stable dynamical systems where each prompt corresponds to a system trajectory. In Section 6, we explain how ICL can be interpreted as an implicit model selection procedure building on the algorithm learning viewpoint. Finally, Section 7 provides numerical evaluations. Figure 2: Examples of _algorithm learning_ in three ICL settings: **(a) Noisy linear regression: \(y_{i}\sim\mathcal{N}(\mathbf{x}_{t}^{\top}\mathbf{\beta},\sigma^{2})\)** with \(\mathbf{x}_{i},\mathbf{\beta}\sim\mathcal{N}(0,\mathbf{I})\). **(b) Linear data with covariance prior: \(y_{i}=\mathbf{x}_{i}^{\top}\mathbf{\beta}\)** with \(\mathbf{\beta}\sim\mathcal{N}(0,\mathbf{\Sigma})\) with non-isotropic \(\mathbf{\Sigma}\). **(c) Partially observed linear dynamics: \(\mathbf{x}_{t}=\mathbf{C}\mathbf{s}_{t}\)** and \(\mathbf{s}_{t+1}\sim\mathcal{N}(\mathbf{A}\mathbf{s}_{t},\sigma^{2}\mathbf{I})\)** with randomly sampled \(\mathbf{C},\mathbf{A}\). 
Each setting trains a transformer with large number of random regression tasks and evaluates on a new task from the same distribution. In (a) and (b), ICL performances match Bayes-optimal decision (weighted linear ridge regression) that adapt to noise level \(\sigma\) and covariance prior \(\mathbf{\Sigma}\) on the tasks. (c) shows that ICL outperforms auto-regressive least-squares estimators with varying memory \(H\). ICL is able to implement competitive ML algorithms by leveraging the task prior learned during training. See Sec 7 for experimental details. ### Related work With the success of large language models, prompting methods have witnessed immense interest [25]. ICL [6, 39] is a prompting strategy where a transformer serves as an on-the-fly predictive model through conditioning on a sequence of input/output examples \((\mathbf{x}_{1},f(\mathbf{x}_{1}),\ldots\mathbf{x}_{n-1},f(\mathbf{x}_{n-1}),\mathbf{x}_{n})\). Our work is inspired by [17] which studies ICL in synthetic settings and demonstrates transformers can serve as complex classifiers through ICL. In parallel, [19] uses ICL as an AutoML (i.e. model-selection, hyperparameter tuning) framework where they plug in a dataset to transformer and use it as a classifier for new test points. Our formalism on _algorithm learning_ provides a justification on how transformers can accomplish this with proper meta-training. [56] interprets ICL as implicit Bayesian inference and develops guarantees when the training distribution is a mixture of HMMs. Very recent works [54, 1, 11] aim to relate ICL to running gradient descent algorithm over the input prompt. [1] also provides related observations regarding the optimal decision making ability of ICL for linear models. Unlike prior ICL works, we provide finite sample generalization guarantees and our theory extends to temporally-dependent prompts (e.g. when prompts are trajectories of dynamical systems). Dynamical systems in turn relate to a recent work by [24] who use ICL for reinforcement learning. This work is also related to the literature on the statistical aspects of time-series prediction [57, 23, 22, 46, 35] and learning (non)linear dynamics [16, 60, 59, 52, 44, 12, 49, 29, 30, 40, 3] among others. Most of these focus on autoregressive models of order 1 whereas in ICL we allow for arbitrarily long memory/prompt for predictions. Closer works by [33, 36] identify broad conditions for time-series learning however they still require finite memory as well as \(\beta/\phi\)-mixing assumptions. We remark that mixing assumptions are not really applicable to training sequences/prompts in ICL due to the meta-learning nature of the problem as the sequence elements are coupled through the (stochastic) task functions (see Sec 2). Our algorithm learning formulation leads to new challenges and insights when verifying the conditions for Azuma-type inequalities and our results are facilitated through connections to algorithmic stability [5]. We also provide experiments and theory that justify our stability conditions. Further discussion is under Appendix F. ## 2 Problem Setup Notation.Let \(\mathcal{X}\) be the input feature space, and \(\mathcal{Y}\) be the output/label space. We use boldface for vector variables. \([n]\) denotes the set \(\{1,2,\ldots,n\}\). \(c,C>0\) denote absolute constants and \(\|\cdot\|_{\ell_{p}}\) denotes the \(\ell_{p}\)-norm. 
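As a concrete reference point for the setup that follows, the Figure 2(a) setting is simple to reproduce: conditioned on the prompt, the best possible prediction is the posterior mean, i.e. ridge regression with regularization \(\sigma^{2}\) under the \(\mathcal{N}(0,\mathbf{I})\) prior. Below is a minimal sketch (ours) of the task distribution and of this Bayes-optimal baseline; Figure 2(a) reports that ICL tracks this curve.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 5, 20, 0.5

def sample_task():
    """One ICL task as in Figure 2(a): y_i ~ N(x_i^T beta, sigma^2), x_i, beta ~ N(0, I)."""
    beta = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta

def ridge_predict(X_ctx, y_ctx, x_query, lam):
    """Posterior-mean (Bayes-optimal) prediction for the isotropic Gaussian prior:
    ridge regression with regularization lam = sigma^2."""
    k = X_ctx.shape[1]
    w = np.linalg.solve(X_ctx.T @ X_ctx + lam * np.eye(k), X_ctx.T @ y_ctx)
    return x_query @ w

# risk of the Bayes-optimal in-context algorithm as a function of prompt length m,
# measured against the clean signal x^T beta
errs = np.zeros(n - 1)
for _ in range(2000):
    X, y, beta = sample_task()
    for m in range(1, n):
        pred = ridge_predict(X[:m], y[:m], X[m], lam=sigma**2)
        errs[m - 1] += (pred - X[m] @ beta) ** 2 / 2000
print(np.round(errs, 3))   # decreases with m; Figure 2(a) reports ICL matching this baseline
```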
**In-context learning setting:** We denote a length-\(m\) prompt containing \(m-1\) in-context examples and the \(m\)'th input by \(\mathbf{x}_{\text{prompt}}^{(m)}=(\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{m-1},\mathbf{x}_ {m})\). Here \(\mathbf{x}_{m}\in\mathcal{X}\) is the input to predict and \(\mathbf{z}_{i}\in\mathcal{Z}\) is the \(i\)'th in-context example provided within prompt. Let \(\mathtt{TF}(\cdot)\) denote a transformer (more generally an auto-regressive model) that admits \(\mathbf{x}_{\text{prompt}}^{(m)}\) as its input and outputs a label \(\hat{\mathbf{y}}_{m}=\mathtt{TF}(\mathbf{x}_{\text{prompt}}^{(m)})\) in \(\mathcal{Y}\). \(\bullet\)_Independent \((\mathbf{x},\mathbf{y})\) pairs._ Similar to [17], we draw i.i.d. samples \((\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{n}\in\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) from a data distribution. Then a length-\(m\) prompt is written as \(\mathbf{x}_{\text{prompt}}^{(m)}=(\mathbf{x}_{1},\mathbf{y}_{1},\ldots\mathbf{x}_{m-1},\mathbf{y}_ {m-1},\mathbf{x}_{m})\), and the model predicts \(\hat{\mathbf{y}}_{m}=\mathtt{TF}(\mathbf{x}_{\text{prompt}}^{(m)})\in\mathcal{Y}\) for \(1\leq m\leq n\). \(\bullet\)_Dynamical systems._ In this setting, the prompt is simply the trajectory generated by a dynamical system, namely, \(\mathbf{x}_{\text{prompt}}^{(m)}=(\mathbf{x}_{1},\ldots\mathbf{x}_{m-1},\mathbf{x}_{m})\) and therefore, \(\mathcal{Z}=\mathcal{X}=\mathcal{Y}\). Specifically, we investigate the state observed setting that is governed by dynamics \(f(\cdot)\) via \(\mathbf{x}_{m+1}=f(\mathbf{x}_{m})+\text{noise}\). Here, \(\mathbf{y}_{m}:=\mathbf{x}_{m+1}\) is the label associated to \(\mathbf{x}_{m}\), and the model admits \(\mathbf{x}_{\text{prompt}}^{(m)}\) as input and predicts the next state \(\hat{\mathbf{y}}_{m}:=\hat{\mathbf{x}}_{m+1}=\mathtt{TF}(\mathbf{x}_{\text{prompt}}^{(m)}) \in\mathcal{X}\). We first consider the training phase of ICL where we wish to learn a good \(\mathtt{TF}(\cdot)\) model through MTL. Suppose we have \(T\) tasks associated with data distributions \(\{\mathcal{D}_{t}\}_{t=1}^{T}\). Each task independently samples a training sequence \(\mathcal{S}_{t}=(\mathbf{z}_{ti})_{i=1}^{n}\) according to its distribution. \(\mathcal{S}_{\text{all}}=\{\mathcal{S}_{t}\}_{t=1}^{T}\) denote the set of all training sequences. We use \(\mathcal{S}_{t}^{(m)}=(\mathbf{z}_{11},\ldots,\mathbf{z}_{tm})\) to denote a subsequence of \(\mathcal{S}_{t}:=\mathcal{S}_{t}^{(n)}\) for \(m\leq n\) and \(\mathcal{S}^{(0)}\) denotes an empty subsequence. ICL can be interpreted as an implicit optimization on the subsequence \(\mathcal{S}^{(m)}=(\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{m})\) to make prediction on \(\mathbf{x}_{m+1}\). To model this, we abstract the transformer model as a learning algorithm that maps a sequence of data to a prediction function (e.g. gradient descent, empirical risk minimization). Concretely, let \(\mathcal{A}\) be a set of algorithm hypotheses such that algorithm \(\mathtt{Alg}\in\mathcal{A}\) maps a sequence of form \(\mathcal{S}^{(m)}\) into a prediction function \(f^{\texttt{Alg}}_{\mathcal{S}^{(m)}}:\mathcal{X}\rightarrow\mathcal{Y}\). With this, we represent TF via \[\texttt{TF}(\mathbf{x}^{(m+1)}_{\text{prompt}})=f^{\texttt{Alg}}_{\mathcal{S}^{(m)}} (\mathbf{x}_{m+1}).\] This abstraction is without losing generality as we have the explicit relation \(f^{\texttt{Alg}}_{\mathcal{S}^{(m)}}(\mathbf{x}):=\texttt{TF}((\mathcal{S}^{(m)}, \mathbf{x}))\). 
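The relation \(f^{\texttt{Alg}}_{\mathcal{S}^{(m)}}(\mathbf{x}):=\texttt{TF}((\mathcal{S}^{(m)},\mathbf{x}))\) can be made concrete in a few lines: any model that maps a (context, query) pair to a label induces a hypothesis function for every frozen context. In the sketch below (ours) a nearest-neighbor rule stands in for the transformer purely for illustration.

```python
import numpy as np

def induced_predictor(model, context):
    """f^Alg_{S^(m)}: the prediction function obtained by freezing the first m
    in-context examples and varying only the query input."""
    return lambda x_query: model(context, x_query)

def one_nearest_neighbor(context, x_query):
    """A stand-in for TF: predict the label of the closest in-context input.
    Any model mapping (examples, query) -> label fits this interface."""
    xs = np.array([x for x, _ in context])
    ys = np.array([y for _, y in context])
    return ys[np.argmin(np.abs(xs - x_query))]

context = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1)]      # S^(m) with m = 3
f_alg = induced_predictor(one_nearest_neighbor, context)
print(f_alg(1.2))   # 0.9: the prediction TF((z_1, z_2, z_3, 1.2))
```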
Given training sequences, \(\mathcal{S}_{\text{all}}\) and a loss function \(\ell(\mathbf{y},\hat{\mathbf{y}})\), the ICL training can be interpreted as searching for the optimal algorithm \(\texttt{Alg}\in\mathcal{A}\), and the training objective becomes \[\widehat{\texttt{Alg}}= \operatorname*{arg\,min}_{\texttt{Alg}\in\mathcal{A}}\widehat{ \mathcal{L}}_{\mathcal{S}_{\text{all}}}(\texttt{Alg}):=\frac{1}{T}\sum_{t=1}^ {T}\widehat{\mathcal{L}}_{t}(\texttt{Alg})\] (ERM) \[\text{where}\quad\widehat{\mathcal{L}}_{t}(\texttt{Alg})=\frac{1} {n}\sum_{i=1}^{n}\ell(\mathbf{y}_{ti},f^{\texttt{Alg}}_{\mathcal{S}_{i}^{(i-1)}}( \mathbf{x}_{ti})).\] Here, \(\widehat{\mathcal{L}}_{t}(\texttt{Alg})\) is the training loss of task \(t\) and \(\widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}(\texttt{Alg})\) is the task-averaged MTL loss. Let \(\mathcal{L}_{t}(\texttt{Alg})=\mathbb{B}_{\mathcal{S}_{t}}[\widehat{\mathcal{L }}_{t}(\texttt{Alg})]\) and \(\mathcal{L}_{\text{MTL}}(\texttt{Alg})=\mathbb{B}[\widehat{\mathcal{L}}_{ \mathcal{S}_{\text{all}}}(\texttt{Alg})]=\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{ t}(\texttt{Alg})\) be the corresponding population risks. Observe that, task-specific loss \(\widehat{\mathcal{L}}_{t}(\texttt{Alg})\) is an empirical average of \(n\) terms, one for each prompt \(\mathbf{x}^{(i)}_{\text{prompt}}\). To develop generalization bounds, our primary interest is controlling the gap between empirical and population risks. For problem (ERM), we wish to bound the excess MTL risk \[R_{\text{MTL}}(\widehat{\texttt{Alg}})=\mathcal{L}_{\text{MTL}}(\widehat{ \texttt{Alg}})-\min_{\texttt{Alg}\in\mathcal{A}}\mathcal{L}_{\text{MTL}}( \texttt{Alg}). \tag{1}\] Following the MTL training (ERM), we also evaluate the model on previously-unseen tasks; this can be thought of as the transfer learning problem. Concretely, let \(\mathcal{D}_{\text{task}}\) be a distribution over tasks and draw a target task \(\mathcal{T}\sim\mathcal{D}_{\text{task}}\) with data distribution \(\mathcal{D}_{\mathcal{T}}\) and a sequence \(\mathcal{S}_{\mathcal{T}}=\{\mathbf{z}_{i}\}_{i=1}^{n}\sim\mathcal{D}_{\mathcal{T}}\). Define the empirical and population risks on \(\mathcal{T}\) as \(\widehat{\mathcal{L}}_{\mathcal{T}}(\texttt{Alg})=\frac{1}{n}\sum_{i=1}^{n} \ell(\mathbf{y}_{i},f^{\texttt{Alg}}_{\mathcal{S}_{\mathcal{T}}^{(i-1)}}(\mathbf{x}_{ i}))\) and \(\mathcal{L}_{\mathcal{T}}(\texttt{Alg})=\mathbb{B}_{\mathcal{S}_{\mathcal{T}}}[ \widehat{\mathcal{L}}_{\mathcal{T}}(\texttt{Alg})]\). Then the transfer risk of an algorithm Alg is defined as \(\mathcal{L}_{\text{TFR}}(\texttt{Alg})=\mathbb{B}_{\mathcal{T}}[\mathcal{L}_{ \mathcal{T}}(\texttt{Alg})]\). With this setup, we are ready to state our main contributions. ## 3 Generalization in In-context Learning In this section, we study ICL under the i.i.d. data setting with training sequences \(\mathcal{S}_{t}=(\mathbf{x}_{ti},\mathbf{y}_{ti})_{i=1}^{n}\overset{\text{i.i.d.}}{ \sim}\mathcal{D}_{t}\). Section 5 extends our results to dynamical systems. ### Algorithmic Stability In ICL a training example \((\mathbf{x}_{i},\mathbf{y}_{i})\) in the prompt impacts all future decisions of the algorithm from predictions \(i+1\) to \(n\). This necessitates us to control the stability to input perturbation of the learning algorithm emulated by the transformer. Our stability condition is borrowed from the algorithmic stability literature. 
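The sensitivity at issue can be probed empirically: resample one in-context example and measure how much the prediction on a fresh query moves as the prompt length grows. In the sketch below (ours) a ridge learner stands in for the learned algorithm; the measured effect shrinks as the prompt grows, consistent with the \(K/m\) behavior formalized next.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def ridge_alg(context, x_query, lam=1e-2):
    """Stand-in learning algorithm: ridge fit on the in-context examples."""
    X = np.array([x for x, _ in context]); y = np.array([y for _, y in context])
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return x_query @ w

def stability_probe(alg, m, trials=500):
    """Average |prediction change| when one in-context example is resampled."""
    diffs = []
    for _ in range(trials):
        beta = rng.standard_normal(d)
        X = rng.standard_normal((m, d)); y = X @ beta + 0.5 * rng.standard_normal(m)
        S = list(zip(X, y))
        j = int(rng.integers(m))                   # replace one in-context example (not the query)
        x_new = rng.standard_normal(d)
        S_j = S.copy(); S_j[j] = (x_new, x_new @ beta + 0.5 * rng.standard_normal())
        x_q = rng.standard_normal(d)
        diffs.append(abs(alg(S, x_q) - alg(S_j, x_q)))
    return np.mean(diffs)

for m in (8, 16, 32, 64):
    print(m, round(stability_probe(ridge_alg, m), 4))
# the perturbation effect shrinks as m grows, consistent with the K/m scaling of Eq. (2)
```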
As stated in [4, 5], the stability level of an algorithm is typically of the order of \(1/m\) (for realistic generalization guarantees) where \(m\) is the training sample size (in our setting, the prompt length). This is formalized in the following assumption that captures the variability of the transformer output. **Assumption 3.1** (Error Stability [5]): _Let \(\mathcal{S}=(\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{m}\) be a sequence in \(\mathcal{X}\times\mathcal{Y}\) with \(m\geq 1\) and \(\mathcal{S}^{j}\) be the sequence where the \(j\)'th sample of \(\mathcal{S}\) is replaced by \((\mathbf{x}_{j}^{\prime},\mathbf{y}_{j}^{\prime})\). Error stability holds for a distribution \((\mathbf{x},\mathbf{y})\sim\mathcal{D}\) if there exists a \(K>0\) such that for any \(\mathcal{S},(\mathbf{x}_{j}^{\prime},\mathbf{y}_{j}^{\prime})\in(\mathcal{X}\times\mathcal{Y})\), \(j\leq m\), and \(\texttt{Alg}\in\mathcal{A}\), we have_ \[\Big{|}\mathbb{E}_{(\mathbf{x},\mathbf{y})}\left[\ell(\mathbf{y},f^{\texttt{Alg}}_{\mathcal{S}}(\mathbf{x}))-\ell(\mathbf{y},f^{\texttt{Alg}}_{\mathcal{S}^{j}}(\mathbf{x}))\right]\Big{|}\leq\frac{K}{m}. \tag{2}\] _Let \(\rho\) be a distance metric on \(\mathcal{A}\). Pairwise error stability holds if for all \(\texttt{Alg},\texttt{Alg}^{\prime}\in\mathcal{A}\) we have_ \[\Big{|}\mathbb{E}_{(\mathbf{x},\mathbf{y})}\left[\ell(\mathbf{y},f^{\texttt{Alg}}_{\mathcal{S}}(\mathbf{x}))-\ell(\mathbf{y},f^{\texttt{Alg}^{\prime}}_{\mathcal{S}}(\mathbf{x}))-\ell(\mathbf{y},f^{\texttt{Alg}}_{\mathcal{S}^{j}}(\mathbf{x}))+\ell(\mathbf{y},f^{\texttt{Alg}^{\prime}}_{\mathcal{S}^{j}}(\mathbf{x}))\right]\Big{|}\leq\frac{K\rho(\texttt{Alg},\texttt{Alg}^{\prime})}{m}.\] Here (2) is our primary stability condition borrowed from [5] and ensures that all algorithms \(\mathtt{Alg}\in\mathcal{A}\) are \(K\)-stable. We will also use the stronger pairwise stability condition to develop tighter generalization bounds. The following theorem shows that, under mild assumptions, a multilayer transformer obeys the stability condition (2). The proof is deferred to Appendix B.1 and Theorem B.4. **Theorem 3.2**: _Let \(\mathbf{x}_{\text{prompt}}^{(m)},\mathbf{x}_{\text{prompt}}^{\prime(m)}\) be two prompts that only differ at the inputs \(\mathbf{z}_{j}=(\mathbf{x}_{j},\mathbf{y}_{j})\) and \(\mathbf{z}_{j}^{\prime}=(\mathbf{x}_{j}^{\prime},\mathbf{y}_{j}^{\prime})\) where \(j<m\). Assume inputs and labels lie within the unit Euclidean ball in \(\mathbb{R}^{d}\)1. Shape these prompts into matrices \(\mathbf{X}_{\text{prompt}},\mathbf{X}_{\text{prompt}}^{\prime}\in\mathbb{R}^{(2m-1)\times d}\) respectively. Let \(\texttt{TF}(\cdot)\) be a \(D\)-layer transformer as follows: Setting \(\mathbf{X}_{(0)}:=\mathbf{X}_{\text{prompt}}\), the \(i\)'th layer applies MLPs and self-attention2 and outputs_ Footnote 1: Here, we assume \(\mathcal{X},\mathcal{Y}\subset\mathbb{R}^{d}\); otherwise, inputs and labels are both embedded into \(d\)-dimensional vectors of proper size. \[\mathbf{X}_{(i)}=\texttt{Parallel\_MLPs}(\texttt{ATTN}(\mathbf{X}_{(i-1)}))\quad\text{where}\quad\texttt{ATTN}(\mathbf{X}):=\text{softmax}(\mathbf{X}\mathbf{W}_{i}\mathbf{X}^{\top})\mathbf{X}\mathbf{V}_{i}.\] _Assume \(\texttt{TF}\) is normalized as \(\|\mathbf{V}\|\leq 1\), \(\|\mathbf{W}\|\leq\Gamma/2\) and MLPs obey \(\texttt{MLP}(\mathbf{x})=\texttt{ReLU}(\mathbf{M}\mathbf{x})\) with \(\|\mathbf{M}\|\leq 1\).
Let \(\texttt{TF}\) output the last token of the final layer \(\mathbf{X}_{(D)}\) that corresponds to the query \(\mathbf{x}_{m}\). Then,_ \[|\texttt{TF}(\mathbf{x}_{\text{prompt}}^{(m)})-\texttt{TF}(\mathbf{x}_{\text{prompt}}^{\prime(m)})|\leq\frac{2}{2m-1}((1+\Gamma)e^{\Gamma})^{D}.\] _Thus, assuming loss \(\ell(\mathbf{y},\cdot)\) is \(L\)-Lipschitz, the algorithm induced by \(\texttt{TF}(\cdot)\) obeys (2) with \(K=2L((1+\Gamma)e^{\Gamma})^{D}\)._ A few remarks are in order. First, the dependence on depth is exponential. However, this is not as prohibitive for typical transformer architectures, which tend not to be very deep. For example, the different variants of GPT-2 and BERT have between 12 and 48 layers [20]. In our theorem, the upper bound on \(\Gamma\) helps ensure that one token cannot have substantial influence on another one. In Appendix B, we provide a more general version of this result which also covers our stronger stability assumption for dynamical systems (see Theorem B.4). Importantly, we also show that our theorem is rather tight (see Sec B.2): (1) Stability can fail if \(\Gamma\) is allowed to be logarithmic in \(m\), indicating the tightness of our \(e^{\Gamma}/m\) bound. (2) It is also critical that the modified token is not the last one (i.e. the \(j<m\) condition), otherwise stability can again fail. The key technicality in our result is establishing the stability of the self-attention layer, which is the central component of a transformer; see Lemma B.2. Finally, Figure 7 provides numerical evidence for multiple ICL problems and demonstrates that the stability of the GPT-2 architecture's predictions with respect to inputs indeed improves with longer prompts, in line with the theory. ### Generalization Bounds We are ready to establish generalization bounds by leveraging our stability conditions. We use covering numbers (i.e. metric entropy) to control model complexity. **Definition 3.3** (Covering number): _Let \(\mathcal{Q}\) be any hypothesis set and \(d(q,q^{\prime})\geq 0\) be a distance metric over \(q,q^{\prime}\in\mathcal{Q}\). Then, \(\bar{\mathcal{Q}}=\{q_{1},\ldots,q_{N}\}\) is an \(\varepsilon\)-cover of \(\mathcal{Q}\) with respect to \(d(\cdot,\cdot)\) if for any \(q\in\mathcal{Q}\), there exists \(q_{i}\in\bar{\mathcal{Q}}\) such that \(d(q,q_{i})\leq\varepsilon\). The \(\varepsilon\)-covering number \(\mathcal{N}(\mathcal{Q},d,\varepsilon)\) is the cardinality of the minimal \(\varepsilon\)-cover._ To cover the algorithm space \(\mathcal{A}\), we need to introduce a distance metric. We formalize this in terms of the prediction difference between two algorithms on the worst-case data-sequence. **Definition 3.4** (Algorithm distance): _Let \(\mathcal{A}\) be an algorithm hypothesis set and \(\mathcal{S}=(\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{n}\) be a sequence that is admissible for some task \(t\in[T]\). For any pair \(\mathtt{Alg},\mathtt{Alg}^{\prime}\in\mathcal{A}\), define the distance metric \(\rho(\mathtt{Alg},\mathtt{Alg}^{\prime}):=\sup_{\mathcal{S}}\frac{1}{n}\sum_{i=1}^{n}\|f_{\mathcal{S}^{(i-1)}}^{\mathtt{Alg}}(\mathbf{x}_{i})-f_{\mathcal{S}^{(i-1)}}^{\mathtt{Alg}^{\prime}}(\mathbf{x}_{i})\|_{\ell_{2}}\)._ We note that the distance \(\rho\) is controlled by the Lipschitz constant of the transformer architecture (i.e. the largest gradient norm with respect to the model weights).
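As a small numerical companion to Theorem 3.2, the snippet below (our own toy check, using a single softmax self-attention layer rather than a full \(D\)-layer transformer) perturbs one early token of a length-\(m\) prompt and measures how the last-token output changes; the change shrinks roughly like \(1/(2m-1)\), consistent with the stated scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

def attn_last_token(X, W, V):
    """softmax(X W X^T) X V, returning the output at the last (query) token."""
    S = X @ W @ X.T
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return (A @ X @ V)[-1]

d, Gamma = 4, 1.0
W = (Gamma / 2) * np.eye(d)                 # spectral norm of W at most Gamma / 2
V = np.eye(d)                               # spectral norm of V at most 1

for m in (5, 20, 80):
    L = 2 * m - 1                           # tokens in (x_1, y_1, ..., x_m)
    X = rng.standard_normal((L, d))
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)   # unit ball
    Xp = X.copy()
    Xp[0] = rng.standard_normal(d)
    Xp[0] /= max(np.linalg.norm(Xp[0]), 1.0)                          # swap one early token
    gap = np.linalg.norm(attn_last_token(X, W, V) - attn_last_token(Xp, W, V))
    print(f"m={m:3d}  last-token change={gap:.4f}   1/(2m-1)={1 / L:.4f}")
```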
Following Definitions 3.3&3.4, the \(\varepsilon\)-covering number of the hypothesis set \(\mathcal{A}\) is \(\mathcal{N}(\mathcal{A},\rho,\varepsilon)\). This brings us to our main result on the MTL risk of (ERM). **Theorem 3.5**: _Suppose \(\mathcal{A}\) is \(K\)-stable per Assumption 3.1 for all \(T\) tasks and the loss function \(\ell(\mathbf{y},\cdot)\) is \(L\)-Lipschitz taking values over \([0,1]\). Let \(\widehat{\mathbf{Alg}}\) be the empirical solution of (ERM). Then, with probability at least \(1-2\delta\), the excess MTL test risk obeys_ \[R_{\text{MTL}}(\widehat{\mathbf{Alg}})\leq\inf_{\varepsilon>0}\left\{4L\varepsilon+2(1+K\log n)\sqrt{\frac{\log(\mathcal{N}(\mathcal{A},\rho,\varepsilon)/\delta)}{cnT}}\right\}. \tag{3}\] _Additionally, suppose \(\mathcal{A}\) is \(K\)-pairwise-stable and set the diameter \(D:=\sup_{\mathbf{Alg},\mathbf{Alg}^{\prime}\in\mathcal{A}}\rho(\mathbf{Alg},\mathbf{Alg}^{\prime})\). Using the convention \(x_{+}=\max(x,1)\), with probability at least \(1-4\delta\),_ \[R_{\text{MTL}}(\widehat{\mathbf{Alg}})\lesssim\inf_{\varepsilon>0}\left\{L\varepsilon+\frac{L_{+}+K\log n}{\sqrt{nT}}\Big{(}\int_{\varepsilon}^{D/2}\sqrt{\log\mathcal{N}(\mathcal{A},\rho,u)}du+D_{+}\sqrt{\log\frac{1}{\delta}}\Big{)}\right\}. \tag{4}\] The first bound (3) achieves a \(1/\sqrt{nT}\) rate by covering the algorithm space with resolution \(\varepsilon\). For Lipschitz architectures with \(\dim(\mathcal{A})\) trainable weights, we have \(\log\mathcal{N}(\mathcal{A},\rho,\varepsilon)\sim\dim(\mathcal{A})\log(1/\varepsilon)\). Thus, up to logarithmic factors, the excess risk is bounded by \(\sqrt{\frac{\dim(\mathcal{A})}{nT}}\) and will vanish as \(n,T\to\infty\). Note that our bound is also task-dependent through \(\rho\) in Def. 3.4. For instance, suppose tasks are realizable with labels \(\mathbf{y}=f(\mathbf{x})\) and admissible task sequences have the form \(\mathcal{S}=(\mathbf{x}_{i},f(\mathbf{x}_{i}))_{i=1}^{n}\). Then, \(\rho\) will depend on the function class of \(f\) (e.g. whether \(f\) is a linear model, neural net, etc.); specifically, as the function class becomes richer, both \(\rho\) and the covering number become larger. Under the stronger pairwise-stability, we can obtain a bound in terms of Dudley's entropy integral, which arises from a chaining argument. This bound is typically of the same order as the Rademacher complexity of the function class with \(T\times n\) samples [55]. Note that achieving \(1/\sqrt{T}\) dependence is rather straightforward as tasks are sampled independently. Thus, the main feature of Theorem 3.5 is obtaining the multiplicative \(1/\sqrt{n}\) term by overcoming temporal dependencies. Figure 3 shows that training with the full sequence is indeed critical for ICL accuracy. **Proof sketch.** The main component of the proof is to find a concentration bound on \(|\mathcal{L}_{\text{MTL}}(\mathbf{Alg})-\widehat{\mathcal{L}}_{\mathcal{S}_{\text{all}}}(\mathbf{Alg})|\) for a fixed algorithm \(\mathbf{Alg}\in\mathcal{A}\). To achieve this bound, we introduce the sequence of variables \(X_{t,i}=\mathbb{E}\left[\frac{1}{n}\sum_{j=1}^{n}\ell(\mathbf{y}_{tj},f^{\text{Alg}}_{\mathcal{S}_{t}^{(j-1)}}(\mathbf{x}_{tj}))\bigm{|}\mathcal{S}_{t}^{(i)}\right]\) for \(0\leq i\leq n\), which forms a martingale by construction. The critical stage is upper bounding the martingale differences \(|X_{t,i}-X_{t,i-1}|\) through our stability assumption, namely, increments are at most \(1+\sum_{j=i}^{n}\frac{K}{j}\lesssim 1+K\log n\).
Then, we utilize Azuma-Hoeffding's inequality to achieve a concentration bound on \(\left|\frac{1}{T}\sum_{t=1}^{T}(X_{t,0}-X_{t,n})\right|\), which is equivalent to \(|\mathcal{L}_{\mathrm{MTL}}(\mathtt{Alg})-\widehat{\mathcal{L}}_{\mathcal{S}_{\mathrm{all}}}(\mathtt{Alg})|\). To conclude, we make use of covering/chaining arguments to establish a uniform concentration bound on \(|\mathcal{L}_{\mathrm{MTL}}(\mathtt{Alg})-\widehat{\mathcal{L}}_{\mathcal{S}_{\mathrm{all}}}(\mathtt{Alg})|\) for all \(\mathtt{Alg}\in\mathcal{A}\). The details are provided in Appendix C.1. Figure 3: The benefit of learning across the full task sequence: **Right side:** Standard ERM where each task trains with all \(n=40\) prompts. **Left side:** ERM focuses on different parts of the trajectory by fitting \(n/4=10\) prompts per task over \(i\in[1,10]\) to \([31,40]\) (highlighted as the orange ranges). We train with \(T=6.4\) million random linear regression tasks and display the performance on new tasks (i.e. transfer risk). The right side learns to solve linear regression via ICL, whereas the left side fails to do so even when restricted to its target ranges. **Multiple sequences per task.** Finally, consider a setting where each task is associated with \(M\) independent sequences of size \(n\). This typically arises in reinforcement learning problems (e.g. dynamical systems in Sec. 5) where we collect data through multiple rollouts, each leading to independent sequences. In this setting, the statistical error rate improves to \(1/\sqrt{nMT}\) as discussed in Appendix C.1. In the next section, we will contrast MTL vs transfer learning by letting \(M\to\infty\). This way, even if \(n\) and \(T\) are fixed, the model will fully learn the \(T\) source tasks during the MTL phase as the excess risk vanishes with \(M\to\infty\). ## 4 Generalization and Inductive Bias on Unseen Tasks In this section, we explore transfer learning to assess the performance of ICL on new tasks: The MTL phase generates a model \(\widetilde{\mathtt{Alg}}\) trained on \(T\) source tasks and we use \(\widetilde{\mathtt{Alg}}\) to predict a target task \(\mathcal{T}\). Consider a meta-learning setting where \(T\) sources are drawn from the distribution \(\mathcal{D}_{\mathrm{task}}\) and we evaluate the transfer risk on a new \(\mathcal{T}\sim\mathcal{D}_{\mathrm{task}}\). We aim to control the transfer risk \(\mathcal{L}_{\mathrm{TFR}}(\widetilde{\mathtt{Alg}})=\mathbb{E}[\mathcal{L}_{\mathcal{T}}(\widetilde{\mathtt{Alg}})]\) in terms of the MTL risk \(\mathcal{L}_{\mathrm{MTL}}(\widetilde{\mathtt{Alg}})\). When the source tasks are i.i.d., one can use a standard generalization analysis to bound the transfer risk as \(\mathcal{L}_{\mathrm{TFR}}(\widetilde{\mathtt{Alg}})-\mathcal{L}_{\mathrm{MTL}}(\widetilde{\mathtt{Alg}})\lesssim\sqrt{\log(\mathcal{N}(\mathcal{A},\rho,\varepsilon))/T}\) (see Thm C.3). Here, an important distinction with MTL is that the transfer risk decays as \(1/\mathrm{poly}(T)\) because the unseen tasks induce a distribution shift, which, typically, cannot be mitigated with more samples \(n\) or more sequences-per-task \(M\). \(\bullet\)**Inductive Bias in Transfer Risk.** Before investigating distribution shift, let us consider the following question: While \(1/\mathrm{poly}(T)\) behavior may be unavoidable, is it possible that dependence on architectural complexity \(\dim(\mathcal{A})\) is avoidable? Perhaps surprisingly, we answer this question affirmatively through experiments on linear regression.
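Returning briefly to the proof sketch of Theorem 3.5, the toy simulation below (our own illustration, with arbitrary constants) builds \(T\) independent martingales with bounded zero-mean increments, standing in for the differences \(X_{t,i}-X_{t,i-1}\) after the \(1/n\) loss-averaging, and confirms the \((1+K\log n)/\sqrt{nT}\) concentration scale appearing in (3).

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, T, trials = 1.0, 64, 32, 2000
c = (1 + K * np.log(n)) / n          # stand-in for the per-step increment bound

devs = []
for _ in range(trials):
    # T independent martingales, each with n bounded zero-mean increments
    inc = rng.uniform(-c, c, size=(T, n))
    devs.append(inc.sum(axis=1).mean())          # (1/T) * sum_t (X_{t,n} - X_{t,0})
devs = np.array(devs)

predicted = (1 + K * np.log(n)) / np.sqrt(n * T)  # Azuma-Hoeffding scale
print(f"empirical std of the deviation: {devs.std():.4f}")
print(f"predicted concentration scale : {predicted:.4f}")
```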
In what follows, during MTL pretraining, we train with \(M\to\infty\) independent sequences per task to minimize population MTL risk \(\mathcal{L}_{\mathrm{MTL}}(\cdot)\). We then evaluate resulting \(\widetilde{\mathtt{Alg}}\) on different dimensions \(d\) and numbers of MTL tasks \(T\). Figures 4(a,b,c) display the MTL and transfer risks for dimensions \(d=5,10,20\). In each figure, we evaluate the results on \(T=\{1,2,5\}\times d^{2}\) and the \(x\)-axis moves from \(0\) to \(n=2d\). Each task has isotropic features, noiseless labels and task vectors \(\mathbf{\beta}\sim\mathcal{N}(0,\mathbf{I}_{d})\). Here, our first observation is that, the Figures 4(a,b,c) seem (almost perfectly) aligned with each other, that is, each figure exhibits identical MTL and transfer risk curves. To further elucidate this, Figure 4(d) integrates the transfer risk curves from \(d=5,10,20\) and overlays them together. This alignment indicates that, for a fixed point \(\alpha=n/d\) and \(\mathbf{\beta}=T/d^{2}\), the transfer risks remain unchanged. Here, \(n\) proportional to \(d\) can be attributed to linearity, thus, the more surprising aspect is the dependence on \(T\): This is because rather than \(\dim(\mathcal{A})/T\) (where \(\mathcal{A}\) is fixed to a GPT-2 architecture), the generalization risk behaves like \(d^{2}/T\). Thus, rather than model complexity, what matters seems to be the task complexity \(d\). In support of this hypothesis, Figure 8 trains ICL on GPT-2 architectures with up to \(64\) times different parameter counts and reveals that transfer risk indeed exhibits little dependence on the model complexity \(\dim(\mathcal{A})\). Figure 4: In Figures (a,b,c,), we plot the \(d\in\{5,10,20\}\)-dimensional results for transfer and MTL risk curves with the same GPT-2 architecture. Figure (d) overlays (a,b,c) to reveal that transfer risks are aligned for fixed \((n/d,T/d^{2})\) choice. Inductive bias is a natural explanation of this behavior: Intuitively, the MTL pretraining process identifies a favorable algorithm that lies in the span of the source tasks \(\mathbf{\Theta}_{\mathrm{MTL}}=(\mathbf{\beta}_{t})_{t=1}^{T}\). Specifically, while the transformer model can potentially fit MTL tasks through a variety of algorithms, we speculate that the optimization process is implicitly biased to an algorithm \(\mathtt{Alg}(\mathbf{\Theta}_{\mathrm{MTL}})\) (akin to [47, 38]). Such bias would explain the lack of \(\dim(\mathcal{A})\) dependence since \(\mathtt{Alg}(\mathbf{\Theta}_{\mathrm{MTL}})\) solely depends on the source tasks. While we leave the theoretical exploration of the empirical \(d^{2}/T\) behavior to a future work, below we explain that \(d^{2}/T\) dependence is rather surprising. To this end, let us first introduce the optimal estimator (in terms of Bayes risk) for linear regression with Gaussian task prior \(\mathbf{\beta}\sim\mathcal{N}(0,\mathbf{\Sigma})\). This estimator can be described explicitly [43, 27] and is given by the weighted ridge regression solution \[\hat{\mathbf{\beta}}=(\mathbf{X}^{\top}\mathbf{X}+\sigma^{2}\mathbf{\Sigma}^{-1})^{-1}\mathbf{X}^ {\top}\mathbf{y}. \tag{5}\] Here \(\mathbf{X}=[\mathbf{x}_{1},\dots,\mathbf{x}_{n}]^{\top}\in\mathbb{R}^{n\times d},\mathbf{y}=[ \mathbf{y}_{1},\dots,\mathbf{y}_{n}]^{\top}\in\mathbb{R}^{n}\) are the concatenated features and labels obtained from the task sequence and \(\sigma^{2}\) is the label noise variance. 
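A minimal sketch of the weighted ridge estimator in (5), compared against ordinary least squares on a toy task drawn from a non-isotropic prior; the dimensions and noise level below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_ridge(X, y, Sigma, sigma2):
    """beta_hat = (X^T X + sigma^2 Sigma^{-1})^{-1} X^T y, i.e. Eq. (5)."""
    return np.linalg.solve(X.T @ X + sigma2 * np.linalg.inv(Sigma), X.T @ y)

d, n, sigma = 20, 10, 0.5
Sigma = np.diag(1.0 / np.arange(1, d + 1) ** 2)       # task prior covariance
beta = rng.multivariate_normal(np.zeros(d), Sigma)    # task vector drawn from the prior
X = rng.standard_normal((n, d))
y = X @ beta + sigma * rng.standard_normal(n)

beta_wr = weighted_ridge(X, y, Sigma, sigma ** 2)
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares baseline
print("weighted-ridge error:", np.linalg.norm(beta_wr - beta))
print("least-squares  error:", np.linalg.norm(beta_ls - beta))
```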
With this in mind, what is the ideal algorithm \(\mathtt{Alg}(\mathbf{\Theta}_{\mathrm{MTL}})\) based on the (perfect) knowledge of source tasks? Eqn. (5) crucially requires the knowledge of the task covariance \(\mathbf{\Sigma}\) and variance \(\sigma^{2}\). Thus, even with the hindsight knowledge that our problem is linear, we have to estimate the task covariance from source tasks. This can be done via the empirical covariance \(\hat{\mathbf{\Sigma}}=\frac{1}{T}\sum_{i=1}^{T}\mathbf{\beta}_{i}\mathbf{\beta}_{i}^{T}\). To ensure \(\hat{\mathbf{\Sigma}}\)-weighted LS performs \(\mathbf{O}(1)\)-close to \(\mathbf{\Sigma}\)-weighted LS, we need a spectral norm control, namely, \(\|\mathbf{\Sigma}-\hat{\mathbf{\Sigma}}\|/\lambda_{\min}(\mathbf{\Sigma})\leq\mathbf{O}(1)\). When \(\mathbf{\Sigma}=\mathbf{I}_{d}\) (as in our experiments) and tasks are isotropic, the latter condition holds with high probability when \(T=\mathbf{\Omega}(d)\). This is also numerically demonstrated in Figure 9 in the appendix. This behavior is in contrast to the stronger \(T\propto d^{2}\) requirement we observe for ICL and indicates that ICL training may not be sample-optimal in terms of \(T\). For instance, \(T\propto d^{2}\) is sufficient to ensure the stronger entrywise control \(\|\mathbf{\Sigma}-\hat{\mathbf{\Sigma}}\|_{\ell_{\infty}}\leq O(1)\) rather than spectral norm. \(\bullet\)**Exploring transfer risk via source-target distance.** Besides drawing source and target tasks from the same \(\mathcal{D}_{\mathrm{task}}\), we also investigate transfer risk in an instance specific fashion. Specifically, the population risk of a new task \(\mathcal{T}\) can be bounded as \(\mathcal{L}_{T}(\mathtt{Alg})\leq\mathcal{L}_{\mathrm{MTL}}(\mathtt{Alg})+ \mathrm{dist}(\mathcal{T},(\mathcal{D}_{t})_{t=1}^{T})\). Here, \(\mathrm{dist}(\cdot)\) assesses the (distributional) distance of task \(\mathcal{T}\) to the source tasks \((\mathcal{D}_{t})_{t=1}^{T}\) (e.g. [2, 18]). In case of linear tasks, we can simply use the Euclidean distance between task vectors, specifically, the distance of target weights \(\mathbf{\beta}_{\mathcal{T}}\) to the nearest source task \(\mathrm{dist}(\mathcal{T})=\min_{t\in[T]}\|\mathbf{\beta}_{\mathcal{T}}-\mathbf{\beta} _{t}\|_{\ell_{2}}\). In Fig. 5 we investigate the distance of specific target tasks from source tasks and how the distance affects the transfer performance. Here, all source and target tasks have unit Euclidean norms so that closer distance is equivalent to larger cosine similarity. We again train each MTL task with multiple sequences \(M\rightarrow\infty\) (as in Fig. 4) and use \(T=20\) source tasks with \(d=5\) dimensional regression problems. In a nutshell, Figure 5 shows that Euclidean task similarity is indeed highly predictive of transfer performance across different distance slices (namely \([0,0.2],[0.2,0.6],[0.6,1],[1,2]\)). ## 5 Extension to Stable Dynamical Systems Until now, we have studied ICL with sequences of i.i.d. (input, label) pairs. In this section, we investigate the scenario where prompts are obtained from the trajectories of stable dynamical systems, thus, they consist of dependent data. Let \(\mathcal{X}\subset\mathbb{R}^{d}\) and \(\mathcal{F}:\mathcal{X}\rightarrow\mathcal{X}\) be a hypothesis class elements of which are dynamical systems. During MTL phase, suppose that we are given \(T\) tasks associated with \((f_{t})_{t=1}^{T}\) where \(f_{t}\in\mathcal{F}\), and each contains \(n\) in-context samples. 
Then, the data-sequence of the \(t\)'th task is denoted by \(\mathcal{S}_{t}=(\mathbf{x}_{ti})_{i=0}^{n}\) where \(\mathbf{x}_{ti}=f_{t}(\mathbf{x}_{t,i-1})+\mathbf{w}_{ti}\), \(\mathbf{x}_{t0}\) is the initial state, and \(\mathbf{w}_{ti}\in\mathcal{W}\subset\mathbb{R}^{d}\) are bounded i.i.d. random noise following some distribution. Then, prompts are given by \(\mathbf{x}_{\mathrm{prompt}}^{(i)}:=(\mathbf{x}_{0},\mathbf{x}_{1},\dots,\mathbf{x}_{i})\). Let \(\mathcal{S}_{t}^{(i)}=\mathbf{x}_{\mathrm{prompt}}^{(i)}\), and we can make the prediction \(\hat{\mathbf{x}}_{ti}=f_{\mathcal{S}_{t}^{(i-1)}}^{\mathtt{Alg}}(\mathbf{x}_{t,i-1})\). We consider an optimization problem analogous to (ERM). Figure 5: Transfer risk as a function of distance to the MTL tasks. Distant tasks (with smaller cosine similarity) generalize worse. For generalization analysis, we require the system to be stable (which differs from algorithmic stability!). In this work, we use an exponential stability condition [16, 45] that controls the distance between two trajectories initialized from different points. **Definition 5.1** (\((C_{\rho},\rho)\)-stability): _Denote the \(m\)'th state resulting from the initial state \(\mathbf{x}_{t0}\) and \((\mathbf{w}_{ti})_{i=1}^{m}\) by \(f_{t}^{(m)}(\mathbf{x}_{t0})\). Let \(C_{\rho}\geq 1\) and \(\rho\in(0,1)\) be system-related constants. We say that the dynamical system for the task \(t\) is \((C_{\rho},\rho)\)-stable if, for all \(\mathbf{x}_{t0},\mathbf{x}^{\prime}_{t0}\in\mathcal{X}\), \(m\geq 1\), and \((\mathbf{w}_{ti})_{i\geq 1}\in\mathcal{W}\), we have_ \[\left\|f_{t}^{(m)}(\mathbf{x}_{t0})-f_{t}^{(m)}(\mathbf{x}^{\prime}_{t0})\right\|_{\ell_{2}}\leq C_{\rho}\rho^{m}\left\|\mathbf{x}_{t0}-\mathbf{x}^{\prime}_{t0}\right\|_{\ell_{2}}. \tag{6}\] **Assumption 5.2**: _There exist \(\bar{C}_{\rho}\) and \(\bar{\rho}<1\) such that all dynamical systems \(f\in\mathcal{F}\) are \((\bar{C}_{\rho},\bar{\rho})\)-stable._ In addition to the stability of the hypothesis set \(\mathcal{F}\), we also leverage the algorithmic stability of the set \(\mathcal{A}\), similar to Assumption 3.1. Different from Assumption 3.1, we restrict the variability of algorithms with respect to the Euclidean distance metric, similar to the definition of Lipschitz stability. **Assumption 5.3** (Algorithmic-stability for Dynamics): _Let \(\mathcal{S}=(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{m+1})\) be a realizable dynamical system trajectory and \(\mathcal{S}^{j}\) be the trajectory obtained by swapping \(\mathbf{w}_{j}\) with \(\mathbf{w}^{\prime}_{j}\) (\(j=0\) implies that \(\mathbf{x}_{0}\) is swapped with \(\mathbf{x}^{\prime}_{0}\)). As a result, starting with the \(j\)'th index, the sequence \(\mathcal{S}^{j}\) has different samples \((\mathbf{x}^{\prime}_{j},\ldots,\mathbf{x}^{\prime}_{m+1})\). Let \(X:=\ell(\mathbf{x}_{m+1},f_{\mathcal{S}}^{\mathtt{Alg}}(\mathbf{x}_{m}))\) and \(X^{j}:=\ell(\mathbf{x}^{\prime}_{m+1},f_{\mathcal{S}^{j}}^{\mathtt{Alg}}(\mathbf{x}^{\prime}_{m}))\). There exists \(K>0\) such that for any \(\mathcal{S}\), \(\mathbf{x}^{\prime}_{0}\in\mathcal{X}\), \(\mathbf{w}^{\prime}_{j}\in\mathcal{W},j\in[m]\), we have_ Lemma B.5 fully justifies this assumption for multilayer transformers. To proceed, we state the main result of this section. **Theorem 5.4**: _Suppose \(\ell(\mathbf{x},\hat{\mathbf{x}})=\ell(\mathbf{x}-\hat{\mathbf{x}}):\mathcal{X}\times\mathcal{X}\to[0,1]\) is \(L\)-Lipschitz and Assumptions 5.2&5.3 hold.
Assume \(\mathcal{X}\) and \(\mathcal{W}\) are bounded by \(\bar{x},\bar{w}\), respectively. Then, with the same probability, the identical bound as in Theorem 3.5 Eqn. (3) holds after updating \(K\) to be \(\bar{K}=2K\frac{\bar{C}_{\rho}}{1-\bar{\rho}}(\bar{w}+\bar{x}/\sqrt{n})\)._ The proof of this result is similar in spirit to the proof of Theorem 3.5 and is provided in Appendix D. The main difference is that we use system's stability to control the impact of a perturbation on the future trajectory. ## 6 Interpreting In-context Learning as a Model Selection Procedure In Section 3, we study the generalization error of ICL, which can be eliminated by increasing sample size \(n\) or number of sequences \(M\) per task. In this section, we will discuss how ICL can be interpreted as an implicit model selection procedure building on the formalism that transformer is a learning algorithm. Following Figure 2 and prior works [17, 24, 19], a plausible assumption is that, transformer can implement ERM algorithms up to a certain accuracy. Then, model selection can be formalized by the selection of the right hypothesis class so that running ERM on that hypothesis class can strike a good bias-variance tradeoff during ICL. To proceed with our discussion, let us consider the following hypothesis which states that transformer can implement an algorithm competitive with ERM. **Hypothesis 1**: _Let \(\mathbb{F}=(\mathcal{F}_{h})_{h=1}^{H}\) be a family of \(H\) hypothesis classes. Let \(\mathcal{S}=(\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{n}\) be a data-sequence with \(n\) examples sampled i.i.d. from \(\mathcal{D}\) and let \(\mathcal{S}^{(m)}=(\mathbf{x}_{i},\mathbf{y}_{i})_{i=1}^{m}\) be the first \(m\) examples. Consider the risk3 associated to ERM with \(m\) samples over \(\mathcal{F}_{h}\in\mathbb{F}\):_ \[\text{risk}(h,m)=\mathbb{E}_{(\mathbf{x},\mathbf{y},\mathcal{S}^{(m)})}[\ell(\mathbf{y}, \hat{f}_{\mathcal{S}^{(m)}}^{(h)}(\mathbf{x}))]\quad\text{where}\quad\hat{f}_{ \mathcal{S}^{(m)}}^{(h)}=\arg\min_{f\in\mathcal{F}_{h}}\frac{1}{m}\sum_{i=1}^{ m}\ell(\mathbf{y}_{i},f(\mathbf{x}_{i})),\] _Let \((\varepsilon_{\text{TF}}^{h,m})>0\) be approximation errors associated with \((\mathcal{F}_{h})_{h=1}^{H}\). There exists \(\texttt{Alg}\in\mathcal{A}\) such that, for any \(m\in[n],h\in[H]\), \(f_{\mathcal{S}^{(m)}}^{\texttt{Alg}}\) can approximate ERM in terms of population risk, i.e._ \[\mathbb{E}_{(\mathbf{x},\mathbf{y},\mathcal{S}^{(m)})}[\ell(\mathbf{y},f_{\mathcal{S}^{(m )}}^{\texttt{Alg}}(\mathbf{x}))]\leq\text{risk}(h,m)+\varepsilon_{\text{TF}}^{h,m}.\] For model selection purposes, these hypothesis classes can be entirely different ML models, for instance, \(\mathcal{F}_{1}=\{\text{convolutional-nets}\}\), \(\mathcal{F}_{2}=\{\text{fully-connected-nets}\}\), and \(\mathcal{F}_{3}=\{\text{decision-trees}\}\). Alternatively, they can be a nested family useful for capacity control purposes. For instance, Figures 2(a,b) are learning covariance/noise priors to implement a constrained-ridge regression. Here \(\mathbb{F}\) can be indexed by positive-definite matrices \(\Sigma\) with linear classes of the form \(\mathcal{F}_{\Sigma}=\{f(\mathbf{x})=\mathbf{x}^{\mathsf{T}}\mathbf{\beta}\quad\text{where} \quad\mathbf{\beta}^{\mathsf{T}}\Sigma^{-1}\mathbf{\beta}\leq 1\}\). Under Hypothesis 1, ICL selects the most suitable class that minimizes the excess risk for each \(m\in[n]\). **Observation 1**: _Suppose Hypothesis 1 holds for a target distribution \(\mathcal{D}_{\mathcal{T}}\). 
Let \(\mathcal{L}_{\mathcal{T}}^{\star}:=\min_{\texttt{Alg}\in\mathcal{A}} \mathcal{L}_{\mathcal{T}}(\texttt{Alg})\) be the risk of the optimal algorithm. We have that_ \[\mathcal{L}_{\mathcal{T}}^{\star}\leq\frac{1}{n}\sum_{m=0}^{n-1}\min_{h\in[H] }\{\text{risk}(h,m)+\varepsilon_{\text{TF}}^{h,m}\}.\] _Additionally, denote Rademacher complexity of a class \(\mathcal{F}\) by \(\mathcal{R}_{m}(\mathcal{F})\). Define the minimum achievable risk over function set \(\mathcal{F}_{h}\) as \(\mathcal{L}_{h}^{\star}:=\min_{f\in\mathcal{F}_{h}}\mathbb{E}_{\mathcal{D}_{ \mathcal{T}}}[\ell(\mathbf{y},f(\mathbf{x}))]\). Since risk\((h,m)\) is controlled by \(\mathcal{R}_{m}(\mathcal{F}_{h})\)[37], we have that_ \[\mathcal{L}_{\mathcal{T}}^{\star}\leq\frac{1}{n}\sum_{m=0}^{n-1}\min_{h\in[H] }\{\mathcal{L}_{h}^{\star}+\varepsilon_{\text{TF}}^{h,m}+\mathcal{O}( \mathcal{R}_{m}(\mathcal{F}_{h}))\}.\] Here, ICL adaptively selects the classes \(\arg\min_{h\in[H]}\{\mathcal{L}_{h}^{\star}+\mathcal{R}_{m}(\mathcal{F}_{h})+ \varepsilon_{\text{TF}}^{h,m}\}\) to achieve small risk. This is in contrast to training over a single large class \(\mathcal{F}=\cup_{i=1}^{H}\mathcal{F}_{i}\), which would result in a less favorable bound \(\approx\min_{h\in[H]}\mathcal{L}_{h}^{\star}+\frac{1}{n}\sum_{m=0}^{n-1} \mathcal{R}_{m}(\mathcal{F})\). A formal version of this statement is provided in Appendix E. Hypothesis 1 assumes a discrete family for simpler exposition (\(|\mathbb{F}|=H<\infty\)), however, our theory in Section 3 allows for the continuous setting. We emphasize that, in practice, we need to adapt the hypothesis classes for different sample sizes \(m\) (typically, more complex classes for larger \(m\)). With this in mind, while we have \(H\) classes in \(\mathbb{F}\), in total we have \(H^{n}\) different ERM algorithms to compete against. This means that VC-dimension of the algorithm class is as large as \(n\log H\). This highlights an insightful benefit of our main result: Theorem 3.5 would result in an excess risk \(\propto\sqrt{\frac{n\log H}{nT}}=\sqrt{\frac{\log H}{T}}\). In other words, the additional \(\times n\) factor achieved through Theorem 3.5 facilitates the adaptive selection of hypothesis classes for each sample size and avoids requiring unreasonably large \(T\). ## 7 Numerical Evaluations Our experimental setup follows [17]: All ICL experiments are trained and evaluated using the same GPT-2 architecture with 12 layers, 8 attention heads, and 256 dimensional embeddings. We first explain the details of Fig. 2 and then provide stability experiments.4 Footnote 4: Our code is available at [https://github.com/yingcong-li/transformers-as-algorithms](https://github.com/yingcong-li/transformers-as-algorithms). \(\bullet\) **Linear regression (Figures 2(a,b)).** We consider a \(d\)-dimensional linear regression tasks with in-context examples of the form \(\mathbf{z}=(\mathbf{x},y)\in\mathbb{R}^{d}\times\mathbb{R}\). Given \(t\)'th task \(\mathbf{\beta}_{t}\), we generate \(n\) i.i.d. samples via \(y=\mathbf{\beta}_{t}^{\mathsf{T}}\mathbf{x}+\mathbf{\xi}\) where \(\mathbf{x}\sim\mathbf{N}(0,\mathbf{I}),\ \xi\sim\mathbf{N}(0,\sigma^{2})\) and \(\sigma\) is the noise level. Tasks are sampled i.i.d. via \(\mathbf{\beta}_{t}\sim\mathbf{N}(0,\mathbf{\Sigma}),\ t\in[T]\). Results are displayed in Figures 2(a)&(b). We set \(d=20\), \(n=40\) and significantly larger \(T\) to make sure model is sufficiently trained and we display meta learning results (i.e. on unseen tasks) for both experiments. In Fig. 
2(a), \(\sigma=1\) and \(\mathbf{\Sigma}=\mathbf{I}\). We also solve ridge-regularized linear regression (with sample size from 1 to \(n\)) over the grid \(\lambda=[0.01,0.05,0.1,0.5,1]\) and display the results of the best \(\lambda\) selection as the optimal ridge curve (Black dotted). Recall from (5) that ridge regression is optimal for isotropic task covariance. In Fig. 2(b), we set \(\sigma=0\) and \(\mathbf{\Sigma}=\text{diag}(\big{[}1,\frac{1}{2^{2}},\frac{1}{3^{2}},\ldots,\frac{1}{20^{2}}\big{]})\). Besides ordinary least squares (Green curve), we also display the optimally-weighted regression according to (5) (dotted curve) as \(\sigma\to 0\). In both figures, ICL (Red) outperforms the least-squares solutions (Green) and is perfectly aligned with the optimal ridge/weighted solutions (Black dotted). This in turn provides evidence for the automated model selection ability of transformers by learning task priors. \(\bullet\)**Partially-observed dynamical systems (Figures 2(c) & 6).** We generate in-context examples \(\mathbf{z}_{i}=\mathbf{x}_{i}\in\mathbb{R}^{r},\ i\in[n]\) via the partially-observed linear dynamics \(\mathbf{x}_{i}=\mathbf{C}\mathbf{s}_{i}\), \(\mathbf{s}_{i}=\mathbf{A}\mathbf{s}_{i-1}+\mathbf{\xi}_{i}\) with noise \(\mathbf{\xi}_{i}\sim\mathbf{N}(0,\sigma^{2}\mathbf{I}_{d})\) and initial state \(\mathbf{s}_{0}=\mathbf{0}\). Each task is parameterized by \(\mathbf{C}\in\mathbb{R}^{r\times d}\) and \(\mathbf{A}\in\mathbb{R}^{d\times d}\), which are drawn with i.i.d. \(\mathbf{N}(0,1)\) entries, and \(\mathbf{A}\) is normalized to have spectral radius \(0.9\). In Fig. 2(c), we set \(d=10\), \(r=4\), \(\sigma=0\), \(n=20\) and use a sufficiently large \(T\) to train the transformer. For comparison, we solve least-squares regression to predict new observations \(\mathbf{x}_{i}\) via the most recent \(H\) observations for varying window sizes \(H\). Results show that in-context learning outperforms the least-squares results of all orders \(H=1,2,3,4\). In Figure 6, we also solve the dynamical problem using optimal ridge regression for different window sizes. This reveals that ICL can also outperform auto-regressive models with optimal ridge tuning, although the performance gap is much narrower. It would be interesting to compare ICL performance to a broader class of system identification algorithms (e.g. Hankel nuclear norm, kernel-based, atomic norm [28, 41]) and understand the extent to which ICL can inform practical algorithm design. \(\bullet\)**Stability analysis (Figure 7).** In Assumption 3.1, we require that transformer-induced algorithms are stable to input perturbations; specifically, we require predictions to vary by at most \(\mathcal{O}(1/m)\) where \(m\) is the sample size. This was justified in part by Theorem 3.2. To understand empirical stability, we run additional experiments whose results are displayed in Fig. 7. We study the stability of four function classes: linear models, 3-sparse linear models, decision trees with depth 4, and 2-layer ReLU networks with 100 hidden units, all with input dimension of 20. Figure 6: Dynamical system experiments. The difference from Fig. 2(c) is that we compare ICL to the optimally-tuned ridge regression with different history windows \(H\). Figure 7: Experiments to assess the algorithmic stability Assumption 3.1. Each figure shows the increase in the risk for varying ICL sample sizes after an example in the prompt is modified. We swap an input example in the prompt and assign a flipped label to this new input, e.g., we move from \((\mathbf{x},f(\mathbf{x}))\) to \((\mathbf{x}^{\prime},-f(\mathbf{x}^{\prime}))\).
For each class \(\mathcal{F}\), a GPT-2 architecture is trained with a large number of random tasks \(f\in\mathcal{F}\) and evaluated on new tasks. With the exception of Fig. 2(a), we use the pretrained models provided by [17] and the task sequences are noiseless, i.e., sequences obey \(y_{i}=f(\mathbf{x}_{i})\). As a coarse approximation of the _worst-case_ perturbation, we perturb a prompt \(\mathbf{x}^{(m)}_{\text{prompt}}=(\mathbf{x}_{1},y_{1},\cdots,\mathbf{x}_{m-1},y_{m-1},\mathbf{x}_{m})\) as follows. Draw a random point \((\mathbf{x}^{\prime}_{1},y^{\prime}_{1})\sim(\mathbf{x}_{1},y_{1})\) and flip its label to obtain \((\mathbf{x}^{\prime}_{1},-y^{\prime}_{1})\). We obtain the adversarial prompt via \(\bar{\mathbf{x}}^{(m)}_{\text{prompt}}=(\mathbf{x}^{\prime}_{1},-y^{\prime}_{1},\cdots,\mathbf{x}_{m-1},y_{m-1},\mathbf{x}_{m})\)5. In Fig. 7, we plot the test risk change between the adversarial and standard prompts. All figures corroborate that, after a certain sample size, the risk change noticeably decreases as the in-context sample size increases. This behavior is in line with Assumption 3.1; however, further investigation and longer context windows are required to accurately characterize the stability profile (e.g. to verify whether stability is \(\mathcal{O}(1/m)\) or not). Finally, in Figure 12 of the appendix, we show that adding label noise to regression tasks during MTL training can help improve stability. Footnote 5: To fully verify Assumption 3.1, one should adversarially optimize \(\mathbf{x}^{\prime}_{1},\mathbf{y}^{\prime}_{1}\) and also swap the other indices \(m>i>1\). ## 8 Conclusions In this work, we approached in-context learning as an algorithm learning problem with a statistical perspective. We presented generalization bounds for MTL where the model is trained with \(T\) tasks, each mapped to a sequence containing \(n\) examples. Our results build on connections to algorithmic stability, which we have verified for transformer architectures empirically as well as theoretically. Our generalization and stability guarantees are also developed for dynamical systems, capturing the autoregressive nature of transformers. There are multiple interesting questions related to our findings: (1) Our bounds are mainly useful for capturing the multitask learning risk, which motivates us to study the question: How can we control generalization on individual tasks or prompts with specific lengths (rather than the average risk)? (2) We provided guarantees for dynamical systems with full-state observations. Can we extend such results to more general dynamic settings such as reinforcement/imitation learning or system identification with partial state observations? (3) Our investigation of transfer learning in Section 4 revealed that the transfer risk is governed by the number of MTL tasks and the task complexity; however, it seems to be independent of the model complexity. It would be interesting to further demystify this inductive bias during pretraining and characterize exactly what algorithm is learned by the transformer. ## Acknowledgements This work was supported in part by the NSF grants CCF-2046816 and CCF-2212426, Google Research Scholar award, and Army Research Office grant W911NF2110312.
2305.15194
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model. We thus design a multimodal T2I diffusion model, coined as DiffBlender, by separating the channels of conditions into three types, i.e., image forms, spatial tokens, and non-spatial tokens. The unique architecture of DiffBlender facilitates adding new input modalities, pioneering a scalable framework for conditional image generation. Notably, we achieve this without altering the parameters of the existing generative model, Stable Diffusion, only with updating partial components. Our study establishes new benchmarks in multimodal generation through quantitative and qualitative comparisons with existing conditional generation methods. We demonstrate that DiffBlender faithfully blends all the provided information and showcase its various applications in the detailed image synthesis.
Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn
2023-05-24T14:31:20Z
http://arxiv.org/abs/2305.15194v2
# DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models ###### Abstract The recent progress in diffusion-based text-to-image generation models has significantly expanded generative capabilities via conditioning the text descriptions. However, since relying solely on text prompts is still restrictive for fine-grained customization, we aim to extend the boundaries of conditional generation to incorporate diverse types of modalities, _e.g._, sketch, box, and style embedding, simultaneously. We thus design a multimodal text-to-image diffusion model, coined as DiffBlender, that achieves the aforementioned goal in a single model by training only a few small hypernetworks. DiffBlender facilitates a convenient scaling of input modalities, without altering the parameters of an existing large-scale generative model to retain its well-established knowledge. Furthermore, our study sets new standards for multimodal generation by conducting quantitative and qualitative comparisons with existing approaches. By diversifying the channels of conditioning modalities, DiffBlender faithfully reflects the provided information or, in its absence, creates imaginative generation. ## 1 Introduction In the field of image generation research, there has been a notable breakthrough due to the high generative capacity of text-to-image (T2I) generation models based on a diffusion process [9; 24; 40]. Figure 1: **Generated results with multimodal conditions.** By incorporating various input modalities (\(1^{\text{st}}\) row), DiffBlender successfully synthesizes high-fidelity and diverse samples (\(2^{\text{nd}}\) row). The observed growth can be attributed to the availability of extensive datasets containing pairs
By training only additional networks--hypernetworks--while leaving the parameters of LDM entirely unchanged, we preserve its generative prior. Figures 1 and 2 visualize the generated samples of DiffBlender, employing multiple conditions at once and successfully synthesizing images according to user preferences. In addition, we emphasize that our method is scalable, as we can step up the model by training only the necessary parameters when additional modalities are introduced after training. Table 1 summarizes the comparisons with previous multimodal T2I diffusion models. We note that DiffBlender supports diverse input modalities, updates only partial parameters, and enables multimodal training, as well as extends to future conditioning modalities. Our contributions are summarized as follows: (_i_) We propose DiffBlender to express complex combinations of conditions while achieving a low training cost through a partial update of hypernetworks. (_ii_) We design its structure to intuitively extend to additional modalities and demonstrate the effectiveness of DiffBlender's multimodal-conditioned generation. (_iii_) The quantitative evaluation using various metrics shows our model outshining the existing unimodal baselines. We provide these as standards for the measures in multi-condition settings. (_iv_) We introduce a novel guidance approach that allows delicate control of a specific modality, which is necessary for multimodal generation. Figure 2: **The versatile applications of DiffBlender. It enables flexible manipulation of conditions, providing the customized generation aligned with user preferences. Note that all results are generated by our single model at once, not in sequence.** ## 2 Related Works Diffusion models have gained significant attention as a promising approach for generating high-quality images [9; 23; 24; 31; 32; 35; 40]. They have shown superior performance compared to the adversarial models [4; 15; 36] in terms of both fidelity and diversity [3]. Among them, LDM [30] proposed a unified and effective approach for conditional generation, being further developed into Stable Diffusion (SD) and outperforming GAN-based T2I methods [43; 48; 51]. Following studies such as Imagen [33], UPainting [17], and DALL-E 2 [28] also demonstrated their ways to improve text-to-image synthesis performance. Imagen and UPainting leveraged frozen large language models, while UPainting and DALL-E 2 utilized CLIP's image-text joint representation space [27]. Imagen and DALL-E 2 also applied the cascaded super-resolution technique [10]. In this study, we leverage SD, which offers unified and efficient image generation, enabling our model's wide applications. Although prompt conditioning in T2I diffusion models is adequate for explaining desired images, expressing fine-grained details (_e_.\(g\)., spatial information) via text remains challenging. To alleviate this limitation, several works have expanded T2I diffusion models to incorporate additional conditioning modalities, such as sketches [45], depth maps [11; 53], boxes [42; 54], keypoints [53], reference images [11; 16; 52], color palettes [11], scene graphs [13; 50], or semantic maps [2; 18; 26; 46]. However, these studies often require training the entire model from scratch, which is not readily applicable to multimodal training or extending the modalities.
On the other hand, GLIGEN [19] and T2I-Adapter [22] have employed the generative prior of the pretrained large-scale T2I diffusion models [30], to effectively accomplish customized image generation. Within this line of research, we propose training SD-based networks, adopting even more informative conditions, to enable scaling and composing multimodal-conditioned generation. ## 3 Method ### Preliminaries on Latent Diffusion Models The most powerful methods in denoising diffusion models include Latent Diffusion Models [30] and its successor, Stable Diffusion (SD). To reduce the high computational cost of a diffusion process on the pixel space, LDM learns a diffusion process on the latent space. A bi-directional mapping network is trained to produce the latent representation \(\mathbf{z}\) of the image \(\mathbf{x}\), and then a UNet-based denoising autoencoder is trained as a diffusion model on the latent \(\mathbf{z}\). The model \(f_{\mathbf{\theta}}\) generates less noisy samples, starting from noise-induced \(\mathbf{z}_{T}\) and conditioning on caption \(\mathbf{c}\), gradually producing \(\mathbf{z}_{T-1}\) to \(\mathbf{z}_{0}\). The training objective of LDM is thus described as follows: \[\min_{\mathbf{\theta}}\mathcal{L}_{\text{LDM}}=\mathbb{E}_{\mathbf{z},\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{1}),t}\big{[}\|\mathbf{\epsilon}-f_{\mathbf{\theta}}(\mathbf{z}_{t},t,\mathbf{c})\|_{2}^{2}\big{]}, \tag{1}\] where \(t\) is uniformly sampled from time steps \(\{1,\cdots,T\}\), \(\mathbf{z}_{t}\) is the step-\(t\) noisy latent of \(\mathbf{z}\), and \(f_{\mathbf{\theta}}(\mathbf{z}_{t},t,\mathbf{c})\) is the \((t,\mathbf{c})\)-conditioned diffusion model. The denoising autoencoder is structured with several ResNet [6] and Transformer [44] blocks and predicts the noise \(\hat{\mathbf{\epsilon}}\) that has the same size as the input \(\mathbf{z}\). The time embedding is also injected into each ResNet block, as well as caption \(\mathbf{c}\) conditioned via a cross-attention layer of Transformers. SD uses a pretrained CLIP encoder [27] to embed the caption into textual features before the cross-attention. In this paper, we propose SD-based multimodal networks, DiffBlender, replacing \(\mathbf{c}\) in Eq. 1 with \(\mathbf{C}=\{\mathbf{c}_{\text{text}},\mathbf{c}_{1},\mathbf{c}_{2},\cdots\}\) to leverage more diverse modalities and overcome the limitations of unimodal conditioning such as prompts. \begin{table} \begin{tabular}{l|c c c c c c|c} \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Supporting input modalities} & \multicolumn{1}{c|}{Partial} & \multirow{2}{*}{Multimodal training?} \\ \cline{2-2} \cline{5-8} & Sketch & & & & & & & \\ \hline \hline ControlNet [53] & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ \\ Composer [11] & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ \\ \hline GLIGEN [19] & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✗ \\ T2I-Adapter [22] & ✓ & ✓ & ✗ & ✓ & ✓ & ✗ & ✓ & ✗ \\ DiffBlender & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: **Functional difference over previous related works.** From the standpoint of supported input modalities, we have classified them into three categories: conditions of image-form (_e_.\(g\)., sketch and depth), spatial token form (_e_.\(g\)., box and keypoints), and non-spatial form (_e_.\(g\)., color and style). \(\blacktriangle\) of ControlNet [53] indicates that its scale of learnable parameters is not full but more than half of them.
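For reference, below is a schematic PyTorch-style sketch of the objective in Eq. 1: noise the latent with the forward process at a random timestep and regress the injected noise. The noise schedule and the placeholder denoiser are our own simplifications, not the actual Stable Diffusion implementation.

```python
import torch

def ldm_loss(denoiser, z0, c, alphas_cumprod):
    """One training step of Eq. (1): predict the noise added to the latent z0."""
    B = z0.shape[0]
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (B,), device=z0.device)        # uniformly sampled timestep
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)
    eps = torch.randn_like(z0)                              # target noise epsilon
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps      # forward-diffused latent z_t
    eps_hat = denoiser(z_t, t, c)                           # f_theta(z_t, t, c)
    return torch.mean((eps - eps_hat) ** 2)

# toy usage with a trivial placeholder denoiser and random "text" conditioning
denoiser = lambda z, t, c: torch.zeros_like(z)
z0 = torch.randn(2, 4, 8, 8)                                # batch of latents
c = torch.randn(2, 77, 768)                                 # CLIP-style text embeddings
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
print(ldm_loss(denoiser, z0, c, alphas_cumprod).item())
```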
### DiffBlender We design our model, DiffBlender, to facilitate the convenient addition of different modalities by categorizing them into three types. The first type is image-form modality, which contains spatially rich information such as sketches. Although this type of modalities provides powerful conditioning, it often requires manual effort or the use of specialized processing tools. Alternatively, simpler forms of conditions, such as box coordinates or image embedding vectors, are more straightforward to be employed. To accommodate these types of conditions, we design our model to support tokenized representations, utilizing both spatial and non-spatial tokens, as shown in Figure 3. **Latent fusion for image-form conditions.** In order to incorporate image-form modalities, we have adopted sketches and depth maps. First, we obtain the latents of \(k\)-th image-form modality, \(\mathbf{h}_{\text{im},k}\), with a proposed encoder network that is composed of small ResNet blocks [6]. Then, we fuse them with the original visual tokens \(\mathbf{h}_{v}\) as an input to the residual block of the UNet as follows: \(\mathbf{h}^{\prime}_{v}\leftarrow\text{ResBlk}(\mathbf{h}_{v}+\sum_{k}\mathbf{h}_{\text{ im},k})\). Inspired by Mou _et al._[22], we conduct this fusing step at four downsampling ResNet blocks of the diffusion UNet to ensure its effectiveness. Besides, SD [30] consists of several self-attention (SA) and cross-attention (CA) layers in the Transformer blocks of UNet to combine visual tokens and condition the textual tokens. For convenience, this is denoted as \(\text{CA}(\mathbf{h}^{\prime\prime}_{v},\mathbf{h}_{\text{net}})\), where \(\mathbf{h}^{\prime\prime}_{v}=\text{SA}(\mathbf{h}^{\prime}_{v})\) and \(\mathbf{h}_{\text{text}}\) are textual features from the CLIP [27] encoder. DiffBlender modifies the visual and textual tokens with spatial and non-spatial conditional tokens, respectively, as described below. **Local self-attention for spatial tokens.** To accurately locate the desired positions of synthesized results, we incorporate grounding box and keypoints as input modalities. Unlike textual information, spatial tokens have information that directly affects visual tokens \(\mathbf{h}_{v}\) to change the structure or semantics outputs. As a result, users can determine where to place objects or how to pose human beings in generated samples. To this end, we concatenate the self-attended visual tokens with the spatial tokens of all conditions. Afterwards, we pass them through a local self-attention (**LSA**) module to obtain the intermediate visual tokens \(\tilde{\mathbf{h}}_{v}\). \[\tilde{\mathbf{h}}_{v}=\mathbf{h}^{\prime\prime}_{v}+\text{LSA}\big{(}\big{[}\mathbf{h}^{ \prime\prime}_{v};\{\mathbf{h}_{\text{sp},i}\}\big{]}\big{)},\quad\text{LSA}(\mathbf{ x})=\text{tanh}(\beta_{\ell})\cdot\text{SA}(\mathbf{x})[:V], \tag{2}\] where \(\mathbf{h}_{\text{sp},i}\) are spatial tokens for the condition \(\mathbf{c}_{\text{sp},i}\), projected by our embedding network, and \(V\) is the visual token length. This LSA module is inspired by the gated attention mechanism in [1, 19] to gradually impose conditions on the intact visual tokens, gated with a learnable parameter \(\beta_{\ell}\). **Global self-attention for non-spatial tokens.** In addition to the spatial modalities, DiffBlender is also designed to expropriate non-spatial conditions for guiding global expressions. To this end, we set color palette and style embedding as non-spatial modalities. 
These contain global and abstract information that influences the entire image, such as style, color, texture, brightness, etc. DiffBlender concatenates non-spatial tokens with the original global information, textual tokens \(\mathbf{h}_{\text{text}}\). Subsequently, tokens are passed through a global self-attention (**GSA**) module to obtain the intermediate textual tokens \(\tilde{\mathbf{h}}_{\text{text}}\). \[\tilde{\mathbf{h}}_{\text{text}}=\mathbf{h}_{\text{text}}+\text{GSA}\big{(}\big{[}\mathbf{h}_{\text{text}};\{\mathbf{h}_{\text{np},j}\}\big{]}\big{)},\quad\text{GSA}(\mathbf{x})=\text{tanh}(\beta_{g})\cdot\text{SA}(\mathbf{x})[:L], \tag{3}\] where \(\mathbf{h}_{\text{np},j}\) are non-spatial tokens for the condition \(\mathbf{c}_{\text{np},j}\), and \(L\) is the textual token length. Consequently, the visual features are delivered to the next layer after cross-attending the two intermediate features: \(\mathbf{h}_{v}\leftarrow\text{CA}(\tilde{\mathbf{h}}_{v},\tilde{\mathbf{h}}_{\text{text}})\). In lieu of employing the GSA module, a naive approach is to use only LSA for both spatial and non-spatial tokens. Every conditional token can be concatenated and attended to alongside the visual features. However, our experiments, as shown in Figure 4, suggest that using LSA tends to prioritize color or style conditions over prompt details. For instance, the blue color condition mistakenly paints a red motorcycle blue. Conversely, GSA delicately controls the generation because it regulates global information by only adjusting the textual features, compliant with the prompt. As a result, we conjecture that it is important to analyze the characteristics of each modality and apply proper attention mechanisms for creating well-conditioned outputs. Figure 3: **Overview of DiffBlender architecture.** (a) illustrates the four types of conditions employed in DiffBlender and indicates where each part of information is used in the UNet layers. (b) focuses on the purple region in (a) to provide details of DiffBlender's conditioning process. The lock-marked layers are kept fixed at the original parameters of SD. The remaining modules, small hypernetworks, denote the learnable parameters of DiffBlender. ### Towards extending modalities Ideally, a multimodal diffusion model should be able to extend to additional modalities without requiring any change to the underlying architecture. As we have described before, DiffBlender is comprised of three modules according to each type of modality and has a flexible mechanism to fuse more image-form conditions or concatenate more tokens. To validate the scalability and adaptability of the proposed methods, we simulate the _modality extension_ scenario, as displayed in Figure 5, where the new modalities can be simply supported with minimal computational cost while the previous conditioning information is also preserved. We note that training image-form conditions in the last stage enables better learning of spatial tokens (_e_.\(g\)., box and keypoints) because the inherent spatial information in image-form conditions can impede the spatial token conditioning. The potential of extension lies in its ability to easily incorporate new modalities in the future. By training only specific parameters rather than re-training the entire model from scratch, users can extend the model to accumulate their own input modalities, further improving its performance and applicability.
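The gated attention of Eqs. 2 and 3 can be sketched as follows; this is our own simplified single-head rendition of the LSA/GSA modules (the real DiffBlender layers use learned projections inside the UNet), shown only to make the token bookkeeping explicit.

```python
import torch

def self_attn(x, Wq, Wk, Wv):
    """Plain single-head softmax self-attention over a token sequence x: (L, d)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

def gated_local_sa(h_v, h_sp, beta, Wq, Wk, Wv):
    """Eq. (2): attend over [visual tokens; spatial tokens], gate, keep the first V tokens."""
    V = h_v.shape[0]
    joint = torch.cat([h_v, h_sp], dim=0)
    return h_v + torch.tanh(beta) * self_attn(joint, Wq, Wk, Wv)[:V]

def gated_global_sa(h_text, h_np, beta, Wq, Wk, Wv):
    """Eq. (3): attend over [textual tokens; non-spatial tokens], gate, keep the first L tokens."""
    L = h_text.shape[0]
    joint = torch.cat([h_text, h_np], dim=0)
    return h_text + torch.tanh(beta) * self_attn(joint, Wq, Wk, Wv)[:L]

# toy shapes: 64 visual tokens, 4 spatial tokens, 77 textual tokens, 2 non-spatial tokens
d = 32
Wq, Wk, Wv = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
beta = torch.zeros(())                     # gate starts at tanh(0) = 0, i.e. no effect
h_v_tilde = gated_local_sa(torch.randn(64, d), torch.randn(4, d), beta, Wq, Wk, Wv)
h_t_tilde = gated_global_sa(torch.randn(77, d), torch.randn(2, d), beta, Wq, Wk, Wv)
print(h_v_tilde.shape, h_t_tilde.shape)    # visual/textual token counts are preserved
```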
### Mode-specific guidance: enhancing controllability Classifier-free guidance [8] is a simple yet effective method for conditional image generation [23; 34], modifying the predicted noise to include the gradient of the log-likelihood of \(p(\mathbf{C}|\mathbf{x}_{t})\). Given that \(p(\mathbf{C}|\mathbf{x}_{t})\propto p(\mathbf{x}_{t}|\mathbf{C})/p(\mathbf{x}_{t})\), we can obtain \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{C}|\mathbf{x}_{t})\propto\mathbf{\epsilon}^{*}(\mathbf{x}_{t},\mathbf{C})-\mathbf{\epsilon}^{*}(\mathbf{x}_{t})\) if we know their exact scores. Instead, \(\nabla_{\mathbf{x}_{t}}p(\mathbf{x}_{t}|\mathbf{C})\) and \(\nabla_{\mathbf{x}_{t}}p(\mathbf{x}_{t})\) are parameterized via score estimators \(\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C})\) and \(\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\emptyset)\), respectively, which are obtained by the denoising diffusion model. Thus, the adjusted noise prediction is \(\hat{\mathbf{\epsilon}}^{*}(\mathbf{x}_{t},\mathbf{C})=\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\emptyset)+w(\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C})-\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\emptyset))\), where \(w\) is the guidance scale. In our setup, where the condition set is multimodal, the original classifier-free guidance pushes every condition to the same level of guidance. Unlike unimodal generation, our multimodal generation requires more subtle modulation of input conditions. In order to guide an arbitrary modality \(\mathbf{c}_{m}\) while other modalities are also given, we need to control the estimate of \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{c}_{m}|\mathbf{x}_{t},\mathbf{C}\setminus\mathbf{c}_{m})\), where \[p(\mathbf{c}_{m}|\mathbf{x}_{t},\mathbf{C}\setminus\mathbf{c}_{m})=\frac{p(\mathbf{c}_{m},\mathbf{x}_{t}|\mathbf{C}\setminus\mathbf{c}_{m})}{p(\mathbf{x}_{t}|\mathbf{C}\setminus\mathbf{c}_{m})}\propto\frac{p(\mathbf{x}_{t}|\mathbf{C})}{p(\mathbf{x}_{t}|\mathbf{C}\setminus\mathbf{c}_{m})}. \tag{4}\] Thus, the mode-specific guidance direction becomes \(\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C})-\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C}\setminus\mathbf{c}_{m})\), and consequently the adjusted noise prediction for \(\mathbf{c}_{m}\), in addition to the original classifier-free guidance, is \[\hat{\mathbf{\epsilon}}_{m}^{*}(\mathbf{x}_{t},\mathbf{C})=\hat{\mathbf{\epsilon}}^{*}(\mathbf{x}_{t},\mathbf{C})+\gamma(\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C})-\hat{\mathbf{\epsilon}}(\mathbf{x}_{t},\mathbf{C}\setminus\mathbf{c}_{m})),\ \gamma\in\mathbb{R}, \tag{5}\] where \(\gamma\) is the controllable scale for mode-specific guidance. \(\gamma<0\) indicates reducing the impact of \(\mathbf{c}_{m}\), \(\gamma>0\) indicates increasing it, and \(\gamma=0\) recovers the original classifier-free guidance. Figure 4: **Differences in modules--LSA or GSA--handling the non-spatial tokens.** Figure 5: **Examples of extending input modalities.** Once we finished training the model on box and color (100K iters), (a) we then train it on style embedding while freezing (*) the LSA module (100K iters). (b) After the style embedding training, we freeze both the LSA and GSA modules, followed by training the image-form conditioning module for sketch (50K iters). ## 4 Experiments ### Training **Dataset.** We have trained DiffBlender on the widely-used benchmark, the COCO2017 [20] train set, which contains nearly 120K real photos.
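Before detailing the training data, here is a minimal sketch of how the sampling-time adjustment in Eq. 5 composes with standard classifier-free guidance; the placeholder `eps` stands for the trained denoiser \(\hat{\mathbf{\epsilon}}\), and the guidance scales below are arbitrary example values rather than the settings used in the paper.

```python
import torch

def guided_noise(eps, x_t, conds, drop_mod, w=7.5, gamma=0.5):
    """Classifier-free guidance plus the mode-specific term of Eq. (5).

    eps(x_t, conds) -> predicted noise; conds is a dict of modality conditions,
    and `drop_mod` names the modality c_m whose influence is re-weighted by gamma.
    """
    eps_full = eps(x_t, conds)                                   # eps_hat(x_t, C)
    eps_null = eps(x_t, {})                                      # eps_hat(x_t, empty set)
    eps_wo_m = eps(x_t, {k: v for k, v in conds.items() if k != drop_mod})
    eps_cfg = eps_null + w * (eps_full - eps_null)               # classifier-free guidance
    return eps_cfg + gamma * (eps_full - eps_wo_m)               # mode-specific guidance

# toy usage with a dummy denoiser that simply mixes the conditions it receives
def dummy_eps(x_t, conds):
    return x_t * 0.1 + sum(conds.values(), torch.zeros_like(x_t)) * 0.01

x_t = torch.randn(1, 4, 8, 8)
conds = {"sketch": torch.randn_like(x_t), "box": torch.randn_like(x_t)}
print(guided_noise(dummy_eps, x_t, conds, drop_mod="sketch").shape)
```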
To facilitate network training, we have prepared the following input modalities: 1) _Sketch_: we utilize the PiDiNet [41] followed by a sketch simplification algorithm [39] to extract the sketch of an image. 2) _Depth map_: we employ the monocular depth estimation methods [29]. 3) _Grounding box / Keypoints_: we use the annotation data provided by COCO2017. Each box is tagged with a text that denotes the object class. 4) _Color palette_: to capture the color statistics of given images, we employ the smoothed CIELab histogram [14]. We set the CIELab palette space with 11 hue values, 5 saturation values, and 5 light values, and use a smoothing sigma of 10. 5) _Style embedding_: For embedding style features from the given reference images, we employ the pretrained CLIP [27] image encoder. **Implementation details.** We train DiffBlender with batch size 32 using 8 A100 GPUs for 200K iterations, training the image-form modalities (_i.e_., sketch and depth map) only for the last 50K iterations. We set the learning rate as 5e-5 for AdamW [21], with the first 10K warm-up iterations. During the training, we apply the center crop to input images with a resolution of 512\(\times\)512. In addition, for classifier-free guidance, we randomly drop image-form conditions and others with \(50\%\) and \(10\%\) probabilities, respectively. We also use segmentation maps to randomly mask the fore-/background of the sketch, to enable robust training and conditioning on partial sketches. We provide detailed information on the embedding networks for each modality in Appendix A. ### Qualitative results of DiffBlender **Multimodal-conditioned generation.** DiffBlender exhibits the ability to generate images by incorporating various types of conditions and can even imagine missing modalities if they are not explicitly provided. For instance, the first column of Figure 1 shows the combination of five different conditions including text. If a real-world user wants a specific structure, one can provide detailed structure-related conditions with sketches or depth maps. In cases where it is challenging to convey global abstract information solely through text, more precise expression can be achieved by supplying color palettes or reference images. In the second column of Figure 1, even when the object structure is not explicitly given, specific positioning can be accomplished using box representations. As for the background, DiffBlender imaginatively generates an appropriate background as no condition is provided. The third column aims to specify a person's pose by the keypoints modality, with the partial sketch backgrounding and the reference image styling. In this context, DiffBlender opens up possibilities for numerous application scenarios, as depicted in Figure 2. Here, we have showcased a series of scenarios; starting with text and boxes; utilizing image-forms to specify the background layout, or posing the human's detailed keypoints; providing reference images for the abstract style; allowing free combination of colors; and even adjusting the strength of condition reflection by controlling guidance for each modality. From now on, we discuss representative examples of applications, also providing more results in Appendix C. **Reference-guided and semantic-preserved generation.** The preservation of an image's underlying structure while altering its style is a practical challenge. 
DiffBlender adeptly achieves this by extracting the sketch and depth map from a source structure image, as well as the style embedding and color palette from a reference style image. Combining these modalities, DiffBlender can generate the images that seamlessly blend the semantic and stylistic aspects. Figure 6 validates that DiffBlender effectively conveys new style onto the source images while maintaining their structural information. Intriguingly, DiffBlender can successfully combine two unrelated images, such as {jet, tiger} and {snowman, fire}, or even with artistic painting. **Object reconfiguration.** Thanks to the wide range of conditions available in DiffBlender, it becomes possible to reconstruct scenes containing diverse objects. In Figure 7, we extract a partial sketch of the background and the position of the object from the input image. By altering the prompt for each corresponding object, _e.g._, _donut, flowers_, or _dollars_, it can reconfigure the output images by creatively filling in the empty regions. A notable difference between DiffBlender's object reconfiguration and traditional inpainting-based image editing [49] lies in the fact that our method can exactly locate the object in the scene, or include new grounding information like a _fork_. While preserving the overall structure of the background, DiffBlender can make a sound adjustment to the scene by altering the object configuration or adapting to new conditions. ### Comparison with previous SOTA methods To assess the effectiveness of our proposed method in various modalities, we conduct a comparative analysis between DiffBlender and existing state-of-the-art T2I methods that can involve non-textual condition. We use the COCO2017 [20] validation set cropped to 512\(\times\)512 resolution. **Evaluation metrics.** T2I models are typically evaluated with FID (Frechet inception distance) [7] and CLIP score [27] to measure the fidelity of samples and the affinity with their text descriptions. However, these metrics are insufficient to assess the model's ability in multimodal settings. To this end, we also measure YOLO-v5 score [12] (AP\({}_{50}\)) to evaluate the correspondence of generated images with the input bounding boxes, SSIM (structural similarity index) [47] with the extracted sketches, and Depth score calculated by \(100-(10\times\ell_{2}\)-distance with depth maps). We average four conditional generative scores, _i.e._, CLIP, YOLO, SSIM, and Depth, for a comprehensive evaluation. **Comparison with baselines.** The quantitative evaluation results, summarized in Table 2, demonstrate that DiffBlender with multimodal conditions achieves the highest overall scores in terms of Figure 6: **Reference-guided and semantic-preserved generation.** When a source image is provided, DiffBlender demonstrates the results preserving the structure inherent in that image, while effectively incorporating the style elements from a reference image. Figure 7: **Object reconfiguration.** DiffBlender has the capability to flexibly reconstruct a new scene by utilizing partial information from an input image. This enables us to customize scenes while retaining the contextual cues and layout from the original image. both fidelity and conditional generative capability. Even with a unimodal condition, DiffBlender remains competitive with previous approaches. While GLIGEN [19] and T2I-Adapter-S/D [22] show fairly high scores in specific modalities, they fall short in accommodating other modalities that were not trained. 
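For reference, the aggregation of the conditional scores reported as AVG\({}_{\text{cond}}\) can be sketched as follows; since the text does not specify how the \(\ell_{2}\)-distance between depth maps is normalized, a per-pixel root-mean-square distance is assumed here purely for illustration.

```python
# Sketch of the conditional-score aggregation of Table 2; the depth normalization is an assumption.
import numpy as np

def depth_score(d_pred: np.ndarray, d_ref: np.ndarray) -> float:
    rms = float(np.sqrt(np.mean((d_pred - d_ref) ** 2)))   # assumed normalization of the l2-distance
    return 100.0 - 10.0 * rms                               # "100 - (10 x l2-distance with depth maps)"

def avg_cond(clip_s: float, yolo_s: float, ssim_s: float, depth_s: float) -> float:
    return (clip_s + yolo_s + ssim_s + depth_s) / 4.0       # average of the four conditional scores
```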
We also highlight that the comparison between DiffBlender with all six modalities and DiffBlender with only sketch and depth implies that incorporating non-spatial modalities such as color palette and style embedding significantly enhances the generation quality. Figure 8 provides a visual comparison, where the samples are obtained by using the available condition(s) for each model. SD [30] exhibits the ability to generate high-quality result but lacks precision in accurately guiding the object and background layout. GLIGEN [19] can guide the object's position, however, it does not reflect finer details. T2I-Adapter [22] can incorporate object details by leveraging the sketch or depth map but has intrinsic limitation in capturing the global style of the reference image. PAIR-diff. [5] can condition on the reference image, but its usage is bounded because it requires input pair images sharing identical semantic objects. In contrast, DiffBlender has the advantage of utilizing all types of conditions, and we can specify not only the layout but also detailed poses or reference image information. ### Analysis **Controlling mode-specific guidance.** When conditioning a combination of diverse input modalities, it may be difficult for the model to impose every modality at the same level. We thus designed the way to guide a specific modality, formally described in Section 3.4. In Figure 9, we present two examples of guidance: on style embedding and sketch. The style guidance in a positive direction, strengthened by the reference image, draws flowers on the grass, whereas negative guidance removes them. Also, if we desire to increase the effect of the sketch to exactly follow the edge line of a sticker on the laptop lid, we can increase its guidance as presented. Note that while the mode-specific guidance enhances controllability, information from other modalities is still preserved, _e.g_., depth map or color. **Interpolating non-spatial conditions.** In this study, we examine the impact of the latent variables associated with global features, such as color palette and reference image embedding, which are utilized in our GSA module. In Figure 10, we fix input conditions, sketch, depth map, grounding boxes, prompt, and even starting noise, as depicted on the left side. After that, we randomly select \begin{table} \begin{tabular}{l|c|c c c c c|c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Modality} & \multirow{2}{*}{FID \(\downarrow\)} & \multirow{2}{*}{CLIP \(\uparrow\)} & \multirow{2}{*}{YOLO \(\uparrow\)} & \multirow{2}{*}{SSIM \(\uparrow\)} & \multirow{2}{*}{Depth \(\uparrow\)} & \multirow{2}{*}{AVG\({}_{\text{cond}}\) \(\uparrow\)} \\ & at inference & & & & & & & \\ \hline \hline GLIGEN [19] & Box & 19.21 & 24.72 & **17.53** & 30.80 & 87.14 & 40.05 \\ DiffBlender & Box & 19.57 & 25.02 & 15.82 & 30.82 & 87.17 & 39.71 \\ \hline T2I-Adapter-S [22] & Sketch & 20.23 & 26.48 & 11.18 & 32.09 & 87.98 & 39.43 \\ T2I-Adapter-D [22] & Depth\({}^{*}\) & 22.38 & **26.54** & 10.50 & 29.09 & 88.22 & 38.59 \\ DiffBlender & Sketch & 16.70 & 25.58 & 12.34 & 33.52 & 87.94 & 39.85 \\ DiffBlender & Depth & 16.36 & 25.70 & 11.30 & 30.56 & 88.12 & 38.92 \\ DiffBlender & Sketch + Depth & 16.85 & 24.50 & 14.11 & 33.01 & 88.25 & 39.97 \\ \hline **DiffBlender** & **All modals** & **14.99** & 25.30 & 15.89 & **34.92** & **88.44** & **41.14** \\ \hline \end{tabular} \end{table} Table 2: **Evaluation on COCO2017 validation set (5K).** For DiffBlender, the modalities unused at inference are provided as null inputs. 
\({}^{*}\) indicates T2I-Adapter-D trained on 600K pairs of LAIONESTHETICS [37]. **Best** and _second_ best results. Figure 8: **Visual comparison with baselines.** When the conditions on the left side are provided, we present the comparative results generated by each model with utilizing the available information. We set SD [30], GLIGEN [19], T2I-Adapter [22], and PAIR-diff. [5] as text-, box-, sketch-/depth-, and reference-based baselines over DiffBlender, respectively. color palettes or reference images, apply them, and interpolate between their latents to generate the results. It is remarkable to see that, during the smooth transition from gray and blue colors to black and brown colors, the seagull's positions are being preserved while the background composition undergoes imaginative transformations. In the style interpolation case, although there is no direct correlation between the fixed input conditions and the reference image, we can observe substantial transformations, especially in backgrounds, to actively incorporate the semantics and style of the reference image. ## 5 Conclusion In this paper, we propose DiffBlender, a novel multimodal T2I diffusion model that effectively incorporates diverse conditioning modalities. To this end, DiffBlender captures the distinctive properties of each input modality and adopts appropriate mechanisms such as local or global self-attention modules, accordingly. In addition, our method guarantees training efficiency by employing partial training on top of the well-pretrained diffusion model. Our model demonstrates impressive generation performance in faithfully reflecting complex combinations of conditions, making it applicable to a wide range of use cases. Through comprehensive evaluations, DiffBlender sets new standards in the multimodal generation, validating the superiority of DiffBlender over existing unimodal studies. Our work has a significant implication as it alleviates labor-intensive tasks by producing customized images based on detailed and diverse conditioning modalities. In future work, we study to extend DiffBlender to incorporate modalities beyond the image domain, such as audio or video frames, for broader controllability and to enhance the overall capabilities of the model. Figure 10: **Latent interpolation.** (a) and (b) present the results obtained through interpolating color palettes and reference image embeddings, respectively. Figure 9: **Mode-specific guidance.** Each row depicts the results of style and sketch guidance, respectively. We linearly manipulate the scale \(\gamma\), where the center indicates \(\gamma=0\) (original guidance).
2303.09633
Finiteness Conditions for the $n$-fold Tensor Product of Groups
Let $G$ be a finitely generated group. We prove that the $n$-fold tensor product $G^{\otimes n}$ is finite (resp. polycyclic) if and only if $G$ is finite (resp. polycyclic). Further, assuming that $G$ is finitely presented, we show that $G^{\otimes n}$ is finitely presented if and only if $\gamma_n(G)$ is finitely presented. We also examine some finiteness conditions for the non-abelian tensor product of groups.
R. Bastos, G. Ortega
2023-03-16T20:28:10Z
http://arxiv.org/abs/2303.09633v1
# Finiteness conditions for the \(n\)-fold tensor product of groups ###### Abstract. Let \(G\) be a finitely generated group. We prove that the \(n\)-fold tensor product \(G^{\otimes n}\) is finite (resp. polycyclic) if and only \(G\) is finite (resp. polycyclic). Further, assuming that \(G\) is finitely presented, we show that \(G^{\otimes n}\) is finitely presented if and only if \(\gamma_{n}(G)\) is finitely presented. We also examine some finiteness conditions for the non-abelian tensor product of groups. Key words and phrases:finitely presented groups; finiteness conditions; non-abelian tensor product of groups; \(n\)-fold tensor product 2010 Mathematics Subject Classification: 20E06, 20E22, 20E34, 20F18, 20F24, 20J99 ## 1. Introduction Let \(G\) and \(H\) be groups each of which acts upon the other (on the right), \[G\times H\to G,\ (g,h)\mapsto g^{h};\ \ H\times G\to H,\ (h,g)\mapsto h^{g}\] and on itself by conjugation, in such a way that for all \(g,g_{1}\in G\) and \(h,h_{1}\in H\), \[g^{(h^{g_{1}})}=\left(\left(g^{g_{1}^{-1}}\right)^{h}\right)^{g_{1}}\ \ \text{and}\ \ h^{\left(g^{h_{1}}\right)}=\left(\left(h^{h_{1}^{-1}}\right)^{g}\right)^{ h_{1}}. \tag{1}\] In this situation we say that \(G\) and \(H\) act _compatibly_ on each other. Let \(G\) and \(H\) be groups that act compatibly on each other. The non-abelian tensor product \(G\otimes H\) was introduced by Brown and Loday [5] following works of Miller [17], Dennis [7] and Lue [16]. It is defined to be the group generated by all symbols \(\ g\otimes h,\ g\in G,\ h\in H\), subject to the relations \[gg_{1}\otimes h=(g^{g_{1}}\otimes h^{g_{1}})(g_{1}\otimes h)\ \ \ \text{and}\ \ \ g\otimes hh_{1}=(g\otimes h_{1})(g^{h_{1}}\otimes h^{h_{1}})\] for all \(g,g_{1}\in G,\,h,h_{1}\in H\). When \(G=H\) and all actions are conjugations, \(G\otimes G\) is the non-abelian tensor square. Now, there is a well defined action of \(G\) on \(G\otimes G\) by \[(g_{1}\otimes g_{2})^{g_{3}}=g_{1}^{g_{3}}\otimes g_{2}^{g_{3}},\] where \(g_{i}\in G\). Moreover, there is a natural action of \(G\otimes G\) on \(G\) given by \[g_{1}^{g_{2}\otimes g_{3}}=g_{1}^{[g_{2},g_{3}]}.\] With these (compatible) actions we write \(G^{\otimes 3}\) to denote the non-abelian tensor product \((G\otimes G)\otimes G\). Furthermore, for any \(n\geqslant 3\), we can inductively define the \(n\)-fold tensor product, denoted by \(G^{\otimes n}\), by considering the actions of \(G\) and \(G^{\otimes n-1}\) on each other defined by \[(((g_{1}\otimes g_{2})\otimes\ldots\otimes g_{n-2})\otimes g_{n-1})^{g_{n}}=( (g_{1}^{g_{n}}\otimes g_{2}^{g_{n}})\otimes\ldots\otimes g_{n-2}^{g_{n}}) \otimes g_{n-1}^{g_{n}}\] and \[g_{1}^{((g_{2}\otimes g_{3})\otimes\ldots\otimes g_{n-1})\otimes g_{n}}=g_{1} ^{[g_{2},g_{3},\ldots,g_{n}]}.\] Note that all involved actions are compatible. Furthermore, there is a well-defined homomorphism \(\lambda_{n}^{G}\colon G^{\otimes n}\to\gamma_{n}(G)\) defined on generators by \[((g_{1}\otimes g_{2})\otimes\ldots\otimes g_{n-1})\otimes g_{n}\mapsto[g_{1},g_{2},\ldots,g_{n}].\] Here \(\lambda_{2}^{G}\) corresponds to the derived map \(\kappa\) of [5] and \(\ker(\lambda_{2}^{G})\) is isomorphic to \(\pi_{3}(SK(G,1))\), where \(SK(G,1)\) is the suspension of an Eilenberg-MacLane space \(K(G,1)\). According to [23, Proposition 2.8], the sequence \[1\to\Delta(G)\to\ker(\lambda_{2}^{G})\to H_{2}(G)\to 1\] is exact, where \(\Delta(G)=\langle g\otimes g\mid g\in G\rangle\) and \(H_{2}(G)\) is the second homology group of the group \(G\). 
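For orientation, unwinding the definition of \(\lambda_{n}^{G}\) in the first case beyond the tensor square (with the left-normed commutator convention, which is assumed throughout) gives
\[\lambda_{3}^{G}\colon(g_{1}\otimes g_{2})\otimes g_{3}\longmapsto[g_{1},g_{2},g_{3}]=[[g_{1},g_{2}],g_{3}],\qquad g_{1},g_{2},g_{3}\in G.\]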
In the present article we consider some finiteness conditions for the non-abelian tensor product of groups and related constructions. Finiteness conditions for the non-abelian tensor product and related constructions were considered in a number of papers (see [1, 2, 8, 9, 10, 11, 15, 18, 24] and references therein for further results). In [10], Donadze and Garcia-Martinez proved that if \(G\) is a finitely generated group, then the \(n\)-fold tensor product \(G^{\otimes n}\) is finitely generated if and only if the subgroup \(\gamma_{n}(G)\) is finitely generated. Here, we prove an analogue of this theorem for finitely presented groups. Theorem 1.1.: _Let \(G\) be a finitely presented group. Then, the \(n\)-fold tensor product \(G^{\otimes n}\) is finitely presented if and only if \(\gamma_{n}(G)\) is finitely presented._ It is straightforward to see that the finiteness of \(G^{\otimes n}\) does not imply the finiteness of \(G\). For instance, if \(G=C_{p^{\infty}}\) is the Prufer group, then \(G^{\otimes n}\) is trivial, for any \(n\geqslant 2\) (see [20] for more details). The same phenomenon occurs with polycyclic groups. We describe the following finiteness conditions: Theorem 1.2.: _Let \(G\) be a finitely generated group._ 1. _The_ \(n\)_-fold tensor product_ \(G^{\otimes n}\) _is finite if and only if_ \(G\) _is finite._ 2. _The_ \(n\)_-fold tensor product_ \(G^{\otimes n}\) _is polycyclic if and only if_ \(G\) _is polycyclic._ Let \(G\) and \(H\) be groups that act compatibly on each other. As usual, the _derivative_ of \(G\) under the action of \(H\), denoted by \([G,H]\), is defined to be the subgroup \(\langle g^{-1}g^{h}\;\mid\;g\in G,h\in H\rangle\) of \(G\). Similarly, the subgroup \([H,G]=\langle h^{-1}h^{g}\;\mid\;h\in H,g\in G\rangle\) of \(H\) is called the derivative of \(H\) under \(G\). By [5, Proposition 2.3 (b)], there are epimorphisms: \(\lambda\colon G\otimes H\to[G,H]\) and \(\lambda^{\prime}\colon G\otimes H\to[H,G]\) given by \(\lambda(g\otimes h)=g^{-1}g^{h}\) and \(\lambda^{\prime}(g\otimes h)=h^{-g}h\), for each \(g\in G\) and \(h\in H\). In particular, in the context of the \(n\)-fold tensor product, it is customary to write \(\lambda_{n}^{G}\) rather than \(\lambda^{\prime}\). The authors of [9] show that if \(G\) and \(H\) are finitely generated groups acting on each other compatibly, then the non-abelian tensor product \(G\otimes H\) is finitely generated if and only if the derivatives \([G,H]\) and \([H,G]\) are finitely generated. It is natural to ask whether the analogue of the previous result is true for finitely presented groups. We obtain the following related result. Theorem 1.3.: _Let \(G\) and \(H\) be finitely generated groups acting compatibly on each other. If the derivative \([G,H]\) is finitely presented, then the non-abelian tensor product \(G\otimes H\) is finitely presented._ The paper is organized as follows. In the next section we present the proofs of Theorems 1.1 and 1.2. Moreover, we show that if \(G\) is finitely presented, then \(\ker(\lambda_{n}^{G})\) and \(n\)-th nilpotent multiplier \(M_{n}(G)\) are finitely generated, for each \(n\geq 2\) (see Lemma 2.2 and Corollary 2.3, below). In the third section we prove Theorem 1.3. In the final section we present some Schur-Baer type theorems for finitely presented groups. ## 2. \(n\)-fold tensor product of groups **Convention**.: Let \(H\) be a normal subgroup of \(G\). 
Then, the following homomorphisms \(G\otimes H\to[G,H]\) and \(G\wedge H\to[G,H]\), \(g\otimes h\mapsto[g,h]\), \(g\wedge h\mapsto[g,h]\), for each \(g\in G\), \(h\in H\), will be denoted by \([\;,\;]\). Let \(M,N,P\) be groups such that we can define crossed modules \(\mu:M\to P\) and \(\nu:N\to P\). Observe that with those maps, we can define compatible actions between \(M\) and \(N\). Define the fibre product as \[M\times_{P}N=\{(m,n)\in M\times N\,|\,\mu(m)=\nu(n)\}.\] The (non-abelian) exterior product \(M\wedge^{P}N\) is obtained as follows: \[M\wedge^{P}N=\frac{M\otimes N}{\langle m\otimes n|(m,n)\in M\times_{P}N\rangle^{M \otimes N}}.\] Proposition 2.1.: _Let \(M,N,P\) groups such that we can define crossed modules \(\mu:M\to P\) and \(\nu:N\to P\)._ 1. _(Brown and Loday,_ _[_5_, Theorem 2.12]__) There is a sequence:_ \[\Gamma\big{(}M\times_{P}N/\langle M,N\rangle\big{)}\to M\otimes N\to M\wedge^ {P}N\to 1,\] _where_ \(\Gamma\) _is Whitehead's universal quadratic functor defined in_ _[_26_]_ _and_ \(\langle M,N\rangle\) _is the image of_ \(\lambda\times\lambda^{\prime}\)_._ 2. _Let_ \(G\) _be a group and_ \(n\) _a positive integer. Then_ \[\Gamma\big{(}\gamma_{n}(G)/\gamma_{n+1}(G)\big{)}\to\ker\Big{(}[\,\ ]:G\otimes \gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\to\] \[\to\ker\Big{(}[\,\ ]:G\wedge\gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\to 1.\] Proof.: (2) Adjusting the first item to our case, let \(inc:\gamma_{n}(G)\to G\) and \(Id:G\to G\), then we have that, \[\gamma_{n}(G)\times_{G}G=\{(g,h)\in\gamma_{n}(G)\times G|\ g=h\}=\gamma_{n}(G),\] and for any \(g\in\gamma_{n}(G)\), \(h\in G\), \[\lambda\times\lambda^{\prime}(g\otimes h)=([g,h],[g,h]).\] Hence, \(\langle\gamma_{n}(G),G\rangle\cong\gamma_{n+1}(G)\). Finally, \[\gamma_{n}(G)\wedge^{G}G=\frac{\gamma_{n}(G)\otimes G}{\langle g\otimes h|g=h \rangle^{\gamma_{n}(G)\otimes G}}=\gamma_{n}(G)\wedge G.\] By the previous item, we obtain the following sequence: \[\Gamma\big{(}\gamma_{n}(G)/\gamma_{n+1}(G)\big{)}\to\gamma_{n}(G)\otimes G\to \gamma_{n}(G)\wedge G\to 1\] Therefore, it follows that, \[\Gamma\big{(}\gamma_{n}(G)/\gamma_{n+1}(G)\big{)}\to\ker\Big{(}[\,\ ]:G\otimes \gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\to\] \[\to\ker\Big{(}[\,\ ]:G\wedge\gamma_{n}(G)\to\gamma_{n+1}(G) \Big{)}\to 1.\] First we will establish that if \(G\) is finitely presented, then so is the kernel \(\ker(\lambda_{n}^{G})\). Recall that \(G^{\otimes n}\) is a central extension of \(\ker(\lambda_{n}^{G})\) by \(\gamma_{n}(G)\). Lemma 2.2.: _Let \(G\) be a finitely presented group. Then the kernel \(\ker(\lambda_{n}^{G})\) is finitely generated, for each \(n\geq 2\)._ Proof.: The proof is by induction on \(n\). Assume that \(n=2\). It is well known that \(\ker\left([\,\ ]:G\wedge G\to[G,G]\right)\) is isomorphic to \(H_{2}(G)\) which is finitely generated because \(G\) is finitely presented [21, 14.1.5]. On the other hand, by [23, Proposition 2.8] we have a short exact sequence: \[1\to\Delta(G)\to\ker\left(\lambda_{2}^{G}:G\otimes G\to[G,G]\right)\to H_{2}(G) \to 1,\] where \(\Delta(G)=\langle g\otimes g\ \mid\ g\in G\rangle\). Since \(G^{ab}\) is finitely generated, \(\Delta(G^{ab})\) is also finitely generated [23, Section 3], which implies that \(\ker\left(\lambda_{2}^{G}:G\otimes G\to[G,G]\right)\) is finitely generated. Now, assuming that the lemma holds for \(\lambda_{n}^{G}\), we will prove the same for \(\lambda_{n+1}^{G}\). 
Consider the central extension of groups \[1\to\ker\left(\lambda_{n}^{G}\right)\to G^{\otimes n}\to\gamma_{n}(G)\to 1.\] By [4, Proposition 9], \[\Big{(}\ker\left(\lambda_{n}^{G}\right)\otimes G\Big{)}\to G^{\otimes n} \otimes G\to\gamma_{n}(G)\otimes G\to 1\] is an exact sequence. Denote the homomorphism \(\ker\left(\lambda_{n}^{G}\right)\otimes G\to G^{\otimes n}\otimes G\) by \(\sigma\), and its image by \(\sigma\Big{(}\ker\left(\lambda_{n}^{G}\right)\otimes G\Big{)}\). Then we have the following commutative diagram with exact rows: which yields the following exact sequence: \[\sigma\Big{(}\ker\left(\lambda_{n}^{G}\right)\otimes G\Big{)}\hookrightarrow \ker\left(\lambda_{n+1}^{G}\right)\twoheadrightarrow\ker\Big{(}[\,\ ]:\gamma_{n}(G)\otimes G\to \gamma_{n+1}(G)\Big{)}.\] We will show that the first and the last terms in this exact sequence are finitely generated. Since \(\ker\left(\lambda_{n}^{G}\right)\) is an abelian group and acts trivially on \(G\), by [14, Proposition 3.2] we have: \[\ker\left(\lambda_{n}^{G}\right)\otimes G=\ker\left(\lambda_{n}^{G}\right) \otimes_{G}I(G),\] where \(I(G)\) denotes the augmentation ideal of \(G\). Since \(G\) is finitely generated, \(I(G)\) is a finitely generated \(G\)-module. By the induction hypothesis we get that \(\ker\left(\lambda_{n}^{G}\right)\otimes_{G}G\) is a finitely generated group. Hence, \(\sigma\Big{(}\ker\Big{(}\lambda_{n}^{G}\Big{)}\otimes G\Big{)}\) is finitely generated. Moreover, by Proposition 2.1, there is an exact sequence: \[\Gamma\big{(}\gamma_{n}(G)/\gamma_{n+1}(G)\big{)}\to\ker\Big{(}[\,\ ]:G\otimes \gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\to\] \[\to\ker\Big{(}[\,\ ]:G\wedge\gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\to 1.\] Since \(\Gamma\big{(}\gamma_{n}(G)/\gamma_{n+1}(G)\big{)}\) is finitely generated, it suffices to show that \(\ker\Big{(}[\,\ ]:G\wedge\gamma_{n}(G)\to\gamma_{n+1}(G)\Big{)}\) is finitely generated. For the latter, we will use the following exact sequence (see [12, Remark 3]): \[H_{3}\big{(}G/\gamma_{n}(G)\big{)}\to\ker\Big{(}[\,\ ]:G\wedge\gamma_{n}(G)\to \gamma_{n+1}(G)\Big{)}\to H_{2}(G).\] We have already pointed out that \(H_{2}(G)\) is finitely generated. Moreover, since \(G/\gamma_{n}(G)\) is polycyclic, by [3, Corollary 5.5]\(H_{3}\big{(}G/\gamma_{n}(G)\big{)}\) is finitely generated, which completes the proof. We are now in a position to prove Theorem 1.1. Proof of Theorem 1.1.: Recall that \(G\) is finitely presented. We need to show that the \(n\)-th term of the lower central series \(\gamma_{n}(G)\) is finitely presented if and only if \(G^{\otimes n}\) is finitely presented. The subgroup \(\ker\Big{(}\lambda_{n}^{G}:G^{\otimes n}\to\gamma_{n}(G)\Big{)}\) is a central subgroup of \(G^{\otimes n}\) and, by Lemma 2.2, it is finitely presented. Consequently, the following extension \[1\to\ker\Big{(}\lambda_{n}^{G}:G^{\otimes n}\to\gamma_{n}(G)\Big{)}\to G^{ \otimes n}\to\gamma_{n}(G)\to 1,\] implies that the \(n\)-fold tensor product \(G^{\otimes n}\) is finitely presented if and only if \(\gamma_{n}(G)\) is finitely presented. Let \(G\) be a group and let \(F\) be a free group such that \(G=F/R\) for some normal subgroup \(R\) of \(F\). The \(n\)-th nilpotent multiplier \(M_{n}(G)\) is defined by \[M_{n}(G)=\frac{R\cap\gamma_{n}(F)}{\gamma_{n}(R,F)},\] where \(\gamma_{1}(R,F)=R\) and \(\gamma_{k+1}(R,F)=[\gamma_{k}(R,F),F]\). In particular, \(M_{1}(G)\cong H_{2}(G)\) is the Schur multiplier of \(G\). 
According to Hall [21, 14.1.5] the Schur multiplier of a finitely presented group \(G\), \(M_{1}(G)\), is finitely generated. We obtain the following related result. Corollary 2.3.: _Let \(G\) be a finitely presented group. Then the \(n\)-th nilpotent multiplier \(M_{n}(G)\) is finitely generated for all \(n\geq 1\)._ Proof.: If \(n=1\), then \(M_{1}(G)\) is finitely generated [21, 14.1.5]. Now, we can assume \(n\geq 2\). By [6], \(M_{n}(G)\) is an epimorphic image of \(\ker\left(\lambda_{n+1}^{G}:G^{\otimes n+1}\to\gamma_{n+1}(G)\right)\). By Lemma 2.2, \(M_{n}(G)\) is (abelian) finitely generated. Let \(\nu(G)\) be the group defined in [22] as \[\nu(G):=\langle G\cup G^{\varphi}\ |\ [g_{1},{g_{2}}^{\varphi}]^{g_{3}}=[{g_{1}} ^{g_{3}},({g_{2}}^{g_{3}})^{\varphi}]=[g_{1},{g_{2}}^{\varphi}]^{g_{3}},\ \ g_{i}\in G\rangle.\] The motivation for studying \(\nu(G)\) is the commutator connection: indeed, the map \(\Phi:G\otimes G\to[G,G^{\varphi}]\), defined by \(g\otimes h\mapsto[g,h^{\varphi}]\), for all \(g,h\in G\), is an isomorphism [22, Proposition 2.6] (see also Ellis and Leonard [13])). The following result is an immediate consequence of Lemma 2.2. Corollary 2.4.: _Let \(G\) be a finitely presented group. Assume that the derived subgroup \(G^{\prime}\) is finitely presented. Then \(\nu(G)\) and \(\nu(G)^{\prime}\) are finitely presented._ Proof.: By [23], there are short exact sequences: \[1\to\ker(\lambda_{2}^{G})\to\nu(G)^{\prime}\to G^{\prime}\times G^{\prime}\times G ^{\prime}\to 1\] and \[1\to G\otimes G\to\nu(G)\to G\times G\to 1.\] Applying Lemma 2.2 and Theorem 1.1 with \(n=2\), we obtain that \(\ker(\lambda_{2}^{G})\) and \(G\otimes G\) are finitely presented. Consequently, \(\nu(G)^{\prime}\) and \(\nu(G)\) are finitely presented. Remark 2.5.: _It is worthy to mention that part of the previous result is already known. More precisely, Kochloukova and Sidki proved that \(G\) is finitely presented if and only if \(\nu(G)\) is finitely presented [15, Theorem E]._ Now we will deal with Theorem 1.2: _Let \(G\) be a finitely generated._ 1. _The_ \(n\)_-fold tensor product_ \(G^{\otimes n}\) _is finite if and only if_ \(G\) _is finite._ 2. _The_ \(n\)_-fold tensor product_ \(G^{\otimes n}\) _is polycyclic if and only if_ \(G\) _is polycyclic._ Proof of Theorem 1.2.: First we consider the item (1). Assume that \(G\) is finite. Combining [11] and [4, Proposition 5], we deduce that \(G^{\otimes n}\) is finite. Conversely, assume that \(G^{\otimes n}\) is finite. First we prove that \(G^{ab}\) is finite. As \(G\) is finitely generated, we deduce that \(G^{ab}\) is finitely generated and \(G^{ab}\cong T\times F,\) where \(T\) is the torsion part and \(F\) the free part of \(G^{ab}\). In particular, \(T\) is finite. There exists an epimorphism \(G^{\otimes n}\to(G^{ab})^{\otimes n}\). Moreover, by [5], \((G^{ab})^{\otimes n}\cong(G^{ab})^{\otimes_{\mathbb{Z}}n}\), where "\(\otimes_{\mathbb{Z}}\)" denotes the usual tensor product of \(\mathbb{Z}\)-modules. Now, if \(F\) is infinite, then so is \((G^{ab})^{\otimes_{\mathbb{Z}}n}\), which implies that \(G^{\otimes n}\) is infinite too, a contradiction. Now, we need to show that the derived subgroup \(G^{\prime}\) is finite. Since \(\lambda_{n}^{G}\colon G^{\otimes n}\to\gamma_{n}(G)\) is an epimorphism, we deduce that \(\gamma_{n}(G)\) is finite. Consequently, it suffices to prove that if \(\gamma_{k}(G)\) is finite, then so is \(\gamma_{k-1}(G)\), where \(k\) is a positive integer with \(n\leq k<2\). 
To prove that \(\gamma_{k}(G)\) is finite is enough to show that \(\gamma_{k-1}(G)/\gamma_{k}(G)\) is finite. Since \(\gamma_{k}(G)/\gamma_{k+1}(G)\) is a finitely generated abelian group, is sufficient to show that \[[g_{1},g_{2},\ldots,g_{k-1}^{m}]\gamma_{k}(G) = [g_{1},g_{2},\ldots,g_{k-1}]^{m}\gamma_{k}(G),\] where \(m\) is a positive integer and \(g_{1},g_{2},\ldots g_{k-1}\in G\). To this end, we proceed by induction on \(m\). The formula is obvious if \(m=1\). Assume the formula is valid up to \(m-1\); we will prove it for \(m\). \[[g_{1},\ldots,g_{k-1}^{m}] = [g_{1},\ldots,g_{k-1}][g_{1},\ldots,g_{k-1}^{m-1}]^{g_{k-1}}\] \[= [g_{1},\ldots,g_{k-1}][g_{1},\ldots,g_{k-1}^{m-1}][g_{1},\ldots, g_{k-1}^{m-1},g_{k-1}]\] \[\equiv [g_{1},\ldots,g_{k-1}]^{m}\mod\gamma_{k}(G)\] Let \(|G^{ab}|=m\), then any \(m\) power of an element of \(G\) is an element of \(G^{\prime}\), so we have that the commutator \([g_{1},\ldots,g_{k-1}]\gamma_{k}(G)\) has order dividing \(m\). We conclude that \(G^{\prime}\) is finite, which completes the proof. Now we deal with item (2). Assume that \(G\) is polycyclic. According to Moravec's theorem [18], we deduce that the \(n\)-fold tensor product \(G^{\otimes n}\) is polycyclic. Conversely, assume that \(G^{\otimes n}\) is polycyclic. Therefore, \(\gamma_{n}(G)\) is polycyclic. Since polycyclic is a class closed to extension, it suffices to prove that \(G/\gamma_{n}(G)\) is polycyclic. Note that \(G/\gamma_{n}(G)\) is nilpotent finitely generated, in particular, polycyclic. Remark 2.6.: _It is worthy to mention that the case \(n=2\) in Theorem 1.2 (1) was considered in [20, Theorem 3.1]._ ## 3. On the non-abelian tensor product of finitely presented group In this section, we consider the influence of specific derivative subgroup in the structure of the non-abelian tensor product. This approach is mainly motivated by the following works: Brown, Johnson and Robertson [4, Section 8], Nakaoka [19], Visscher [25] and Donadze, Ladra and Thomas [9, Section 3]. Proof of Theorem 1.3.: Recall that \(G\) and \(H\) are finitely generated groups acting compatibly on each other, and the derivative \([G,H]\) is finitely presented. We need to prove that the non-abelian tensor product \(G\otimes H\) is finitely presented. By [5, Propostion 2.3 (b) and (d)], the non-abelian tensor product \(G\otimes H\) is a central extension of \(\ker(\lambda)\) by \([G,H]\), where \(\lambda\colon G\otimes H\to[G,H]\), \(g\otimes h\mapsto g^{-1}g^{h}\), is an epimorphism. According to [9, Proposition 5.1], we deduce that \(G\otimes H\) is finitely generated. Thus, it suffices to prove that \(\ker(\lambda)\) is finitely generated. As \(G\otimes H\) is finitely generated, we have an epimorphism \(p\colon F\to G\otimes H\), where \(F\) is a free group of finite ranking. Define \(\lambda^{\prime}\colon F\to[G,H]\) as the composition \(\lambda\circ p\). Then, from the following commutative diagram: we get an epimorphism \(\ker(\lambda^{\prime})\to\ker(\lambda)\) induced by \(p\colon F\to G\otimes H\). Moreover, as \(\ker(\lambda)\leq Z(G\otimes H)\), we have an epimorphism \[\frac{ker(\lambda^{\prime})}{[ker(\lambda^{\prime}),F]}\to\frac{\ker( \lambda)}{[ker(\lambda),G\otimes H]}=\ker(\lambda).\] Set \(K=\ker(\lambda^{\prime})\). Now, it remains to show that \(K/(K\cap F^{\prime})\) and \((K\cap F^{\prime})/([K,F])\) are finitely generated. 
Consider the following lattice of subgroups: Note that \(K/(K\cap F^{\prime})\) is isomorphic to a subgroup of the abelianization \(F^{ab}\) and so, finitely generated. Moreover, by [21, 11.4.15], \((K\cap F^{\prime})/([K,F])\) is isomorphic to the Schur multiplier \(M([G,H])\). According to Hall's theorem, the Schur multiplier \(M([G,H])\) is finitely generated [21, 14.1.5]. Consequently, \(\ker(\lambda)\) is finitely presented, the proof is complete. Remark 3.1.: _It is well-known that the finite presentability of the groups \(G\) and \(H\) does not imply the finite presentability of the non-abelian tensor product \(G\otimes H\). For instance, by [4, Proposition 6], if \(F\) is a free group of rank \(n\geq 2\), then the non-abelian tensor square \(F\otimes F\cong F^{\prime}\times\mathbb{Z}^{\frac{n(n+1)}{2}},\) where \(F^{\prime}\) is a free group of countably infinite rank._ Theorem 3.2.: _Let \(G\) and \(H\) be groups acting compatibly on each other. Assume that \(G\) is finite and \(H\) finitely generated. Then the non-abelian tensor product \(G\otimes H\) is finitely presented. Moreover, every subgroup of \(G\otimes H\) is finitely presented._ Proof.: By Theorem 1.3, the non-abelian tensor product \(G\otimes H\) is finitely presented. Now, we will show that every subgroup of \(G\otimes H\) is finitely presented. Indeed, consider the following exact sequence: \[1\to\ker(\lambda)\to G\otimes H\to[G,H]\to 1,\] where \([G,H]\leq G\) is the derivative of \(G\) under the action of \(H\). As \(G\otimes H\) is finitely generated, \(\ker(\lambda)\leq Z(G\otimes H)\) and the index \([G\otimes H:\ker(\lambda)]\leq||G,H]|<\infty\), we have the kernel \(\ker(\lambda)\) is finitely generated. Therefore \(G\otimes H\) is polycyclic-by-finite and the result follows. ## 4. Schur-Baer type theorem for finitely presented groups In this section, we will use the approach of [8] and [10] to prove a Schur-Baer type theorem for finitely presented groups. Recall that the group class \(\mathfrak{X}\) is called a Schur class if for any group \(G\) such that the factor \(G/Z(G)\) belongs to \(\mathfrak{X}\), also the derived subgroup \(G^{\prime}\) of \(G\) belongs to \(\mathfrak{X}\). Thus the famous Schur's theorem just states that finite groups form a Schur class [21, 10.1.4]. Schur's theorem admits a generalization to higher terms of the lower central series. More precisely, if \(G/Z_{i}(G)\) is finite, then \(\gamma_{i+1}(G)\) is finite (Baer, [21, 14.5.1]). In particular, the case \(i=1\) is, exactly, Schur's theorem. For more details see [21] and the references given there. Recall that an exact sequence of groups \[1\to N\to H\to G\to 1\] is called \(n\)-central if \(N\leq Z_{n}(H)\). The next lemma is taken from [10]. Lemma 4.1.: _Let \(1\to N\to H\to G\to 1\) be a \(n\)-central extension for a fixed positive integer \(n\). Then, there exists an epimorphism \(\tau\colon G^{\otimes n+1}\to\gamma_{n+1}(H)\) making the following diagram commutative_ Theorem 4.2.: _Let \(G\) be a finitely presented group and \(n\geq 1\). Then, the following are equivalent:_ _(i) \(\gamma_{n+1}(G)\) is finitely presented,_ _(ii) for arbitrary \(n\)-central extension of groups \(1\to N\to H\to G\to 1\), \(\gamma_{n+1}(H)\) is finitely presented._ Proof.: Let \(H\) be as in the theorem. 
By Lemma 4.1 there exists an epimorphism \(\tau:G^{\otimes n+1}\to\gamma_{n+1}(H)\) such that the following diagram commutes: From this diagram we get an epimorphism: \[\ker\left(\lambda_{n+1}^{G}:G^{\otimes n+1}\to\gamma_{n+1}(G)\right)\to\ker \left(\gamma_{n+1}(H)\to\gamma_{n+1}(G)\right)\!.\] Since \(\ker\left(\lambda_{n+1}^{G}\right)\) is an abelian group, we get that \(\ker\left(\gamma_{n+1}(H)\to\gamma_{n+1}(G)\right)\) is finitely presented by Lemma 2.2. Consequently, the extension of groups \[1\to\ker\left(\gamma_{n+1}(H)\to\gamma_{n+1}(G)\right)\to\gamma_{n+1}(H)\to \gamma_{n+1}(G)\to 1\] implies that \(\gamma_{n+1}(H)\) is finitely presented if and only if \(\gamma_{n+1}(G)\) is finitely presented. Proposition 4.3.: _Let \(\mathfrak{X}\) be one of the following classes of groups:_ 1. _the class of perfect finitely presented groups;_ 2. _the class of nilpotent finitely presented groups;_ _ 3. _the class of finitely presented groups in which every subgroup is finitely presented._ _Let \(G\in\mathfrak{X}\). Then, for each \(n\)-central extension of groups \(1\to N\to H\to G\to 1\), \(n\geq 1\), we have \(\gamma_{n+1}(H)\in\mathfrak{X}\)._ Proof.: We denote by \(\tau:G^{\otimes n+1}\to\gamma_{n+1}(H)\) the first vertical homomorphism in the diagram given in Lemma 4.1. (i): Since \(\gamma_{n+1}(G)=G\), by Theorem 4.2\(\gamma_{n+1}(H)\) is finitely presented. On the other hand, given two groups \(M\) and \(N\) acting on each other compatibly, we have the following relation (see [5, Proposition 2.3]): \[[m\otimes n,m^{\prime}\otimes n^{\prime}]=(m^{n}m^{-1})\otimes(^{m^{\prime}}n ^{\prime}n^{\prime-1}),\] for each \(m,m^{\prime}\in M\) and \(n,n^{\prime}\in N\). This implies that \(G^{\otimes n+1}\) is a perfect group, because \(G\) is perfect. Since \(\gamma_{n+1}(H)=Im(\tau)\), we get that \(\gamma_{n+1}(H)\in\mathfrak{X}\). (ii): By [25, Theorem 3.4 (i)], the \(n\)-fold tensor product \(G^{\otimes n+1}\) is nilpotent. Hence, \(\gamma_{n+1}(H)\) is nilpotent. Moreover, since \(G\) is finitely presented and nilpotent, \(\gamma_{k}(G)\) will be finitely presented for all \(k\geq 1\). Thus, by Theorem 4.2\(\gamma_{n+1}(H)\) is a finitely presented group. (iii): Let \(H_{1}\) be a subgroup of \(\gamma_{n+1}(H)\). We have to show that \(H_{1}\) is finitely presented. Let \(\delta:H\to G\) denote the epimorphism given in the proposition. Since \(\delta(H_{1})\) is a subgroup of \(G\), it is a finitely presented group. Therefore, it suffices to show that \(\ker\left(\delta|_{H_{1}}:H_{1}\to\delta(H_{1})\right)\) is finitely presented. We have the following commutative diagram: This implies an epimorphism \(\ker\left(\lambda_{n+1}^{G}|_{\tau^{-1}(H_{1})}\right)\to\ker\left(\delta|_{H _{1}}\right)\). Since \(\ker\left(\lambda_{n+1}^{G}|_{\tau^{-1}(H_{1})}\right)\subseteq\ker\left( \lambda_{n+1}^{G}\right)\), by Lemma 2.2\(\ker\left(\lambda_{n+1}^{G}|_{\tau^{-1}(H_{1})}\right)\) is a finitely generated abelian group. Hence, \(\ker\left(\delta|_{H_{1}}\right)\) is finitely presented. We are done with the proof. Remark 4.4.: _Let \(\mathfrak{X}\) be as in Proposition 4.3. Then \(\mathfrak{X}\) is a Schur class._ Corollary 4.5.: _Let \(\mathfrak{X}\) be a class of groups defined by_ \[\mathfrak{X}=\{G\:|\:G\:\text{is nilpotent-by-finite and finitely presented}\}.\] 1. _For arbitrary two groups_ \(G,H\in\mathfrak{X}\) _acting on each other compatibly, we have_ \(G\otimes H\in\mathfrak{X}\)_._ 2. 
_Then_ \(\mathfrak{X}\) _is a Schur class._ Proof.: (1) By [9, Corollary 3.6 (ii)], the non-abelian tensor product of two nilpotent-by-finite groups is nilpotent-by-finite. Now, it suffices to show that \(G\otimes H\) is finitely presented for each \(G,H\in\mathfrak{X}\). Since \(G\) is finitely generated nilpotent-by-finite, it follows that \(G\) and \([G,H]\) are polycyclic-by-finite and so, finitely presented. By Theorem 1.3, the non-abelian tensor product \(G\otimes H\) is finitely presented. (2) This follows from Lemma 4.1 and Theorem 4.2. ## Acknowledgements The authors are very grateful to Guram Donadze and Norai Rocco for the interesting discussions and suggestions on the best approach to these results. The work of the first author was supported by FAPDF and CNPq-Brazil. The second author was supported by CAPES-Brazil.
2303.16713
Maximal operator in Hölder spaces
We study the maximal operator on the variable exponent H\"older spaces in the setting of metric measure spaces. The boundedness is proven for metric measure spaces satisfying an annular decay property. Let us stress that there are no assumptions on the regularity of the variable exponent and the variable exponent can touch values $0$ and $1$. Furthermore, the continuity of the maximal operator between H\"older spaces is investigated. Those results are new even in the Euclidean setting.
Piotr Michał Bies, Michał Gaczkowski, Przemysław Górka
2023-03-29T14:13:21Z
http://arxiv.org/abs/2303.16713v1
# Maximal operator in Holder spaces ###### Abstract. We study the maximal operator on the variable exponent Holder spaces in the setting of metric measure spaces. The boundedness is proven for metric measure spaces satisfying an annular decay property. Let us stress that there are no assumptions on the regularity of the variable exponent and the variable exponent can touch values \(0\) and \(1\). Furthermore, the continuity of the maximal operator between Holder spaces is investigated. Those results are new even in the Euclidean setting. **Keywords**: metric measure spaces, annular decay property, maximal operator, variable exponent Holder spaces _Mathematics Subject Classification (2010):_ 42B25; 30L99 ## 1. Introduction The Hardy-Littlewood maximal operator \(M\) plays a very important role in the theory of function spaces. The boundedness of \(M\) in various types of function spaces is a central issue. It is well known that for \(1<p\leq\infty\), the maximal operator is bounded on \(L^{p}(X,d,\mu)\), where \((X,d,\mu)\) is a doubling metric measure space (see e.g. [6]). On the other hand, the maximal operator has been also studied in different function spaces, e.g.: Banach function spaces [11], Sobolev spaces [7], Lebesgue spaces with variable exponent [2], generalized Orlicz spaces [5]. Furthermore, Buckley proved [1] that \(M\) is bounded in Holder spaces \(C^{0,s}(X)\), where \((X,d,\mu)\) satisfies the \(\delta\)-annular decay property and \(\mu\) is doubling. More recently, Gorka [4] proved that the maximal operator is bounded in the space of continuous functions \(C(X)\), if \((X,d,\mu)\) satisfies the \(\delta\)-annular property. On the other hand, if no annular decay property is assumed, then \(Mf\) can fail to be continuous, even if \(f\in C^{0,1}(X)\) (see Example 1.4. in [1]). The main objective of the paper is to study the maximal operator in the variable exponent Holder spaces \(C^{0,\alpha(\cdot)}(X)\), where \((X,d,\mu)\) satisfies the \(\delta\)-annular property.1 We shall prove boundedness of maximal operator in \(C^{0,\alpha(\cdot)}(X)\). Let us stress that in our result there are no assumptions on the regularity of the variable exponent and the variable exponent can touch values \(0\) and \(1\). In particular, we do not assume log-Holder continuity of the exponent, which in the theory of variable exponent spaces is a commonly used assumption on the exponent. Moreover, the second main objective is the investigation of the continuity of the maximal operator between \(C^{0,\alpha(\cdot)}(X)\) and \(C^{0,\beta(\cdot)}(X)\). We prove that \(M\) is continuous if \(\sup_{x\in X}\beta(x)/\alpha(x)<1\). On the other hand, we show that \(M\) is discontinuous on \(C^{0,1}(\mathbb{R})\). This result is rather surprising if we take a look at Luiro's paper [9] about continuity of the maximal operator in Sobolev spaces. Let us make it clear that those results are new even in the Euclidean setting. The remainder of the paper is structured as follows. In Section 2, we introduce the notations and recall the definitions. Our first principal assertion, concerning the boundedness of the maximal operator in the variable exponent Holder spaces, is formulated and proven in Section 3. The last section is devoted to study the continuity of the maximal operator between Holder spaces. ## 2. Preliminaries Let \((X,d,\mu)\) be a metric measure space equipped with a metric \(d\) and the Borel measure \(\mu\). 
We assume that the measure of every open nonempty set is positive and that the measure of every bounded set is finite. We shall denote the average of an integrable function \(f\) over the measurable set \(A\), such that \(0<\mu(A)<\infty\), in the following manner \[\fint_{A}fd\mu=\frac{1}{\mu(A)}\int_{A}f\,d\mu.\] Let \(f:X\to\mathbb{R}\) be a locally integrable function, then the maximal function\(Mf\) is defined as follows \[Mf(x)=\sup_{r>0}\fint_{B(x,r)}|f|\,d\mu.\] Next, we recall the definition of annular decay property [1]. Given \(\delta\in(0,1]\), we say that the space \((X,d,\mu)\) satisfies the \(\delta\)-annular decay property if there exists a constant \(K\geq 1\) such that for all \(x\in X\), \(r>0\), \(0<\epsilon<1\), we have \[\mu\left(B(x,r)\setminus B(x,r(1-\epsilon))\right)\leq K\epsilon^{\delta}\mu (B(x,r)).\] Let \((X,d)\) be a metric space, by \(C(X)\) we denote the space of continuous functions on \(X\) such that the norm \[\|f\|_{C(X)}=\sup_{x\in X}|f(x)|\] is finite. Moreover, for \(\alpha:X\to[0,1]\) we denote by \(C^{0,\alpha(\cdot)}(X)\) the variable exponent Holder space, i.e. the space of \(f\in C(X)\) such that \[\|f\|_{C^{0,\alpha(\cdot)}(X)}:=\|f\|_{C(X)}+\sup_{x\neq y}\frac{|f(x)-f(y)|}{ d^{\alpha(x)}(x,y)}<\infty.\] ## 3. Boudedness of the maximal operator **Theorem 3.1**.: _Suppose that \(0<\delta\leq 1\), and that \((X,d,\mu)\) satisfies the \(\delta\)-annular property. If \(\alpha:X\to[0,\delta]\), then \(M:C^{0,\alpha(\cdot)}(X)\to C^{0,\alpha(\cdot)}(X)\) and there exists \(C_{1}>0\) such, that for \(f\in C^{0,\alpha(\cdot)}(X)\) the following estimate holds_ \[\|Mf\|_{C^{0,\alpha(\cdot)}(X)}\leq C_{1}\|f\|_{C^{0,\alpha(\cdot)}(X)}.\] Proof.: Let us fix \(f\in C^{0,\alpha(\cdot)}(X)\) such, that \(\|f\|_{C^{0,\alpha(\cdot)}(X)}=1\). By Theorem A from [4] we have that \(Mf\in C(X)\) and the following inequality holds \[\|Mf\|_{C(X)}\leq\|f\|_{C(X)}.\] Therefore, in order to prove Theorem 3.1, we need to show that for every \(x,y\in X\) such that \(x\neq y\) \[\frac{|Mf(x)-Mf(y)|}{d^{\alpha(x)}(x,y)}\leq C_{1}, \tag{1}\] where \(C_{1}=\max\left\{7,1+12K2^{\delta}\right\}\). Let us fix two distinct points \(x,y\in X\) and define \(a=d(x,y)\). If \(a>1\), then \[|Mf(x)-Mf(y)|\leq 2\|f\|_{C(X)}\leq 2\|f\|_{C(X)}a^{\alpha(x)},\] and (1) holds. Therefore, we can assume that \(0<a\leq 1\). Let us observe that (1) follows from the following inequality \[Mf(y)\geq Mf(x)-C_{1}\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}. \tag{2}\] Indeed, if (2) holds, then \[Mf(x)-Mf(y)\leq C_{1}\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\},\] and \[Mf(y)-Mf(x)\leq C_{1}\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}.\] Therefore, gathering the above inequalities we get (1). Hence, in order to finish the proof we need to show (2). We shall give the proof of (2) in two cases: \(\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}=a^{\alpha(x)}\), \(\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}=a^{\alpha(y)}\). **Case 1**\(\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}=a^{\alpha(x)}\). By the very definition of \(Mf(x)\), we choose \(r>0\) such that \[Mf(x)\leq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}|f|\,d\mu+a^{\alpha(x)}. \tag{3}\] We shall consider two subcases. **Subcase 1.1**\(r\leq a\). For \(z\in B(x,r)\cup B(y,r)\) we have \[|f(x)-f(z)|\leq 2a^{\alpha(x)}. 
\tag{4}\] Indeed, if \(z\in B(x,r)\), then \[|f(x)-f(z)|\leq d^{\alpha(x)}(x,z)\leq a^{\alpha(x)}\leq 2a^{\alpha(x)},\] and if \(z\in B(y,r)\), then \[|f(x)-f(z)|\leq d^{\alpha(x)}(x,z)\leq d^{\alpha(x)}(x,y)+d^{\alpha(x)}(y,z)< a^{\alpha(x)}+r^{\alpha(x)}\leq 2a^{\alpha(x)}.\] Thus, from inequality (4) we obtain \[\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}|f|\,d\mu-\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(y,r)}|f|\,d\mu =\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}\left(|f|-|f(x)|\right)\,d\mu- \mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(y,r)}\left(|f|-|f(x)|\right)\,d\mu\] \[\leq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}||f|-|f(x)||\,d\mu+\mathchoice{{ \vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(y,r)}||f|-|f(x)||\,d\mu\] \[\leq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}|f-f(x)|\,\,d\mu\leq 4a^{\alpha(x)}.\] Next, combining the above inequality with (3) we get \[Mf(y)\geq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(y,r)}|f|\,d\mu\geq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-12.149815pt}}{{\vbox{\hbox{$-$ }}\kern-9.899849pt}}{{\vbox{\hbox{$-$ }}\kern-8.999863pt}}\!\int_{B(x,r)}|f|\,d\mu-4a^{\alpha(x)}\geq Mf(x)-5a^{ \alpha(x)}.\] Hence, inequality (2) follows. **Subcase 1.2**\(r>a\). Let us introduce two sets \[S:=\left\{g\in L^{1}(B(x,r+2a))\left|\,\fint_{B(x,r)}|g|\,d\mu-\fint_{B(y,r+a)}|g| \,d\mu\leq 6K2^{\delta}a^{\alpha(x)}\right.\right\},\] and \[F:=\left\{g\in L^{1}(B(x,r+2a))\left|\,\|g\|_{L^{\infty}(B(x,r+2a))}\leq A \right.\right\},\] where \(A=A(x,r):=\min\left\{1,2(3r)^{\alpha(x)}\right\}\). We shall divide this part of the proof into three steps. **Step 1.1** If \(f\in S\), then (2) holds. Indeed, if \(f\in S\), then by the very definition of the maximal function we have \[Mf(y) \geq\fint_{B(y,r+a)}|f|\,d\mu\] \[\geq\fint_{B(x,r)}|f|\,d\mu-6K2^{\delta}a^{\alpha(x)}\] \[\geq Mf(x)-\left(1+6K2^{\delta}\right)a^{\alpha(x)},\] and (2) holds. **Step 2.1**\(F\subset S\). First, let us observe that due to the \(\delta\) -annular decay property we have \[\mu\left(B(x,r+2a)\setminus B(x,r)\right)\leq K\left(\frac{2a}{r+2a}\right)^{ \delta}\mu\left(B(x,r+2a)\right). 
\tag{5}\] Hence, if \(g\in F\) then \[\fint_{B(x,r)}|g|\,d\mu-\fint_{B(y,r+a)}|g|\,d\mu \leq\frac{1}{\mu(B(x,r))}\int_{B(x,r)}|g|\,d\mu-\frac{1}{\mu(B(y,r +a))}\int_{B(x,r)}|g|\,d\mu\] \[\leq\frac{\mu(B(y,r+a))-\mu(B(x,r))}{\mu(B(x,r))\mu(B(y,r+a))}A \mu(B(x,r))\] \[\leq\frac{\mu(B(x,r+2a))-\mu(B(x,r))}{\mu(B(x,r+2a))}A\leq 2K \left(\frac{2a}{r+2a}\right)^{\delta}(3r)^{\alpha(x)}\] \[\leq 6K\left(\frac{2a}{r+2a}\right)^{\delta}\left(\frac{r}{a} \right)^{\alpha(x)}a^{\alpha(x)}\leq 6K\left(\frac{2a}{r+2a}\right)^{\delta} \left(\frac{r}{a}\right)^{\delta}a^{\alpha(x)}\] \[=6K\left(\frac{2ar}{ar+2a^{2}}\right)^{\delta}a^{\alpha(x)}\leq 6 K2^{\delta}a^{\alpha(x)}.\] Thus, we get \(g\in S\). **Step 3.1**\(f\in S\). If \(\alpha(x)=0\) or \(r\geq\frac{1}{3}2^{-1/\alpha(x)}\) with \(\alpha(x)>0\), then \(A=1\) and since \(\|f\|_{C^{0,\alpha(\cdot)}(X)}=1\), we have \(f\in F\). Hence, by Step 2.1 we get \(f\in S\). We are left with the case \(\alpha(x)>0\) with \(r\in\left(a,\frac{1}{3}2^{-1/\alpha(x)}\right)\). Let us introduce two auxiliary functions: \[f_{1}=f-\sup_{z\in B(x,r+2a)}f(z),\] \[f_{2}=f-\inf_{z\in B(x,r+2a)}f(z).\] We claim that \(f_{1},f_{2}\in S.\) Indeed, for any \(\epsilon>0\) we find \(z_{1},z_{2}\in B(x,r+2a)\) such, that \(f_{1}(z_{1})\geq-\epsilon\) and \(f_{2}(z_{2})\leq\epsilon.\) Then, for \(z\in B(x,r+2a)\) \[|f_{1}(z)| \leq|f_{1}(z)-f_{1}(x)|+|f_{1}(x)-f_{1}(z_{1})|+|f_{1}(z_{1})|\] \[\leq d(z,x)^{\alpha(x)}+d(z_{1},x)^{\alpha(x)}+\epsilon\] \[\leq 2\left(r+2a\right)^{\alpha(x)}+\epsilon\] \[\leq 2\left(3r\right)^{\alpha(x)}+\epsilon,\] and in the same fashion we have \[|f_{2}(z)| \leq|f_{2}(z)-f_{2}(x)|+|f_{2}(x)-f_{2}(z_{2})|+|f_{2}(z_{2})| \leq d(z,x)^{\alpha(x)}+d(z_{2},x)^{\alpha(x)}+\epsilon\] \[\leq 2\left(r+2a\right)^{\alpha(x)}+\epsilon\leq 2\left(3r\right)^{ \alpha(x)}+\epsilon.\] Hence, \[\|f_{1}\|_{L^{\infty}(B(x,r+2a))}\leq 2\left(3r\right)^{\alpha(x)}+\epsilon,\, \|f_{2}\|_{L^{\infty}(B(x,r+2a))}\leq 2\left(3r\right)^{\alpha(x)}+\epsilon.\] Thus, we pass to the limit \(\epsilon\to 0^{+}\) and since \(A(x,r)=2\left(3r\right)^{\alpha(x)}\) we get \(f_{1},f_{2}\in F\subset S\). Let us observe that for \(z\in B(x,r+2a)\) \[-A\leq f_{1}(z)\leq 0\quad\text{and}\quad 0\leq f_{2}(z)\leq A.\] Therefore, if \(f(z_{0})\geq A\) for some \(z_{0}\in B(x,r+2a)\), then for any \(z\in B(x,r+2a)\) \[f(z)=f(z_{0})+f(z)-f(z_{0})\geq A+f_{1}(z)\geq A-A=0.\] Hence, since \(f_{2}\in S\), we get \[\fint_{B(x,r)}|f|\,d\mu-\fint_{B(y,r+a)}|f|\,d\mu =\fint_{B(x,r)}f\,d\mu-\fint_{B(y,r+a)}f\,d\mu\] \[=\fint_{B(x,r)}f_{2}\,d\mu-\fint_{B(y,r+a)}f_{2}\,d\mu\] \[=\fint_{B(x,r)}|f_{2}|\,d\mu-\fint_{B(y,r+a)}|f_{2}|\,d\mu\] \[\leq 6K2^{\delta}a^{\alpha(x)},\] and we have \(f\in S\). Similarly, if \(f(z_{0})\leq-A\) for some \(z_{0}\in B(x,r+2a)\), then for any \(z\in B(x,r+2a)\) \[f(z)\leq 0,\] and we obtain \[\fint_{B(x,r)}|f|\,d\mu-\fint_{B(y,r+a)}|f|\,d\mu =-\fint_{B(x,r)}f\,d\mu+\fint_{B(y,r+a)}f\,d\mu\] \[\leq 6K2^{\delta}a^{\alpha(x)}.\] Finally, if for every \(z\in B(x,r+2a)\) we have \(-A\leq f(z)\leq A\), then \(f\in F\) and therefore \(f\in S\). **Case 2**\(\min\left\{a^{\alpha(x)},a^{\alpha(y)}\right\}=a^{\alpha(y)}\). The structure of the proof, in this case, is similar to Case 1. Nevertheless, for the convenience of the reader and clarity of the proof, we give the full argumentation. 
Let us fix \(r>0\) such that \[Mf(x)\leq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$ -$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(x,r)}|f|\,d\mu+a^{\alpha(y)}. \tag{6}\] **Subcase 2.1**\(r\leq a\). For \(z\in B(x,r)\cup B(y,r)\) we have \[|f(x)-f(z)|\leq 3a^{\alpha(y)}. \tag{7}\] Indeed, if \(z\in B(x,r)\), then \[|f(x)-f(z)| \leq|f(x)-f(y)|+|f(y)-f(z)|\leq d^{\alpha(y)}(x,y)+d^{\alpha(y)}( y,z)\] \[\leq 2d^{\alpha(y)}(x,y)+d^{\alpha(y)}(x,z)<2a^{\alpha(y)}+r^{ \alpha(y)}\leq 3a^{\alpha(y)},\] and if \(z\in B(y,r)\), then \[|f(x)-f(z)|\leq|f(x)-f(y)|+|f(y)-f(z)|\leq d^{\alpha(y)}(x,y)+d^{\alpha(y)}(y, z)<2a^{\alpha(y)}.\] Next, by inequality (7) we get \[\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$ -$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(x,r)}|f|\,d\mu-\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(y,r)}|f|\,d\mu\leq 6a^{\alpha(y)},\] and gathering the above inequality with (6) we have \[Mf(y)\geq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(y,r)}|f|\,d\mu\geq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(x,r)}|f|\,d\mu-6a^{\alpha(y)}\geq Mf(x)-7a^{ \alpha(y)}.\] Therefore, inequality (2) follows. **Subcase 2.2**\(r>a\). We introduce two sets \[\tilde{S}:=\left\{g\in L^{1}(B(x,r+2a))\left|\,\mathchoice{{\vbox{\hbox{$ -$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(x,r)}|g|\,d\mu-\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(y,r+a)}|g|\,d\mu\leq 12K2^{\delta}a^{\alpha(y)} \right\},\] and \[\tilde{F}:=\left\{g\in L^{1}(B(x,r+2a))\left|\,\|g\|_{L^{\infty}(B(x,r+2a))} \leq A\right\},\] where \(A=A(y,r):=\min\left\{1,4(3r)^{\alpha(y)}\right\}\). **Step 1.2** If \(f\in\tilde{S}\), then (2) holds. Indeed, if \(f\in\tilde{S}\), then \[Mf(y) \geq\mathchoice{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}{{\vbox{\hbox{$-$ }}\kern-13.499794pt}}\!\int_{B(x,r)}|f|\,d\mu-12K2^{\delta}a^{\alpha(y)}\] \[\geq Mf(x)-\left(1+12K2^{\delta}\right)a^{\alpha(y)}.\] **Step 2.2**\(\tilde{F}\subset\tilde{S}\). Let \(g\in\tilde{F}\) then, inequality (5) yields \[\int_{B(x,r)}|g|\,d\mu-\int_{B(y,r+a)}|g|\,d\mu \leq 4K\left(\frac{2a}{r+2a}\right)^{\delta}(3r)^{\alpha(y)}\] \[\leq 12K\left(\frac{2a}{r+2a}\right)^{\delta}\left(\frac{r}{a} \right)^{\delta}a^{\alpha(y)}\] \[\leq 12K2^{\delta}a^{\alpha(y)}.\] **Step 3.2**\(f\in\tilde{S}\). If \(\alpha(y)=0\) or \(r\geq\frac{1}{3}4^{-1/\alpha(y)}\) with \(\alpha(y)>0\), then \(f\in\tilde{F}\) and by Step 2.2 we get \(f\in\tilde{S}\). Hence, we need to consider the case \(\alpha(y)>0\) with \(r\in\left(a,\frac{1}{3}4^{-1/\alpha(y)}\right)\). We shall use auxiliary functions \(f_{1},f_{2}\) defined in Step 3.1. Let us observe that \(f_{1},f_{2}\in\tilde{S}\). 
For a fixed \(\epsilon>0\) we choose \(z_{1},z_{2}\in B(x,r+2a)\) such that \(f_{1}(z_{1})\geq-\epsilon\) and \(f_{2}(z_{2})\leq\epsilon\). Thus, for \(z\in B(x,r+2a)\) \[|f_{1}(z)| \leq|f_{1}(z)-f_{1}(y)|+|f_{1}(y)-f_{1}(z_{1})|+|f_{1}(z_{1})|\] \[\leq d(z,y)^{\alpha(y)}+d(z_{1},y)^{\alpha(y)}+\epsilon\leq 2d^{\alpha(y)}(x,y)+d^{\alpha(y)}(x,z)+d^{\alpha(y)}(x,z_{1})+\epsilon\] \[\leq 2\left(r+2a\right)^{\alpha(y)}+2a^{\alpha(y)}+\epsilon \leq 4\left(3r\right)^{\alpha(y)}+\epsilon,\] and \[|f_{2}(z)| \leq|f_{2}(z)-f_{2}(y)|+|f_{2}(y)-f_{2}(z_{2})|+|f_{2}(z_{2})| \leq d(z,y)^{\alpha(y)}+d(z_{2},y)^{\alpha(y)}+\epsilon\] \[\leq 4\left(3r\right)^{\alpha(y)}+\epsilon.\] Therefore, passing to the limit \(\epsilon\to 0^{+}\), \[\|f_{1}\|_{L^{\infty}(B(x,r+2a))}\leq 4\left(3r\right)^{\alpha(y)},\;\|f_{2}\|_{L^{\infty}(B(x,r+2a))}\leq 4\left(3r\right)^{\alpha(y)},\] and since \(A(y,r)=4\left(3r\right)^{\alpha(y)}\), we get \(f_{1},f_{2}\in\tilde{F}\subset\tilde{S}\). Finally, since \(f_{1},f_{2}\in\tilde{S}\) and \(-A\leq f_{1}(z)\leq 0\leq f_{2}(z)\leq A\) for \(z\in B(x,r+2a)\), we get that \(f\in\tilde{S}\). This finishes the proof of the theorem. ## 4. Continuity of the maximal operator **Theorem 4.1**.: _Let \(\delta\in(0,1]\) and let \((X,d,\mu)\) satisfy the \(\delta\)-annular property. If \(\alpha:X\rightarrow(0,1]\) and \(\beta:X\rightarrow[0,1]\) satisfy \(\sup_{x\in X}\beta(x)/\alpha(x)<1\), then the operator_ \[M:C^{0,\alpha(\cdot)}(X)\to C^{0,\beta(\cdot)}(X)\] _is continuous._ Proof.: Let us note that since \(\beta(\cdot)\leq\alpha(\cdot)\), we have \(Id:C^{0,\alpha(\cdot)}(X)\to C^{0,\beta(\cdot)}(X)\). Therefore, due to Theorem 3.1 we get that \(M:C^{0,\alpha(\cdot)}(X)\to C^{0,\beta(\cdot)}(X)\) is bounded. In order to prove the continuity of \(M\) we fix \(f\in C^{0,\alpha(\cdot)}(X)\) and a sequence \(\{f_{n}\}\subset C^{0,\alpha(\cdot)}(X)\) such that \(f_{n}\to f\) in \(C^{0,\alpha(\cdot)}(X)\). It is easy to see that \(Mf_{n}\to Mf\) in \(C(X)\). Thus, it is left to show that \[\sup_{x\neq y}\frac{|Mf_{n}(x)-Mf(x)-Mf_{n}(y)+Mf(y)|}{d(x,y)^{\beta(x)}}\to 0.\] By Theorem 3.1 we know that the sequence \(\{Mf_{n}\}\) is bounded in \(C^{0,\alpha(\cdot)}(X)\), so we can assume that there exists \(N\geq 1\) such that \(\|Mf_{n}\|_{C^{0,\alpha(\cdot)}(X)}\leq N\) for all \(n\) and \(\|Mf\|_{C^{0,\alpha(\cdot)}(X)}\leq N\). Furthermore, we can assume that \(\|f_{n}-f\|_{C^{0,\alpha(\cdot)}(X)}<1/2\) for large \(n\). Therefore, for \(x,y\in X\) such that \(x\neq y\), we get the following string of inequalities \[\frac{|Mf_{n}(x)-Mf(x)-Mf_{n}(y)+Mf(y)|}{d(x,y)^{\beta(x)}}\] \[=\left(\frac{|Mf_{n}(x)-Mf(x)-Mf_{n}(y)+Mf(y)|}{d(x,y)^{\alpha(x)}}\right)^{\frac{\beta(x)}{\alpha(x)}}|Mf_{n}(x)-Mf(x)-Mf_{n}(y)+Mf(y)|^{1-\frac{\beta(x)}{\alpha(x)}}\] \[\leq\left(\frac{|Mf_{n}(x)-Mf_{n}(y)|}{d(x,y)^{\alpha(x)}}+\frac{|Mf(x)-Mf(y)|}{d(x,y)^{\alpha(x)}}\right)^{\frac{\beta(x)}{\alpha(x)}}\left(2\|Mf_{n}-Mf\|_{C(X)}\right)^{1-\frac{\beta(x)}{\alpha(x)}}\] \[\leq(2N)^{\left(\frac{\beta}{\alpha}\right)^{+}}\left(2\|Mf_{n}-Mf\|_{C(X)}\right)^{1-\left(\frac{\beta}{\alpha}\right)^{+}},\] where \(\left(\frac{\beta}{\alpha}\right)^{+}=\sup_{x\in X}\frac{\beta(x)}{\alpha(x)}\). Hence, \[\sup_{x\neq y}\frac{|Mf_{n}(x)-Mf(x)-Mf_{n}(y)+Mf(y)|}{d(x,y)^{\beta(x)}}\leq (2N)^{\left(\frac{\beta}{\alpha}\right)^{+}}\left(2\|Mf_{n}-Mf\|_{C(X)}\right)^{1-\left(\frac{\beta}{\alpha}\right)^{+}}.\] Since the right-hand side of the above inequality goes to \(0\) when \(n\rightarrow\infty\), the proof follows. 
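As a purely numerical illustration of the objects appearing in Theorem 4.1 (not part of the paper's argument), the following Python sketch approximates the centered maximal operator on the toy space \(X=[-1,1]\) with Lebesgue measure and compares a Hölder-type seminorm of \(f(x)=|x|^{1/2}\) (so \(\alpha\equiv 1/2\)) with that of \(Mf\) for the smaller exponent \(\beta\equiv 2/5\); the grid sizes and the sample function are our own choices.

```python
import numpy as np

# Toy discretization on X = [-1, 1] with Lebesgue measure (a hypothetical
# illustration; the paper works with general metric measure spaces).
xs = np.linspace(-1.0, 1.0, 801)
f = np.sqrt(np.abs(xs))            # f(x) = |x|^{1/2}, a C^{0,1/2} function

def maximal(values, grid, radii):
    """Approximate Mg(x) = sup_r (average of |g| over B(x, r) intersected with X)."""
    out = np.empty_like(values)
    for i, x in enumerate(grid):
        best = abs(values[i])                 # small-radius limit for continuous g
        for r in radii:
            ball = np.abs(grid - x) < r       # B(x, r) on the grid
            best = max(best, np.abs(values[ball]).mean())
        out[i] = best
    return out

Mf = maximal(f, xs, radii=np.linspace(0.02, 2.0, 100))

def hoelder_seminorm(values, grid, exponent, step=20):
    """Crude lower bound for sup_{x != y} |g(x)-g(y)| / d(x,y)^exponent."""
    s = 0.0
    for i in range(0, len(grid), step):
        d = np.abs(grid - grid[i])
        d[i] = np.inf                          # skip the diagonal term
        s = max(s, np.max(np.abs(values - values[i]) / d ** exponent))
    return s

print("[f]_{C^{0,1/2}}  ~", round(hoelder_seminorm(f, xs, 0.5), 3))
print("[Mf]_{C^{0,2/5}} ~", round(hoelder_seminorm(Mf, xs, 0.4), 3))
```

Theorem 4.2 below shows that the strict inequality \(\sup_{x\in X}\beta(x)/\alpha(x)<1\) cannot simply be dropped: for \(\beta=\alpha=1\) the operator \(M\) fails to be continuous.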
**Theorem 4.2**.: _There exist \(f,f_{n}\in C^{0,1}(\mathbb{R})\) such that \(f_{n}\to f\) in \(C^{0,1}(\mathbb{R})\) and \(Mf_{n}\not\to Mf\) in \(C^{0,1}(\mathbb{R})\)._ Proof.: First of all we shall prove the following lemma. **Lemma 4.1**.: _Let \(T>0\) and let \(f\in C(\mathbb{R},\mathbb{R})\) be a \(T-\)periodic function. Then, for any \(x\in\mathbb{R}\) there exists \(r\in[0,T]\) such that_ \[Mf(x)=\fint_{x-r}^{x+r}|f(t)|\,dt.\] Proof.: Let us observe that for a fixed \(x\in\mathbb{R}\), the map \(f_{x}:\mathbb{R}\rightarrow\mathbb{R}\) defined as \(f_{x}(t)=f(x+t)\) is \(T-\)periodic and \(Mf_{x}(0)=Mf(x)\). Therefore, it is enough to prove the lemma for \(x=0\). Let us denote \[a=\max_{0\leq r\leq T}\fint_{-r}^{r}|f(t)|\,dt.\] We shall show that \[\fint_{-r}^{r}|f(t)|\,dt\leq a\] for all \(r\geq 0\). The inequality is obvious for \(0\leq r\leq T\). We shall prove it for \(r>T\). We can write \(r\) as \(nT+b\), where \(n\in\mathbb{N}\) and \(0\leq b<T\). Hence, \[\fint_{-r}^{r}|f(t)|\,dt =\frac{\int_{-b}^{b}|f(t)|\,dt+\sum_{k=1}^{n}\left(\int_{b+(k-1)T}^{b+kT}|f(t)|\,dt+\int_{-b-kT}^{-b-(k-1)T}|f(t)|\,dt\right)}{2nT+2b}\] \[=\frac{\int_{-b}^{b}|f(t)|\,dt+2n\int_{-T/2}^{T/2}|f(t)|\,dt}{2nT+2b}=\frac{2b\,\fint_{-b}^{b}|f(t)|\,dt+2nT\,\fint_{-T/2}^{T/2}|f(t)|\,dt}{2nT+2b}\] \[\leq\frac{2ba+2nTa}{2nT+2b}=a,\] and the proof is complete. Now we are in a position to prove Theorem 4.2. Let \(f:\mathbb{R}\to\mathbb{R}\) be a continuous and \(2-\)periodic function such that \(f(x)=|x|\) for \(x\in[-1,1].\) Next, we define a sequence \(f_{n}(x)=f(x)-\frac{1}{n}\) for \(x\in\mathbb{R}.\) It is easy to see that \(f_{n}\to f\) in \(C^{0,1}(\mathbb{R}).\) We shall show that \(Mf_{n}\not\to Mf\) in \(C^{0,1}(\mathbb{R}).\) It is obvious that \(Mf\) is a \(2-\)periodic and even function. 
Thus, it is enough to recognize \(Mf\) on \([0,1].\) By the straightforward integration, for \(x\in[0,\frac{1}{2}]\) we have \[\fint_{x-r}^{x+r}|f(t)|\,dt=\begin{cases}x,&\text{for }r\in[0,x]\\ \frac{x^{2}+r^{2}}{2r},&\text{for }r\in[x,1-x]\\ 1-x+\frac{2x-1}{2r},&\text{for }r\in[1-x,1+x]\\ 2-\frac{r}{2}-\frac{x^{2}+2}{2r},&\text{for }r\in[1+x,2-x]\\ \frac{(r-2)x}{r}+\frac{1}{r},&\text{for }r\in[2-x,2],\end{cases}\] and for \(x\in[\frac{1}{2},1]\) we get \[\fint_{x-r}^{x+r}|f(t)|\,dt=\begin{cases}x,&\text{for }r\in[0,1-x]\\ \frac{-x^{2}+2x-1}{2r}-\frac{r}{2}+1,&\text{for }r\in[1-x,x]\\ \frac{x}{r}-\frac{1}{2r}-x+1,&\text{for }r\in[x,2-x]\\ \frac{x^{2}-2x+3}{2r}+\frac{r}{2}-1,&\text{for }r\in[2-x,1+x]\\ \frac{1-2x}{r}+x,&\text{for }r\in[1+x,2].\end{cases}\] Therefore, having in mind Lemma 4.1, we get \[Mf(x)=\begin{cases}2-\sqrt{x^{2}+2},&\text{for }0\leq x\leq\frac{1}{2}\\ x,&\text{for }\frac{1}{2}<x\leq 1.\end{cases}\] Next, for large \(n,\) we have \[\fint_{\frac{1}{2}-r}^{\frac{1}{2}+r}|f_{n}(t)|\,dt=\begin{cases}\frac{1}{2}- \frac{1}{n},&\text{for }r\in[0,\frac{1}{2}-\frac{1}{n}]\\ \frac{(n-2)^{2}}{8n^{2}r}+\frac{r}{2},&\text{for }r\in[\frac{1}{2}-\frac{1}{n}, \frac{1}{2}]\\ \frac{-n^{2}-4n+4}{8n^{2}r}-\frac{r}{2}+1,&\text{for }r\in[\frac{1}{2}, \frac{1}{2}+\frac{1}{n}]\\ \frac{1}{n^{2}r}+\frac{n-2}{2n},&\text{for }r\in[\frac{1}{2}+\frac{1}{n}, \frac{3}{2}-\frac{1}{n}]\\ \frac{3(3n^{2}-4n+4)}{8n^{2}r}+\frac{r}{2}-1,&\text{for }r\in[\frac{3}{2}- \frac{1}{n},\frac{3}{2}]\\ -\frac{3(3n^{2}+4n-4)}{8n^{2}r}-\frac{r}{2}+2,&\text{for }r\in[\frac{3}{2}, \frac{3}{2}+\frac{1}{n}]\\ \frac{2}{n^{2}r}+\frac{n-2}{2n},&\text{for }r\in[\frac{3}{2}+\frac{1}{n},2].\end{cases}\] Thus, by Lemma 4.1, we get \[Mf_{n}(1/2)=1-\sqrt{-\frac{1}{n^{2}}+\frac{1}{n}+\frac{1}{4}}.\] Furthermore, let us define \(d_{n}=\frac{1}{2}-\frac{1}{4n^{2}}.\) Again, basic integration gives us \[\fint_{d_{n}-r}^{d_{n}+r}|f_{n}(t)|\,dt=\begin{cases}d_{n}-\frac{1}{n},&\text{ for }r\in[0,d_{n}-\frac{1}{n}]\\ \frac{4n^{4}-16n^{3}+12n^{2}+8n+1}{32n^{4}r}+\frac{r}{2},&\text{ for }r\in[d_{n}-\frac{1}{n},d_{n}]\\ \frac{2n^{2}-1}{4n^{2}}+\frac{-2n^{2}+2n^{2}}{4n^{3}r},&\text{ for }r\in[d_{n},1-d_{n}] \\ \frac{-4n^{4}-16n^{3}+12n^{2}+8n-1}{32n^{4}r}-\frac{r}{2}+1,&\text{ for }r\in[1-d_{n},d_{n}+\frac{1}{n}]\\ \frac{3}{4n^{2}r}+\frac{2n^{2}-4n+1}{4n^{2}},&\text{ for }r\in[d_{n}+\frac{1}{n},2- \frac{1}{n}-d_{n}]\\ \frac{36n^{4}-48n^{3}+52n^{2}-8n+1}{32n^{4}r}+\frac{r}{2}-1,&\text{ for }r\in[2-\frac{1}{n}-d_{n},d_{n}+1]\\ \frac{2n^{2}-1}{4n^{2}}+\frac{-6n^{2}+8n-1}{4n^{3}r},&\text{ for }r\in[d_{n}+1,2-d_{n}],\\ \frac{-36n^{4}-48n^{3}+52n^{2}-8n-1}{32n^{4}r}-\frac{r}{2}+2,&\text{ for }r\in[2-d_{n},2+\frac{1}{n}-d_{n}]\\ \frac{5}{2n^{2}r}+\frac{2n^{2}-4n-1}{4n^{2}},&\text{ for }r\in[2+\frac{1}{n}-d_{n},2].\end{cases}\] Hence, by an elementary considerations, for large \(n\) we get \[\fint_{d_{n}-r}^{d_{n}+r}|f_{n}(y)|\,dy\leq Mf_{n}(1/2)\] for \(r\in[0,2].\) Hence, the above inequality and Lemma 4.1 yield \[Mf_{n}(d_{n})\leq Mf_{n}(1/2) \tag{8}\] for large \(n\). 
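For the reader's convenience, here is how the stated value of \(Mf_{n}(1/2)\) is extracted from the piecewise formula above (this computation is ours; the paper leaves it implicit). For large \(n\) the supremum in Lemma 4.1 is attained on the branch \(r\in[\frac{1}{2},\frac{1}{2}+\frac{1}{n}]\), where
\[\frac{d}{dr}\left(\frac{-n^{2}-4n+4}{8n^{2}r}-\frac{r}{2}+1\right)=\frac{n^{2}+4n-4}{8n^{2}r^{2}}-\frac{1}{2}=0\quad\Longleftrightarrow\quad r^{2}=\frac{n^{2}+4n-4}{4n^{2}}=\frac{1}{4}+\frac{1}{n}-\frac{1}{n^{2}},\]
and at this critical radius \(r_{*}\) (which indeed lies in \([\frac{1}{2},\frac{1}{2}+\frac{1}{n}]\)) the average equals
\[\frac{-n^{2}-4n+4}{8n^{2}r_{*}}-\frac{r_{*}}{2}+1=-\frac{r_{*}}{2}-\frac{r_{*}}{2}+1=1-r_{*}=1-\sqrt{-\frac{1}{n^{2}}+\frac{1}{n}+\frac{1}{4}};\]
the remaining branches are checked in the same elementary way to stay below this value for large \(n\). The bound on \(Mf_{n}(d_{n})\) behind inequality (8) is obtained analogously.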
Finally, using the expression for \(Mf\), we easily get \[\frac{Mf(d_{n})-Mf(\frac{1}{2})}{\frac{1}{2}-d_{n}}\to\frac{1}{3}.\] Therefore, gathering (8) with the above convergence, for sufficiently large \(n\) we have \[\|Mf_{n}-Mf\|_{C^{0,1}(\mathbb{R})} \geq\frac{|Mf_{n}(\frac{1}{2})-Mf(\frac{1}{2})-Mf_{n}(d_{n})+Mf(d_{n})|}{|\frac{1}{2}-d_{n}|}\] \[=\frac{Mf(d_{n})-Mf(\frac{1}{2})}{\frac{1}{2}-d_{n}}+\frac{Mf_{n}(\frac{1}{2})-Mf_{n}(d_{n})}{\frac{1}{2}-d_{n}}\] \[\geq\frac{Mf(d_{n})-Mf(\frac{1}{2})}{\frac{1}{2}-d_{n}}\geq\frac{1}{6},\] where the middle equality holds because both summands are nonnegative: \(Mf\) is decreasing on \([0,\frac{1}{2}]\), and \(Mf_{n}(\frac{1}{2})\geq Mf_{n}(d_{n})\) by (8). This proves that \(Mf_{n}\not\to Mf\) in \(C^{0,1}(\mathbb{R}).\) **Acknowledgement.** The research of the last author was funded by (POB Cybersecurity and data analysis) of Warsaw University of Technology within the Excellence Initiative: Research University (IDUB) programme.
2301.00708
Generalized periodicity theorems
Let $R$ be a ring and $\mathsf S$ be a class of strongly finitely presented (FP${}_\infty$) $R$-modules closed under extensions, direct summands, and syzygies. Let $(\mathsf A,\mathsf B)$ be the (hereditary complete) cotorsion pair generated by $\mathsf S$ in $\textsf{Mod-}R$, and let $(\mathsf C,\mathsf D)$ be the (also hereditary complete) cotorsion pair in which $\mathsf C=\varinjlim\mathsf A=\varinjlim\mathsf S$. We show that any $\mathsf A$-periodic module in $\mathsf C$ belongs to $\mathsf A$, and any $\mathsf D$-periodic module in $\mathsf B$ belongs to $\mathsf D$. Further generalizations of both results are obtained, so that we get a common generalization of the flat/projective and fp-projective periodicity theorems, as well as a common generalization of the fp-injective/injective and cotorsion periodicity theorems. Both are applicable to modules over an arbitrary ring, and in fact, to Grothendieck categories.
Leonid Positselski
2023-01-02T14:56:35Z
http://arxiv.org/abs/2301.00708v4
# Generalized periodicity theorems ###### Abstract. Let \(R\) be a ring and \(\mathsf{S}\) be a class of strongly finitely presented (\(\operatorname{FP}_{\infty}\)) \(R\)-modules closed under extensions, direct summands, and syzygies. Let \((\mathsf{A},\mathsf{B})\) be the (hereditary complete) cotorsion pair generated by \(\mathsf{S}\) in \(\mathsf{Mod}\)-\(R\), and let \((\mathsf{C},\mathsf{D})\) be the (also hereditary complete) cotorsion pair in which \(\mathsf{C}=\varinjlim\mathsf{A}=\varinjlim\mathsf{S}\). We show that any \(\mathsf{A}\)-periodic module in \(\mathsf{C}\) belongs to \(\mathsf{A}\), and any \(\mathsf{D}\)-periodic module in \(\mathsf{B}\) belongs to \(\mathsf{D}\). Further generalizations of both results are obtained, so that we get a common generalization of the flat/projective and fp-projective periodicity theorems, as well as a common generalization of the fp-injective/injective and cotorsion periodicity theorems. Both are applicable to modules over an arbitrary ring, and in fact, to Grothendieck categories. ###### Contents * 1 Generalized \(\operatorname{Fp}\)-injective/Injective and Cotorsion Periodicity * 2 Generalized Flat/Projective and \(\operatorname{Fp}\)-projective Periodicity I * 3 Direct Limit Closures of Classes of Finitely Presentables * 4 Exact Categories of Grothendieck Type * 5 Pure Exact Structure and Deconstructibility * 6 Classes of \(\kappa\)-Presentables and Their \(\kappa\)-Direct Limit Closures * 7 Generalized Flat/Projective and \(\operatorname{Fp}\)-projective Periodicity II ## Introduction ### Periodicity theorems in homological algebra apply to the following setup. Let \(R\) be an associative ring and \[0\xrightarrow{}M\xrightarrow{}L\xrightarrow{}M\xrightarrow{}0\] be a short exact sequence of (right) \(R\)-modules with the leftmost term isomorphic to the rightmost one. Then it is known that * if the \(R\)-module \(M\) is flat and the \(R\)-module \(L\) is projective, then the \(R\)-module \(M\) is projective (Benson and Goodearl 2000 [7], rediscovered by Neeman in 2008 [26]); * if the exact sequence \((*)\) is pure and the \(R\)-module \(L\) is pure-projective, then the \(R\)-module \(M\) is pure-projective (Simson 2002 [40]); 3. if the exact sequence \((*)\) is pure and the \(R\)-module \(L\) is pure-injective, then the \(R\)-module \(M\) is pure-injective (Stovicek 2014 [44]); 4. in particular, if the \(R\)-module \(M\) is fp-injective and the \(R\)-module \(L\) is injective, then the \(R\)-module \(M\) is injective; 5. if the \(R\)-module \(L\) is cotorsion, then the \(R\)-module \(M\) is cotorsion (Bazzoni, Cortes-Izurdiaga, and Estrada 2017 [4]); 6. if the ring \(R\) is right coherent and the right \(R\)-module \(L\) is fp-projective, then the \(R\)-module \(M\) is fp-projective (Saroch and Stovicek 2018 [37]); 7. over any ring \(R\), if the \(R\)-module \(L\) is fp-projective, then the \(R\)-module \(M\) is weakly fp-projective (Bazzoni, Hrbek, and the present author 2022 [5]). Periodicity phenomena are linked to behavior of the modules of cocycles in acyclic complexes. This means that the assertions (1-7) can be restated as follows: 1. in any acyclic complex of projective modules with flat modules of cocycles, the modules of cocycles are actually projective (so the complex is contactible); 2. in any pure acyclic complex of pure-projective modules, the modules of cocycles are pure-projective (so the complex is contractible); 3. 
in any pure acyclic complex of pure-injective modules, the modules of cocycles are pure-injective (so the complex is contractible); 4. in any acyclic complex of injective modules with fp-injective modules of cocycles, the modules of cocycles are actually injective (so the complex is contractible); 5. in any acyclic complex of cotorsion modules, the modules of cocycles are cotorsion; 6. in any acyclic complex of fp-projective right modules over a right coherent ring, the modules of cocycles are fp-projective; 7. in any acyclic complex of fp-projective modules (over any ring), the modules of cocycles are weakly fp-projective. We refer to the introduction to the preprint [5] for a more detailed discussion of the periodicity theorems (1-7) and (1\({}^{\rm c}\)-7\({}^{\rm c}\)). The aim of this paper is to obtain a common generalization of (1) and (6-7), and also a common generalization of (4) and (5), in the context of a chosen class of modules or objects in a Grothendieck category. Let us start with presenting the most symmetric and nicely looking formulation of a special case of our main results, and then proceed to further generalizations. Let \(R\) be a ring. An \(R\)-module is said to be _strongly finitely presented_ if it has an (infinite) resolution by finitely generated projective \(R\)-modules. In the terminology of the book [22], such modules are called "FP\({}_{\infty}\)-modules". Let \(\mathsf{S}\) be a class (up to an isomorphism, of course, a set) of strongly finitely presented (right) \(R\)-modules. Assume that the free \(R\)-module \(R\) belongs to \(\mathsf{S}\), and that the class of modules \(\mathsf{S}\) is closed under direct summands, extensions, and kernels of epimorphisms in \(\mathsf{Mod}\)-\(R\). In particular, for any module \(S\in\mathsf{S}\) there exists a (finitely generated) projective \(R\)-module \(P\) together with an \(R\)-module epimorphism whose kernels also belongs to \(\mathsf{S}\). The latter property is expressed by saying that "the class of modules \(\mathsf{S}\) is closed under syzygies". Denote by \(\mathsf{B}=\mathsf{S}^{\perp_{1}}\) the class of all \(R\)-modules \(B\) such that \(\operatorname{Ext}_{R}^{1}(S,B)=0\) for all \(S\in\mathsf{S}\). Furthermore, denote by \(\mathsf{A}={}^{\perp_{1}}\mathsf{B}\) the class of all \(R\)-modules \(A\) such that \(\operatorname{Ext}_{R}^{1}(A,B)=0\) for all \(B\in\mathsf{B}\). The pair of classes of modules \((\mathsf{A},\mathsf{B})\) is called _the cotorsion pair generated by \(\mathsf{S}\) in \(\mathsf{Mod}\)-\(R\)_. Let \(\mathsf{C}=\varinjlim\mathsf{S}\) denote the class of all \(R\)-modules that can be obtained as direct limits of diagrams of modules from \(\mathsf{S}\), indexed by directed posets. Since \(\mathsf{S}\) is a class of finitely presented modules, one can see that \(\varinjlim\mathsf{S}\) coincides with the direct limit closure or \(\mathsf{S}\) in \(\mathsf{Mod}\)-\(R\)[25, 15, 24]. Furthermore, since \(\mathsf{S}\) is a class of strongly finitely presented modules (\(\operatorname{FP}_{2}\) is sufficient), the class \(\mathsf{C}\) is closed under extensions in \(\mathsf{Mod}\)-\(R\)[3]. Taking into account the description of \(\mathsf{A}\) as the class of all direct summands of transfinitely iterated extensions of modules from \(\mathsf{S}\)[22, Corollary 6.14], one concludes that \(\mathsf{A}\subset\mathsf{C}\). Hence \(\mathsf{C}=\varinjlim\mathsf{A}\) is the class of all direct limits of modules from \(\mathsf{A}\). 
Denote by \(\mathsf{D}=\mathsf{C}^{\perp_{1}}\) the class of all \(R\)-modules \(D\) such that \(\operatorname{Ext}_{R}^{1}(C,D)=0\) for all \(C\in\mathsf{C}\). Then one has \(\mathsf{A}\subset\mathsf{C}\) and \(\mathsf{B}\supset\mathsf{D}\). Part (a) of the following theorem is one of the main results of this paper, while part (b) follows rather easily from a result of Bazzoni, Cortes-Izurdiaga, and Estrada [4, Theorem 4.7] together with a result of Angeleri Hugel and Trlifaj [3, Corollary 2.4]. **Theorem 0**.: _Let \(R\) be a ring and \(\mathsf{S}\) be a class of strongly finitely presented \(R\)-modules, containing the free \(R\)-module \(R\) and closed under direct summands, extensions, and kernels of epimorphisms. Put \(\mathsf{B}=\mathsf{S}^{\perp_{1}}\), \(\mathsf{A}={}^{\perp_{1}}\mathsf{B}\), \(\mathsf{C}=\varinjlim\mathsf{S}\), and \(\mathsf{D}=\mathsf{C}^{\perp_{1}}\). Then the following assertions hold:_ (a) _For any short exact sequence \((*)\) with \(L\in\mathsf{A}\) and \(M\in\mathsf{C}\), one has \(M\in\mathsf{A}\). In other words, in any acyclic complex of modules from \(\mathsf{A}\) with the modules of cocycles belonging to \(\mathsf{C}\), the modules of cocycles actually belong to \(\mathsf{A}\)._ (b) _For any short exact sequence \((*)\) with \(L\in\mathsf{D}\) and \(M\in\mathsf{B}\), one has \(M\in\mathsf{D}\). In other words, in any acyclic complex of modules from \(\mathsf{D}\) with the modules of cocycles belonging to \(\mathsf{B}\), the modules of cocycles actually belong to \(\mathsf{D}\)._ Theorem 0(a) is a common generalization of items (1) or (1\({}^{\rm c}\)) and (6) or (6\({}^{\rm c}\)) on the list of Section 0.0. Taking \(\mathsf{S}=\{R\}\) to be the class consisting of the free \(R\)-module \(R\) only, one obtains the flat/projective periodicity theorem of Benson-Goodearl [7, Theorem 2.5] and Neeman [26, Remark 2.15] as a particular case of Theorem 0(a). Assuming the ring \(R\) to be right coherent and taking \(\mathsf{S}\) to be the class of all finitely presented right \(R\)-modules, one obtains the fp-projective periodicity theorem of Saroch and Stovicek [37, Example 4.3] as a particular case of Theorem 0(a). Theorem 0(b) is a common generalization of items (4) or (4\({}^{\rm c}\)) (for coherent rings) and (5) or (5\({}^{\rm c}\)) on the list of Section 0.0. Assuming the ring \(R\) to be right coherent and taking \(\mathsf{S}\) to be the class of all finitely presented right \(R\)-modules, one obtains the fp-injective/injective periodicity theorem, essentially due to Stovicek [44, Corollary 5.5] (see also [4, Theorem 1.2(1) or 5.1(1)]), as a particular case of Theorem 0(b). Taking \(S=\{R\}\) to be the class consisting of the free \(R\)-module \(R\) only, one obtains the cotorsion periodicity theorem of Bazzoni, Cortes-Izurdiaga, and Estrada [4, Theorem 1.2(2) or 5.1(2)] as a particular case of Theorem 0(b). 0.2. Both parts (a) and (b) of Theorem 0 admit far-reaching generalizations in several directions simultaneously (allowing, in particular, to drop the coherence assumptions on the ring \(R\) in the preceding two paragraphs). Let us state these results. We consider a Grothendieck abelian category \(K\). For any class of objects \(S\subset K\), let \(\varinjlim S\subset K\) denote the class of all direct limits in \(K\) of diagrams of objects from \(S\) indexed by directed posets. 
Furthermore, given a regular cardinal \(\kappa\), we let \(\varinjlim^{(\kappa)}S\subset K\) denote the class of all direct limits in \(K\) of diagrams of objects from \(S\) indexed by \(\kappa\)_-directed_ posets. Here a poset \(X\) is said to be \(\kappa\)-directed if any its subset of cardinality less than \(\kappa\) has an upper bound in \(X\). Part (iii) of following theorem is the main result of this paper, formulated in full generality and strength. It is a generalization of Theorem 0(a). Parts (i) and (ii) are supplementary comments on part (iii), providing some sufficient conditions for validity of the main assumption in (iii). **Theorem A**.: _Let \(K\) be a Grothendieck abelian category, and let \(\kappa\) be a regular cardinal such that \(K\) is a locally \(\kappa\)-presentable category. Let \(S\subset K\) be a class of (some) \(\kappa\)-presentable objects closed under transfinitely iterated extensions indexed by ordinals smaller than \(\kappa\). Put \(C=\varinjlim^{(\kappa)}S\subset K\). Then_ (i) _If \(\kappa=\aleph_{0}\) is the countable cardinal and the class \(S\) consists of (some) objects of type \(FP_{2}\), then the class \(C\) is closed under extensions in \(K\)._ (ii) _If \(\kappa=\aleph_{0}\) and the class \(C\) is closed under extensions in \(K\), then the class \(C\) is deconstructible in \(K\)._ (iii) _For any regular cardinal \(\kappa\), assume that the class \(C\) is deconstructible in \(K\). Denote by \(A=\operatorname{\mathsf{Fil}}(S)^{\oplus}\) the class of all direct summands of transfinitely iterated extensions of objects from \(S\) in \(K\). Let \(B^{\prime}=S^{\perp_{\geq 1}}\cap C\) be the class of all objects \(B\in C\) such that \(\operatorname{Ext}^{n}_{K}(S,B)=0\) for all \(S\in S\) and \(n\geq 1\). Let \(A^{\prime}=C\cap{}^{\perp_{1}}B^{\prime}\) be the class of all objects \(A\in C\) such that \(\operatorname{Ext}^{1}_{K}(A,B)=0\) for all \(B\in B^{\prime}\) (so \(A\subset A^{\prime}\subset C\)). Then, in any acyclic complex of objects from \(A\) with the objects of cocycles belonging to \(C\), the objects of cocycles actually belong to \(A^{\prime}\)._ Let us empasize that deconstructibility in Theorem A is understood in the strong sense of the word. So, a class of objects \(C\) is _deconstructible_ if it is closed under transfinitely iterated extensions and contains a subset \(T\subset C\) such that all objects from \(C\) are transfinitely iterated extensions of objects from \(T\). Taking \(\kappa=\aleph_{0}\) and assuming \(S\) to be a class of strongly finitely presentable (\(FP_{\infty}\)) objects closed under extensions in \(K\) makes the assertions of Theorem A(i-ii) applicable, so the assumption of Theorem A(iii) is satisfied. This makes Theorem 0(a) a particular case of Theorem A (for \(K=Mod\)-\(R\)). Taking \(S\) to be the class of all \(\kappa\)-presentable objects in \(K\) makes the assumption of Theorem A(iii) satisfied as well, since \(C=K\) in this case. So one obtains the theorem of Bazzoni, Hrbek and the present author ([5, Theorem 4.1] for \(\kappa=\aleph_{0}\), listed above as item (7) or (\(7^{\rm c}\)) in the case of \(K=\mathsf{Mod}\)-\(R\); or [5, Remark 4.11] for other \(\kappa\)) as a particular case of Theorem A(iii). The next theorem is a generalization of Theorem 0(b). Part (i), which is the main claim, is rather easily deduced from a result of Stovicek and the present author [33, Theorem 6.1] (which, in turn, is a generalization of [4, Theorem 4.7]). 
Part (ii), which is a supplementary comment on part (i) (explaining what (i) means under some additional assumptions), turns out to be more involved. **Theorem B**.: _Let \(K\) be a Grothendieck category and \(S\subset K\) be a class of objects. Let \(T\subset K\) be any class of objects of finite projective dimension in \(K\) such that the union \(S\cup T\) contains a set of generators for \(K\). Denote by \(C\subset K\) the closure of \(S\cup T\) under coproducts, direct limits, extensions, and kernels of epimorphisms in \(K\). Then_ (i) _Let \(B=S^{\perp_{1}}\) be the class of all objects \(B\in K\) such that \(\operatorname{Ext}^{1}_{K}(S,B)=0\) for all \(S\in S\), and let \(D=C^{\perp_{1}}\) be the class of all objects \(D\in K\) such that \(\operatorname{Ext}^{1}_{K}(C,D)=0\) for all \(C\in C\). Then, for any acyclic complex of objects from \(D\) with the objects of cocycles belonging to \(B\), the objects of cocycles actually belong to \(D\)._ (ii) _If the class \(S\cup T\) consists of (some) finitely presentable objects and is closed under extensions and kernels of epimorphisms in \(K\), then \(C=\varinjlim(S\cup T)\) is the class of all direct limits of diagrams of objects from \(S\cup T\), indexed by directed posets._ One can see that in the context of Theorem B(ii) the class \(S\cup T\) has to consist of strongly finitely presentable (\(\operatorname{FP}_{\infty}\)) objects. Taking \(T=\varnothing\) makes Theorem 0(b) a particular case of Theorem B(i-ii) (for \(K=\mathsf{Mod}\)-\(R\)). Taking \(S\) to be the class of all finitely presentable objects in a locally finitely presentable abelian category \(K\) and \(T=\varnothing\), one obtains the assertion (4) or (\(4^{\rm c}\)) on the list of Section 0.0, essentially due to Stovicek [44, Corollary 5.5], as a particular case of Theorem B(i) (for \(K=\mathsf{Mod}\)-\(R\)). The proofs of Theorems 0(b) and B(i) are presented in Section 1. Theorem 0(a) is proved in Section 2. The proofs of Theorems A(i) and B(ii) are given in Section 3. Theorem A(ii) is proved in Section 5. The possibilities and difficulties of extending Theorem A(i-ii) to higher cardinals \(\kappa\) are discussed in Section 6. The proof of the main result, Theorem A(iii), is presented in Section 7. One comment on the style of the exposition may be in order. This paper is written with the intent to be at least partially understandable to readers not necessarily feeling at ease with advanced category-theoretic concepts. In order not to intimidate a reader mostly interested in module-theoretic rather than category-theoretic applications, category-theoretic terminology is introduced slowly and gradually as the paper progresses from the less general results such as Theorem 0 to the more general ones such as Theorems B and A. **Acknowledgement**.: I want to thank Michal Hrbek, Silvana Bazzoni, Jan Saroch, and Jan Trlifaj for helpful discussions and comments. Long conversations with Jan Stovicek were particularly illuminating, and to him goes my special gratitude. The author is supported by the GACR project 20-13778S and research plan RVO: 67985840. ## 1. Generalized Fp-injective/Injective and Cotorsion Periodicity In this section we prove Theorems 0(b) and B(i). This is not difficult, given the preceding results in [4, Theorem 4.7] and [33, Theorem 6.1]. The former theorem needs to be used together with [3, Corollary 2.4], and the latter one together with [33, Lemmas 4.2, 4.3, and 6.4]. 
Let us formally introduce some notation and terminology which was already used throughout the introduction. Given an abelian (or exact [10]) category \(K\) and a class of objects \(A\subset K\), one denotes by \(A^{\perp_{1}}\subset K\) the class of all objects \(X\in K\) such that \(\operatorname{Ext}^{1}_{K}(A,X)=0\) for all \(A\in A\). Dually, for a class of objects \(B\subset K\), the notation \({}^{\perp_{1}}B\subset K\) stands for the class of all objects \(Y\in K\) such that \(\operatorname{Ext}^{1}_{K}(Y,B)=0\) for all \(B\in B\). Similarly, \(A^{\perp_{\geq 1}}\subset K\) is the class of all objects \(X\in K\) such that \(\operatorname{Ext}^{n}_{K}(A,X)=0\) for all \(A\in A\) and \(n\geq 1\). Dually, \({}^{\perp_{\geq 1}}B\subset K\) is the class of all objects \(Y\in K\) such that \(\operatorname{Ext}^{n}_{K}(Y,B)=0\) for all \(B\in B\) and \(n\geq 1\). A class of objects \(A\subset K\) is said to be _generating_ (or _a class of generators_) if every object of \(K\) is a quotient object of a coproduct of objects from \(A\). A class of objects \(B\subset K\) is said to be _cogenerating_ (or _a class of cogenerators_) if every object of \(K\) is a subobject of a product of objects from \(B\). The previous definitions, as well as generally all category-theoretic definitions in this paper, are transferred from abelian to exact categories in the obvious way: all the mentions of "subobjects", "quotients", "monomorphisms", "epimorphisms", "exact sequences", etc., are understood to mean admissible monomorphisms, admissible epimorphisms, admissible exact sequences, etc. A pair of classes of objects \((A,B)\) in \(K\) is said to be a _cotorsion pair_ if \(A={}^{\perp_{1}}B\) and \(B=A^{\perp_{1}}\). Notice that, for any cotorsion pair \((A,B)\) in \(K\), the class \(A\subset K\) is closed under coproducts (i. e., those coproducts that exist in \(K\)), and the class \(B\subset K\) is closed under products (in the same sense) [12, Corollary 8.3], [14, Corollary A.2]. For any class of objects \(S\subset K\), the pair of classes \(B=S^{\perp_{1}}\) and \(A={}^{\perp_{1}}B\) is a cotorsion pair in \(K\). The cotorsion pair \((A,B)\) obtained in this way is said to be _generated_ by the class \(S\). Dually, for any class of objects \(T\subset K\), the pair of classes \(A={}^{\perp_{1}}T\) and \(B=A^{\perp_{1}}\) is also a cotorsion pair in \(K\). The latter cotorsion pair \((A,B)\) is said to be _cogenerated_ by the class \(T\). Let \((A,B)\) be a cotorsion pair in \(K\) such that the class \(A\) is generating and the class \(B\) is cogenerating in \(K\). So every object of \(K\) is a quotient object of an object from \(A\) and a subobject of an object from \(B\). These conditions are satisfied automatically for any cotorsion pair in an abelian category \(K\) with enough projective and injective objects (because all projective objects belong to \(A\) and all injective objects belong to \(B\)). In particular, this applies to the module categories \(K=Mod\)-\(R\). In the assumptions of the previous paragraph, the following conditions are equivalent [19, Theorem 1.2.10], [22, Lemma 5.24], [5, Section 1], [33, Lemma 4.1]: 1. the class \(\mathsf{A}\) is closed under kernels of epimorphisms in \(\mathsf{K}\); 2. the class \(\mathsf{B}\) is closed under cokernels of monomorphisms in \(\mathsf{K}\); 3. \(\operatorname{Ext}_{\mathsf{K}}^{n}(A,B)=0\) for all \(A\in\mathsf{A}\) and \(B\in\mathsf{B}\); 4. 
\(\operatorname{Ext}_{\mathsf{K}}^{n}(A,B)=0\) for all \(A\in\mathsf{A},\ B\in\mathsf{B}\), and \(n\geq 1\). A cotorsion pair \((\mathsf{A},\mathsf{B})\) satisfying conditions (1-4) is said to be _hereditary_. Given a class of objects \(\mathsf{L}\subset\mathsf{K}\), an object \(M\in\mathsf{K}\) is said to be \(\mathsf{L}\)_-periodic_ if there exists a short exact sequence \(0\longrightarrow M\longrightarrow L\longrightarrow M\longrightarrow 0\ (*)\) in \(\mathsf{K}\) with \(L\in\mathsf{L}\). We recall that the notation \(\varinjlim\mathsf{L}\subset\mathsf{K}\) stands for the class of all direct limits in \(\mathsf{K}\) of diagrams of objects from \(\mathsf{L}\) (indexed by directed posets). A short exact sequence of right \(R\)-modules \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is said to be _pure_ if it remains exact after taking the tensor product with any left \(R\)-module. Equivalently, a short exact sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is pure if and only if it remains exact after the functor \(\operatorname{Hom}_{R}(S,-)\) from any finitely presented right \(R\)-module \(S\) is applied [22, Definition 2.6 and Lemma 2.19]. If this is the case, the object \(K\) is said to be a _pure subobject_ of \(L\), while the object \(M\) is called a _pure epimorphic image_ (or a _pure quotient_) of \(M\). The _pure exact structure_ on \(\mathsf{Mod}\)-\(R\) is formed by the class of all pure exact sequences. The projective objects of the exact category \(\mathsf{Mod}\)-\(R\) with the pure exact structure are called _pure-projective_\(R\)-modules, and the injective objects are called _pure-injective_. An \(R\)-module \(S\) is said to be \(\operatorname{FP}_{n}\) (where \(n\geq 0\) is an integer) if it admits a fragment of projective resolution \(P_{n}\longrightarrow P_{n-1}\longrightarrow\cdots\longrightarrow P_{0} \longrightarrow S\longrightarrow 0\) with finitely generated projective modules \(P_{i}\). So a module is \(\operatorname{FP}_{0}\) if and only if it is finitely generated, and it is \(\operatorname{FP}_{1}\) if and only if it is finitely presented. A module \(S\) is said to be \(\operatorname{FP}_{\infty}\) if it admits a resolution by finitely generated projective modules; equivalently, this means that \(S\) is \(\operatorname{FP}_{n}\) for all \(n\geq 0\). Modules of type \(\operatorname{FP}_{\infty}\) are otherwise known as _strongly finitely presented_. A class of modules \(\mathsf{S}\) is said to be _closed under syzygies_ if for every module \(S\in\mathsf{S}\) there exists a short exact sequence \(0\longrightarrow K\longrightarrow P\longrightarrow S\longrightarrow 0\) with a projective module \(P\) and \(K\in\mathsf{S}\). For any other short exact sequence \(0\longrightarrow K^{\prime}\longrightarrow P^{\prime}\longrightarrow S\longrightarrow 0\) with a projective module \(P^{\prime}\), it then follows that \(K^{\prime}\oplus P\simeq K\oplus P^{\prime}\), which often implies that \(K^{\prime}\in\mathsf{S}\) as well. Dually, a class of modules \(\mathsf{T}\) is _closed under cosyzygies_ if for every module \(T\in\mathsf{T}\) there exists a short exact sequence \(0\longrightarrow T\longrightarrow J\longrightarrow L\longrightarrow 0\) with an injective module \(J\) and \(L\in\mathsf{T}\). 
For any other short exact sequence \(0\longrightarrow T\longrightarrow J^{\prime}\longrightarrow L^{\prime} \longrightarrow 0\) with an injective module \(J^{\prime}\), it then follows that \(L^{\prime}\oplus J\simeq L\oplus J^{\prime}\), which often implies that \(L^{\prime}\in\mathsf{T}\), too. Proof of Theorem 0(b) from Section 0.1.: Let us first prove the first assertion, then deduce the second one. In order to apply [4, Theorem 4.7], we need to show that \((\mathsf{C},\mathsf{D})\) is a hereditary cotorsion pair in \(\mathsf{Mod}\)-\(R\). First of all, \((\mathsf{C},\mathsf{D})\) is indeed a cotorsion pair by [3, Corollary 2.4] (see also [22, Corollary 8.42]). Alternatively, Corollary 5.4 below provides a more general result. To show that the cotorsion pair \((\mathsf{C},\mathsf{D})\) is hereditary, one can argue as follows. The cotorsion pair \((\mathsf{A},\mathsf{B})\) generated by \(\mathsf{S}\) in \(\mathsf{Mod}\)-\(R\) is hereditary, since the class \(\mathsf{S}\) is closed under syzygies [22, Corollary 5.25(a)], [5, Lemma 1.3], [33, Lemma 4.1]. Consequently, the class \(\mathsf{B}\) is closed under the cokernels of monomorphisms, and in particular, under cosyzygies in \(\mathsf{Mod}\)-\(R\). By [3, Corollary 2.4] or [22, Corollary 8.42], the cotorsion pair \((\mathsf{C},\mathsf{D})\) is cogenerated by the class of all pure-injective modules belonging to \(\mathsf{B}\). The class of all pure-injective modules is closed under cosyzygies by [22, Lemma 6.20], so the class of all pure-injective modules belonging to \(\mathsf{B}\) is also closed under cosyzygies. Applying [22, Corollary 5.25(b)], we conclude that the cotorsion pair \((\mathsf{C},\mathsf{D})\) is hereditary. Alternatively, one can use Proposition 3.5 below, which is a more general result. We also need to know that the class \(\mathsf{C}\) is closed under pure epimorphic images. This is [25, Proposition 2.1], [15, Section 4.1], [3, Theorem 2.3], or [22, Theorem 8.40]. By the latter two references, we also have \(\mathsf{A}\subset\mathsf{C}\), hence \(\mathsf{B}\supset\mathsf{D}\). Therefore, the result of [4, Theorem 4.7] is applicable to the cotorsion pair \((\mathsf{C},\mathsf{D})\), and it tells that the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) is closed under direct limits in \(\mathsf{Mod}\)-\(R\) for any \(\mathsf{D}\)-periodic module \(M\). Now, if \(M\in\mathsf{B}\), then the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) contains \(\mathsf{A}\). Thus \(\mathsf{C}=\varinjlim\mathsf{A}\subset{}^{\perp_{1}}\{M\}\) and \(M\in\mathsf{D}\). To deduce the second assertion of Theorem 0(b) from the first one, suppose given an acyclic complex \(D^{\bullet}\) of modules from \(\mathsf{D}\) with the modules of cocycles belonging to \(\mathsf{B}\). All one needs to do is to chop up the complex \(D^{\bullet}\) into short exact sequence pieces and take the infinite product of the pieces. The resulting short exact sequence has the form \((*)\) with \(L\in\mathsf{D}\) and \(M\in\mathsf{B}\), and the first assertion of the theorem can be applied to it, providing the desired conclusion. This argument uses the observations that countable products are exact in \(\mathsf{Mod}\)-\(R\), the classes \(\mathsf{D}\) and \(\mathsf{B}\) are closed under countable products, and the class \(\mathsf{D}\) is closed under direct summands in \(\mathsf{Mod}\)-\(R\) (cf. [11, proof of Proposition 7.6] or [18, Proposition 2]). Let \(\mathsf{K}\) be an abelian category. 
A class of objects \(\mathsf{C}\subset\mathsf{K}\) is said to be _self-generating_[5, Section 1], [33, Section 4] if for any epimorphism \(K\longrightarrow C\) in \(\mathsf{K}\) with \(C\in\mathsf{C}\) there exists a morphism \(C^{\prime}\longrightarrow K\) in \(\mathsf{K}\) with \(C^{\prime}\in\mathsf{C}\) such that the composition \(C^{\prime}\longrightarrow K\longrightarrow C\) is en epimorphism in \(\mathsf{K}\). A class of objects \(\mathsf{C}\) is said to be _self-resolving_[33, Section 6] if it is self-generating and closed under extensions and kernels of epimorphisms. Before proving Theorem B(i) as it is stated, let us explicitly formulate and prove the following periodicity assertion. **Theorem 1.1**.: _Let \(\mathsf{K}\) be a Grothendieck category and \(\mathsf{S}\subset\mathsf{K}\) be a class of objects. Let \(\mathsf{T}\subset\mathsf{K}\) be any class of objects of finite projective dimension in \(\mathsf{K}\) such that the union \(\mathsf{S}\cup\mathsf{T}\) contains a set of generators for \(\mathsf{K}\). Denote by \(\mathsf{C}\subset\mathsf{K}\) the closure of \(\mathsf{S}\cup\mathsf{T}\) under coproducts, direct limits, extensions, and kernels of epimorphisms in \(\mathsf{K}\)._ _Put \(\mathsf{B}=\mathsf{S}^{\perp_{1}}\) and \(\mathsf{D}=\mathsf{C}^{\perp_{1}}\subset\mathsf{K}\). Then, for any short exact sequence \((*)\) as in Section 0.0 with objects \(L\in\mathsf{D}\) and \(M\in\mathsf{B}\), one has \(M\in\mathsf{D}\). In other words, any \(\mathsf{D}\)-periodic object belonging to \(\mathsf{B}\) actually belongs to \(\mathsf{D}\)._ Proof.: The class of objects \(\mathsf{C}\) contains a set of generators for \(\mathsf{K}\) and is closed under coproducts; hence, in particular, it is self-generating. The class \(\mathsf{C}\) is also closed under extensions and kernels of epimorphisms in \(\mathsf{K}\); so it is self-resolving. Finally, the class \(\mathsf{C}\) is closed under direct limits in \(\mathsf{K}\), and the direct limits are exact in \(\mathsf{K}\). Thus the assumptions of [33, Theorem 6.1] are satisfied for the class \(\mathsf{C}\subset\mathsf{K}\), which tells that, for any \(\mathsf{D}\)-periodic object \(M\in\mathsf{K}\), the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) is closed under direct limits in \(\mathsf{K}\). By [5, Lemma 1.3] or [33, Lemma 4.1], we have \(\operatorname{Ext}^{n}_{\mathsf{K}}(C,D)=0\) for all objects \(C\in\mathsf{C}\), \(D\in\mathsf{D}\), and integers \(n\geq 1\). By [33, Lemma 4.2], it follows that the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) contains all objects of the class \(\mathsf{C}\) which have finite projective dimension in \(\mathsf{K}\). Thus \(\mathsf{T}\subset{}^{\perp_{1}}\{M\}\cap\mathsf{C}\). If \(M\in\mathsf{B}\), then we also have \(\mathsf{S}\subset{}^{\perp_{1}}\{M\}\cap\mathsf{C}\). On the other hand, [33, Lemma 4.3] tells that the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) is closed under extensions and kernels of admissible epimorphisms in the exact category \(\mathsf{C}\) (with the exact category structure inherited from the abelian exact structure of \(\mathsf{K}\)). Since the class \(\mathsf{C}\) is closed under extensions and kernels of epimorphisms in \(\mathsf{K}\), it follows that the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) is closed under extensions and kernels of epimorphisms in \(\mathsf{K}\). Finally, the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) is closed under coproducts in \(\mathsf{K}\), since it is closed under finite direct sums and direct limits. 
We have shown that the class \({}^{\perp_{1}}\{M\}\cap\mathsf{C}\) contains \(\mathsf{S}\cup\mathsf{T}\) and is closed under extensions, kernels of epimorphisms, coproducts, and direct limits in \(\mathsf{K}\). Hence we can conclude that \({}^{\perp_{1}}\{M\}\cap\mathsf{C}=\mathsf{C}\), so \(\mathsf{C}\subset{}^{\perp_{1}}\{M\}\) and \(M\in\mathsf{D}\). Proof of Theorem B(i) from from Section 0.3.: Let \(D^{\bullet}\) be an acyclic complex in \(\mathsf{K}\) with the terms \(D^{i}\in\mathsf{D}\) and the objects of cocycles \(B^{i}\in\mathsf{B}\). So we have short exact sequences \(0\longrightarrow B^{i}\longrightarrow D^{i}\longrightarrow B^{i+1} \longrightarrow 0\) in \(\mathsf{K}\). Taking the product of these short exact sequences over \(i\in\mathbb{Z}\), we obtain a sequence \[0\xrightarrow{}\prod\nolimits_{i\in\mathbb{Z}}B^{i}\xrightarrow{}\prod \nolimits_{i\in\mathbb{Z}}D^{i}\xrightarrow{}\prod\nolimits_{i\in\mathbb{Z}}B ^{i}\xrightarrow{}0. \tag{1}\] In order to show that (1) is exact, we apply [33, Lemma 6.4]. By assumption, the class \(\mathsf{S}\cup\mathsf{T}\) contains a set of generators of the Grothendieck category \(\mathsf{K}\). So there exists a family of objects \((G_{\xi})_{\xi\in\Xi}\) in \(\mathsf{S}\cup\mathsf{T}\) together with an epimorphism \(G=\coprod_{\xi\in\Xi}G_{\xi}\longrightarrow\prod_{i\in\mathbb{Z}}B^{i}\) in \(\mathsf{K}\). It remains to show that \(\operatorname{Ext}^{1}_{\mathsf{K}}(G,B^{i})=0\) for every \(i\in\mathbb{Z}\). By [12, Corollary 8.3] or [14, Corollary A.2], it suffices to check that \(\operatorname{Ext}^{1}_{\mathsf{K}}(G_{\xi},B^{i})=0\) for every \(i\in\mathbb{Z}\) and \(\xi\in\Xi\). There are two cases. If \(G_{\xi}\in\mathsf{S}\), then it remains to recall that \(B^{i}\in\mathsf{B}=\mathsf{S}^{\perp_{1}}\). If \(G_{\xi}\in\mathsf{T}\), then \(G_{\xi}\in\mathsf{C}\) and the projective dimension of \(G_{\xi}\) in \(\mathsf{K}\) is finite. From the short exact sequences \(0\longrightarrow B^{j}\longrightarrow D^{j}\longrightarrow B^{j+1} \longrightarrow 0\) we get \(\operatorname{Ext}^{1}_{\mathsf{K}}(G_{\xi},B^{i})\simeq\operatorname{Ext}^{2 }_{\mathsf{K}}(G_{\xi},B^{i-1})\simeq\operatorname{Ext}^{3}_{\mathsf{K}}(G_{ \xi},B^{i-2})\simeq\cdots=0\), since \(\operatorname{Ext}^{n}_{\mathsf{K}}(G_{\xi},D^{j})=0\) for all \(j\in\mathbb{Z}\) and \(n\geq 1\) as explained above (cf. [33, proof of Proposition 6.5]). So [33, Lemma 6.4] tells that the short sequence (1) is exact. Applying [12, Corollary 8.3] or [14, Corollary A.2] again, we see that both the classes \(\mathsf{B}\) and \(\mathsf{D}\) are closed under infinite products in \(\mathsf{K}\). Hence \(\prod_{i\in\mathbb{Z}}B^{i}\in\mathsf{B}\) and \(\prod_{i\in\mathbb{Z}}D^{i}\in\mathsf{D}\). So \(\prod_{i\in\mathbb{Z}}B^{i}\) is a \(\mathsf{D}\)-periodic object in \(\mathsf{B}\). By Theorem 1.1, it follows that \(\prod_{i\in\mathbb{Z}}B^{i}\in\mathsf{D}\). Finally, the class \(\mathsf{D}\) is closed under direct summands in \(\mathsf{K}\), hence \(B^{i}\in\mathsf{D}\) for all \(i\in\mathbb{Z}\). ## 2. Generalized Flat/Projective and Fp-projective Periodicity I The aim of this section is to prove Theorem 0(a). It is restated below as Theorem 2.9(a) and Corollary 2.11. The argument follows the ideas of the proof of [5, Theorems 0.7-0.8 or Corollaries 4.7-4.9]. The result is module-theoretic, but the proof has a category-theoretic flavor in that the approach of [5] needs to be applied _within the class \(\mathsf{C}\) viewed as an exact subcategory \(\mathsf{C}\subset\mathsf{Mod}\text{--}R\)_. 
Let \(\mathsf{K}\) be an exact category (in Quillen's sense). We suggest the survey paper [10] as a general reference source on exact categories. The definition of a (_hereditary_) _cotorsion pair_\((\mathsf{A},\mathsf{B})\) in \(\mathsf{K}\) was already given in the beginning of Section 1. The intersection of the two classes \(\mathsf{A}\cap\mathsf{B}\subset\mathsf{K}\) is called the _kernel_ of a cotorsion pair \((\mathsf{A},\mathsf{B})\). Let us define the important concept of a _complete_ cotorsion pair. A cotorsion pair \((\mathsf{A},\mathsf{B})\) in \(\mathsf{K}\) is said to be _complete_ if for every object \(K\in\mathsf{K}\) there exist (admissible) short exact sequences in \(\mathsf{K}\) of the form \[0\xrightarrow{\ \ }B^{\prime}\xrightarrow{\ \ }A\xrightarrow{\ \ }K\xrightarrow{\ \ }0 \tag{2}\] \[0\xrightarrow{\ \ }K\xrightarrow{\ \ }B\xrightarrow{\ \ }A^{\prime}\xrightarrow{\ \ }0 \tag{3}\] with \(A\), \(A^{\prime}\in\mathsf{A}\) and \(B\), \(B^{\prime}\in\mathsf{B}\). The sequence (2) is called a _special precover sequence_. The sequence (3) is called a _special preenvelope sequence_. Collectively, the sequences (2-3) are referred to as the _approximation sequences_. Let \(\mathsf{E}\subset\mathsf{K}\) be a full subcategory closed under extensions. Then we endow \(\mathsf{E}\) with the exact category structure _inherited from_ the exact category structure of \(\mathsf{K}\). The short exact sequences in the inherited exact structure on \(\mathsf{E}\) are the short exact sequences in \(\mathsf{K}\) with the terms belonging to \(\mathsf{E}\). **Lemma 2.1**.: _Let \((\mathsf{C},\mathsf{D})\) be a complete cotorsion pair in an exact category \(\mathsf{K}\). Then the exact category \(\mathsf{C}\) (with the exact structure inherited from \(\mathsf{K}\)) has enough injective objects. The class of all injective objects in \(\mathsf{C}\) is precisely the kernel \(\mathsf{C}\cap\mathsf{D}\) of the cotorsion pair \((\mathsf{C},\mathsf{D})\). Dually, the exact category \(\mathsf{D}\) has enough projective objects, and the kernel \(\mathsf{C}\cap\mathsf{D}\) is precisely the class of all projectives in \(\mathsf{D}\)._ Proof.: The proof is left to the reader. Let \(\mathsf{K}\) be an exact category and \(\mathsf{E}\subset\mathsf{K}\) be a full subcategory closed under extensions, endowed with the inherited exact category structure. Let \((\mathsf{A},\mathsf{B})\) be a complete cotorsion pair in \(\mathsf{K}\). We will say that the cotorsion pair \((\mathsf{A},\mathsf{B})\)_restricts to_ (_a complete cotorsion pair in_) the exact subcategory \(\mathsf{E}\) if the pair of classes \((\mathsf{E}\cap\mathsf{A},\mathsf{E}\cap\mathsf{B})\) is a complete cotorsion pair in \(\mathsf{E}\). **Lemma 2.2**.: _Let \((\mathsf{A},\mathsf{B})\) be a complete cotorsion pair in an exact category \(\mathsf{K}\), and let \(\mathsf{E}\subset\mathsf{K}\) be a full subcategory closed under extensions and kernels of admissible epimorphisms. Assume that \(\mathsf{A}\subset\mathsf{E}\). Then_ (a) _the cotorsion pair \((\mathsf{A},\mathsf{B})\) restricts to \(\mathsf{E}\), so \((\mathsf{A}\), \(\mathsf{E}\cap\mathsf{B})\) is a complete cotorsion pair in \(\mathsf{E}\);_ (b) _if the cotorsion pair \((\mathsf{A},\mathsf{B})\) is hereditary in \(\mathsf{K}\), then the restricted cotorsion pair \((\mathsf{A}\), \(\mathsf{E}\cap\mathsf{B})\) is hereditary in \(\mathsf{E}\)._ Proof.: This is fairly standard and easy to prove. The details can be found, e. g., in [29, Lemmas 1.5(a) and 1.6]. 
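Since the proof of Lemma 2.1 is left to the reader, let us sketch one possible argument (ours, included only for convenience), using the approximation sequences (2-3). Given \(C\in\mathsf{C}\), the special preenvelope sequence (3) for the cotorsion pair \((\mathsf{C},\mathsf{D})\) reads \[0\xrightarrow{\ \ }C\xrightarrow{\ \ }D\xrightarrow{\ \ }C^{\prime}\xrightarrow{\ \ }0,\qquad D\in\mathsf{D},\ C^{\prime}\in\mathsf{C}.\] Since the class \(\mathsf{C}={}^{\perp_{1}}\mathsf{D}\) is closed under extensions, one has \(D\in\mathsf{C}\cap\mathsf{D}\); so this is an admissible monomorphism in \(\mathsf{C}\) into an object of \(\mathsf{C}\cap\mathsf{D}\). Every object of \(\mathsf{C}\cap\mathsf{D}\) is injective in \(\mathsf{C}\), because \(\operatorname{Ext}^{1}_{\mathsf{K}}(C,D)=0\) for all \(C\in\mathsf{C}\) and \(D\in\mathsf{D}\). Conversely, if \(I\in\mathsf{C}\) is injective in the exact category \(\mathsf{C}\), then the sequence above (taken for \(C=I\)) splits, so \(I\) is a direct summand of an object of \(\mathsf{C}\cap\mathsf{D}\) and hence belongs to \(\mathsf{C}\cap\mathsf{D}\). The assertions about the projective objects of \(\mathsf{D}\) are proved dually, using the special precover sequence (2).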
Given an additive category \(\mathsf{K}\), we denote by \(\mathbf{C}(\mathsf{K})\) the additive category of complexes in \(\mathsf{K}\) (with the usual morphisms of complexes) and by \(\mathbf{H}(\mathsf{K})\) the triangulated homotopy category of complexes in \(\mathsf{K}\). So the morphisms in \(\mathbf{H}(\mathsf{K})\) are the cochain homotopy classes of morphisms in \(\mathbf{C}(\mathsf{K})\). When \(\mathsf{K}\) is an exact category, the category \(\mathbf{C}(\mathsf{K})\) is endowed with the exact category structure in which a short sequence of complexes is exact if and only if it is exact at every degree. We denote by \(K^{\bullet}\longmapsto K^{\bullet}[n]\) the functor of grading shift on the complexes; so \(K^{\bullet}[n]^{i}=K^{n+i}\) for all \(n\), \(i\in\mathbb{Z}\). **Lemma 2.3**.: _Let \(\mathsf{K}\) be an exact category, and let \(A^{\bullet}\) and \(B^{\bullet}\) be two complexes in \(\mathsf{K}\). Assume that \(\operatorname{Ext}^{1}_{\mathsf{K}}(A^{n},B^{n})=0\) for every \(n\in\mathbb{Z}\). Then there is a natural isomorphism of abelian groups_ \[\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{K})}(A^{\bullet},B^{\bullet}) \simeq\operatorname{Hom}_{\mathbf{H}(\mathsf{K})}(A^{\bullet},B^{\bullet}[1]).\] Proof.: This is also standard and well-known. More generally, for any two complexes \(A^{\bullet}\) and \(B^{\bullet}\) in \(\mathsf{K}\), the subgroup of _termwise split_ extensions \(0\longrightarrow B^{\bullet}\longrightarrow C^{\bullet}\longrightarrow A^{ \bullet}\longrightarrow 0\) in \(\operatorname{Ext}^{1}_{\mathsf{K}}(A^{\bullet},B^{\bullet})\) is naturally isomorphic to the group of morphisms \(A^{\bullet}\longrightarrow B^{\bullet}\) in the homotopy category \(\mathbf{H}(\mathsf{K})\). We refer to [5, Lemma 1.6] for the details. At this point, let us specialize our discussion to Grothendieck abelian categories \(\mathsf{K}\). Let \(F\in\mathsf{K}\) be an object and \(\alpha\) be an ordinal. A family of subobjects \((F_{\beta}\subset F)_{0\leq\beta\leq\alpha}\) is said to be an \(\alpha\)_-indexed filtration_ on \(F\) if the following conditions are satisfied: * \(F_{0}=0\) and \(F_{\alpha}=F\); * \(F_{\gamma}\subset F_{\beta}\) for all \(0\leq\gamma\leq\beta\leq\alpha\); * \(F_{\beta}=\bigcup_{\gamma<\beta}F_{\gamma}\) for all limit ordinals \(\beta\leq\alpha\). An object \(F\in\mathsf{K}\) endowed with an ordinal-indexed filtration \((F_{\beta})_{0\leq\beta\leq\alpha}\) is said to be _filtered by_ the quotient objects \(S_{\beta}=F_{\beta+1}/F_{\beta},\ \ 0\leq\beta<\alpha\). In an alternative terminology, the object \(F\) is called a _transfinitely iterated extension_ (_in the sense of the direct limit_) of the objects \((S_{\beta})_{0\leq\beta<\alpha}\). Given a class of objects \(\mathsf{S}\subset\mathsf{K}\), the class of all objects in \(\mathsf{K}\) filtered by (objects isomorphic to) objects from \(\mathsf{S}\) is denoted by \(\mathsf{Fil}(\mathsf{S})\subset\mathsf{K}\). A class of objects \(\mathsf{F}\subset\mathsf{K}\) is said to be _deconstructible_ if there exists a _set_ of objects \(\mathsf{S}\subset\mathsf{K}\) such that \(\mathsf{F}=\mathsf{Fil}(\mathsf{S})\). It is easy to see that any deconstructible class (in the sense of this definition) is closed under transfinitely iterated extensions. The following result is known as the _Eklof lemma_[17, Lemma 1], [22, Lemma 6.2]. **Lemma 2.4**.: _For any class of objects \(\mathsf{B}\subset\mathsf{K}\), the class \({}^{\perp_{1}}\mathsf{B}\) is closed under transfinitely iterated exensions. 
In other words, \(\mathsf{Fil}({}^{\perp_{1}}\mathsf{B})={}^{\perp_{1}}\mathsf{B}\)._ Proof.: This assertion, properly understood (as per the definitions in Section 4 below), holds in any exact category \(\mathsf{K}\). See the references in [5, Lemma 1.1]. The general formulation can be also found in [33, Lemma 4.4.]. The next theorem is goes back to Eklof and Trlifaj [17, Theorems 2 and 10], [22, Theorem 6.11 and Corollary 6.14]. For any class of objects \(\mathsf{F}\subset\mathsf{K}\), we denote by \(\mathsf{F}^{\oplus}\subset\mathsf{K}\) the class of all direct summands of objects from \(\mathsf{F}\) in \(\mathsf{K}\). **Theorem 2.5**.: _Let \(\mathsf{K}\) be a Grothendieck category and \((\mathsf{A},\mathsf{B})\) be the cotorsion pair generated by a set of objects \(\mathsf{S}\subset\mathsf{K}\). Then_ (a) _If the class \(\mathsf{A}\) is generating in \(\mathsf{K}\), then the cotorsion pair \((\mathsf{A},\mathsf{B})\) is complete._ (b) _If the class \(\mathsf{Fil}(\mathsf{S})\) is generating in \(\mathsf{K}\), then \(\mathsf{A}=\mathsf{Fil}(\mathsf{S})^{\oplus}\)._ Proof.: This result, properly stated, holds in any locally presentable abelian category \(\mathsf{K}\). See [5, Theorem 1.2] for a discussion with references, and Theorem 4.3 below for a version for efficient exact categories. We refer to the book [1, Definition 1.9 and Theorem 1.11] for the definition of a _locally finitely presentable_ category. Any locally finitely presentable abelian category is Grothendieck [1, Proposition 1.59]. We will have a detailed discussion of such categories below in Section 3, where several further references are suggested. The abelian category of modules \(\mathsf{Mod}\)-\(R\) is locally finitely presentable for any ring \(R\). The following result of Stovicek was already used in a similar way in the paper [5], where it is stated as [5, Lemma 3.4]. **Proposition 2.6**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of finitely presentable objects closed under extensions in \(\mathsf{K}\). Let \(A^{\bullet}\) be a complex in \(\mathsf{K}\) whose terms are \(\mathsf{S}\)-filtered objects. Then the complex \(A^{\bullet}\), viewed as an object of the abelian category of complexes \(\mathbf{C}(\mathsf{K})\), is filtered by bounded below complexes of objects from \(\mathsf{S}\)._ Proof.: This is the particular case of [42, (proof of) Proposition 4.3] for the countable cardinal \(\kappa=\aleph_{0}\). The argument is based on the Hill lemma ([42, Theorem 2.1] or [22, Theorem 7.10]). In addition to the abelian exact structure on the module category \(\mathsf{K}=\mathsf{Mod}\)-\(R\), we are interested in the pure exact structure. The definition of the pure exact structure on \(\mathsf{Mod}\)-\(R\) was already given in Section 1. A complex in \(\mathsf{Mod}\)-\(R\) is said to be _pure acyclic_ (or _pure exact_) if it is acyclic in the pure exact structure, i. e., can be obtained by splicing pure short exact sequences. The following result due to Neeman [26] and Stovicek [44] is a stronger version of the pure pure-projective periodicity theorem (item (2) or (2\({}^{\rm c}\)) on the list of Section 0.0). **Theorem 2.7**.: _Let \(R\) be an associative ring. Let \(P^{\bullet}\) be a complex of pure-projective \(R\)-modules, and let \(X^{\bullet}\) be a pure acyclic complex of \(R\)-modules. 
Then any morphism of complexes of \(R\)-modules \(P^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero._ Proof.: This was first stated in [44, Theorem 5.4] based on [26, Theorem 8.6]. We refer to the paper [4, Theorem 1.1] for a generalization, and to [5, Section 0.2 and proof of Theorem 4.3] for a discussion with some details. Now let \(\mathsf{S}\) be a class of finitely presented \(R\)-modules closed under finite direct sums and containing the free \(R\)-module \(R\). Put \(\mathsf{C}=\varinjlim\mathsf{S}\subset\mathsf{Mod}\text{--}R\). **Lemma 2.8**.: _The full subcategory \(\mathsf{C}=\varinjlim\mathsf{S}\) is closed under pure extensions (as well as pure submodules and pure epimorphic images) in \(\mathsf{Mod}\text{--}R\). In the exact category structure on \(\mathsf{C}\) inherited from the pure exact structure on \(\mathsf{Mod}\text{--}R\), a short sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is exact if and only if the short sequence of abelian groups \(0\longrightarrow\operatorname{Hom}_{R}(S,K)\longrightarrow\operatorname{Hom }_{R}(S,L)\longrightarrow\operatorname{Hom}_{R}(S,M)\longrightarrow 0\) is exact for every module \(S\in\mathsf{S}\)._ Proof.: The first assertion is the result of [25, Proposition 2.2]. The second assertion claims that a short sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) with \(K\), \(L\), \(M\in\mathsf{C}\) is pure exact in \(\mathsf{Mod}\text{--}R\) if and only if the functor \(\operatorname{Hom}_{R}(S,-)\) takes it to a short exact sequence for every \(S\in\mathsf{S}\). The point is that any morphism \(T\longrightarrow M\) from a finitely presented \(R\)-module \(T\) into the module \(M\in\mathsf{C}=\varinjlim\mathsf{S}\) factorizes through some module \(S\in\mathsf{S}\). So if every morphism \(S\longrightarrow M\) lift to a morphism \(S\longrightarrow L\), then also every morphism \(T\longrightarrow M\) lifts to a morphism \(T\longrightarrow L\). Now we can formulate and prove the main results of the section (though we will need yet another lemma in between). **Theorem 2.9**.: _Let \(R\) be a ring and \(\mathsf{S}\) be a class of finitely presented \(R\)-modules, containing the free \(R\)-module \(R\) and closed under extensions and the kernels of epimorphisms. Put \(\mathsf{B}=\mathsf{S}^{\perp_{1}}\), \(\mathsf{A}={}^{\perp_{1}}\mathsf{B}\), \(\mathsf{C}=\varinjlim\mathsf{S}\), and \(\mathsf{D}=\mathsf{C}^{\perp_{1}}\subset\mathsf{Mod}\text{--}R\). Then_ (a) _in any acyclic complex of modules from \(\mathsf{A}\) with the modules of cocycles belonging to \(\mathsf{C}\), the modules of cocycles actually belong to \(\mathsf{A}\);_ (b) _let \(A^{\bullet}\) be a complex in \(\mathsf{Mod}\text{--}R\) whose terms belong to \(\mathsf{A}\), and let \(X^{\bullet}\) be an acyclic complex in \(\mathsf{Mod}\text{--}R\) whose terms belong to \(\mathsf{B}\cap\mathsf{C}\) and the modules of cocycles also belong to \(\mathsf{B}\cap\mathsf{C}\). Then any morphism of complexes of modules \(A^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero._ The following lemma tells that modules from the class \(\mathsf{B}\cap\mathsf{C}\) are "absolutely pure within the exact category \(\mathsf{C}\)". **Lemma 2.10**.: _In the notation of Theorem 0 or Theorem 2.9, let \(0\longrightarrow B\longrightarrow L\longrightarrow C\longrightarrow 0\) be a short exact sequence in \(\mathsf{Mod}\text{--}R\) with the terms \(B\), \(L\), \(C\in\mathsf{C}\). 
Assume that the module \(B\) belongs to the class \(\mathsf{B}\cap\mathsf{C}\). Then the short exact sequence \(0\longrightarrow B\longrightarrow L\longrightarrow C\longrightarrow 0\) is pure in \(\mathsf{Mod}\text{--}R\)._

Proof.: It is only important that \(C\in\mathsf{C}\) and \(B\in\mathsf{B}\). By Lemma 2.8, it suffices to check that any morphism \(S\longrightarrow C\) with \(S\in\mathsf{S}\) lifts to a morphism \(S\longrightarrow L\). This holds because \(B\in\mathsf{B}=\mathsf{S}^{\perp_{1}}\subset\mathsf{Mod}\text{--}R\).

Proof of Theorem 2.9(b).: The argument follows the ideas of the proof of [5, Theorem 4.2], with suitable modifications. By the Eklof-Trlifaj theorem (Theorem 2.5(b)), we have \(\mathsf{A}=\mathsf{Fil}(\mathsf{S})^{\oplus}\). Without loss of generality we can assume that the terms of the complex \(A^{\bullet}\) belong to \(\mathsf{Fil}(\mathsf{S})\). Then, by Proposition 2.6 (applied in the case of the module category \(\mathsf{K}=\mathsf{Mod}\)-\(R\)), the complex \(A^{\bullet}\) is filtered by (bounded below) complexes with the terms belonging to \(\mathsf{S}\). By Lemma 2.3, for any complex \(A^{\bullet}\) with the terms in \(\mathsf{A}\) and any complex \(B^{\bullet}\) with the terms in \(\mathsf{B}\) we have an isomorphism of abelian groups \[\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{Mod}\text{-}R)}(A^{\bullet},B^{\bullet}[-1])\simeq\operatorname{Hom}_{\mathbf{H}(\mathsf{Mod}\text{-}R)}(A^{\bullet},B^{\bullet}).\] So, instead of showing that \(\operatorname{Hom}_{\mathbf{H}(\mathsf{Mod}\text{-}R)}(A^{\bullet},X^{\bullet})=0\) as desired in the theorem, it suffices to prove that \(\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{Mod}\text{-}R)}(A^{\bullet},X^{\bullet}[-1])=0\). In view of the Eklof lemma (Lemma 2.4) applied in the abelian category \(\mathsf{K}=\mathbf{C}(\mathsf{Mod}\text{-}R)\), the question reduces to showing that \(\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{Mod}\text{-}R)}(S^{\bullet},X^{\bullet}[-1])=0\) for any complex \(S^{\bullet}\) with the terms belonging to \(\mathsf{S}\) and any complex \(X^{\bullet}\) as in the theorem. Using Lemma 2.3 again, we conclude that it suffices to show that any morphism of complexes \(S^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero. Finally, we observe that all finitely presented \(R\)-modules are pure-projective (by the definitions), while any acyclic complex of modules with the modules of cocycles in \(\mathsf{B}\cap\mathsf{C}\) is pure acyclic (by Lemma 2.10). Thus any morphism of complexes \(S^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero by the Neeman-Stovicek theorem (Theorem 2.7).

Proof of Theorem 2.9(a).: It is clear that in the assumptions of the theorem all the modules from \(\mathsf{S}\) have to be strongly finitely presented (\(\operatorname{FP}_{\infty}\)). Thus [3, Theorem 2.3 and Corollary 2.4] or [22, Theorem 8.40, Corollary 8.42, and Theorem 6.19] are applicable, telling that \((\mathsf{C},\mathsf{D})\) is a complete cotorsion pair in \(\mathsf{Mod}\)-\(R\). The cotorsion pair \((\mathsf{A},\mathsf{B})\) is complete in \(\mathsf{Mod}\)-\(R\) by the Eklof-Trlifaj theorem (Theorem 2.5(a)). Both the cotorsion pairs \((\mathsf{A},\mathsf{B})\) and \((\mathsf{C},\mathsf{D})\) are hereditary, as it was explained in the proof of Theorem 0(b) in Section 1.
Applying Lemma 2.2 to the abelian category \(\mathsf{K}=\mathsf{Mod}\)-\(R\) and the full subcategory \(\mathsf{E}=\mathsf{C}\), we conclude that \((\mathsf{A},\,\mathsf{B}\cap\mathsf{C})\) is a hereditary complete cotorsion pair in the exact category \(\mathsf{C}\). Lemma 2.1 tells that there are enough injective objects in the exact category \(\mathsf{C}\), and the class of such injective objects coincides with the intersection \(\mathsf{C}\cap\mathsf{D}\). Let \(A^{\bullet}\) be an acyclic complex of modules from \(\mathsf{A}\). Then one can easily see that the modules of cocycles of \(A^{\bullet}\) belong to \(\mathsf{A}\) if and only if the complex of abelian groups \(\operatorname{Hom}_{R}(A^{\bullet},B)\) is acyclic for any module \(B\in\mathsf{B}\). This holds because \((\mathsf{A},\mathsf{B})\) is a cotorsion pair in \(\mathsf{Mod}\)-\(R\) (so \(\mathsf{A}=\mbox{${}^{\perp_{1}}$}\mathsf{B}\)). We will use a version of this observation made within the exact category \(\mathsf{C}\). So let \(A^{\bullet}\) be an acyclic complex of modules from \(\mathsf{A}\) with the modules of cocycles belonging to \(\mathsf{C}\). Then we observe that the modules of cocycles of \(A^{\bullet}\) belong to \(\mathsf{A}\) if and only if the complex \(\operatorname{Hom}_{R}(A^{\bullet},B)\) is acyclic for any module \(B\in\mathsf{B}\cap\mathsf{C}\). This holds because \((\mathsf{A},\,\mathsf{B}\cap\mathsf{C})\) is a cotorsion pair in \(\mathsf{C}\), so \(\mathsf{A}=\mathsf{C}\cap\mbox{${}^{\perp_{1}}$}(\mathsf{B}\cap\mathsf{C})\). Now let \(D^{\bullet}\) be an injective resolution of the object \(B\) in the exact category \(\mathsf{C}\). So we have \(B\in\mathsf{B}\cap\mathsf{C}\) by assumption, and \(0\longrightarrow B\longrightarrow D^{0}\longrightarrow D^{1} \longrightarrow D^{2}\longrightarrow\cdots\) is an acyclic complex in \(\mathsf{Mod}\)-\(R\) with the modules \(D^{n}\in\mathsf{C}\cap\mathsf{D}\) and the modules of cocycles belonging to \(\mathsf{C}\). We observe that the modules of cocycles of the complex \(D^{\bullet}\) actually belong to \(\mathsf{B}\cap\mathsf{C}\), because \(\mathsf{D}\subset\mathsf{B}\) and the class \(\mathsf{B}\) is closed under cokernels of monomorphisms. Essentially, this is a restatement of the claim that the cotorsion pair \((\mathsf{A},\mathsf{B})\) is hereditary in \(\mathsf{Mod}\)-\(R\), or more specifically, that the cotorsion pair \((\mathsf{A},\mathsf{B}\cap\mathsf{C})\) is hereditary in \(\mathsf{C}\). Denote by \(X^{\bullet}\) the acyclic complex \((B\to D^{\bullet})\). Then the complex of abelian groups \(\operatorname{Hom}_{R}(A^{\bullet},X^{\bullet})\) is acyclic by Theorem 2.9(b) (which we have proved above). This holds because \(A^{\bullet}\) is a complex with the terms in \(\mathsf{A}\), while \(X^{\bullet}\) is an acyclic complex with the terms in \(\mathsf{B}\cap\mathsf{C}\) and the modules of cocycles in \(\mathsf{B}\cap\mathsf{C}\). On the other hand, the complex of abelian groups \(\operatorname{Hom}_{R}(A^{\bullet},D^{\bullet})\) is acyclic as well. This holds quite generally for any acyclic complex \(A^{\bullet}\) and any bounded below complex of injective objects \(D^{\bullet}\) in any exact category \(\mathsf{C}\). Notice that in the situation at hand the complex of modules \(A^{\bullet}\) is acyclic in the exact category \(\mathsf{C}\), as its modules of cocycles belong to \(\mathsf{C}\) by assumption. 
Since both the complexes \(\operatorname{Hom}_{R}(A^{\bullet},X^{\bullet})\) and \(\operatorname{Hom}_{R}(A^{\bullet},D^{\bullet})\) are acyclic, and the complex \(X^{\bullet}\) has the form \(X^{\bullet}=(B\to D^{\bullet})\), we can finally conclude that the complex of abelian groups \(\operatorname{Hom}_{R}(A^{\bullet},B)\) is acyclic.

**Corollary 2.11**.: _Let \(R\) be a ring and \(\mathsf{S}\) be a class of finitely presented \(R\)-modules, containing the free \(R\)-module \(R\) and closed under extensions and the kernels of epimorphisms. Put \(\mathsf{A}={}^{\perp_{1}}(\mathsf{C}^{\perp_{1}})\) and \(\mathsf{C}=\varinjlim\mathsf{S}\). Then, for any short exact sequence \((*)\) as in Section 0.0 with modules \(L\in\mathsf{A}\) and \(M\in\mathsf{C}\), one has \(M\in\mathsf{A}\). In other words, any \(\mathsf{A}\)-periodic module belonging to \(\mathsf{C}\) actually belongs to \(\mathsf{A}\)._

Proof.: This is a corollary of Theorem 2.9(a), provable by splicing up a doubly unbounded sequence of short exact sequences \((*)\) and applying Theorem 2.9(a) to the resulting doubly unbounded complex. (Cf. [18, Proposition 1].)

Proof of Theorem 0(a) from Section 0.1.: The first assertion of Theorem 0(a) is provided by Corollary 2.11, and the second one by Theorem 2.9(a).

## 3. Direct Limit Closures of Classes of Finitely Presentables

The aim of this section is to prove Theorems A(i) and B(ii). They are restated below as Propositions 3.3 and 3.5. In this section we work with _locally finitely presentable_ abelian categories. We suggest the book [1] as a general reference source on nonadditive locally (finitely) presentable and (finitely) accessible categories. The definitions of a finitely presentable object and a locally finitely presentable category can be found in [1, Definitions 1.1 and 1.9, and Theorem 1.11] (it is helpful to keep in mind that in abelian categories the notions of a generator and a strong generator coincide). All locally finitely presentable abelian categories have exact direct limit functors, so they are Grothendieck [1, Proposition 1.59]. The abelian category of modules over an arbitrary ring \(\mathsf{K}=\mathsf{Mod}\text{--}R\) is an important example of a locally finitely presentable abelian category.

Finitely accessible categories [1, Definition 2.1 and Remark 2.2(1)] form a wider class than the locally finitely presentable ones. The theory of finitely accessible additive categories goes back to the paper [25, Section 2] (where they were not defined yet). Subsequently they were studied in the papers [15, 24] under the name of "locally finitely presented" additive categories. We suggest [32, Sections 8.1-8.2] as an additional reference source on locally finitely presentable abelian categories. Our proof of Theorem A(i) (Proposition 3.3 below) is a slight generalization of the argument in [32, Proposition 8.4].

_Locally finitely generated_ categories also form a wider class than the locally finitely presentable ones. We refer to [1, Section 1.E] for a general discussion of locally generated (nonadditive) categories and to [31, Corollary 9.6] for a very general form of the assertion that any locally finitely generated abelian category is Grothendieck. A good reference source on locally finitely generated Grothendieck categories and finitely generated/finitely presentable objects in them is [41, §V.3].

The following definitions are very general. Let \(\mathsf{K}\) be a category with direct limits.
An object \(S\in\mathsf{K}\) is said to be _finitely presentable_ if the functor \(\operatorname{Hom}_{\mathsf{K}}(S,-)\colon\mathsf{K}\longrightarrow\mathsf{Sets}\) preserves direct limits. An object \(S\) is said to be _finitely generated_ if the same functor preserves the direct limits of diagrams of monomorphisms. An abelian category \(\mathsf{K}\) with set-indexed coproducts is said to be _locally finitely generated_ if it has a set of generators consisting of finitely generated objects. In particular, the category \(\mathsf{K}\) is _locally finitely presentable_ if it has a set of generators consisting of finitely presentable objects.

Given a locally finitely presentable abelian category \(\mathsf{K}\), we denote by \(\mathsf{K}_{\mathsf{fp}}\subset\mathsf{K}\) the full subcategory of finitely presentable objects in \(\mathsf{K}\). The full subcategory \(\mathsf{K}_{\mathsf{fp}}\) is closed under cokernels [1, Proposition 1.3] and extensions [32, Lemma 8.1] in \(\mathsf{K}\). Similarly, the full subcategory of finitely generated objects in a locally finitely generated abelian category \(\mathsf{K}\) is closed under extensions and quotients [41, Lemma V.3.1 and Proposition V.3.2].

**Proposition 3.1**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\subset\mathsf{K}_{\mathsf{fp}}\) be a class of finitely presentable objects closed under finite direct sums. Then the class of objects \(\varinjlim\mathsf{S}\subset\mathsf{K}\) is closed under coproducts and direct limits in \(\mathsf{K}\). An object \(L\in\mathsf{K}\) belongs to \(\varinjlim\mathsf{S}\) if and only if, for any object \(T\in\mathsf{K}_{\mathsf{fp}}\), any morphism \(T\longrightarrow L\) in \(\mathsf{K}\) factorizes through an object from \(\mathsf{S}\)._

Proof.: This is [25, Proposition 2.1], [15, Section 4.1], or [24, Proposition 5.11].

**Corollary 3.2**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\subset\mathsf{K}_{\mathsf{fp}}\) be a class of finitely presentable objects closed under finite direct sums. Let \((H_{i})_{i\in I}\) be a direct system of objects \(H_{i}\in\varinjlim\mathsf{S}\), indexed by a directed poset \(I\). Then the kernel of the natural epimorphism_ \[\coprod_{i\in I}H_{i}\,\longrightarrow\,\varinjlim_{i\in I}H_{i} \tag{4}\] _belongs to \(\varinjlim\mathsf{S}\)._

Proof.: One can argue from purity considerations, observing that the epimorphism (4) is pure (in the sense of the definition from [15, Section 3], [44, Section 4] reproduced below in Section 5) and the class \(\varinjlim\mathsf{S}\) is closed under pure subobjects by the category-theoretic version of [25, Proposition 2.2]. Alternatively, one can notice that the kernel of (4) is a direct limit of coproducts of copies of the objects \(H_{i}\), following [6, proof of Proposition 4.1]; then it remains to refer to Proposition 3.1.

The following definitions appeared in the papers [21, 9]. Let \(\mathsf{K}\) be a Grothendieck category and \(n\geq 1\) be an integer. An object \(S\in\mathsf{K}\) is said to be _of type_ \(\operatorname{FP}_{n}\) if the functors \(\operatorname{Ext}^{i}_{\mathsf{K}}(S,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) preserve direct limits for all \(0\leq i\leq n-1\). So the objects of type \(\operatorname{FP}_{1}\) are, by the definition, the finitely presentable ones, while the objects of type \(\operatorname{FP}_{n}\) for \(n\geq 2\) form more narrow classes.
An object \(S\in\mathsf{K}\) is said to be _of type_ \(\operatorname{FP}_{0}\) if it is finitely generated. An object \(S\) is said to be _of type_ \(\operatorname{FP}_{\infty}\) if it is of type \(\operatorname{FP}_{n}\) for every \(n\geq 0\), that is, in other words, the functors \(\operatorname{Ext}^{i}_{\mathsf{K}}(S,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) preserve direct limits for all \(i\geq 0\). We use the term _strongly finitely presentable object_ as a synonym for "type \(\operatorname{FP}_{\infty}\)". In the case of the module category \(\mathsf{K}=\mathsf{Mod}\)-\(R\), these definitions are equivalent to the ones from Section 1 (see [9, Corollary 2.14]). Closure properties of the classes of objects of type \(\operatorname{FP}_{n}\) and \(\operatorname{FP}_{\infty}\) in locally finitely presentable abelian categories \(\mathsf{K}\) are listed in [21, Corollary 3.3] and [9, Proposition 2.8]. In particular, [9, Proposition 2.8(1)] tells that the class of all objects of type \(\operatorname{FP}_{n}\) is closed under extensions in \(\mathsf{K}\).

**Proposition 3.3**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\) be a class of (some) objects of type \(\operatorname{FP}_{2}\) closed under extensions in \(\mathsf{K}\). Then the class of objects \(\varinjlim\mathsf{S}\) is also closed under extensions in \(\mathsf{K}\)._

Proof.: We follow the proof of [32, Proposition 8.4]. Given an abelian category \(\mathsf{K}\) and two classes of objects \(\mathsf{X}\), \(\mathsf{Y}\subset\mathsf{K}\), denote by \(\mathsf{X}*\mathsf{Y}\) the class of all objects \(Z\in\mathsf{K}\) for which there exists a short exact sequence \(0\longrightarrow X\longrightarrow Z\longrightarrow Y\longrightarrow 0\) in \(\mathsf{K}\) with \(X\in\mathsf{X}\) and \(Y\in\mathsf{Y}\). In the situation at hand, we need to prove that \(\varinjlim\mathsf{S}*\varinjlim\mathsf{S}\subset\varinjlim\mathsf{S}\). For this purpose, we claim that the three inclusions \[(\varinjlim\mathsf{S})*(\varinjlim\mathsf{S})\,\subset\,\varinjlim\bigl((\varinjlim\mathsf{S})*\mathsf{S}\bigr)\,\subset\,\varinjlim\varinjlim(\mathsf{S}*\mathsf{S})\,\subset\,\varinjlim\varinjlim\mathsf{S}\,=\,\varinjlim\mathsf{S} \tag{5}\] hold. The leftmost inclusion in (5) holds because the functors of direct limit are exact in \(\mathsf{K}\): given a short exact sequence \(0\longrightarrow X\longrightarrow Z\longrightarrow Y\longrightarrow 0\) with \(X\), \(Y\in\varinjlim\mathsf{S}\) and a presentation \(Y=\varinjlim_{i\in I}S_{i}\) with \(S_{i}\in\mathsf{S}\), the object \(Z\) is the direct limit of the pullbacks of the epimorphism \(Z\longrightarrow Y\) along the morphisms \(S_{i}\longrightarrow Y\). The rightmost inclusion holds because the class \(\mathsf{S}\) is closed under extensions in \(\mathsf{K}\), so \(\mathsf{S}*\mathsf{S}\subset\mathsf{S}\), while \(\varinjlim\varinjlim\mathsf{S}=\varinjlim\mathsf{S}\) by Proposition 3.1. Finally, the middle inclusion in (5) is provable similarly to the proof in [32]. Let \(\mathsf{K}\) be a Grothendieck abelian category, \(\mathsf{X}\subset\mathsf{K}\) be a class of objects, and \(\mathsf{T}\subset\mathsf{K}\) be a class of objects such that the functor \(\operatorname{Ext}_{\mathsf{K}}^{1}(T,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) preserves direct limits for all objects \(T\in\mathsf{T}\). We claim that the inclusion \((\varinjlim\mathsf{X})*\mathsf{T}\subset\varinjlim(\mathsf{X}*\mathsf{T})\) holds. The argument from [32, second part of the proof of Proposition 8.4] applies.
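As an illustration of Proposition 3.3 (this example is only meant as an illustration and is not used below), take \(\mathsf{K}=\mathsf{Mod}\text{--}R\) and let \(\mathsf{S}\) be the class of finitely generated projective \(R\)-modules. These are objects of type \(\operatorname{FP}_{\infty}\), and the class \(\mathsf{S}\) is closed under extensions, since any extension of finitely generated projective modules splits. By the Govorov-Lazard theorem, \[\varinjlim\mathsf{S}\,=\,\{\text{flat right }R\text{-modules}\}\,\subset\,\mathsf{Mod}\text{--}R,\] so in this special case Proposition 3.3 recovers the well-known fact that the class of flat \(R\)-modules is closed under extensions in \(\mathsf{Mod}\text{--}R\).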
**Lemma 3.4**.: _Let \(\mathsf{K}\) be a Grothendieck category and \(\mathsf{S}\subset\mathsf{K}\) be a class of finitely generated objects containing a set of generators of \(\mathsf{K}\) and closed under finite direct sums and kernels of epimorphisms. Then all the objects from \(\mathsf{S}\) are strongly finitely presentable (type \(\operatorname{FP}_{\infty}\))._ Proof.: First of all, the category \(\mathsf{K}\) is locally finitely generated, since it has a set of finitely generated generators by assumption. In this context, it is explained in [41, Proposition V.3.4] that an object \(S\in\mathsf{K}\) is finitely presentable if and only if the kernel of any epimorphism onto \(S\) from any finitely generated object \(T\in\mathsf{K}\) is finitely generated. Following the argument in [41], based on [41, Lemma V.3.3], one can see that it suffices to let \(T\) range over the quotient objects of finite direct sums of objects from a chosen set \(\mathsf{G}\) of finitely generated generators of \(\mathsf{K}\). Furthermore, the passage to the quotients holds automatically, and so it suffices to let \(T\) range over the finite direct sums of objects from \(\mathsf{G}\). In the situation at hand, choosing \(\mathsf{G}\subset\mathsf{S}\), we conclude that all the objects in \(\mathsf{S}\) are finitely presentable. The rest of the proof is similar to [32, proof of Lemma 8.3]. For any two objects \(X\) and \(Y\) in any abelian category \(\mathsf{K}\), the abelian group \(\operatorname{Ext}_{\mathsf{K}}^{n}(X,Y)\) can be computed as the filtered direct limit of cohomology groups \(H^{n}\operatorname{Hom}_{\mathsf{K}}(R_{\bullet},Y)\), taken over the (large) filtered category of exact complexes \(\cdots\longrightarrow R_{2}\longrightarrow R_{1}\longrightarrow R_{0} \longrightarrow X\longrightarrow 0\) in \(\mathsf{K}\). Here the morphisms in the category of such arbitrary resolutions \(R_{\bullet}\longrightarrow X\) are the usual morphisms of complexes acting by the identity maps on the object \(X\) and viewed up to cochain homotopy. In the situation at hand, for any object \(S\in\mathsf{S}\), the full subcategory of resolutions \(T_{\bullet}\longrightarrow S\) consisting of objects \(T_{i}\in\mathsf{S}\) is cofinal in the category of all resolutions \(R_{\bullet}\longrightarrow S\) with \(R_{i}\in\mathsf{K}\). So one can compute the group \(\operatorname{Ext}_{\mathsf{K}}^{n}(S,Y)\) as the filtered direct limit of \(H^{n}\operatorname{Hom}_{\mathsf{K}}(T_{\bullet},Y)\) taken over all the resolutions \(T_{\bullet}\longrightarrow S\) with \(T_{i}\in\mathsf{S}\). Now, since all the objects of \(\mathsf{S}\) are finitely presentable, the functor \(\operatorname{Hom}_{\mathsf{K}}(T_{\bullet},-)\) takes direct limits in \(\mathsf{K}\) to direct limits of complexes of abelian groups. It remains to recall that the functors of cohomology of a complex of abelian groups preserve direct limits, and direct limits commute with direct limits. **Proposition 3.5**.: _Let \(\mathsf{K}\) be a Grothendieck category and \(\mathsf{S}\subset\mathsf{K}\) be a class of finitely generated objects containing a set of generators of \(\mathsf{K}\) and closed under extensions and kernels of epimorphisms. Then the class of objects \(\varinjlim\mathsf{S}\subset\mathsf{K}\) is closed under coproducts, direct limits, extensions, and kernels of epimorphisms._ Proof.: By Lemma 3.4, all objects in \(\mathsf{S}\) are finitely presentable, and in fact even strongly finitely presentable. 
As the class \(\mathsf{S}\) contains a set of generators for \(\mathsf{K}\), it follows that the category \(\mathsf{K}\) is locally finitely presentable. So Proposition 3.1 is applicable, telling that the class \(\varinjlim\mathsf{S}\) is closed under coproducts and direct limits in \(\mathsf{K}\). Furthermore, Proposition 3.3 tells that the class \(\varinjlim\mathsf{S}\) is closed under extensions. It remains to prove its closedness under kernels of epimorphisms.

Let \(C\longrightarrow D\) be an epimorphism in \(\mathsf{K}\) between two objects \(C\), \(D\in\varinjlim\mathsf{S}\). Then there exists a direct system \((S_{i})_{i\in I}\), indexed by a directed poset \(I\), such that \(S_{i}\in\mathsf{S}\) for all \(i\in I\) and \(D=\varinjlim_{i\in I}S_{i}\). For every \(i\in I\), consider the pullback diagram \[\begin{matrix}C_{i}&\longrightarrow&S_{i}\\ \downarrow&&\downarrow\\ C&\longrightarrow&D\end{matrix} \tag{6}\] Here \(C_{i}\) is the pullback of the given epimorphism \(C\longrightarrow D\) and the natural morphism to the direct limit \(S_{i}\longrightarrow D\). As the index \(i\in I\) varies, the upper lines of (6) form a direct system of (epi)morphisms in \(\mathsf{K}\), whose direct limit is the epimorphism \(C\longrightarrow D\) in the lower line of the diagram.

Choose a set of generators \(\mathsf{G}\subset\mathsf{S}\) of the category \(\mathsf{K}\), and put \(H=\coprod_{G\in\mathsf{G}}G\). For every index \(i\in I\), denote by \(\Xi_{i}\) the underlying set of the image of the abelian group map \(\operatorname{Hom}_{\mathsf{K}}(H,C_{i})\longrightarrow\operatorname{Hom}_{\mathsf{K}}(H,S_{i})\) induced by the morphism \(C_{i}\longrightarrow S_{i}\). Then, for every pair of indices \(i<j\in I\), the transition morphism \(S_{i}\longrightarrow S_{j}\) induces a map of sets \(\Xi_{i}\longrightarrow\Xi_{j}\). So we obtain a direct system of sets \((\Xi_{i})_{i\in I}\). For any object \(K\in\mathsf{K}\) and any set \(\Xi\), let us denote by \(K^{(\Xi)}\) the coproduct of \(\Xi\) copies of \(K\) in \(\mathsf{K}\). Notice that the assignment \((K,\Xi)\longmapsto K^{(\Xi)}\) is a covariant functor \(\mathsf{K}\times\mathsf{Sets}\longrightarrow\mathsf{K}\) (i. e., a covariant functor of both the arguments \(K\in\mathsf{K}\) and \(\Xi\in\mathsf{Sets}\)).

For every index \(i\in I\) we have a natural morphism \(h_{i}\colon H^{(\Xi_{i})}\longrightarrow S_{i}\) in \(\mathsf{K}\). Since \(H\) is a generator of the category \(\mathsf{K}\), and the morphism \(C_{i}\longrightarrow S_{i}\) is an epimorphism, the morphism \(h_{i}\) is an epimorphism in \(\mathsf{K}\) as well. As the index \(i\) varies, the morphisms \(h_{i}\) form a direct system \((h_{i})_{i\in I}\) in the category of morphisms in \(\mathsf{K}\).

Let us show that the kernel \(L_{i}\) of the morphism \(h_{i}\) belongs to \(\varinjlim\mathsf{S}\). The object \(H^{(\Xi_{i})}\) is the coproduct of copies of all the objects \(G\in\mathsf{G}\), each of them taken with the multiplicity \(\Xi_{i}\). Since the object \(S_{i}\) is finitely generated, there exists a finite subcoproduct in this coproduct mapping epimorphically onto \(S_{i}\). So we have a direct sum decomposition \(H^{(\Xi_{i})}=H^{\prime}_{i}\oplus H^{\prime\prime}_{i}\), where \(H^{\prime}_{i}\) is a finite direct sum of objects from \(\mathsf{G}\) and the restriction of \(h_{i}\) onto \(H^{\prime}_{i}\) is an epimorphism \(h^{\prime}_{i}\colon H^{\prime}_{i}\longrightarrow S_{i}\). Denote by \(K_{i}\) the kernel of \(h^{\prime}_{i}\).
We have constructed a pushout diagram \[\begin{matrix}0&\longrightarrow&K_{i}&\longrightarrow&H^{\prime}_{i}&\longrightarrow&S_{i}&\longrightarrow&0\\ &&\downarrow&&\downarrow&&\|&&\\ 0&\longrightarrow&L_{i}&\longrightarrow&H^{(\Xi_{i})}&\longrightarrow&S_{i}&\longrightarrow&0\end{matrix}\] with exact rows. Now \(H^{\prime}_{i}\in\mathsf{S}\), since the class \(\mathsf{S}\) is closed under finite direct sums and \(\mathsf{G}\subset\mathsf{S}\). Hence \(K_{i}\in\mathsf{S}\), as the class \(\mathsf{S}\) is closed under kernels of epimorphisms. On the other hand, \(H^{\prime\prime}_{i}\in\varinjlim\mathsf{S}\), since the class \(\varinjlim\mathsf{S}\) is closed under coproducts. As we already know that the class \(\varinjlim\mathsf{S}\) is closed under extensions, we can conclude from the short exact sequence \(0\longrightarrow K_{i}\longrightarrow L_{i}\longrightarrow H^{\prime\prime}_{i}\longrightarrow 0\) that \(L_{i}\in\varinjlim\mathsf{S}\).

Passing to the direct limit of \(h_{i}\) over \(i\in I\), we see that the kernel of the epimorphism \[\varinjlim_{i\in I}H^{(\Xi_{i})}\longrightarrow\varinjlim_{i\in I}S_{i}=D\] belongs to \(\varinjlim\varinjlim\mathsf{S}=\varinjlim\mathsf{S}\). We already know from Corollary 3.2 that the kernel of the epimorphism (4) (for \(H_{i}=H^{(\Xi_{i})}\)) belongs to \(\varinjlim\mathsf{S}\). Since the class \(\varinjlim\mathsf{S}\) is closed under extensions, it follows that the kernel \(M\) of the composition of epimorphisms \[H=\coprod_{i\in I}H^{(\Xi_{i})}\longrightarrow\varinjlim_{i\in I}H^{(\Xi_{i})}\longrightarrow D\] belongs to \(\varinjlim\mathsf{S}\).

The final observation is that the epimorphism \(H=\coprod_{i\in I}H^{(\Xi_{i})}\longrightarrow D\) factorizes through the epimorphism \(C\longrightarrow D\), essentially due to the construction of the sets \(\Xi_{i}\) in the beginning of this proof. Now we consider the pullback diagram \[\begin{matrix}0&\longrightarrow&N&\longrightarrow&X&\longrightarrow&H&\longrightarrow&0\\ &&\|&&\downarrow&&\downarrow&&\\ 0&\longrightarrow&N&\longrightarrow&C&\longrightarrow&D&\longrightarrow&0\end{matrix}\] where \(X\) is the pullback of the pair of epimorphisms \(C\longrightarrow D\) and \(H\longrightarrow D\), while \(N\) is the kernel of the morphism \(C\longrightarrow D\). Since the epimorphism \(H\longrightarrow D\) factorizes through the epimorphism \(C\longrightarrow D\), the short exact sequence \(0\longrightarrow N\longrightarrow X\longrightarrow H\longrightarrow 0\) splits. We have \(M\in\varinjlim\mathsf{S}\) and \(C\in\varinjlim\mathsf{S}\), so it follows from the short exact sequence \(0\longrightarrow M\longrightarrow X\longrightarrow C\longrightarrow 0\) that \(X\in\varinjlim\mathsf{S}\). It remains to notice that the class \(\varinjlim\mathsf{S}\) is closed under direct summands (since it is closed under direct limits) in \(\mathsf{K}\). So \(N\in\varinjlim\mathsf{S}\), as \(N\) is a direct summand of \(X\).

We conclude the section by presenting formal proofs of Theorems A(i) and B(ii).

Proof of Theorem A(i) from Section 0.2.: This is precisely the assertion of Proposition 3.3.

Proof of Theorem B(ii) from Section 0.3.: Applying Proposition 3.5 to the class \(\mathsf{S}\cup\mathsf{T}\subset\mathsf{K}\), we see that the class \(\varinjlim(\mathsf{S}\cup\mathsf{T})\) is closed under coproducts, direct limits, extensions, and kernels of epimorphisms in \(\mathsf{K}\). So \(\varinjlim(\mathsf{S}\cup\mathsf{T})\) is precisely the class \(\mathsf{C}\) as defined in the formulation of Theorem B.

## 4. Exact Categories of Grothendieck Type

In this section we recall some basic concepts of the theory of _efficient exact categories_ and _exact categories of Grothendieck type_, as developed by Saorin and Stovicek [35, 43]. The exposition in Stovicek's paper [43] is particularly convenient as a reference source for our purposes. Let \(\mathsf{E}\) be a category.
By a _well-ordered chain_ (of morphisms) in \(\mathsf{E}\) one means a direct system \((f_{\beta,\gamma}\colon E_{\gamma}\to E_{\beta})_{0\leq\gamma<\beta<\alpha}\) in \(\mathsf{E}\) indexed by an ordinal \(\alpha\). A well-ordered chain \((E_{\beta})_{0\leq\beta<\alpha}\) is said to be _smooth_ if \(E_{\beta}=\varinjlim_{\gamma<\beta}E_{\gamma}\) for all limit ordinals \(0<\beta<\alpha\). If the direct limit \(E_{\alpha}=\varinjlim_{\beta<\alpha}E_{\beta}\) exists in \(\mathsf{E}\), then the natural morphism \(E_{0}\longrightarrow E_{\alpha}\) is said to be the _composition_ of the smooth chain \((E_{\beta})_{0\leq\beta<\alpha}\). The morphism \(E_{0}\longrightarrow E_{\alpha}\) is also called the _transfinite composition_ of the morphisms \(E_{\beta}\longrightarrow E_{\beta+1}\), where \(0\leq\beta<\alpha\).

Let \(\mathsf{E}\) be a category, \(\mathsf{D}\) be a class of morphisms in \(\mathsf{E}\), and \(\kappa\) be a regular cardinal. An object \(X\in\mathsf{E}\) is said to be _\(\kappa\)-small relative to \(\mathsf{D}\)_ if, for any smooth chain \((E_{\beta})_{0\leq\beta<\alpha}\) indexed by an ordinal \(\alpha\) of cofinality \(\geq\kappa\) with the morphisms \(E_{\beta}\longrightarrow E_{\beta+1}\) belonging to \(\mathsf{D}\) for all \(0\leq\beta<\alpha\) and the direct limit \(E_{\alpha}=\varinjlim_{\beta<\alpha}E_{\beta}\), the induced map of sets \[\varinjlim_{\beta<\alpha}\operatorname{Hom}_{\mathsf{E}}(X,E_{\beta})\,\longrightarrow\,\operatorname{Hom}_{\mathsf{E}}(X,E_{\alpha})\] is bijective.

Now let \(\mathsf{E}\) be an exact category. Following [35, 43], the exact category \(\mathsf{E}\) is said to be _efficient_ if it is weakly idempotent complete, all transfinite compositions of admissible monomorphisms exist in \(\mathsf{E}\) and are admissible monomorphisms, every object of \(\mathsf{E}\) is \(\kappa\)-small relative to the class of all admissible monomorphisms for some regular cardinal \(\kappa\), and the category \(\mathsf{E}\) has a generator. Given a class of objects \(\mathsf{S}\subset\mathsf{E}\), an object \(F\in\mathsf{E}\) is said to be _filtered by_ \(\mathsf{S}\), or a _transfinitely iterated extension_ of objects from \(\mathsf{S}\), if the morphism \(0\longrightarrow F\) is the transfinite composition of a smooth chain of admissible monomorphisms \(F_{\beta}\longrightarrow F_{\beta+1}\) whose cokernels are isomorphic to objects from \(\mathsf{S}\). The class of all objects in \(\mathsf{E}\) filtered by (objects isomorphic to) objects from \(\mathsf{S}\) is denoted by \(\mathsf{Fil}(\mathsf{S})\subset\mathsf{E}\).

The next definition is taken from [43, Definition 3.11]. An _exact category of Grothendieck type_ is an efficient exact category \(\mathsf{E}\) satisfying the following additional axiom: the category \(\mathsf{E}\) is deconstructible in itself, i. e., there exists a _set_ of objects \(\mathsf{S}\subset\mathsf{E}\) such that \(\mathsf{E}=\mathsf{Fil}(\mathsf{S})\). The following result is important for our purposes.

**Theorem 4.1**.: _Any exact category of Grothendieck type has enough injective objects._

Proof.: This is [43, Corollary 5.9].

The next lemma is an exact category version of [5, Lemma 1.4].

**Lemma 4.2**.: _Let \(\mathsf{E}\) be an exact category and \(\mathsf{T}\subset\mathsf{E}\) be a class of objects. Put \(\mathsf{B}=\mathsf{T}^{\perp_{\geq 1}}\), and assume that every object of \(\mathsf{E}\) is an admissible subobject of an object from \(\mathsf{B}\) (in particular, this holds if there are enough injective objects in \(\mathsf{E}\)). Then_

(a) \({}^{\perp_{1}}\mathsf{B}={}^{\perp_{\geq 1}}\mathsf{B}\subset\mathsf{E}\)_;_

(b) _if the class \(\mathsf{A}={}^{\perp_{1}}\mathsf{B}={}^{\perp_{\geq 1}}\mathsf{B}\) is generating in \(\mathsf{E}\), then \((\mathsf{A},\mathsf{B})\) is a hereditary cotorsion pair in \(\mathsf{E}\) (as defined in Section 1)._

Proof.: The argument from [5, Lemma 1.4] applies. All the injective objects of \(\mathsf{E}\) always belong to \(\mathsf{B}\); so if there are enough such injective objects, then every object of \(\mathsf{E}\) is an admissible subobject of an object from \(\mathsf{B}\). In part (b), it is helpful to keep in mind that the class \({}^{\perp_{1}}\mathsf{B}\) is closed under coproducts in \(\mathsf{E}\) for any class \(\mathsf{B}\subset\mathsf{E}\) [12, Corollary 8.3], [14, Corollary A.2]. So the conditions that any object of \(\mathsf{E}\) is an admissible epimorphic image of an object from \(\mathsf{A}\) and that it is an admissible epimorphic image of a coproduct of objects from \(\mathsf{A}\) are equivalent.

The following version of the Eklof-Trlifaj theorem for efficient exact categories was obtained in the papers [35, 43].

**Theorem 4.3**.: _Let \(\mathsf{E}\) be an efficient exact category and \((\mathsf{A},\mathsf{B})\) be the cotorsion pair generated by a set of objects \(\mathsf{S}\subset\mathsf{E}\).
Then_

(a) _If the class \(\mathsf{A}\) is generating in \(\mathsf{E}\), then the cotorsion pair \((\mathsf{A},\mathsf{B})\) is complete._

(b) _If the class \(\mathsf{Fil}(\mathsf{S})\) is generating in \(\mathsf{E}\), then \(\mathsf{A}=\mathsf{Fil}(\mathsf{S})^{\oplus}\)._

Proof.: This is [35, Corollary 2.15] or [43, Theorem 5.16].

The next proposition is the efficient exact category version of [5, Proposition 1.5].

**Proposition 4.4**.: _Let \(\mathsf{E}\) be an efficient exact category and \(\mathsf{T}\subset\mathsf{E}\) be a set of objects. Put \(\mathsf{B}=\mathsf{T}^{\perp_{\geq 1}}\), and assume that the class \(\mathsf{B}\) is cogenerating in \(\mathsf{E}\) (in particular, this holds if \(\mathsf{E}\) is an exact category of Grothendieck type). Put \(\mathsf{A}={}^{\perp_{1}}\mathsf{B}={}^{\perp_{\geq 1}}\mathsf{B}\), as per Lemma 4.2, and assume that the class \(\mathsf{A}\) is generating in \(\mathsf{E}\). Then \((\mathsf{A},\mathsf{B})\) is a hereditary complete cotorsion pair in \(\mathsf{E}\) generated by a certain set of objects \(\mathsf{S}\)._

Proof.: If \(\mathsf{E}\) is of Grothendieck type, then there are enough injective objects in \(\mathsf{E}\) by Theorem 4.1. In view of Lemma 4.2 and Theorem 4.3, we only need to construct a set of objects \(\mathsf{S}\subset\mathsf{E}\) such that \(\mathsf{S}^{\perp_{1}}=\mathsf{T}^{\perp_{\geq 1}}\). Clearly, we have \(\mathsf{T}\subset\mathsf{A}\). Arguing by induction similarly to [5, proof of Proposition 1.5], it suffices to show that for every object \(S\in\mathsf{A}\) and an integer \(n\geq 2\) there exists a set of objects \(\mathsf{S}^{\prime}\subset\mathsf{A}\) such that for any given \(X\in\mathsf{E}\) one has \(\operatorname{Ext}_{\mathsf{E}}^{n}(S,X)=0\) whenever \(\operatorname{Ext}_{\mathsf{E}}^{n-1}(S^{\prime},X)=0\) for all \(S^{\prime}\in\mathsf{S}^{\prime}\). Let \(\mathsf{J}_{S}\) be a set of admissible monomorphisms provided by [43, Proposition 5.3] for the object \(S\in\mathsf{E}\). For every short exact sequence \(0\longrightarrow E\xrightarrow{j}H\longrightarrow S\longrightarrow 0\) in \(\mathsf{E}\) with \(j\in\mathsf{J}_{S}\), choose an admissible epimorphism \(A\longrightarrow H\) onto \(H\) from an object \(A\in\mathsf{A}\) (cf. the proof of Lemma 4.2), and set \(S^{\prime}\) to be the kernel of the composition \(A\longrightarrow H\longrightarrow S\). Then one has \(S^{\prime}\in\mathsf{A}\), since \(\mathsf{A}\) is closed under the kernels of admissible epimorphisms (as the cotorsion pair \((\mathsf{A},\mathsf{B})\) in \(\mathsf{E}\) is hereditary). Let \(\mathsf{S}^{\prime}\) be the set of all objects \(S^{\prime}\) obtained in this way. For any Yoneda extension class \(\xi\in\operatorname{Ext}_{\mathsf{E}}^{n}(S,X)\), there exists a short exact sequence \(0\longrightarrow Z\longrightarrow Y\longrightarrow S\longrightarrow 0\) in \(\mathsf{E}\) such that the class \(\xi\) is the composition of the class \(\operatorname{Ext}^{1}\) represented by this short exact sequence with some Yoneda extension class \(\eta\in\operatorname{Ext}_{\mathsf{E}}^{n-1}(Z,X)\). By construction, any short exact sequence \(0\longrightarrow Z\longrightarrow Y\longrightarrow S\longrightarrow 0\) in \(\mathsf{E}\) is a pushout of a short exact sequence \(0\longrightarrow E\xrightarrow{j}H\longrightarrow S\longrightarrow 0\) with \(j\in\mathsf{J}_{S}\), which in turn is a pushout of a short exact sequence \(0\longrightarrow S^{\prime}\longrightarrow A\longrightarrow S\longrightarrow 0\) with \(A\in\mathsf{A}\) and \(S^{\prime}\in\mathsf{S}^{\prime}\). It follows easily that \(\operatorname{Ext}_{\mathsf{E}}^{n-1}(S^{\prime},X)=0\) for all \(S^{\prime}\in\mathsf{S}^{\prime}\) implies \(\operatorname{Ext}_{\mathsf{E}}^{n}(S,X)=0\).
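A basic example to keep in mind (included here only as an illustration): for any ring \(R\), the full subcategory of flat modules \(\mathsf{Mod}_{\mathsf{fl}}\text{--}R\subset\mathsf{Mod}\text{--}R\), endowed with the exact structure inherited from the abelian exact structure of \(\mathsf{Mod}\text{--}R\), is an exact category of Grothendieck type. Indeed, the class of flat \(R\)-modules is closed under direct summands and is deconstructible in \(\mathsf{Mod}\text{--}R\) (by [8, Lemma 1]; see also [22, Lemma 6.23]), so the assertion follows from Theorem 4.5 below.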
The next theorem of Stovicek plays a key role in our proof of Theorem A(iii).

**Theorem 4.5**.: _Let \(\mathsf{K}\) be a Grothendieck abelian category and \(\mathsf{E}\subset\mathsf{K}\) be a deconstructible class of objects (as defined in Section 2). Assume additionally that the full subcategory \(\mathsf{E}\) is closed under direct summands in \(\mathsf{K}\). Then the category \(\mathsf{E}\), endowed with the exact category structure inherited from the abelian exact structure of \(\mathsf{K}\), is an exact category of Grothendieck type._

Proof.: This is [43, Theorem 3.16].

**Lemma 4.6**.: _Let \(\mathsf{K}\) be an efficient exact category and \(\mathsf{E}\subset\mathsf{K}\) be a full subcategory closed under transfinitely iterated extensions, endowed with the inherited exact structure. Let \(\mathsf{S}\subset\mathsf{E}\) be a class of objects. Then the notation \(\mathsf{Fil}(\mathsf{S})\) is unambiguous: the class of all \(\mathsf{S}\)-filtered objects in \(\mathsf{E}\) coincides with the class of all \(\mathsf{S}\)-filtered objects in \(\mathsf{K}\)._

Proof.: This is a part of [45, Lemma 1.11] or [43, Lemma 3.18].

## 5. Pure Exact Structure and Deconstructibility

The aim of this section is to prove Theorem A(ii). We restate it now as the following Proposition 5.1. Let \(\mathsf{K}\) be a Grothendieck category. We will say that a class of objects \(\mathsf{F}\subset\mathsf{K}\) is _weakly deconstructible_ if there exists a set of objects \(\mathsf{T}\subset\mathsf{F}\) such that \(\mathsf{F}\subset\mathsf{Fil}(\mathsf{T})\). Clearly, a class of objects is deconstructible if and only if it is weakly deconstructible _and_ closed under transfinitely iterated extensions.

**Proposition 5.1**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of finitely presentable objects closed under finite direct sums. Then the class of objects \(\varinjlim\mathsf{S}\subset\mathsf{K}\) is weakly deconstructible._

The definition of the pure exact structure on the module category \(\mathsf{Mod}\)-\(R\) was already given in Section 1. It is extended to locally finitely presentable abelian categories \(\mathsf{K}\) as follows [15, Section 3], [44, Section 4]. A short exact sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) in \(\mathsf{K}\) is said to be _pure_ if the functor \(\operatorname{Hom}_{\mathsf{K}}(T,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) takes it to a short exact sequence of abelian groups for any finitely presentable object \(T\in\mathsf{K}\). The _pure exact structure_ on \(\mathsf{K}\) is defined by the collection of all pure short exact sequences. With the pure exact structure in mind, one can speak about _pure subobjects_, _pure quotients_, _pure monomorphisms_, _pure epimorphisms_, _pure-projective objects_, and _pure acyclic complexes_ in \(\mathsf{K}\).

For any small preadditive category \(\mathcal{S}\) (i. e., a small category enriched in abelian groups), we denote by \(\mathsf{Mod}\)-\(\mathcal{S}=\mathsf{Funct}_{\mathsf{ad}}(\mathcal{S}^{\mathsf{op}},\mathsf{Ab})\) the category of contravariant additive functors from \(\mathcal{S}\) to \(\mathsf{Ab}\), and by \(\mathcal{S}\)-\(\mathsf{Mod}=\mathsf{Funct}_{\mathsf{ad}}(\mathcal{S},\mathsf{Ab})\) the category of covariant additive functors \(\mathcal{S}\longrightarrow\mathsf{Ab}\).
A small preadditive category \(\mathcal{S}\) can be viewed as a "ring with many objects" or "a nonunital ring with enough idempotents"; then the objects of \(\mathsf{Mod}\)-\(\mathcal{S}\) and \(\mathcal{S}\)-\(\mathsf{Mod}\) are interpreted as right and left \(\mathcal{S}\)-modules. The abelian category \(\mathsf{Mod}\)-\(\mathcal{S}\) is locally finitely presentable and has enough projective objects. Representable functors play the role of free modules with one generator in \(\mathsf{Mod}\)-\(\mathcal{S}\), and the projective objects are the direct summands of coproducts of representables. There is a naturally defined tensor product functor \(\otimes_{\mathcal{S}}\colon\mathsf{Mod}\)-\(\mathcal{S}\times\mathcal{S}\)-\(\mathsf{Mod}\longrightarrow\mathsf{Ab}\), and its derived functor \(\operatorname{Tor}_{*}^{\mathcal{S}}\) can be constructed as usual. Hence one can speak of _flat_ right and left \(\mathcal{S}\)-modules. We denote the full subcategory of flat modules by \(\mathsf{Mod}_{\mathsf{fl}}\)-\(\mathcal{S}\subset\mathsf{Mod}\)-\(\mathcal{S}\) (cf. the discussion in [5, Section 2]).

Part (b) of the next proposition is a surprisingly nontrivial generalization of Lemma 2.8.

**Proposition 5.2**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of finitely presentable objects closed under finite direct sums. Then_

(a) _The full subcategory \(\mathsf{C}=\varinjlim\mathsf{S}\subset\mathsf{K}\) is closed under pure extensions (as well as pure subobjects and pure quotients) in \(\mathsf{K}\), so it inherits an exact category structure from the pure exact structure on \(\mathsf{K}\)._

(b) _In the inherited exact category structure on \(\mathsf{C}\), a short sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is exact if and only if the short sequence of abelian groups \(0\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,K)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,L)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,M)\longrightarrow 0\) is exact for every object \(S\in\mathsf{S}\)._

(c) _Denote by \(\mathcal{S}\) a small category equivalent to the full subcategory \(\mathsf{S}\subset\mathsf{K}\). Then there is a natural equivalence between the category \(\mathsf{C}\) and the category of flat right \(\mathcal{S}\)-modules, \(\mathsf{C}\simeq\mathsf{Mod}_{\mathsf{fl}}\)-\(\mathcal{S}\). Under this equivalence, the exact structure on \(\mathsf{C}\) inherited from the pure exact structure on \(\mathsf{K}\) corresponds to the exact structure on \(\mathsf{Mod}_{\mathsf{fl}}\)-\(\mathcal{S}\) inherited from the abelian exact structure on \(\mathsf{Mod}\)-\(\mathcal{S}\)._

Proof.: Part (a) is a straightforward generalization of [25, Proposition 2.2], which was already mentioned in the proof of Corollary 3.2. Part (c): the first assertion can be obtained by combining [15, Theorems 1.4(2) and 4.1] (cf. [44, Proposition 4.2] and [5, Lemma 2.2]). The functor \(\mathsf{C}\longrightarrow\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\) assigns to an object \(C\in\mathsf{C}\subset\mathsf{K}\) the contravariant functor \(\operatorname{Hom}_{\mathsf{K}}(-,C)\colon\mathsf{K^{op}}\longrightarrow\mathsf{Ab}\) restricted to the full subcategory \(\mathsf{S}\subset\mathsf{C}\subset\mathsf{K}\).
This functor identifies the full subcategory \(\mathsf{S}\subset\mathsf{C}\) with the full subcategory of representable functors in \(\mathsf{Mod}\text{--}\mathcal{S}\), and preserves direct limits (as the objects from \(\mathsf{S}\) are finitely presentable). For an arbitrary preadditive category \(\mathcal{S}\), the representable functors play the role of free modules with one generator in \(\mathsf{Mod}\text{--}\mathcal{S}\); when \(\mathcal{S}\) is an additive category, as in the situation at hand, these are the same things as the finitely generated free modules. It remains to recall that the flat modules are the direct limits of finitely generated free ones (also over a ring with many objects [27, Theorem 3.2]). To prove the second assertion, it suffices to say that the equivalence \(\mathsf{C}\simeq\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\), viewed as a functor \(\mathsf{C}\longrightarrow\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\), is exact (with respect to the respective inherited exact structures) by the definition of the pure exact structure on \(\mathsf{K}\). On the other hand, the inverse functor \(\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\longrightarrow\mathsf{C}\) is exact because every short exact sequence of flat modules is a direct limit of split short exact sequences. (Notice also that the full subcategory \(\mathsf{C}\subset\mathsf{K}\) is closed under direct limits in \(\mathsf{K}\) by Proposition 3.1, and direct limits of pure short exact sequences are pure exact in \(\mathsf{K}\).)

Part (b): with the argument from the proof of Lemma 2.8 in mind, the following still needs to be proved. Let \(K\longrightarrow L\longrightarrow M\) be a composable pair of morphisms in \(\mathsf{C}\). Suppose that the sequence of abelian groups \(0\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,K)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,L)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,M)\longrightarrow 0\) is exact for every object \(S\in\mathsf{S}\). One has to show that the sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is exact in \(\mathsf{K}\). When the class \(\mathsf{S}\) contains a set of generators of the abelian category \(\mathsf{K}\), this is obvious. In the general case, this is a restatement of the claim that the equivalence functor \(\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\longrightarrow\mathsf{C}\) is exact, which we have proved already in part (c).

With the assertions of the proposition in mind, we will simply call the exact category structure on \(\mathsf{C}=\varinjlim\mathsf{S}\) described in Proposition 5.2 "the pure exact structure".

Proof of Proposition 5.1.: We have to show that there is a set of objects \(\mathsf{T}\subset\mathsf{C}\) such that all the objects of \(\mathsf{C}\) are filtered by \(\mathsf{T}\) in \(\mathsf{K}\). Let us prove a stronger assertion instead: there exists a set of objects \(\mathsf{T}\subset\mathsf{C}\) such that all the objects of \(\mathsf{C}\) are _pure filtered_ by \(\mathsf{T}\), that is, filtered by objects from \(\mathsf{T}\) in the pure exact structure on \(\mathsf{C}\). Clearly, any filtration in the pure exact structure on \(\mathsf{C}\) is also a filtration in the pure exact structure on \(\mathsf{K}\), and consequently it is a filtration in the abelian exact structure on \(\mathsf{K}\) (as all pure short exact sequences in \(\mathsf{K}\) are exact). Now we use the equivalence of exact categories \(\mathsf{C}\simeq\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\) from Proposition 5.2(c).
In view of this equivalence, it remains to observe that the class of all flat \(\mathcal{S}\)-modules is deconstructible (in itself viewed as an exact category, or equivalently, in the abelian category of modules \(\mathsf{Mod}\text{--}\mathcal{S}\), cf. Lemma 4.6). This is the result of [8, Lemma 1] (see also [22, Lemma 6.23]).

**Remark 5.3**.: The assertion of Proposition 5.1 admits a far-reaching generalization: all the finite presentability conditions can be dropped. For any Grothendieck abelian category \(\mathsf{K}\) and any _set_ of objects \(\mathsf{S}\subset\mathsf{K}\) closed under finite direct sums, the class \(\varinjlim\mathsf{S}\subset\mathsf{K}\) is weakly deconstructible. This is a Grothendieck category multiobject generalization of [30, Corollary 3.4], provable by an argument similar to the one in [30] and extending the proofs of Propositions 5.1-5.2 in the following way. For consistency of notation, let us denote again by \(\mathcal{S}\) the full additive subcategory in \(\mathsf{K}\) corresponding to the class \(\mathsf{S}\). Then there is no longer a category equivalence as in Proposition 5.2(c), but there is still a right exact, direct limit-preserving functor \(\widetilde{\Theta}\colon\mathsf{Mod}\text{--}\mathcal{S}\longrightarrow\mathsf{K}\) left adjoint to the restricted Yoneda functor \(K\longmapsto\operatorname{Hom}_{\mathsf{K}}(-,K)|_{\mathcal{S}}\). In the spirit of the argument in [30], one can interpret \(\widetilde{\Theta}\) as a _tensor product_ functor. Specifically, this is a restriction of the category-theoretic tensor product operation \[\otimes_{\mathcal{S}}\colon\mathsf{Funct}_{\mathsf{ad}}(\mathcal{S}^{\mathsf{op}},\mathsf{Ab})\times\mathsf{Funct}_{\mathsf{ad}}(\mathcal{S},\mathsf{K})\longrightarrow\mathsf{K}\] (see [27, Section 1] for the definition). The functor \(\widetilde{\Theta}\) is constructed by tensoring the usual right \(\mathcal{S}\)-modules with one specific left \(\mathcal{S}\)-module given by the covariant identity inclusion functor \(\operatorname{Id}\colon\mathcal{S}=\mathsf{S}\longrightarrow\mathsf{K}\); so \[\widetilde{\Theta}(\mathcal{M})=\mathcal{M}\otimes_{\mathcal{S}}\operatorname{Id}\] for all \(\mathcal{M}\in\mathsf{Mod}\text{--}\mathcal{S}=\mathsf{Funct}_{\mathsf{ad}}(\mathcal{S}^{\mathsf{op}},\mathsf{Ab})\). Denoting by \(\Theta\colon\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\longrightarrow\mathsf{K}\) the restriction of \(\widetilde{\Theta}\) to the full subcategory of flat modules \(\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\subset\mathsf{Mod}\text{--}\mathcal{S}\), one observes that \(\Theta\) is an exact functor (since all the exact sequences of flat modules are direct limits of split ones). The functor \(\Theta\) is _not_ fully faithful, but the full subcategory \(\mathsf{C}=\varinjlim\mathsf{S}\subset\mathsf{K}\) is the essential image of \(\Theta\) (essentially for the reasons explained in [30]). Denoting by \(\mathcal{T}\subset\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}\) a set of flat modules such that \(\mathsf{Mod}_{\mathsf{fl}}\text{--}\mathcal{S}=\mathsf{Fil}(\mathcal{T})\), one concludes that \(\mathsf{T}=\Theta(\mathcal{T})\subset\mathsf{K}\) is a set of objects such that \(\mathsf{T}\subset\mathsf{C}\) and \(\mathsf{C}\subset\mathsf{Fil}(\mathsf{T})\), since the functor \(\Theta\) preserves transfinitely iterated extensions.

Proof of Theorem A(ii) from Section 0.2.: By assumption, the class \(\mathsf{C}=\varinjlim\mathsf{S}\) is closed under extensions in \(\mathsf{K}\). Since \(\mathsf{C}\) is also closed under direct limits in \(\mathsf{K}\) by Proposition 3.1, it follows that \(\mathsf{C}\) is closed under transfinitely iterated extensions in \(\mathsf{K}\).
Since, on the other hand, the class \(\mathsf{C}\) is weakly deconstructible in \(\mathsf{K}\) by Proposition 5.1, we can conclude that \(\mathsf{C}\) is deconstructible under our assumptions.

**Corollary 5.4**.: _Let \(\mathsf{K}\) be a locally finitely presentable abelian category, and let \(\mathsf{S}\subset\mathsf{K}\) be a class of objects of type \(\operatorname{FP}_{2}\) (as defined in Section 3) containing a set of generators of the abelian category \(\mathsf{K}\). Put \(\mathsf{C}=\varinjlim\mathsf{S}\subset\mathsf{K}\). Then there is a complete cotorsion pair \((\mathsf{C},\mathsf{D})\) in \(\mathsf{K}\)._

Proof.: By Theorem A(i-ii), or in other words, by Propositions 3.1, 3.3, and 5.1, the class \(\mathsf{C}\) is deconstructible in \(\mathsf{K}\); so \(\mathsf{C}=\mathsf{Fil}(\mathcal{T})\) for a set of objects \(\mathcal{T}\subset\mathsf{K}\). Furthermore, by assumption, the class \(\mathsf{C}\) contains a set of generators of \(\mathsf{K}\). Let \((\mathsf{C}^{\prime},\mathsf{D})\) be the cotorsion pair in \(\mathsf{K}\) generated by \(\mathcal{T}\). Applying Theorem 2.5, we conclude that \((\mathsf{C}^{\prime},\mathsf{D})\) is a complete cotorsion pair and \(\mathsf{C}^{\prime}=\mathsf{C}\).

## 6. Classes of \(\kappa\)-Presentables and Their \(\kappa\)-Direct Limit Closures

In this section we discuss generalizations of some results from Sections 2, 3, and 5 from the countable cardinal \(\aleph_{0}\) to arbitrary regular cardinals \(\kappa\). In particular, we present a version of Theorem A(i) for regular cardinals \(\kappa\), stated below as Proposition 6.4, and discuss the difficulties involved with an attempt to extend Theorem A(ii) to higher cardinals (see Remark 6.8).

We refer to the book [1, Definitions 1.13 and 1.17, and Theorem 1.20] for the definitions of a _\(\kappa\)-presentable object_ and a _locally \(\kappa\)-presentable_ category (for a regular cardinal \(\kappa\)). The functors of \(\kappa\)-direct limit (i. e., direct limits indexed by \(\kappa\)-directed posets) are exact in any locally \(\kappa\)-presentable category [1, Proposition 1.59]. Up to an isomorphism, in a locally \(\kappa\)-presentable category there is only a set of \(\kappa\)-presentable objects [1, Remark 1.19]. Any Grothendieck abelian category is locally presentable, i. e., locally \(\kappa\)-presentable for _some_ regular cardinal \(\kappa\) [42, Lemma A.1].

The following proposition is a generalization of Proposition 3.1 to higher cardinals. We state it for nonabelian and even nonadditive categories, as the abelian category case is no easier than the general one.

**Proposition 6.1**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable category and \(\mathsf{S}\subset\mathsf{K}\) be a class of \(\kappa\)-presentable objects. Then the class \(\varinjlim^{(\kappa)}\mathsf{S}\) of all \(\kappa\)-direct limits of objects from \(\mathsf{S}\) in \(\mathsf{K}\) (i. e., direct limits indexed by \(\kappa\)-directed posets) is closed under \(\kappa\)-direct limits in \(\mathsf{K}\). An object \(L\in\mathsf{K}\) belongs to \(\varinjlim^{(\kappa)}\mathsf{S}\) if and only if, for any \(\kappa\)-presentable object \(T\) in \(\mathsf{K}\), any morphism \(T\longrightarrow L\) in \(\mathsf{K}\) factorizes through an object from \(\mathsf{S}\). If the class \(\mathsf{S}\) is closed under \(\kappa\)-small coproducts (i. e., coproducts indexed by sets of cardinality \(<\kappa\)), then the class \(\varinjlim^{(\kappa)}\mathsf{S}\) is closed under all coproducts in \(\mathsf{K}\)._

Proof.: Two assertions need to be explained: the "if" implication and the closedness under coproducts. The "only if" implication is obvious; and the closedness under \(\kappa\)-direct limits follows from the "if and only if".
Concerning the "if" implication, the argument is similar to the one in [1, Proposition 1.22]. It is convenient to use [1, Theorem 1.5 and Remark 1.21] to the effect that it suffices to construct a \(\kappa\)-filtered category \(D\) and a \(D\)-indexed diagram \((S_{d})_{d\in D}\) of objects \(S_{d}\in\mathsf{S}\) in \(\mathsf{K}\) such that \(L=\varinjlim_{d\in D}S_{d}\). For this purpose, let \(D\) be the essentially small category of all pairs \(d=(S_{d},f_{d})\), where \(S_{d}\in\mathsf{S}\) and \(f_{d}\colon S_{d}\longrightarrow L\) is an arbitrary morphism. Morphisms in the category \(D\) are defined in the obvious way, and the construction of the diagram \(D\longrightarrow\mathsf{K}\) is also obvious.

Concerning the coproducts, let \((I_{\xi})_{\xi\in\Xi}\) be a family of \(\kappa\)-directed posets, indexed by a set \(\Xi\); and let \((K_{i,\xi})_{i\in I_{\xi}}\) be a diagram in \(\mathsf{K}\), indexed by the poset \(I_{\xi}\) and given for every \(\xi\in\Xi\). Then the coproduct of \(\kappa\)-direct limits \(\coprod_{\xi\in\Xi}\varinjlim_{i\in I_{\xi}}K_{i,\xi}\) can be expressed as the following \(\kappa\)-direct limit of \(\kappa\)-small coproducts. Denote by \(J\) the set of all pairs \(j=(\Upsilon,t)=(\Upsilon_{j},t_{j})\), where \(\Upsilon\subset\Xi\) is a subset of cardinality smaller than \(\kappa\) and \(t\colon\Upsilon\longrightarrow\coprod_{v\in\Upsilon}I_{v}\) is a function assigning to every element \(v\in\Upsilon\) an element \(t(v)\in I_{v}\). Given two elements \(j\) and \(k\in J\), we say that \(j\leq k\) if \(\Upsilon_{j}\subset\Upsilon_{k}\) and, for every \(v\in\Upsilon_{j}\), the inequality \(t_{j}(v)\leq t_{k}(v)\) holds in \(I_{v}\). Then \(J\) is a \(\kappa\)-directed poset; and it is easy to construct a natural \(J\)-indexed diagram in \(\mathsf{K}\), with the object \(K_{j}=\coprod_{v\in\Upsilon_{j}}K_{t_{j}(v),v}\) sitting at the vertex \(j\in J\), such that \(\varinjlim_{j\in J}K_{j}=\coprod_{\xi\in\Xi}\varinjlim_{i\in I_{\xi}}K_{i,\xi}\).

Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category. A short exact sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) in \(\mathsf{K}\) is said to be _\(\kappa\)-pure_ if the functor \(\operatorname{Hom}_{\mathsf{K}}(T,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) takes it to a short exact sequence of abelian groups for any \(\kappa\)-presentable object \(T\in\mathsf{K}\). Any \(\kappa\)-direct limit of \(\kappa\)-pure short exact sequences is a \(\kappa\)-pure short exact sequence. The _\(\kappa\)-pure exact structure_ on \(\mathsf{K}\) is defined by the collection of all \(\kappa\)-pure short exact sequences. So one can speak about _\(\kappa\)-pure subobjects_, _\(\kappa\)-pure quotients_, _\(\kappa\)-pure acyclic complexes_, _\(\kappa\)-pure-projective objects_, etc. (similarly to Section 5). We refer to the paper [2] for some details.

**Proposition 6.2**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of \(\kappa\)-presentable objects closed under \(\kappa\)-small coproducts in \(\mathsf{K}\). Then the class of objects \(\varinjlim^{(\kappa)}\mathsf{S}\) is closed under \(\kappa\)-pure subobjects, \(\kappa\)-pure quotients, and \(\kappa\)-pure extensions in \(\mathsf{K}\)._

Proof.: This is the \(\kappa\)-version of [25, Proposition 2.2], provable in the similar way.
It is helpful to keep in mind that the class of \(\kappa\)-presentable objects is closed under colimits of diagrams with less than \(\kappa\) vertices and arrows [1, Proposition 1.16].

The next corollary is a generalization of Corollary 3.2.

**Corollary 6.3**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of \(\kappa\)-presentable objects closed under \(\kappa\)-small coproducts. Let \((H_{i})_{i\in I}\) be a \(\kappa\)-direct system of objects \(H_{i}\in\varinjlim^{(\kappa)}\mathsf{S}\), indexed by a \(\kappa\)-directed poset \(I\). Then the kernel of the natural epimorphism \(\coprod_{i\in I}H_{i}\longrightarrow\varinjlim_{i\in I}H_{i}\) (4) belongs to \(\varinjlim^{(\kappa)}\mathsf{S}\)._

Proof.: Both the proofs of Corollary 3.2 can be readily adapted to the situation at hand: one can use either Proposition 6.2, or the construction from [6, proof of Proposition 4.1] together with Proposition 6.1.

Let \(\mathsf{K}\) be an abelian category with exact functors of \(\kappa\)-direct limits, and let \(n\geq 1\) be an integer. We will say that an object \(S\in\mathsf{K}\) is _of type \(\kappa\)-\(\mathrm{P}_{n}\)_ if the functors \(\operatorname{Ext}^{i}_{\mathsf{K}}(S,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) preserve \(\kappa\)-direct limits for \(0\leq i\leq n-1\). So the objects of type \(\kappa\)-\(\mathrm{P}_{1}\) are, by the definition, the \(\kappa\)-presentable ones. One can further define types \(\kappa\)-\(\mathrm{P}_{0}\) and \(\kappa\)-\(\mathrm{P}_{\infty}\), similarly to the discussion in Section 3, but we will not need these definitions. The following proposition refers to objects of type \(\kappa\)-\(\mathrm{P}_{2}\), which form a subclass of the class of \(\kappa\)-presentable objects.

**Proposition 6.4**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category and \(\mathsf{S}\) be a class of (some) objects of type \(\kappa\)-\(\mathrm{P}_{2}\) closed under extensions in \(\mathsf{K}\). Then the class of objects \(\varinjlim^{(\kappa)}\mathsf{S}\) is also closed under extensions in \(\mathsf{K}\)._

Proof.: This is a straightforward generalization of Proposition 3.3, provable in the similar way. The class of all \(\kappa\)-presentable objects in \(\mathsf{K}\) is closed under extensions by [42, Lemma A.4] (the running assumption in [42] that the category is Grothendieck is not needed for this lemma). The following observations play the key role. Let \(\mathsf{K}\) be an abelian category with exact functors of \(\kappa\)-direct limits. Then

(1) for any two classes of objects \(\mathsf{X}\) and \(\mathsf{Y}\) in \(\mathsf{K}\), one has \(\mathsf{X}*\varinjlim^{(\kappa)}\mathsf{Y}\subset\varinjlim^{(\kappa)}(\mathsf{X}*\mathsf{Y})\), since the functors of \(\kappa\)-direct limit are exact in \(\mathsf{K}\);

(2) if \(\mathsf{X}\subset\mathsf{K}\) is a class of objects and \(\mathsf{T}\subset\mathsf{K}\) is a class of objects such that the functor \(\operatorname{Ext}^{1}_{\mathsf{K}}(T,-)\colon\mathsf{K}\longrightarrow\mathsf{Ab}\) preserves \(\kappa\)-direct limits for all objects \(T\in\mathsf{T}\), then \((\varinjlim^{(\kappa)}\mathsf{X})*\mathsf{T}\subset\varinjlim^{(\kappa)}(\mathsf{X}*\mathsf{T})\).

With these observations at hand, the argument from the proof of Proposition 3.3 goes through with \(\varinjlim\) replaced by \(\varinjlim^{(\kappa)}\) and Proposition 3.1 replaced by Proposition 6.1.

**Proposition 6.5**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category and \(\mathsf{S}\subset\mathsf{K}\) be a class of \(\kappa\)-presentable objects closed under \(\kappa\)-small coproducts in \(\mathsf{K}\). Put \(\mathsf{C}=\varinjlim^{(\kappa)}\mathsf{S}\subset\mathsf{K}\), and denote by \(\mathcal{S}\) a small category equivalent to the full subcategory \(\mathsf{S}\subset\mathsf{K}\). Then_

(a) _the full subcategory \(\mathsf{C}\subset\mathsf{K}\) inherits an exact category structure from the \(\kappa\)-pure exact structure on \(\mathsf{K}\) (cf. Proposition 6.2);_

(b) _in the inherited exact category structure on \(\mathsf{C}\), a short sequence \(0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0\) is exact if and only if the short sequence of abelian groups \(0\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,K)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,L)\longrightarrow\operatorname{Hom}_{\mathsf{K}}(S,M)\longrightarrow 0\) is exact for every object \(S\in\mathsf{S}\);_

(c) _there is a natural equivalence \(\mathsf{C}\simeq\mathsf{Mod}_{\kappa\text{-fl}}\text{--}\mathcal{S}\) between the category \(\mathsf{C}\) and the category of \(\kappa\)-flat right \(\mathcal{S}\)-modules, i. e., the \(\kappa\)-direct limits of projective \(\mathcal{S}\)-modules with less than \(\kappa\) generators. Under this equivalence, the exact structure on \(\mathsf{C}\) inherited from the \(\kappa\)-pure exact structure on \(\mathsf{K}\) corresponds to the exact structure on \(\mathsf{Mod}_{\kappa\text{-fl}}\text{--}\mathcal{S}\) inherited from the abelian exact structure on \(\mathsf{Mod}\text{--}\mathcal{S}\)._

Proof.: This is a \(\kappa\)-generalization of Proposition 5.2, provable in the similar way.

The next theorem is a generalization of Theorem 2.7 suggested in [5, Remark 4.11].

**Theorem 6.6**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable abelian category. Let \(P^{\bullet}\) be a complex of \(\kappa\)-pure-projective objects, and let \(X^{\bullet}\) be a \(\kappa\)-pure acyclic complex in \(\mathsf{K}\). Then any morphism of complexes \(P^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero._

Proof.: Let \(\mathsf{S}\) be the class of all \(\kappa\)-presentable objects in \(\mathsf{K}\).
Applying Proposition 6.5(c), we conclude that the exact category \(\mathsf{K}\) with the \(\kappa\)-pure exact structure is equivalent to the exact category \(\mathsf{Mod}_{\kappa\text{-fl}}\text{-}\mathcal{S}\) of \(\kappa\)-flat right \(\mathcal{S}\)-modules. Notice that all the projective \(\mathcal{S}\)-modules are \(\kappa\)-flat, and the kernel of any surjective morphism from a projective \(\mathcal{S}\)-module to a \(\kappa\)-flat one is \(\kappa\)-flat by [28, Lemma 6.2(a)]. It follows that there are enough projective objects in the exact category \(\mathsf{Mod}_{\kappa\text{-fl}}\text{-}\mathcal{S}\), and these projective objects are precisely the projective \(\mathcal{S}\)-modules. Hence the equivalence of exact categories \(\mathsf{K}\simeq\mathsf{Mod}_{\kappa\text{-fl}}\text{-}\mathcal{S}\) takes the \(\kappa\)-pure-projective objects of \(\mathsf{K}\) to the projective \(\mathcal{S}\)-modules. It also takes \(\kappa\)-pure acyclic complexes to acyclic complexes in the exact category \(\mathsf{Mod}_{\kappa\text{-fl}}\text{-}\mathcal{S}\), which means acyclic complexes of \(\kappa\)-flat \(\mathcal{S}\)-modules with \(\kappa\)-flat modules of cocycles. As all \(\kappa\)-flat modules are flat, it remains to apply [26, Theorem 8.6(iii)\(\Rightarrow\)(i)] or [5, Theorem 4.4]. The following assertion extends Proposition 2.6 to arbitrary regular cardinals \(\kappa\). **Proposition 6.7**.: _Let \(\mathsf{K}\) be a locally \(\kappa\)-presentable Grothendieck category. Let \(\mathsf{S}\subset\mathsf{K}\) be a class of \(\kappa\)-presentable objects closed under transfinitely iterated extensions of families of objects of cardinality \(<\kappa\) (i. e., indexed by ordinals \(\alpha<\kappa\)). Let \(A^{\bullet}\in\mathbf{C}(\mathsf{Fil}(\mathsf{S}))\) be a complex in \(\mathsf{K}\) whose terms are \(\mathsf{S}\)-filtered objects. Then the complex \(A^{\bullet}\), viewed as an object of the abelian category of complexes \(\mathbf{C}(\mathsf{K})\), is filtered by bounded below complexes whose terms belong to \(\mathsf{S}\)._ Proof.: This is still [42, (proof of) Proposition 4.3]. Once again, the argument is based on the Hill lemma [42, Theorem 2.1]. **Remark 6.8**.: Let \(\mathsf{K}=\mathsf{Mod}\text{-}R\) be the module category and \(\mathsf{S}\subset\mathsf{Mod}\text{-}R\) be a class of objects of type \(\kappa\)-\(\mathrm{P}_{2}\) (or \(\kappa\)-\(\mathrm{P}_{\infty}\)) closed under transfinitely iterated extensions indexed by ordinals smaller than \(\kappa\). Then Propositions 6.1 and 6.4 tell that the class of modules \(\varinjlim^{(\kappa)}\mathsf{S}\) is closed under extensions, coproducts, and \(\kappa\)-direct limits in \(\mathsf{Mod}\text{-}R\). But is it closed under transfinitely iterated extensions? For \(\kappa=\aleph_{0}\) we said, in the proof of Theorem A(ii) in Section 5, that transfinitely iterated extensions are built up from extensions and direct limits. But this requires all direct limits (of chains of monomorphisms) and _not_ only \(\kappa\)-direct limits. On the other hand, does the analogue of Proposition 5.1 hold for \(\kappa\)? In other words, is the class \(\varinjlim^{(\kappa)}\mathsf{S}\) weakly deconstructible? Arguing similarly to the proof of Proposition 5.1 and using Proposition 6.5, it would be sufficient to know that the class of \(\kappa\)-flat \(\mathcal{S}\)-modules is weakly deconstructible. But is this true? 
Furthermore, the class \(\mathsf{C}=\varinjlim^{(\kappa)}\mathsf{S}\) is a _Kaplansky class_ in the sense of [20, 38]: for any regular cardinal \(\lambda\) there exists a regular cardinal \(\mu\) such that for any object \(C\in\mathsf{C}\) and any \(\lambda\)-presentable subobject \(X\subset C\) there exists a \(\mu\)-presentable subobject \(K\subset C\) such that \(X\subset K\) and both the objects \(K\) and \(C/K\) belong to \(\mathsf{C}\). This is provable using Proposition 6.2 and a suitable version of the purification procedure (cf. [8, first paragraph of the proof of Theorem 5]). Still the class \(\mathsf{C}\) is _not_ closed under direct limits in general, but only under \(\kappa\)-direct limits; so [23, Lemma 6.9] or [38, Lemma 2.5(2)] cannot be used in order to deduce deconstructibility of \(\mathsf{C}\) (cf. [22, Sections 10.1-10.2]).

Let us point out some partial answers to the questions above that are available in the literature. To begin with, we observe that the answers to the questions in the first two paragraphs of this remark _cannot_ both be always positive: the class \(\varinjlim^{(\kappa)}\mathsf{S}\) is _not_ deconstructible in general. Certainly not in the context of module categories \(\mathsf{K}=\mathsf{Mod}\)-\(\mathcal{T}\) over small preadditive categories (or "nonunital rings with enough idempotents") \(\mathcal{T}\). Indeed, let \(\mathsf{S}\) be the class of projective \(\mathcal{T}\)-modules with less than \(\kappa\) generators; so \(\varinjlim^{(\kappa)}\mathsf{S}\) is the class of \(\kappa\)-flat \(\mathcal{T}\)-modules. Suppose that the class of \(\kappa\)-flat \(\mathcal{T}\)-modules is deconstructible in \(\mathsf{Mod}\)-\(\mathcal{T}\). Then, by Theorem 4.5, the exact category \(\mathsf{Mod}_{\kappa\text{-}\mathsf{fl}}\)-\(\mathcal{T}\) would be of Grothendieck type. By Theorem 4.1, it would follow that there are enough injective objects in the exact category \(\mathsf{Mod}_{\kappa\text{-}\mathsf{fl}}\)-\(\mathcal{T}\). Take \(\mathcal{T}\) to be a small category equivalent to the category of \(\kappa\)-presented \(R\)-modules for a given ring \(R\). Then, by Proposition 6.5(c), the exact category \(\mathsf{Mod}\)-\(R\) with the \(\kappa\)-pure exact structure is equivalent to \(\mathsf{Mod}_{\kappa\text{-}\mathsf{fl}}\)-\(\mathcal{T}\). So it would follow that there exist enough \(\kappa\)-pure-injective \(R\)-modules. This is known to be _not_ true. See [39, Proposition 1.4, Remark 1.6, and Example 1.7] (also [13, Theorem 6.6]). Therefore, the assertion of Theorem A(ii) is _not_ true for regular cardinals \(\kappa>\aleph_{0}\) in general.

Nevertheless, the class of \(\kappa\)-flat \(R\)-modules may be deconstructible for _some_ cardinals \(\kappa>\aleph_{0}\). In particular, [39, Theorem 3.3] claims that all \(\kappa\)-flat \(R\)-modules are projective if \(\kappa\) is greater than or equal to a strongly compact cardinal that is greater than the cardinality of a ring \(R\). So the class of \(\kappa\)-flat \(R\)-modules is deconstructible in this case by Kaplansky's theorem [22, Corollary 7.14].

On the other hand, consider the case of the cardinal \(\kappa=\aleph_{1}\). In this context, the class of _flat Mittag-Leffler modules_ [34, 16, 23, 38, 36] plays an important role. Any flat Mittag-Leffler module is an \(\aleph_{1}\)-direct limit (in other words, an \(\aleph_{1}\)-direct union) of its projective submodules [23, Corollary 2.10], [22, Corollary 3.19]; so any flat Mittag-Leffler module is \(\aleph_{1}\)-flat. 
The converse is not true in general [39, Example 3.5]. However, over a left Noetherian ring \(R\), the class of flat Mittag-Leffler right \(R\)-modules is closed under \(\aleph_{1}\)-pure epimorphic images [39, Proposition 3.4], hence under \(\aleph_{1}\)-direct limits; so it coincides with the class of \(\aleph_{1}\)-flat right \(R\)-modules. Over any ring, the class of flat Mittag-Leffler modules is closed under pure submodules and transfinitely iterated extensions [22, Corollary 3.20(a)], and it is a Kaplansky class [38, Theorem 1.2(i) or 3.3], [22, Theorem 10.6]. However, if a ring \(R\) is not right perfect, then the class of flat Mittag-Leffler right \(R\)-modules is _not_ deconstructible [23, Corollary 7.3], [22, Theorem 10.13]; in fact, it is not even precovering [36, Theorem 3.3] (cf. [22, Theorem 7.21]). So, if \(R\) is not right perfect, then the class of flat Mittag-Leffler modules is not weakly deconstructible. We can conclude that, for any ring \(R\) that is left Noetherian but not right perfect, the class of \(\aleph_{1}\)-flat right \(R\)-modules is not weakly deconstructible.

## 7. Generalized Flat/Projective and \(\operatorname{Fp}\)-projective Periodicity II

In this section we prove Theorem A(iii). It is restated below as Theorem 7.1(a). The argument is a more complicated version of the proof of Theorem 0(a) given in Section 2. It still follows the ideas of the proof of [5, Theorem 0.14 or 4.1] together with [5, Remark 4.11].

**Theorem 7.1**.: _Let \(\mathsf{K}\) be a Grothendieck category, and let \(\kappa\) be a regular cardinal such that \(\mathsf{K}\) is a locally \(\kappa\)-presentable category. Let \(\mathsf{S}\subset\mathsf{K}\) be a class of (some) \(\kappa\)-presentable objects closed under transfinitely iterated extensions indexed by ordinals smaller than \(\kappa\). Put \(\mathsf{C}=\varinjlim^{(\kappa)}\mathsf{S}\subset\mathsf{K}\), and denote by \(\mathsf{A}=\mathsf{Fil}(\mathsf{S})^{\oplus}\) the class of all direct summands of transfinitely iterated extensions of objects from \(\mathsf{S}\) in \(\mathsf{K}\)._

(a) _Assume that the class \(\mathsf{C}\) is deconstructible in \(\mathsf{K}\). Put \(\mathsf{B}^{\prime}=\mathsf{S}^{\perp_{\geq 1}}\cap\mathsf{C}\) and \(\mathsf{A}^{\prime}=\mathsf{C}\cap{}^{\perp_{1}}\mathsf{B}^{\prime}=\mathsf{C}\cap{}^{\perp_{\geq 1}}\mathsf{B}^{\prime}\) (so \(\mathsf{A}\subset\mathsf{A}^{\prime}\subset\mathsf{C}\)). Then, in any acyclic complex of objects from \(\mathsf{A}\) with the objects of cocycles belonging to \(\mathsf{C}\), the objects of cocycles actually belong to \(\mathsf{A}^{\prime}\)._

(b) _Put \(\mathsf{B}=\mathsf{S}^{\perp_{1}}\cap\mathsf{C}=\mathsf{A}^{\perp_{1}}\cap\mathsf{C}\) (so \(\mathsf{B}^{\prime}\subset\mathsf{B}\subset\mathsf{C}\)). Let \(A^{\bullet}\) be a complex in \(\mathsf{K}\) with the terms belonging to \(\mathsf{A}\), and let \(X^{\bullet}\) be an acyclic complex in \(\mathsf{K}\) with the terms belonging to \(\mathsf{B}\) and the objects of cocycles also belonging to \(\mathsf{B}\). Then any morphism of complexes \(A^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero._

**Lemma 7.2**.: _In the notation of Theorem A(iii) or Theorem 7.1(b), let \(0\longrightarrow B\longrightarrow L\longrightarrow C\longrightarrow 0\) be a short exact sequence in \(\mathsf{K}\) with the terms \(B\), \(L\), \(C\in\mathsf{C}\). Assume that the object \(B\) belongs to the class \(\mathsf{B}\). 
Then the short exact sequence \(0\longrightarrow B\longrightarrow L\longrightarrow C\longrightarrow 0\) is \(\kappa\)-pure in \(\mathsf{K}\)._ Proof.: It is only important that \(B\in\mathsf{S}^{\perp_{1}},\ L\in\mathsf{K}\), and \(C\in\mathsf{C}\). By Proposition 6.5(b), it suffices to check that any morphism \(S\longrightarrow C\) with \(S\in\mathsf{S}\) lifts to a morphism \(S\longrightarrow L\). This holds because \(B\in\mathsf{B}\subset\mathsf{S}^{\perp_{1}}\subset\mathsf{K}\). Proof of Theorem 7.1(b).: The argument is similar to the proofs of Theorem 2.9(b) and [5, Theorem 4.2]. First of all, one has \(\mathsf{S}^{\perp_{1}}=\mathsf{A}^{\perp_{1}}\subset\mathsf{K}\) by the Eklof lemma (Lemma 2.4) applied in the abelian category \(\mathsf{K}\); so \(\mathsf{S}^{\perp_{1}}\cap\mathsf{C}=\mathsf{A}^{\perp_{1}}\cap\mathsf{C}\). Without loss of generality we can assume that the terms of the complex \(A^{\bullet}\) belong to \(\mathsf{Fil}(\mathsf{S})\). Then, by Proposition 6.7, the complex \(A^{\bullet}\) is filtered by (bounded below) complexes with the terms belonging to \(\mathsf{S}\). By Lemma 2.3, for any complex \(A^{\bullet}\) with the terms in \(\mathsf{A}\) and any complex \(B^{\bullet}\) with the terms in \(\mathsf{B}\) we have an isomorphism of abelian groups \[\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{K})}(A^{\bullet},B^{\bullet}[-1]) \simeq\operatorname{Hom}_{\mathbf{H}(\mathsf{K})}(A^{\bullet},B^{\bullet}).\] So, instead of showing that \(\operatorname{Hom}_{\mathbf{H}(\mathsf{K})}(A^{\bullet},X^{\bullet})=0\) as desired in the theorem, it suffices to prove that \(\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{K})}(A^{\bullet},X^{\bullet}[-1])=0\). Making use of the Eklof lemma (Lemma 2.4) again, the question reduces to showing that \(\operatorname{Ext}^{1}_{\mathbf{C}(\mathsf{K})}(S^{\bullet},X^{\bullet}[-1])=0\) for any complex \(S^{\bullet}\) with the terms belonging to \(\mathsf{S}\) and any complex \(X^{\bullet}\) as in the theorem. Applying Lemma 2.3 again, we conclude that it suffices to show that any morphism of complexes \(S^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero. Finally, we observe that all \(\kappa\)-presentable objects are \(\kappa\)-pure-projective in \(\mathsf{K}\) (by the definitions), while any acyclic complex in \(\mathsf{K}\) with the objects of cocycles belonging to \(\mathsf{B}\) is \(\kappa\)-pure acyclic (by Lemma 7.2). Thus any morphism of complexes \(S^{\bullet}\longrightarrow X^{\bullet}\) is homotopic to zero by Theorem 6.6. Proof of Theorem 7.1(a).: The assumption of deconstructibility presumes that the class \(\mathsf{C}\) is closed under transfinitely iterated extensions in \(\mathsf{K}\). The class \(\mathsf{C}\) is also closed under direct summands, since it is closed under \(\kappa\)-direct limits by Proposition 6.1. So we have \(\mathsf{A}\subset\mathsf{C}\). We endow the full subcategory \(\mathsf{C}\subset\mathsf{K}\) with the exact category structure inherited from the abelian exact structure of \(\mathsf{K}\). Then the class \(\mathsf{S}\) is generating in \(\mathsf{C}\) by Corollary 6.3. Moreover, the exact category \(\mathsf{C}\) is of Grothendieck type by Theorem 4.5, and therefore it has enough injective objects by Theorem 4.1. Therefore, \(\mathsf{C}\cap{}^{\perp_{1}}\mathsf{B}^{\prime}=\mathsf{C}\cap{}^{\perp_{ \geq 1}}\mathsf{B}^{\prime}\) by Lemma 4.2(a) applied to the exact category \(\mathsf{E}=\mathsf{C}\) and the class of objects \(\mathsf{T}=\mathsf{S}\). 
We also have \(\mathsf{A}\subset{}^{\perp_{1}}\mathsf{B}^{\prime}\) by the Eklof lemma (Lemma 2.4). Hence \(\mathsf{A}\subset\mathsf{A}^{\prime}\). Now both the classes \(\mathsf{A}\) and \(\mathsf{A}^{\prime}\) are generating in \(\mathsf{C}\), and Lemma 4.2(b) with Proposition 4.4 tell us that \((\mathsf{A}^{\prime},\mathsf{B}^{\prime})\) is a hereditary complete cotorsion pair in \(\mathsf{C}\). The hereditariness is important for our argument below. It is also worth noticing that \((\mathsf{A},\mathsf{B})\) is a (nonhereditary) cotorsion pair in \(\mathsf{C}\) by Theorem 4.3 (since the class \(\mathsf{S}\) is generating in \(\mathsf{C}\)). The notation \(\mathsf{Fil}(\mathsf{S})\) is unambiguous (means the same in \(\mathsf{K}\) and \(\mathsf{C}\)) by Lemma 4.6.

Let \(A^{\bullet}\) be an acyclic complex of objects from \(\mathsf{A}\) in \(\mathsf{K}\) with the objects of cocycles belonging to \(\mathsf{C}\). Then \(A^{\bullet}\) is also an acyclic complex in the exact category \(\mathsf{C}\). One can easily see that the objects of cocycles of \(A^{\bullet}\) belong to \(\mathsf{A}^{\prime}\) if and only if the complex of abelian groups \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},B)\) is acyclic for any object \(B\in\mathsf{B}^{\prime}\). This holds because \((\mathsf{A}^{\prime},\mathsf{B}^{\prime})\) is a cotorsion pair in \(\mathsf{C}\), or more specifically, because \(\mathsf{A}^{\prime}=\mathsf{C}\cap{}^{\perp_{1}}\mathsf{B}^{\prime}\). Now let \(J^{\bullet}\) be an injective resolution of the object \(B\) in the exact category \(\mathsf{C}\). So \(0\longrightarrow B\longrightarrow J^{0}\longrightarrow J^{1}\longrightarrow J^{2}\longrightarrow\cdots\) is an acyclic complex in \(\mathsf{K}\) with the objects of cocycles belonging to \(\mathsf{C}\) and the objects \(J^{n}\) injective in \(\mathsf{C}\). We observe that the objects of cocycles of the complex \(J^{\bullet}\) actually belong to \(\mathsf{B}^{\prime}\), because all the injective objects of \(\mathsf{C}\) belong to \(\mathsf{B}^{\prime}\) and the class \(\mathsf{B}^{\prime}\) is closed under the cokernels of admissible monomorphisms in \(\mathsf{C}\) (as the cotorsion pair \((\mathsf{A}^{\prime},\mathsf{B}^{\prime})\) in \(\mathsf{C}\) is hereditary).

Denote by \(X^{\bullet}\) the acyclic complex \((B\to J^{\bullet})\). Then the complex of abelian groups \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},X^{\bullet})\) is acyclic by Theorem 7.1(b) (which we have proved above). This holds because \(A^{\bullet}\) is a complex with the terms in \(\mathsf{A}\), while \(X^{\bullet}\) is an acyclic complex with the terms in \(\mathsf{B}\) and the objects of cocycles in \(\mathsf{B}\) (recall that \(\mathsf{B}^{\prime}\subset\mathsf{B}\)). On the other hand, the complex of abelian groups \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},J^{\bullet})\) is acyclic as well, since the complex \(A^{\bullet}\) is acyclic in \(\mathsf{C}\) and \(J^{\bullet}\) is a bounded below complex of injective objects in \(\mathsf{C}\) (cf. the proof of Theorem 2.9(a)). Since both the complexes \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},X^{\bullet})\) and \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},J^{\bullet})\) are acyclic, and the complex \(X^{\bullet}\) has the form \(X^{\bullet}=(B\to J^{\bullet})\), we can conclude that the complex \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},B)\) is acyclic. (Indeed, \(J^{\bullet}\) is a subcomplex of \(X^{\bullet}\) whose quotient complex has the single nonzero term \(B\); this short exact sequence of complexes is termwise split, so applying \(\operatorname{Hom}_{\mathsf{C}}(A^{\bullet},-)\) to it produces a short exact sequence of complexes of abelian groups, two of whose three terms are acyclic.)

Proof of Theorem A(iii) from Section 0.2.: This is precisely the assertion of Theorem 7.1(a). 
**Corollary 7.3**.: _Let \(K\) be a Grothendieck category, and let \(\kappa\) be a regular cardinal such that \(K\) is a locally \(\kappa\)-presentable category. Let \(S\subset K\) be a class of (some) \(\kappa\)-presentable objects closed under transfinitely iterated extensions indexed by ordinals smaller than \(\kappa\). Put \(C=\varinjlim^{(\kappa)}S\subset K\), and assume that the class \(C\) is deconstructible in \(K\). Denote by \(A=\operatorname{\mathsf{Fil}}(S)^{\oplus}\) the class of all direct summands of transfinitely iterated extensions of objects from \(S\) in \(K\). Put \(B^{\prime}=S^{\perp_{\geq 1}}\cap C\) and \(A^{\prime}=C\cap^{\perp_{1}}B^{\prime}=C\cap^{\perp_{\geq 1}}B^{\prime}\) (so \(A\subset A^{\prime}\subset C\)). Then, for any short exact sequence \((*)\) as in Section 0.0 with objects \(L\in A\) and \(M\in C\), one has \(M\in A^{\prime}\). In other words, any \(A\)-periodic object belonging to \(C\) actually belongs to \(A^{\prime}\)._ Proof.: This is a corollary of Theorem 7.1(a), provable similarly to Corollary 2.11. Finally, we use the opportunity to explicitly state the result suggested in [5, Remark 4.11] and deduce it from the results of this paper. **Corollary 7.4**.: _Let \(K\) be a locally \(\kappa\)-presentable Grothendieck category. Denote by \(S\) the class of all \(\kappa\)-presentable objects in \(K\). Put \(B=S^{\perp_{1}}\) and \(A={}^{\perp_{1}}B\subset K\); so \(A\) is the class of all direct summands of \(S\)-filtered objects in \(K\). Furthermore, put \(B^{\prime}=S^{\perp_{\geq 1}}\) and \(A^{\prime}={}^{\perp_{1}}B^{\prime}={}^{\perp_{\geq 1}}B^{\prime}\); so \(A\subset A^{\prime}\) and \(B\supset B^{\prime}\). Then, for any short exact sequence \((*)\) as in Section 0.0 with objects \(L\in A\) and \(M\in K\), one has \(M\in A^{\prime}\); in other words, any \(A\)-periodic object in \(K\) belongs to \(A^{\prime}\). In any acyclic complex of objects from \(A\) in \(K\), the objects of cocycles belong to \(A^{\prime}\)._ Proof.: One has \(A=\operatorname{\mathsf{Fil}}(S)^{\oplus}\) by Theorem 2.5(b), as the class \(S\) is generating in \(A\). Furthermore, by the definition of a locally \(\kappa\)-presentable category we have \(K=\varinjlim^{(\kappa)}S\). So the class \(C=\varinjlim^{(\kappa)}S=K\) is deconstructible (in itself) by [43, Proposition 3.13]. Now the first assertion of the corollary is provided by Corollary 7.3, and the second one by Theorem A(iii) or Theorem 7.1(a).
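To see how the classes in Corollary 7.4 specialize in the most familiar situation, here is an illustration added for orientation; the identifications below are standard module-theoretic facts and are not claims about the numbering or contents of the works cited above. Take \(K=\mathsf{Mod}\)-\(R\) and \(\kappa=\aleph_{0}\). Then \(S\) is the class of finitely presented \(R\)-modules,
\[B=S^{\perp_{1}}=\{\text{fp-injective (absolutely pure) modules}\},\qquad A={}^{\perp_{1}}B=\{\text{fp-projective modules}\},\]
and \(B^{\prime}=S^{\perp_{\geq 1}}\) consists of the modules \(N\) with \(\operatorname{Ext}^{i}_{R}(S,N)=0\) for all finitely presented modules \(S\) and all \(i\geq 1\). In this setting the corollary says that every fp-projective-periodic \(R\)-module belongs to \({}^{\perp_{1}}B^{\prime}\).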
2310.19518
On the splitting of weak nearly cosymplectic manifolds
Weak almost contact manifolds, in which the linear complex structure on the contact distribution is replaced by a nonsingular skew-symmetric tensor, were defined by the author and R. Wolak (2022) and allow a new look at the theory of contact manifolds. This article studies the curvature and topology of new structures of this type, called the weak nearly cosymplectic structure and the weak nearly Kähler structure. We find conditions under which weak nearly cosymplectic manifolds become Riemannian products and characterize 5-dimensional weak nearly cosymplectic manifolds. Our theorems generalize results by H. Endo (2005) and A. De Nicola-G. Dileo-I. Yudin (2018) to the context of weak almost contact geometry.
Vladimir Rovenski
2023-10-30T13:20:41Z
http://arxiv.org/abs/2310.19518v5
# On the splitting of weak nearly cosymplectic manifolds

###### Abstract

Weak contact metric manifolds, in which the linear complex structure on the contact distribution is replaced by a nonsingular skew-symmetric tensor, were defined by the author and R. Wolak (2022) and allow a new look at the theory of contact manifolds. This paper studies the curvature and topology of new structures of this type, called the weak nearly cosymplectic structure and the weak nearly Kahler structure. We find conditions under which weak nearly cosymplectic manifolds become Riemannian products of two kinds, and we characterize \(5\)-dimensional weak nearly cosymplectic manifolds. Our theorems generalize results by H. Endo (2005) and A. De Nicola-G. Dileo-I. Yudin (2018) on the curvature and splitting of nearly cosymplectic manifolds.

**Keywords**: weak nearly cosymplectic manifold, weak nearly Kahler manifold, Killing vector field, Riemannian curvature tensor, totally geodesic foliation, Riemannian product.

**Mathematics Subject Classifications (2010)** 53C15, 53C25, 53D15

## 1 Introduction

An important class of almost contact metric manifolds \(M\,^{2n+1}(\varphi,\xi,\eta,g)\) is given by cosymplectic manifolds, i.e., \(\nabla\varphi=0\), see [2]. Here, \(g\) is a Riemannian metric, \(\nabla\) is the Levi-Civita connection, \(\varphi\) is a \((1,1)\)-tensor, \(\xi\) is a Reeb vector field and \(\eta\) is a \(1\)-form, satisfying \(\eta(\xi)=1\) and \[g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\,\eta(Y),\quad X,Y\in\mathfrak{X}_{M},\] where \(\mathfrak{X}_{M}\) is the space of smooth vector fields on \(M\). Any such manifold is locally the product of a Kahler manifold \(M\,^{2n}(J,g)\), where \(\nabla J=0\), and a real line. Here, \(J\) is a \((1,1)\)-tensor satisfying \(J^{2}=-\mathrm{id}\,_{TM}\). Nearly Kahler structure \((J,g)\) was defined by A. Gray [9] using the condition that the symmetric part of \(\nabla J\) vanishes. Nearly cosymplectic structure \((\varphi,\xi,\eta,g)\) was defined by D. Blair and D. Showers [3] using the similar condition that only the symmetric part of \(\nabla\varphi\) vanishes: \[(\nabla_{Y}\,\varphi)Y=0,\quad Y\in\mathfrak{X}_{M}. \tag{1}\] The curvature and topology of nearly cosymplectic manifolds have been studied by many authors, e.g., [2, 4, 7, 10, 12, 13, 18]. These odd-dimensional counterparts of nearly Kahler manifolds play a key role in the classification of almost contact metric manifolds, see [5]. They also appeared in the study of harmonic almost contact structures: a nearly cosymplectic structure, identified with a section of a twistor bundle, defines a harmonic map, see [12]. The Reeb vector field \(\xi\) of a nearly cosymplectic structure is a unit Killing vector field (an infinitesimal generator of isometries or symmetries). The influence of constant-length Killing vector fields on Riemannian geometry has been studied by many authors, e.g., [1]. In dimensions greater than \(5\), a nearly cosymplectic manifold \(M\,^{2n+1}\) is locally isometric to the Riemannian product \(\mathbb{R}\times\bar{M}^{2n}\) or \(B^{5}\times\bar{M}^{2n-4}\), where \(\bar{M}\) is a nearly Kahler manifold and \(B\) is a nearly cosymplectic manifold, see [4]. Moreover, any \(5\)-dimensional nearly cosymplectic manifold has an Einstein metric of positive scalar curvature, see [4]; and a \(3\)-dimensional nearly cosymplectic manifold is cosymplectic, see [7]. 
A well-known nearly cosymplectic manifold is a sphere \(S^{5}\) endowed with the almost contact metric structure induced by the almost Hermitian structure of \(S^{6}\) (defined by the cross product of the imaginary part of the octonions). In [14, 15, 16], we introduced metric structures on a smooth manifold that generalize the almost contact, cosymplectic, Sasakian, etc. metric structures. These so-called "weak" structures (the linear complex structure on the contact distribution is replaced by a nonsingular skew-symmetric tensor) made it possible to take a new look at the classical structures and find new applications. In [17] we defined new structures of this type, called the weak nearly cosymplectic structure and weak nearly Kahler structure, and asked the question: _under what conditions are weak nearly cosymplectic manifolds locally Riemannian products_?

In this article, we study the curvature and topology of weak nearly cosymplectic and weak nearly Kahler structures and find conditions (5) and (8), which hold automatically for classical almost contact metric structures, under which weak nearly cosymplectic manifolds are locally Riemannian products (see the question above). The article is organized as follows. In Section 2, following the introductory Section 1, we recall necessary results on weak almost contact manifolds. Section 3 formulates auxiliary lemmas on the geometry of weak nearly cosymplectic and weak nearly Kahler structures. In Section 4, we generalize some results of the work [13] and prove that a weak nearly cosymplectic manifold \(M^{\,2n+1}\) (\(n>2\)) is locally the Riemannian product of either the real line and a weak nearly Kahler manifold, or, under certain conditions, a weak nearly Kahler manifold \(\bar{M}^{\,2n-4}(\bar{\varphi},\bar{g})\) with the property \(\bar{\nabla}(\bar{\varphi}^{\,2})=0\) and a weak nearly cosymplectic manifold of dimension \(5\). In Section 5, using the approach of [6], we prove three lemmas that allow us to characterize \(5\)-dimensional weak nearly cosymplectic manifolds and prove the splitting theorem. Our proofs use the properties of new tensors, as well as classical constructions.

## 2 Preliminaries

A _weak almost contact structure_ on a smooth manifold \(M^{\,2n+1}\) (\(n\geq 1\)) is a set \((\varphi,Q,\xi,\eta)\), where \(\varphi\) is a \((1,1)\)-tensor, \(\xi\) is a vector field (called Reeb vector field), \(\eta\) is a \(1\)-form and \(Q\) is a nonsingular \((1,1)\)-tensor on \(TM\), satisfying, see [14, 15], \[\varphi^{2}=-Q+\eta\otimes\xi,\quad\eta(\xi)=1,\quad Q\,\xi=\xi. \tag{2}\] By (2), \(\eta\) defines a smooth \(2n\)-dimensional distribution \(\ker\eta\). We assume that \(\ker\eta\) is \(\varphi\)-invariant, i.e., \(\varphi(\ker\eta)\subset\ker\eta\) (as in the classical theory [2], where \(Q=\operatorname{id}_{\,TM}\)). By this and (2), \(\ker\eta\) is \(Q\)-invariant, i.e., \(Q(\ker\eta)\subset\ker\eta\), and the following is true: \[\varphi\,\xi=0,\quad\eta\circ\varphi=0,\quad\eta\circ Q=\eta,\quad[Q,\,\varphi]:=Q\circ\varphi-\varphi\circ Q=0.\] A "small" (1,1)-tensor \(\widetilde{Q}=Q-\operatorname{id}_{\,TM}\) is a measure of the difference between a weak almost contact structure and an almost contact one. 
Note that \[[\widetilde{Q},\varphi]:=\widetilde{Q}\circ\varphi-\varphi\circ\widetilde{Q}=0,\quad\eta\circ\widetilde{Q}=0,\quad\widetilde{Q}\,\xi=0.\] A weak almost contact structure \((\varphi,Q,\xi,\eta)\) on a manifold \(M\) will be called _normal_ if the following tensor \(N^{\,(1)}\) is identically zero: \[N^{\,(1)}(X,Y)=[\varphi,\varphi](X,Y)+2\,d\eta(X,Y)\,\xi,\quad X,Y\in\mathfrak{X}_{M}.\] The Nijenhuis torsion \([\varphi,\varphi]\) of \(\varphi\) and \(d\eta\) are given by (see, for example, [2]) \[[\varphi,\varphi](X,Y)=\varphi^{2}[X,Y]+[\varphi X,\varphi Y]-\varphi[\varphi X,Y]-\varphi[X,\varphi Y],\] \[d\eta(X,Y)=\frac{1}{2}\,\{X(\eta(Y))-Y(\eta(X))-\eta([X,Y])\}.\] If there is a Riemannian metric \(g\) on \(M\) such that \[g(\varphi X,\varphi Y)=g(X,Q\,Y)-\eta(X)\,\eta(Y),\quad X,Y\in\mathfrak{X}_{M}, \tag{3}\] then \((\varphi,Q,\xi,\eta,g)\) is called a _weak almost contact metric structure_. A weak almost contact manifold \(M^{\,2n+1}(\varphi,Q,\xi,\eta)\) endowed with a compatible Riemannian metric \(g\) is called a _weak almost contact metric manifold_ and is denoted by \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\). Setting \(Y=\xi\) in (3), we get \(\eta(X)=g(X,\xi)\), as in the classical theory. By (3), \[g(X,Q\,X)=g(\varphi X,\varphi X)>0\] is true for any nonzero vector \(X\in\ker\eta\); thus, the tensor \(Q\) is symmetric and positive definite. A 1-form \(\eta\) on a smooth manifold \(M^{\,2n+1}\) is said to be _contact_ if \(\eta\wedge(d\eta)^{n}\neq 0\), e.g., [2]. A _weak contact metric structure_ is a weak almost contact metric structure satisfying \(d\eta=\Phi\), where the fundamental 2-form \(\Phi\) is defined by \(\Phi(X,Y)=g(X,\varphi Y),\ X,Y\in\mathfrak{X}_{M}\).

**Lemma 2.1**.: _For a weak contact metric manifold \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\), the 1-form \(\eta\) is contact._

Proof.: We will build a \(\varphi\)-_basis_ \(\{\xi,e_{1},\varphi\,e_{1},\ldots,e_{n},\varphi\,e_{n}\}\) consisting of mutually orthogonal non-zero vectors at a point \(x\in M\). Let \(e_{1}\in(\ker\eta)_{x}\) be a unit eigenvector of the self-adjoint operator \(Q\) with the real eigenvalue \(\lambda_{1}\). Then \(\varphi\,e_{1}\in(\ker\eta)_{x}\) is orthogonal to \(e_{1}\) and \(Q(\varphi\,e_{1})=\varphi(Q\,e_{1})=\lambda_{1}\varphi\,e_{1}\). Thus, the subspace orthogonal to the plane \(span\{e_{1},\varphi\,e_{1}\}\) is \(Q\)-invariant. There exists a unit vector \(e_{2}\in(\ker\eta)_{x}\) such that \(e_{2}\perp span\{e_{1},\varphi\,e_{1}\}\) and \(Q\,e_{2}=\lambda_{2}e_{2}\) for some real \(\lambda_{2}\). Obviously, \(Q(\varphi\,e_{2})=\varphi(Q\,e_{2})=\lambda_{2}\varphi\,e_{2}\). All five vectors \(\{\xi,e_{1},\varphi\,e_{1},e_{2},\varphi\,e_{2}\}\) are nonzero and mutually orthogonal. Continuing in the same manner, we find an orthogonal basis \(\{\xi,e_{1},\varphi\,e_{1},\ldots,e_{n},\varphi\,e_{n}\}\) of \(T_{x}M\). Since \(d\eta=\Phi\), the value of \(\eta\wedge(d\eta)^{n}\) on the \(\varphi\)-basis is \[\eta\wedge(d\eta)^{n}(\xi,e_{1},\varphi\,e_{1},\ldots,e_{n},\varphi\,e_{n})=(d\eta)^{n}(e_{1},\varphi\,e_{1},\ldots,e_{n},\varphi\,e_{n})\neq 0,\] i.e., \(\eta\) is a contact 1-form.

**Definition 2.1** (see [17]).: A weak almost contact metric structure is said to be _weak almost cosymplectic_ if both \(\Phi\) and \(\eta\) are closed, i.e., \(d\Phi=d\eta=0\). A normal weak almost cosymplectic structure is called _weak cosymplectic_. 
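To make the previous definitions concrete, here is a simple example added for illustration; it is not taken from [14, 15, 17]. On \(M=\mathbb{R}^{3}\) with coordinates \((x,y,z)\) and the Euclidean metric \(g\), set \(\xi=\partial_{z}\), \(\eta=dz\) and, for a constant \(a>0\),
\[\varphi\,\partial_{x}=a\,\partial_{y},\quad\varphi\,\partial_{y}=-a\,\partial_{x},\quad\varphi\,\xi=0,\qquad Q|_{\ker\eta}=a^{2}\operatorname{id},\quad Q\,\xi=\xi.\]
Then \(\varphi^{2}=-Q+\eta\otimes\xi\) and \(g(\varphi X,\varphi Y)=g(X,Q\,Y)-\eta(X)\,\eta(Y)\), so (2) and (3) hold and \((\varphi,Q,\xi,\eta,g)\) is a weak almost contact metric structure; for \(a\neq 1\) it is not an almost contact metric structure. Since \(d\eta=0\) and \(\Phi=-a\,dx\wedge dy\) is closed, the structure is weak almost cosymplectic in the sense of Definition 2.1 (in fact, it is normal, hence weak cosymplectic).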
A weak almost contact metric structure is called _weak nearly cosymplectic_ if \(\varphi\) is a Killing tensor, i.e., see (1), \[(\nabla_{Y}\,\varphi)Z+(\nabla_{Z}\,\varphi)Y=0,\quad Y,Z\in\mathfrak{X}_{M}. \tag{4}\]

**Example 2.1** (see [17]).: Let a Riemannian manifold \((M^{\,2n+1},g)\) admit two nearly cosymplectic structures with a common Reeb vector field \(\xi\) and a one-form \(\eta=g(\xi,\,\cdot)\). Suppose that the corresponding (1,1)-tensors \(\varphi_{1}\neq\varphi_{2}\) are such that the (1,1)-tensor \(\psi=\varphi_{1}\,\varphi_{2}+\varphi_{2}\,\varphi_{1}\) is nonzero. Then the (1,1)-tensor \(\varphi=(\cos t)\,\varphi_{1}+(\sin t)\,\varphi_{2}\) for small \(t>0\) satisfies (4) and \[\varphi^{2}=-\mathrm{id}+(\sin t\cos t)\,\psi+\eta\otimes\xi.\] Thus, \((\varphi,Q,\xi,\eta,g)\) is a weak nearly cosymplectic structure on \(M\) with \(Q=\mathrm{id}-(\sin t\cos t)\,\psi\).

From the equalities \((\nabla_{\xi}\,\varphi)\,\xi=0\) and \(\varphi\,\xi=0\) (for a weak nearly cosymplectic manifold) we find that \(\xi\) is a geodesic vector field \((\nabla_{\xi}\,\xi=0)\). Recall [17] that (i) on a weak nearly cosymplectic manifold \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) with the property \[(\nabla_{X}\,Q)Y=0,\quad X\in\mathfrak{X}_{M},\ Y\in\ker\eta, \tag{5}\] the unit vector field \(\xi\) is Killing \((\pounds_{\xi}\,g=0)\); from (5) and \(\nabla_{\xi}\,\xi=0\) we obtain \(\nabla_{\xi}\,Q=0\); (ii) there are no weak nearly cosymplectic structures with the property (5) that are also weak contact metric structures. Here \(\pounds\) is the Lie derivative and the following identity is true: \[(\pounds_{\xi}\,g)(X,Y)=g(\nabla_{X}\,\xi,Y)+g(\nabla_{Y}\,\xi,X),\quad X,Y\in\mathfrak{X}_{M}.\]

**Example 2.2**.: Let \(M^{3}(\varphi,Q,\xi,\eta,g)\) be a three-dimensional weak nearly cosymplectic manifold with the condition (5). By (2), the symmetric tensor \(Q\) has on the plane field \(\ker\eta\) the form \(\lambda\operatorname{id}_{\ker\eta}\) for some positive \(\lambda\in\mathbb{R}\). It was shown in [17] that this structure reduces to the nearly cosymplectic structure \((\tilde{\varphi},\xi,\eta,\tilde{g})\) on \(M\), where \[\tilde{\varphi}=\lambda^{-\frac{1}{2}}\,\varphi,\quad\tilde{g}|_{\ker\eta}=\lambda^{\frac{1}{2}}\,g|_{\ker\eta},\quad\tilde{g}(\xi,\,\cdot)=g(\xi,\,\cdot).\] Since \(\dim M=3\), the nearly cosymplectic structure \((\tilde{\varphi},\xi,\eta,\tilde{g})\) is cosymplectic, [7, Theorem 4].

**Definition 2.2** (see [17]).: A Riemannian manifold \((\bar{M}^{\,2n},\bar{g})\) of even dimension equipped with a skew-symmetric (1,1)-tensor \(\bar{\varphi}\) such that the tensor \(\bar{\varphi}^{\,2}\) is negative definite will be called a _weak Hermitian manifold_. Such \((\bar{\varphi},\bar{g})\) will be called a _weak nearly Kahler structure_ if \((\bar{\nabla}_{X}\,\bar{\varphi})X=0\)\((X\in T\bar{M})\), where \(\bar{\nabla}\) is the Levi-Civita connection of \(\bar{g}\), or, equivalently, \[(\bar{\nabla}_{X}\,\bar{\varphi})Y+(\bar{\nabla}_{Y}\,\bar{\varphi})X=0,\quad X,Y\in\mathfrak{X}_{\bar{M}}.\] A weak Hermitian manifold \(\bar{M}^{\,2n}(\bar{\varphi},\bar{g})\) will be called a _weak Kahler manifold_ if \(\bar{\nabla}\,\bar{\varphi}=0\).

**Remark 2.1**.: Any weak Kahler manifold is weak nearly Kahler. Several authors studied the problem of finding skew-symmetric parallel 2-tensors (different from almost complex structures) on a Riemannian manifold and classified them, e.g., [8]. 
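As a minimal illustration of Definition 2.2, added here for concreteness and not taken from [17]: let \((\bar{M}^{\,2n},J,\bar{g})\) be a nearly Kahler manifold (for example, \(S^{6}\) with its standard nearly Kahler structure) and let \(a>0\) be a constant. Then \(\bar{\varphi}:=a\,J\) is skew-symmetric, \(\bar{\varphi}^{\,2}=-a^{2}\operatorname{id}_{\,T\bar{M}}\) is negative definite, and
\[(\bar{\nabla}_{X}\,\bar{\varphi})X=a\,(\bar{\nabla}_{X}J)X=0,\qquad\bar{\nabla}(\bar{\varphi}^{\,2})=\bar{\nabla}(-a^{2}\operatorname{id}_{\,T\bar{M}})=0,\]
so \((\bar{\varphi},\bar{g})\) is a weak nearly Kahler structure (not weak Kahler when \(\bar{\nabla}J\neq 0\)) satisfying the condition \(\bar{\nabla}(\bar{\varphi}^{\,2})=0\) that appears in the splitting results of Section 4.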
The idea of considering the entire bundle of almost-complex structures compatible with a given metric led to the twistor construction and then to twistor string theory. Thus, it may be interesting to consider the entire bundle of weak Hermitian or weak (nearly) Kahler structures that are compatible with a given metric.

**Example 2.3** (see [17]).: Let \(\bar{M}(\bar{\varphi},\bar{g})\) be a weak nearly Kahler manifold, i.e., \((\bar{\nabla}_{X}\,\bar{\varphi})X=0\). To construct a weak nearly cosymplectic structure on the Riemannian product \(M=\bar{M}\times\mathbb{R}\) of \((\bar{M},\bar{g})\) and a Euclidean line \((\mathbb{R},\partial_{t})\), take any point \((x,t)\) of \(M\) and set \[\xi=(0,\partial_{t}),\quad\eta=(0,dt),\quad\varphi(X,\partial_{t})=(\bar{\varphi}X,0),\quad Q(X,\partial_{t})=(-\bar{\varphi}^{\,2}X,\partial_{t}),\] where \(X\in T_{x}\bar{M}\). Note that if \(\bar{\nabla}_{X}\,\bar{\varphi}^{2}=0\)\((X\in T\bar{M})\), then the condition (5) holds.

For a Riemannian manifold \((M,g)\) equipped with a Killing vector field \(\xi\), we get, see [19], \[\nabla_{X}\nabla_{Y}\,\xi-\nabla_{\nabla_{X}Y}\,\xi=R_{\,X,\,\xi}\,Y, \tag{6}\] where the curvature tensor \(R\) is given by, e.g., [11], \[R_{X,Y}Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z,\quad X,Y,Z\in\mathfrak{X}_{M}.\] The curvature tensor of nearly cosymplectic manifolds satisfies \(g(R_{\xi,Z}\,\varphi X,\varphi Y)=0\), see [6]; thus the contact distribution of nearly cosymplectic manifolds is _curvature invariant_: \[R_{X,Y}Z\in\ker\eta,\quad X,Y,Z\in\ker\eta. \tag{7}\] For example, any 1-form \(\eta\) on a real space form has the property (7). We will prove, see (20) of Lemma 3.3, that a weak nearly cosymplectic manifold satisfies (7) if we assume a weaker condition \[R_{\widetilde{Q}X,Y}Z\in\ker\eta,\quad X\in TM,\ Y,Z\in\ker\eta. \tag{8}\] From (8), using the first Bianchi identity, we obtain \[R_{X,Y}\,\widetilde{Q}Z\in\ker\eta,\quad Z\in TM,\ X,Y\in\ker\eta.\] Taking the derivative of \(g(\varphi V,Z)=-g(V,\varphi Z)\), we see that \(\nabla_{Y}\varphi\) of a weak nearly cosymplectic manifold is skew-symmetric: \[g((\nabla_{Y}\varphi)V,Z)=-g((\nabla_{Y}\varphi)Z,V). \tag{9}\] The Ricci identity can be written as (e.g., [6]) \[g((\nabla_{X,Y}^{2}\varphi)V,Z)-g((\nabla_{Y,X}^{2}\varphi)V,Z)=g(R_{X,Y}\varphi V,Z)+g(R_{X,Y}V,\varphi Z), \tag{10}\] where the second derivative operator is given by \(\nabla_{X,Y}^{2}=\nabla_{X}\nabla_{Y}-\nabla_{\nabla_{X}Y}\). By calculating the derivative of (9), we see that \(\nabla_{X,Y}^{2}\varphi\) is skew-symmetric: \[g((\nabla_{X,Y}^{2}\varphi)V,Z)=-g((\nabla_{X,Y}^{2}\varphi)Z,V).\]

## 3 Auxiliary lemmas

In this section, we consider weak nearly cosymplectic manifolds \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) with conditions (5) and (8) and generalize some well-known results on nearly cosymplectic manifolds. We define a (1,1)-tensor field \(h\) on \(M\) as in the classical case, e.g., [6], \[h=\nabla\xi. \tag{11}\] Since \(\xi\) is a geodesic vector field \((\nabla_{\xi}\,\xi=0)\), we get \(h\,\xi=0\) and \(h(\ker\eta)\subset\ker\eta\). 
Since \(\xi\) is a Killing vector field, the tensor \(h\) is skew-symmetric: \[g(hX,\,X)=g(\nabla_{X}\,\xi,X)=\frac{1}{2}\,(\pounds_{\xi}g)(X,X)=0,\] and \(\nabla_{X}\,\eta=g(hX,\,\cdot)\) is true; moreover, \(\eta\) is a Killing 1-form: \[(\nabla_{X}\,\eta)(X)=0,\quad X\in\mathfrak{X}_{M}.\] We also get \(\eta\circ h=0\) and \[d\,\eta(X,\,\cdot)=\nabla_{X}\,\eta=g(hX,\,\cdot).\] Note that \(h=0\) if and only if the distribution \(\ker\eta\) is integrable, i.e., \([X,Y]\in\ker\eta\) (\(X,Y\in\ker\eta\)): \[g([X,Y],\xi)=2\,g(hY,X),\quad X,Y\in\ker\eta.\] The following lemma generalizes Lemma 3.1 in [6].

**Lemma 3.1**.: _For a weak nearly cosymplectic manifold \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) we obtain_ \[(\nabla_{X}\,h)\,\xi=-h^{2}X, \tag{12}\] \[h\,\varphi+\varphi\,h=0\quad(h\text{ anticommutes with }\varphi),\] (13) \[(\nabla_{X}\,\varphi)\,\xi=-\varphi\,hX,\] (14) \[h\,Q=Q\,h\quad(h\text{ commutes with }Q). \tag{15}\]

Proof.: Differentiating the equality \(h\,\xi=0\) and using the definition (11), we obtain (12): \[0=\nabla_{X}\,(h\,\xi)=(\nabla_{X}\,h)\,\xi+h(\nabla_{X}\,\xi)=(\nabla_{X}\,h)\,\xi+h^{2}X.\] Differentiating the equality \(g(\varphi\,Y,\xi)=0\) yields \[0=Xg(\varphi\,Y,\xi)=g((\nabla_{X}\,\varphi)Y,\xi)+g(\varphi\,Y,hX).\] Summing this with the equality \(g((\nabla_{Y}\,\varphi)X,\xi)+g(\varphi\,X,hY)=0\) and applying (4) gives (13): \[0=g(\varphi\,Y,hX)+g(\varphi\,X,hY)=-g((h\,\varphi+\varphi\,h)X,Y).\] Using \(\varphi\,\xi=0\) and the definition (11), we get (14): \[(\nabla_{X}\,\varphi)\,\xi=-\varphi(\nabla_{X}\,\xi)=-\varphi\,hX.\] By (13) and (2), using the equalities \(h\,\xi=0\) and \(\eta\circ h=0\), we obtain (15).

The proofs of the following three lemmas, generalizing certain formulas in Lemmas 3.2, 3.4 and 3.5 in [6], are given in Section 5 (Appendix).

**Lemma 3.2**.: _For a weak nearly cosymplectic manifold with the condition (5) we obtain_ \[g((\nabla_{X}\,\varphi)\varphi Y,Z)=g((\nabla_{X}\varphi)Y,\varphi Z)+\eta(Y)g(hX,Z)+\eta(Z)g(hX,Y)+\eta(Z)g(hX,\widetilde{Q}Y), \tag{16}\] \[g((\nabla_{\varphi X}\varphi)Y,Z)=g((\nabla_{X}\varphi)Y,\varphi Z)+\eta(X)g(hZ,Y)+\eta(Z)g(hX,Y)+\eta(Z)g(hX,\widetilde{Q}Y),\] (17) \[g((\nabla_{\varphi X}\varphi)\varphi Y,Z){=}-g((\nabla_{X}\varphi)Y,QZ){+}\eta(X)g(hZ,\varphi Y){+}\eta(Y)g(hX,\varphi Z){-}\eta(Z)g(\varphi hX,\widetilde{Q}Y). \tag{18}\]

**Lemma 3.3**.: _The curvature tensor of a weak nearly cosymplectic manifold satisfies the equality_ \[g(R_{\varphi X,Y}Z,V)+g(R_{X,\varphi Y}Z,V)+g(R_{X,Y}\varphi Z,V)+g(R_{X,Y}Z,\varphi V)=0. \tag{19}\] _Moreover, if conditions (5) and (8) are true, then_ \[g(R_{\xi,Z}\,\varphi X,\varphi Y)=0, \tag{20}\] \[g(R_{\varphi X,\varphi Y}Z,V)-g(R_{X,Y}\varphi Z,\varphi V)=-\frac{1}{2}\,\delta(X,Y,Z,V),\] (21) \[g(R_{\varphi X,\varphi Y}\varphi Z,\varphi V)=g(R_{QX,QY}Z,V)-\eta(X)\,g(R_{\,\xi,QY}Z,V)\] \[\quad+\eta(Y)\,g(R_{\xi,QX}Z,V)-\frac{1}{2}\,\delta(\varphi X,\varphi Y,Z,V), \tag{22}\] _where_ \[\delta(X,Y,Z,V)=g(R_{X,Y}\widetilde{Q}Z,V)+g(R_{X,Y}Z,\widetilde{Q}V)-g(R_{\widetilde{Q}X,Y}Z,V)-g(R_{X,\widetilde{Q}Y}Z,V).\]

**Lemma 3.4**.: _For a weak nearly cosymplectic manifold with conditions (5) and (8), we obtain_ \[g((\nabla_{X}\,\varphi)Y,\varphi hZ)=\eta(X)\,g(hY,hZ)-\eta(Y)\,g(hX,hZ)\] \[-\frac{3}{2}\,\eta(X)\,g(hY,\widetilde{Q}hZ)-\frac{3}{2}\,\eta(Y)\,g(hX,\widetilde{Q}hZ)-\frac{3}{2}\,\eta(Z)\,g(hY,\widetilde{Q}hX). \tag{23}\]

The following lemma generalizes Lemma 3.3 in [6]. 
**Lemma 3.5**.: _For a weak nearly cosymplectic manifold \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) with conditions (5) and (8), we obtain_ \[(\nabla_{X}\,h)Y=g(h^{2}X,Y)\,\xi-\eta(Y)\,h^{2}X, \tag{24}\] \[R_{\,\xi,X}Y=-(\nabla_{X}\,h)Y,\] (25) \[\mathrm{Ric}\,(\xi,Z)=-\eta(Z)\,\mathrm{tr}\,h^{2}. \tag{26}\] _In particular, \(\nabla_{\xi}\,h=0\) and \(\mathrm{tr}(h^{2})=const\)._

Proof.: From (6) (since \(\xi\) is a Killing vector field) and (11), we find (25). Replacing \(Y\) by \(\varphi Y\) and \(Z\) by \(\varphi Z\) in \(g(R_{\,\xi,X}Y,Z)=-g((\nabla_{X}\,h)Y,Z)\), see (25), and using (20), we get \(g((\nabla_{X}\,h)\,\varphi Y,\varphi Z)=0\), i.e., \[g((\nabla_{X}\,h)Y,Z)=0,\quad Y,Z\in\ker\eta. \tag{27}\] Then, using (27), we find the \(\xi\)-component and \(\ker\eta\)-component of \((\nabla_{X}\,h)Y\): \[g((\nabla_{X}\,h)Y,\,\xi)=g(\nabla_{X}(h\,Y),\,\xi)=-g(h\,Y,\,\nabla_{X}\,\xi)=g(h^{2}X,Y),\] \[g((\nabla_{X}\,h)Y,Z)=\eta(Y)\,g((\nabla_{X}\,h)\,\xi,\,Z)=-\eta(Y)\,g(h^{2}X,Z)\quad(Z\in\ker\eta),\] from which (24) follows. From (24) with \(X=\xi\) we find \(\nabla_{\xi}\,h=0\). Let \(\{e_{i}\}\) (\(i=1,\ldots,2n+1\)) be a local orthonormal frame on \(M\) with \(e_{2n+1}=\xi\). Putting \(X=Y=e_{i}\) in (24), then using (7) and summing over \(i=1,\ldots,2n+1\), we get (26). Replacing \(Y\) by \(hY\) in (24), putting \(Y=Z=e_{i}\) in the resulting equation and summing over \(i=1,\ldots,2n+1\), we get \(\mathrm{tr}\,((\nabla_{X}\,h)\,h)=0\). This implies \(X(\mathrm{tr}(h^{2}))=0\) (\(X\in\mathfrak{X}_{M}\)), i.e., \(\mathrm{tr}(h^{2})=const\).

**Remark 3.1**.: By (24)-(25), we get \[g(R_{\,\xi,X}Y,Z)=-g((\nabla_{X}\,h)Y,Z)=\eta(Y)\,g(h^{2}X,Z)-\eta(Z)\,g(h^{2}X,Y). \tag{28}\] The function \(\delta\) of a weak nearly cosymplectic manifold has the following symmetries: \[\delta(Y,X,Z,V)=\delta(X,Y,V,Z)=\delta(Z,V,X,Y)=-\delta(X,Y,Z,V).\] If (8) is true, then by (28), we get \(\delta(\xi,Y,Z,V)\!=\!\delta(X,\xi,Z,V)\!=\!\delta(X,Y,\xi,V)\!=\!\delta(X,Y,Z,\xi)\!=0\): \[\delta(X,Y,Z,\xi)=g(R_{\,\xi,\widetilde{Q}\,Z}Y,X)+g(R_{\,\xi,Z}\,\widetilde{Q}\,X,Y)+g(R_{\,\xi,Z}\,X,\widetilde{Q}\,Y)=0.\]

## 4 Main results

In Section 4.1, we prove the splitting of weak nearly cosymplectic manifolds with conditions (5) and (8). For almost contact metric manifolds, conditions (5) and (8) become trivial. In Section 4.2 we characterize \(5\)-dimensional weak nearly cosymplectic manifolds.

### 4.1 The splitting theorem

The following proposition generalizes [13, Proposition 4.2].

**Proposition 4.1**.: _For a weak nearly cosymplectic manifold with conditions (5) and (8), the eigenvalues (and their multiplicities) of the symmetric operator \(h^{2}\) are constant._

Proof.: From (28) and Lemma 3.5 we obtain \[(\nabla_{X}\,h^{2})Y=h(\nabla_{X}\,h)Y+(\nabla_{X}\,h)hY=g(X,h^{3}Y)\,\xi-\eta(Y)\,h^{3}X. \tag{29}\] Consider an eigenvalue \(\mu\) of \(h^{2}\) and a local unit vector field \(Y\) orthogonal to \(\xi\) such that \(h^{2}Y=\mu Y\). Applying (29) for any nonzero vector fields \(X\) and \(Y\), we find \[0 =g((\nabla_{X}\,h^{2})Y,Y)=g(\nabla_{X}\,(h^{2}Y),Y)-g(h^{2}(\nabla_{X}\,Y),Y)\] \[=X(\mu)\,g(Y,Y)+\mu\,g(\nabla_{X}\,Y,Y)-g(\nabla_{X}\,Y,h^{2}Y)=X(\mu)\,g(Y,Y),\] which implies that \(X(\mu)=0\) (\(X\in\mathfrak{X}_{M}\)).

By Proposition 4.1, the spectrum of the self-adjoint operator \(h^{2}\) has the form \[Spec(h^{2})=\{0,-\lambda_{1}^{2},\ldots,-\lambda_{r}^{2}\}, \tag{30}\] where \(\lambda_{i}\) is a positive real number and \(\lambda_{i}\neq\lambda_{j}\) for \(i\neq j\). 
If \(X\neq 0\) is an eigenvector of \(h^{2}\) with eigenvalue \(-\lambda_{i}^{2}\), then \(X,\varphi X,hX\) and \(h\,\varphi X\) are orthogonal nonzero eigenvectors of \(h^{2}\) with eigenvalue \(-\lambda_{i}^{2}\). Since \(h(\xi)=0\), the eigenvalue \(0\) has multiplicity \(2p+1\) for some integer \(p\geq 0\). Denote by \(D_{0}\) the smooth distribution of the eigenvectors with eigenvalue \(0\) orthogonal to \(\xi\). Let \(D_{i}\) be the smooth distribution of the eigenvectors with eigenvalue \(-\lambda_{i}^{2}\). Remark that the distributions \(D_{0}\) and \(D_{i}\) belong to \(\ker\eta\) and are \(\varphi\)-invariant and \(h\)-invariant. The following proposition generalizes [13, Proposition 4.3]. **Proposition 4.2**.: _Let \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) be a weak nearly cosymplectic manifold with conditions (5) and (8), and let the spectrum of the self-adjoint operator \(h^{2}\) have the form (30). Then,_ \((a)\) _each distribution \([\xi]\oplus D_{i}\)\((i=1,\ldots,r)\) is integrable with totally geodesic leaves. Moreover, if the eigenvalue \(0\) of \(h^{2}\) is not simple, then_ \((b)\) _the distribution \(D_{0}\) is integrable with totally geodesic leaves, and each leaf of \(D_{0}\) is endowed with a weak nearly Kahler structure \((\bar{\varphi},\bar{g})\), see Definition 2.2, with the property \(\bar{\nabla}(\bar{\varphi}^{\,2})=0\);_ \((c)\) _the distribution \([\xi]\oplus D_{1}\oplus\ldots\oplus D_{r}\) is integrable with totally geodesic leaves._ Proof.: Consider an eigenvector \(X\) of \(h^{2}\) with eigenvalue \(-\lambda_{i}^{2}\). Then \(\nabla_{X}\,\xi=hX\in D_{i}\). On the other hand, (29) implies that \(\nabla_{\xi}\,h^{2}=0\), and thus \(\nabla_{\xi}X\) is also an eigenvector of \(h^{2}\) with eigenvalue \(-\lambda_{i}^{2}\). Now, taking \(X,Y\in D_{i}\) and applying (29), we get \[h^{2}(\nabla_{X}\,Y)=-\lambda_{i}^{2}\,\nabla_{X}Y-(\nabla_{X}\,h^{2})Y=- \lambda_{i}^{2}\,\nabla_{X}Y+\lambda_{i}^{2}\,g(X,hY)\,\xi.\] Therefore, \[h^{2}(\varphi^{2}\nabla_{X}Y)=\varphi^{2}(h^{2}\nabla_{X}Y)=-\lambda_{i}^{2} \,\varphi^{2}(\nabla_{X}Y).\] Thus \(\varphi^{2}\nabla_{X}Y\in D_{i}\). Similarly, using (15), we get \(\widetilde{Q}\nabla_{X}Y\in D_{i}\). It follows that \[\nabla_{X}Y=-\widetilde{Q}\,\nabla_{X}Y-\varphi^{2}\nabla_{X}Y+\eta(\nabla_{X }Y)\,\xi,\] see (2), belongs to the distribution \([\xi]\oplus D_{i}\). This proves (a). Assume that the eigenvalue \(0\) of \(h^{2}\) is not simple. By (29), we get \((\nabla_{X}\,h^{2})Y=0\) for any linear independent vectors \(X,Y\) in \(D_{0}\), hence \(h^{2}(\nabla_{X}Y)=0\). Moreover, \[g(\nabla_{X}Y,\xi)=-g(Y,\nabla_{X}\,\xi)=-g(Y,hX)=0.\] Thus, the distribution \(D_{0}\) defines a totally geodesic foliation. By (13) and (15), the leaves of \(D_{0}\) are \(\varphi\)-invariant and \(Q\)-invariant. Thus, the weak nearly cosymplectic structure on \(M\) with conditions (5) and (8) induces a weak nearly Kahler structure \((\bar{\varphi},\bar{g})\) on each leaf of \(D_{0}\) with the property \(\nabla(\bar{\varphi}^{\,2})=0\), where \(\bar{\nabla}\) is the Levi-Civita connection of \(\bar{g}\). This proves (b). To prove (c) taking (a) into account, it is enough to show that \(g(\nabla_{X}Y,Z)=0\) for every \(X\in D_{i}\), \(Y\in D_{j}\)\((i\neq j)\) and \(Z\in D_{0}\). 
Indeed, from (29), we obtain \[g(\nabla_{X}\,Y,Z) =-(1/\lambda_{j}^{2})\,g(\nabla_{X}(h^{2}Y),Z)=-(1/\lambda_{j}^{ 2})\,g((\nabla_{X}\,h^{2})Y+h^{2}(\nabla_{X}Y),Z)\] \[=-(1/\lambda_{j}^{2})\,\eta(Z)\,g(X,h^{3}Y)-(1/\lambda_{j}^{2})\,g (\nabla_{X}Y,h^{2}Z),\] which vanishes since \(\eta(Z)=0\) and \(h^{2}Z=0\). The following proposition generalizes [13, Proposition 4.1] and does not use Lemmas 3.2-3.4. **Proposition 4.3**.: _For a weak nearly cosymplectic (non-weak-cosymplectic) manifold, \(h\equiv 0\) if and only if the manifold is locally isometric to the Riemannian product of a real line and a weak nearly Kahler (non-weak-Kahler) manifold._ Proof.: For every vector fields \(X,Y\) orthogonal to \(\xi\) we have \[2\,d\eta(X,Y)=g(\nabla_{X}\,\xi,Y)-g(\nabla_{Y}\,\xi,X)=2\,g(hX,Y). \tag{31}\] Thus, by the condition \(h=0\), the contact distribution \(\ker\eta\) is integrable. Any integral submanifold of \(\ker\eta\) is a totally geodesic hypersurface. Indeed, for every \(X,Y\in\ker\eta\), we have \[g(\nabla_{X}\,Y,\xi)=-g(Y,hX)=0.\] Since \(\nabla_{\xi}\,\xi=0\), by de Rham Decomposition Theorem (e.g., [11]), the manifold is locally isometric to the Riemannian product \(\mathbb{R}\times\bar{M}\). The weak almost contact metric structure induces on \(\bar{M}\) a weak almost Hermitian structure which, by conditions, is weak nearly Kahler. Conversely, if a weak nearly cosymplectic manifold is locally isometric to the Riemannian product \(\mathbb{R}\times\bar{M}\), where \(\bar{M}\) is a weak nearly Kahler manifold and \(\xi=(0,\partial_{t})\), then \(d\eta(X,Y)=0\)\((X,Y\in\ker\eta)\). By (31) and \(h\,\xi=0\), we get \(h=0\). We will generalize Theorem 4.5 in [13] on splitting of nearly cosymplectic manifolds. **Theorem 4.1**.: _Let \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) be a weak nearly cosymplectic (non-weak-cosymplectic) manifold of dimension \(2n+1>5\) with conditions (5) and (8). Then \(M\) is locally isometric to one of the following Riemannian products:_ \[\mathbb{R}\times\bar{M}^{\,2n},\quad B^{5}\times\bar{M}^{\,2n-4},\] _where \(\bar{M}\) is endowed with a weak nearly Kahler structure \((\bar{\varphi},\bar{g})\) with the property \(\bar{\nabla}(\bar{\varphi}^{\,2})=0\), and \(B^{5}\) is a weak nearly cosymplectic (non-weak-cosymplectic) manifold satisfying (5) and (8). If the manifold \(M\) is complete and simply connected, then, by the de Rham Decomposition Theorem, the isometry is global._ Proof.: If \(h\equiv 0\), then by Proposition 4.3, \(M\) is locally isometric to \(\mathbb{R}\times\bar{M}^{\,2n}\). Let \(h\neq 0\) on \(\ker\eta\setminus\{0\}\) and (30), where \(r\geq 1\) and each \(\lambda_{i}\) is a positive number. Since \(\dim M>5\), by Theorem 4.2 in Section 4.2, the eigenvalue \(0\) is not a simple eigenvalue. By (b) and (c) of Proposition 4.2, and according to de Rham Decomposition Theorem (e.g., [11]), \(M\) is locally isometric to the Riemannian product \(B\times\bar{M}\), where \(B\) is an integral submanifold of the distribution \([\xi]\oplus D(-\lambda_{1}^{2})\oplus\ldots\oplus D(-\lambda_{r}^{2})\), and \(\bar{M}\) is an integral submanifold of \(D_{0}\), which is endowed with a weak nearly Kahler structure \((\bar{\varphi},\bar{g})\) and, by the condition (5), has the property \(\bar{\nabla}(\bar{\varphi}^{\,2})=0\). Note that \(B\) is endowed with an induced weak nearly cosymplectic (non-weak-cosymplectic) structure, for which \(0\) is a simple eigenvalue of the operator \(h^{2}\). 
By Theorem 4.2 in Section 4.2, \(B\) is a \(5\)-dimensional manifold and \(\lambda_{1}=\ldots=\lambda_{r}\). Consequently, \(\dim\bar{M}=2n-4\).

### 4.2 Characterization of 5-dimensional weak nearly cosymplectic manifolds

Here, we use Lemmas 3.2-3.4 to characterize 5-dimensional weak nearly cosymplectic manifolds.

**Proposition 4.4** (see Proposition 3.2 in [13]).: _Let \(\eta\) be a contact 1-form on a smooth manifold \(M\) of dimension \(2n+1>5\). Then, the following operator is injective:_ \[\Upsilon_{d\eta}:\beta\in\Lambda^{2}(M)\to d\eta\wedge\beta\in\Lambda^{4}(M),\] _where \(\Lambda^{p}(M)\) is the vector bundle of differential \(p\)-forms on \(M\)._

The following result generalizes Theorem 4.4 in [13] on 5-dimensional cosymplectic manifolds.

**Theorem 4.2**.: _Let \(M^{\,2n+1}(\varphi,Q,\xi,\eta,g)\) be a weak nearly cosymplectic manifold with conditions (5) and (8) such that \(0\) is a simple eigenvalue of \(h^{2}\). Then \(M\) is a 5-dimensional manifold._

Proof.: **1**. We consider 2-forms \(\Phi_{k}(X,Y)=g(\varphi h^{k}X,Y)\), where \(k=0,1,2\); in particular, \(\Phi_{0}=-\Phi\). It is easy to calculate, see [14]: \[3\,d\Phi(X,Y,Z)=g((\nabla_{X}\,\varphi)Z,Y)+g((\nabla_{Y}\,\varphi)X,Z)+g((\nabla_{Z}\,\varphi)Y,X).\] We will show that \[d\Phi_{0}=3\,\eta\wedge\Phi_{1},\quad d\Phi_{1}=3\,\eta\wedge\Phi_{2}. \tag{32}\] Indeed, applying (11) and \(\varphi\,\xi=0\), we find the \(\xi\)-component of \((\nabla_{X}\,\varphi)Y\): \[g((\nabla_{X}\,\varphi)Y,\xi)=-g((\nabla_{X}\,\varphi)\,\xi,Y)=g(\varphi\nabla_{X}\,\xi,Y)=g(\varphi hX,Y). \tag{33}\] Replacing \(Z\) by \(\varphi Z\) in (23) and using (13), we obtain \[g((\nabla_{X}\,\varphi)Y,-\varphi^{2}hZ)=\eta(X)\,g(hY,h\varphi Z)-\eta(Y)\,g(hX,h\varphi Z)\] \[+\frac{3}{2}\,\eta(X)\,g(h^{2}Y,\widetilde{Q}\,\varphi Z)+\frac{3}{2}\,\eta(Y)\,g(h^{2}X,\widetilde{Q}\,\varphi Z). \tag{34}\] By conditions, \(h\neq 0\) on \(\ker\eta\setminus\{0\}\), thus from (34) we get \[g((\nabla_{X}\,\varphi)Y,V)=0,\quad X,Y,V\in\ker\eta.\] By the above and (33), using \(X=X^{\top}+\eta(X)\,\xi\) and \(Y=Y^{\top}+\eta(Y)\,\xi\), we obtain \[g((\nabla_{X}\,\varphi)Y,V)=\eta(V)\,g((\nabla_{X^{\top}}\,\varphi)Y^{\top},\xi)+\eta(X)\,g((\nabla_{\,\xi}\,\varphi)Y^{\top},V)+\eta(Y)\,g((\nabla_{X^{\top}}\,\varphi)\,\xi,V)\] \[=-\eta(V)\,g((\nabla_{X^{\top}}\,\varphi)\,\xi,Y^{\top})-\eta(X)\,g((\nabla_{Y^{\top}}\,\varphi)\,\xi,V)+\eta(Y)\,g((\nabla_{X^{\top}}\,\varphi)\,\xi,V)\] \[=\eta(V)\,g(\varphi hX,Y)+\eta(X)\,g(\varphi hY,V)+\eta(Y)\,g(\varphi hV,X), \tag{35}\] which implies that \(d\Phi_{0}=3\,\eta\wedge\Phi_{1}\). Similarly, using (35) and (28), we get \[g((\nabla_{X}\,(\varphi\,h))Y,Z)=g((\nabla_{X}\,\varphi)hY,Z)+g(\varphi(\nabla_{X}\,h)Y,Z)\] \[=\eta(X)\,g(\varphi\,h^{2}Y,Z)+\eta(Y)\,g(\varphi\,h^{2}Z,X)+\eta(Z)\,g(\varphi\,h^{2}X,Y),\] which implies \(d\Phi_{1}=3\,\eta\wedge\Phi_{2}\) and completes the proof of (32). From (32) we obtain \[0=d^{2}\Phi_{0}=3\,d\eta\wedge\Phi_{1}-3\,\eta\wedge d\Phi_{1}=3\,d\eta\wedge\Phi_{1}. \tag{36}\] **2**. Next we will show that if \(0\) is a simple eigenvalue of \(h^{2}\), then \(\eta\) is a contact form. We assume (30) with \(r\geq 1\), \(0\) being a simple eigenvalue. From (28) with \(Y=\xi\), using (12), we find the \(\xi\)-sectional curvature: \[K(\xi,X)=g(hX,hX)\quad(X\in\ker\eta,\ g(X,X)=1). \tag{37}\] By (37) and the assumption, the \(\xi\)-sectional curvature of \(M\) is positive. 
By [16, Theorem 3], we get a K-contact structure on \(M\) (i.e., a weak contact metric manifold, whose Reeb vector field is Killing, see [16]), thus \(\eta\) is a contact 1-form. **3**. If \(2n+1>5\), \(\eta\) being a contact form (see Lemma 2.1), the fact that \(d\eta\wedge\Phi_{1}=0\), see (36), implies \(\Phi_{1}=0\), which contradicts Proposition 4.4. Hence, \(M\) is a 5-dimensional manifold and the multiplicity of the eigenvalue \(-\lambda^{2}\) is 4.

## 5 Appendix: proofs of Lemmas 3.2-3.4

Here, in the proofs of Lemmas 3.2-3.4 we use the approach of [6], but our formulas also contain terms depending on the tensors \(Q\) and \(\widetilde{Q}\).

**Proof of Lemma 3.2**. As in the proof of Lemma 3.4 in [6], differentiating (3) and using (13), (5) and the skew-symmetry of \(\nabla_{X}\,\varphi\), we get (16). We obtain (17) from (16) by the condition (4). Replacing \(Y\) by \(\varphi Y\) in (17) and using (16) and (2), we get (18). \(\square\)

**Proof of Lemma 3.3**. **1**. Although the proof of (19) is similar to the proof of equation (3.4) in [6], we present it, because some formulas appearing in the proof of (19) are also used in the proof of (21). Differentiating (4), we find \[(\nabla^{2}_{X,Y}\,\varphi)Z+(\nabla^{2}_{X,Z}\,\varphi)Y=0. \tag{38}\] Applying the Ricci identity (10), from (38) and the skew-symmetry of \(\nabla^{2}_{X,Y}\,\varphi\) we get \[g(R_{X,Y}Z,\varphi V)-g(R_{X,Y}V,\varphi Z)+g((\nabla^{2}_{X,Z}\,\varphi)Y,V)-g((\nabla^{2}_{Y,Z}\,\varphi)X,V)=0. \tag{39}\] By the Bianchi and Ricci identities, we find \[g(R_{X,Y}Z,\varphi V)=-g(R_{Y,Z}X,\varphi V)-g(R_{Z,X}Y,\varphi V)\] \[=g((\nabla^{2}_{Y,Z}\,\varphi)V,X)-g((\nabla^{2}_{Z,Y}\,\varphi)V,X)-g(R_{Y,Z}V,\varphi X)-g(R_{Z,X}Y,\varphi V). \tag{40}\] Substituting (40) into (39), it follows that \[g(R_{X,Z}Y,\varphi V)-g(R_{X,Y}V,\varphi Z)-g(R_{Y,Z}V,\varphi X)\] \[-g((\nabla^{2}_{Z,Y}\,\varphi)V,X)-g((\nabla^{2}_{X,Z}\,\varphi)V,Y)=2\,g((\nabla^{2}_{Y,Z}\,\varphi)X,V). \tag{41}\] On the other hand, using (38) and the Ricci identity (10), we see that \[g(R_{X,Z}Y,\varphi V)-g(R_{X,Z}V,\varphi Y)-g((\nabla^{2}_{X,Z}\,\varphi)Y,V)+g((\nabla^{2}_{Z,X}\,\varphi)Y,V)=0. \tag{42}\] Adding (42) to (41), we get \[2\,g(R_{X,Z}Y,\varphi V)-g(R_{X,Y}V,\varphi Z)-g(R_{Y,Z}V,\varphi X)-g(R_{X,Z}V,\varphi Y)=2\,g((\nabla^{2}_{Y,V}\,\varphi)Z,X). \tag{43}\] Swapping \(Y\) and \(V\) in (43), we find \[2\,g(R_{X,Z}V,\varphi Y)-g(R_{X,V}Y,\varphi Z)-g(R_{V,Z}Y,\varphi X)-g(R_{X,Z}Y,\varphi V)=2\,g((\nabla^{2}_{V,Y}\,\varphi)Z,X). \tag{44}\] Subtracting (44) from (43), and using the Bianchi and Ricci identities, we get \[g(R_{\varphi X,Z}Y,V)+g(R_{X,\varphi Z}Y,V)+g(R_{X,Z}\varphi Y,V)+g(R_{X,Z}Y,\varphi V)=0,\] which (by exchanging \(Z\) and \(Y\)) gives (19). **2**. Replacing \(X\) by \(\varphi X\) in (19) and using (2), we have \[-g(R_{QX,Y}Z,V)+\eta(X)\,g(R_{\xi,Y}Z,V)+g(R_{\varphi X,\varphi Y}Z,V)\] \[+g(R_{\varphi X,Y}\varphi Z,V)+g(R_{\varphi X,Y}Z,\varphi V)=0. \tag{45}\] Exchanging \(X\) and \(Y\) in (45), we find \[g(R_{X,QY}Z,V)+\eta(Y)\,g(R_{\xi,X}Z,V)-g(R_{\varphi X,\varphi Y}Z,V)\] \[+g(R_{\varphi Y,X}\varphi Z,V)+g(R_{\varphi Y,X}Z,\varphi V)=0. \tag{46}\] Subtracting (46) from (45), we obtain \[2\,g(R_{\varphi X,\varphi Y}Z,V)-2\,g(R_{X,Y}Z,V)+\eta(X)\,g(R_{\xi,Y}Z,V)-\eta(Y)\,g(R_{\xi,X}Z,V)\] \[+g(R_{\varphi X,Y}\varphi Z,V)-g(R_{\varphi Y,X}\varphi Z,V)+g(R_{\varphi X,Y}Z,\varphi V)-g(R_{\varphi Y,X}Z,\varphi V)\] \[-g(R_{\widetilde{Q}X,Y}Z,V)-g(R_{X,\widetilde{Q}Y}Z,V)=0. 
\tag{47}\] Then, replacing \(Z\) by \(\varphi Z\) and also \(V\) by \(\varphi V\) in (19) and using (2), we get two equations \[-g(R_{X,Y}QZ,V)=-\eta(Z)\,g(R_{X,Y}\,\xi,V)-g(R_{X,Y}\varphi Z, \varphi V)\] \[-g(R_{X,\varphi Y}\varphi Z,V)-g(R_{\varphi X,Y}\varphi Z,V), \tag{48}\] \[-g(R_{X,Y}Z,QV)=-\eta(V)\,g(R_{X,Y}Z,\,\xi)-g(R_{X,Y}\varphi Z, \varphi V)\] \[-g(R_{\varphi X,Y}Z,\varphi V)-g(R_{X,\varphi Y}Z,\varphi V). \tag{49}\] Adding (48) to (49), we get \[-2\,g(R_{X,Y}Z,V)=g(R_{X,Y}\widetilde{Q}Z,V)+g(R_{X,Y}Z, \widetilde{Q}V)\] \[-\eta(Z)\,g(R_{X,Y}\,\xi,V)-2\,g(R_{X,Y}\varphi Z,\varphi V)-g(R_ {X,\varphi Y}\varphi Z,V)-g(R_{\varphi X,Y}\varphi Z,V)\] \[-\eta(V)\,g(R_{X,Y}Z,\,\xi)-g(R_{\varphi X,Y}Z,\varphi V)-g(R_{X, \varphi Y}Z,\varphi V).\] Substituting the above equation into (47), we have \[2\,g(R_{\varphi X,\varphi Y}Z,V)-2\,g(R_{X,Y}\varphi Z,\varphi V )-\eta(Z)\,g(R_{X,Y}\,\xi,V)-\eta(V)\,g(R_{X,Y}Z,\,\xi)\] \[+\eta(X)\,g(R_{\xi,Y}Z,V)-\eta(Y)\,g(R_{\xi,X}Z,V)+\delta(X,Y,Z,V) =0. \tag{50}\] Replacing \(X\) by \(\varphi X\) and also \(Y\) by \(\varphi Y\) in (50) and using (2), we obtain \[2\,g(R_{QX,QY}Z,V)-2\,\eta(X)\,g(R_{\xi,QY}Z,V)+2\,\eta(Y)\,g(R_ {\xi,QX}Z,V)-2\,g(R_{\varphi X,\varphi Y}\,\varphi Z,\varphi V)\] \[+\,\delta(\varphi X,\varphi Y,Z,V)=\eta(Z)\,g(R_{\xi,V}X,Y)-\eta( V)\,g(R_{\xi,Z}\,\varphi X,\varphi Y). \tag{51}\] Replacing \(V\) by \(\xi\) in (51), we obtain \[2\,g(R_{\,QX,QY}Z,\xi)-2\,\eta(X)\,g(R_{\xi,QY}Z,\xi)+2\,\eta(Y) \,g(R_{\xi,QX}Z,\xi)\] \[+\,g(R_{\xi,Z}\,\varphi X,\varphi Y)+\delta(\varphi X,\varphi Y, Z,\,\xi)=0. \tag{52}\] Replacing \(X\) by \(\varphi X\) and also \(Y\) by \(\varphi Y\) in (52), we obtain \[4\,g(R_{\,Q\,\varphi X,\,Q\,\varphi Y}Z,\xi)+2\,\delta(\varphi^{ 2}X,\varphi^{2}Y,Z,\,\xi)+2\,g(R_{\,\xi,Z}\,QX,QY)\] \[-2\,\eta(X)\,g(R_{\xi,Z}\,\xi,QY)+2\,\eta(Y)\,g(R_{\xi,Z}\,\xi,QX )=0. \tag{53}\] Adding (52) and (53), we get \[4\,g(R_{Q\,\varphi X,\,Q\,\varphi Y}Z,\xi)+g(R_{\xi,Z}\,\varphi X,\varphi Y)+\delta(\varphi X,\varphi Y,Z,\,\xi)+2\,\delta(\varphi^{2}X,\varphi ^{2}Y,Z,\,\xi),\] or, using \(Q=\operatorname{id}_{TM}+\widetilde{Q}\), \[3\,g(R_{\,\xi,Z}\,\varphi X,\varphi Y)=-4\,g(R_{\xi,Z}\, \widetilde{Q}\,\varphi X,\varphi Y)-4\,g(R_{\xi,Z}\,\varphi X,\widetilde{Q}\, \varphi Y)\] \[-4\,g(R_{\xi,Z}\,\widetilde{Q}\,\varphi X,\widetilde{Q}\, \varphi Y)+\delta(\varphi X,\varphi Y,Z,\,\xi)+2\,\delta(\varphi^{2}X,\varphi ^{2}Y,Z,\,\xi). \tag{54}\] From (54), using the condition (8), and the equality \(\delta(\varphi X,\varphi Y,Z,\xi)=0\), see Remark 3.1, we get (20). Since \(\varphi|_{\,\ker\eta}\) is non-degenerate, the distribution \(\ker\eta\) is curvature invariant, see (7). **3**. Using (50), (28) and symmetry of \(h^{2}\) and \(Q\), we obtain (21): \[2\,g(R_{\varphi X,\varphi Y}Z,V)-2\,g(R_{X,Y}\varphi Z,\varphi V )+\delta(X,Y,Z,V)=\eta(Z)\,g(R_{\,\xi,V}X,Y)\] \[+\eta(V)\,g(R_{\,\xi,Z}Y,X)-\eta(X)\,g(R_{\,\xi,Y}Z,V)+\eta(Y)\,g(R _{\,\xi,Z}Z,V)\] \[=\eta(Z)\,\eta(X)\,g(h^{2}V,QY)-\eta(Z)\,\eta(Y)\,g(h^{2}V,QX)\] \[+\eta(V)\,\eta(Y)\,g(h^{2}Z,QX)-\eta(V)\,\eta(X)\,g(h^{2}Z,QY)\] \[-\eta(X)\,\eta(Z)\,g(h^{2}X,QV)+\eta(X)\,\eta(V)\,g(h^{2}Y,QZ)\] \[+\eta(Y)\,\eta(Z)\,g(h^{2}X,QV)-\eta(Y)\,\eta(V)\,g(h^{2}X,QZ)=0.\] Note that (28) uses (24)-(25), which require conditions (5) and (8). **4**. Replacing \(X\) by \(\varphi X\) and \(Y\) by \(\varphi Y\) in (21) and using (2), we get (22). **Proof of Lemma 3.4**. We prove (23) following the proof of (3.50) in [6]. 
In Step 1, our formulas depend on four vectors \(X,Y,Z,V\in\mathfrak{X}_{M}\) and contain many additional \(\widetilde{Q}\)-dependent terms. In Step 2, the formulas depend only on three vectors from \(TM\) and contain few \(\widetilde{Q}\)-terms. **Step 1**. Differentiating (16) and using \(g((\nabla_{X}\varphi)(\nabla_{V}\varphi)Y,(\nabla_{X}\varphi)Z)\) gives \[g((\nabla_{V}\,\varphi)Y,(\nabla_{X}\,\varphi)Z)+g((\nabla_{X} \,\varphi)Y,(\nabla_{V}\,\varphi)Z)=g((\nabla_{V,X}^{2}\,\varphi)\varphi Y,Z)\] \[+g((\nabla_{V,X}^{2}\,\varphi)\varphi Z,Y)-g(hX,Z)\,g(hX,Y)- \eta(Z)\,g((\nabla_{V}h)X,Y)\] \[-g(hV,Y)\,g(hX,Z)-\eta(Y)\,g((\nabla_{V}h)X,Z)-\nabla_{V}\big{(} \eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}. \tag{55}\] On the other hand, using (38), (43) and (2), we find \(\nabla^{2}\)-terms in (55): \[g((\nabla_{V,X}^{2}\,\varphi)\varphi Z,Y)=g((\nabla_{V,\varphi Z }^{2}\,\varphi)Y,X)=-g(R_{X,Y}V,QZ)+\eta(Z)\,g(R_{X,Y}V,\xi)\] \[-(1/2)\,g(R_{X,V}\varphi Z,\varphi)Y-(1/2)\,g(R_{V,Y}\,\varphi Z,\varphi X)-(1/2)\,g(R_{X,Y}\varphi Z,\varphi V), \tag{56}\] \[g((\nabla_{V,X}^{2}\,\varphi)\varphi Y,Z)=g((\nabla_{V,\varphi Y }^{2}\,\varphi)Z,X)=-g(R_{X,Z}V,QY)+\eta(Y)\,g(R_{X,Z}V,\xi)\] \[-(1/2)\,g(R_{X,V}\varphi Y,\varphi Z)-(1/2)\,g(R_{V,Z}\varphi Y, \varphi X)-(1/2)\,g(R_{X,Z}\varphi Y,\varphi V). \tag{57}\] Using (28), we get from (55) and (56)-(57) the equality \[g((\nabla_{V}\,\varphi)Y,(\nabla_{X}\,\varphi)Z)+g((\nabla_{X} \,\varphi)Y,(\nabla_{V}\,\varphi)Z)=-g(R_{X,Z}V,QY)\] \[+\eta(Y)\,g(R_{X,Z}V,\xi)-g(R_{X,Y}V,QZ)+\eta(Z)\,g(R_{X,Y}V,\xi)\] \[-(1/2)\,g(R_{X,V}\varphi Y,\varphi Z)-(1/2)\,g(R_{V,Z}\varphi Y, \varphi X)-(1/2)\,g(R_{X,Z}\varphi Y,\varphi V)\] \[-(1/2)\,g(R_{X,V}\varphi Z,\varphi Y)-(1/2)\,g(R_{V,Y}\varphi Z, \varphi X)-(1/2)\,g(R_{X,Y}\varphi Z,\varphi V)\] \[-g(hV,Z)\,g(hX,Y)-\eta(Z)\,g((\nabla_{V}h)X,Y)\] \[-g(hV,Y)\,g(hX,Z)-\eta(Y)\,g((\nabla_{V}h)X,Z)-\nabla_{V}\big{(} \eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}. \tag{58}\] From (58), applying (21) twice, i.e., \[g(R_{Y,V}\varphi Z,\varphi X)=g(R_{\varphi X,\varphi Z}V,Y) \stackrel{{\eqref{eq:21}}}{{=}}g(R_{X,Z}\varphi V,\varphi Y)- \delta(X,Z,V,Y),\] \[g(R_{Y,X}\varphi Z,\varphi V)=g(R_{\varphi V,\varphi Z}X,Y) \stackrel{{\eqref{eq:21}}}{{=}}g(R_{V,Z}\varphi X,\varphi Y)- \delta(V,Z,X,Y),\] we get \[g((\nabla_{V}\,\varphi)Y,(\nabla_{X}\,\varphi)Z)+g((\nabla_{X} \,\varphi)Y,(\nabla_{V}\,\varphi)Z)=-g(R_{X,Z}V,QY)\] \[-g(R_{X,Y}V,QZ)+g(R_{V,Z}\varphi X,\varphi Y)+g(R_{X,Z}\varphi V, \varphi Y)\] \[+\eta(Y)\,g(R_{X,Z}V,\xi)+\eta(Z)\,g(R_{X,Y}V,\xi)-g(hV,Z)\,g(hX,Y)\] \[-\eta(Z)\,g((\nabla_{V}h)X,Y)-g(hV,Y)\,g(hX,Z)-\eta(Y)\,g((\nabla_ {V}h)X,Z)\] \[-(1/2)\,\delta(X,Z,V,Y)-(1/2)\,\delta(V,Z,X,Y)-\nabla_{V}\big{(} \eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}. \tag{59}\] From (59), replacing \((\nabla_{V}h)X\) by (28), we get \[g((\nabla_{X}\,\varphi)Z,(\nabla_{V}\,\varphi)Y)+g((\nabla_{X} \,\varphi)Y,(\nabla_{V}\,\varphi)Z)\] \[+g(R_{X,Z}V,QY)+g(R_{X,Y}V,QZ)-g(R_{V,Z}\varphi X,\varphi Y)\] \[-g(R_{X,Z}\varphi V,\varphi Y)+g(hV,Y)\,g(hX,Z)+g(hV,Z)\,g(hX,Y)\] \[=(1/2)\delta(X,Z,V,Y)+(1/2)\delta(V,Z,X,Y)-\nabla_{V}\big{(}\eta( Z)\,g(hX,\widetilde{Q}Y)\big{)}. 
\tag{60}\] Replacing in (60) \(Z\) and \(V\) by \(\varphi Z\) and \(\varphi V\), we find \[g((\nabla_{X}\,\varphi)\varphi Z,(\nabla_{\varphi V}\,\varphi)Y)+ g((\nabla_{X}\,\varphi)Y,(\nabla_{\varphi V}\,\varphi)\varphi Z)\] \[+g(R_{X,\varphi Z}\varphi V,QY)-g(R_{X,\varphi Z}\varphi^{2}V, \varphi Y)+g(R_{X,Y}\varphi V,\varphi QZ)\] \[-g(R_{\varphi V,\varphi Z}\varphi X,\varphi Y)+g(hX,Y)\,g(h\varphi V,\varphi Z)+g(hX,\varphi Z)\,g(h\varphi V,Y)\] \[=(1/2)\,\delta(X,\varphi Z,\varphi V,Y)+(1/2)\,\delta(\varphi V,\varphi Z,X,Y). \tag{61}\] Using (16), (17), (18), (2) and Lemma 3.1, we find \[g((\nabla_{X}\,\varphi)\varphi Z,(\nabla_{\varphi V}\,\varphi)Y)=g (Q(\nabla_{X}\,\varphi)Z,(\nabla_{V}\,\varphi)Y)\] \[-g(X,\varphi hZ)\,g(V,\varphi hY)-\eta(V)\,g((\nabla_{X}\,\varphi )Z,\varphi hY)+g(QhX,Z)\,g(hV,Y)\] \[+\eta(Z)g(\varphi hX,(\nabla_{V}\,\varphi)Y)-\eta(Z)\,\eta(V)g(hX,hY)+g(QhX,Z)g(hV,\widetilde{Q}Y), \tag{62}\] \[g((\nabla_{X}\,\varphi)Y,(\nabla_{\varphi V}\,\varphi)\varphi)Z )=-g(Q(\nabla_{X}\,\varphi)Y,(\nabla_{V}\,\varphi)Z)\] \[+\eta(V)g(\varphi hZ,(\nabla_{X}\varphi)Y)-\eta(Z)g(\varphi hV,( \nabla_{X}\varphi)Y)+g(\varphi hX,Y)g(\varphi hZ,\widetilde{Q}V). \tag{63}\] From (28) and Lemma 3.1, we have \[g(R_{X,\varphi Z}\varphi V,Y)-g(R_{X,\varphi Z}\varphi^{2}V, \varphi Y)=g(R_{X,\varphi Z}\varphi V,Y)\] \[+g(R_{X,\varphi Z}QV,\varphi Y)-\eta(X)\,\eta(V)\,g(h^{2}Z,QY). \tag{64}\] On the other hand, from (19) and (21) it follows that \[g(R_{X,Z}\varphi V,\varphi Y)+g(R_{X,Z}\varphi^{2}V,Y)+g(R_{X, \varphi Z}\varphi V,Y)+g(R_{\varphi X,Z}\varphi V,Y)=0, \tag{65}\] \[g(R_{\varphi X,Z}\varphi V,\varphi^{2}Y)=g(R_{\varphi^{2}X, \varphi Z}V,\varphi Y)+(1/2)\,\delta(\varphi X,Z,V,\varphi Y). \tag{66}\] From (66), using (2), we find \[-g(R_{\varphi X,Z}\varphi V,QY)+\eta(Y)\,g(R_{\varphi X,Z}\varphi V,\xi)=-g(R_{QX,\varphi Z}V,\varphi Y)\] \[+\eta(X)\,g(R_{\xi,\varphi Z}V,\varphi Y)+(1/2)\,\delta(\varphi X,Z,V,\varphi Y). \tag{67}\] Summing up the formulas (65) and (67) (and using (65), (66), (28) and Lemma 3.1), we obtain \[g(R_{X,\varphi Z}\varphi V,Y)+g(R_{QX,\varphi Z}V,\varphi Y)= \eta(V)\,\eta(X)\,g(h^{2}Y,Z)\] \[-\eta(Z)\,\eta(Y)\,g(h^{2}V,QX)+g(R_{X,Z}\,QV,Y)-g(R_{X,Z}\varphi V,\varphi Y)\] \[+\eta(V)\,g(R_{\xi,Y}Z,X)+g(R_{\varphi X,Z}\varphi V,\widetilde{ Q}Y)+(1/2)\,\delta(\varphi X,Z,V,\varphi Y). \tag{68}\] Substituting (68) into (64), we get \[g(R_{X,\varphi Z}\varphi V,Y)-g(R_{X,\varphi Z}\varphi^{2}V, \varphi Y)=g(R_{X,Z}\,QV,Y)-g(R_{X,Z}\varphi V,\varphi Y)\] \[+\eta(V)\,\eta(Z)\,g(h^{2}Y,X)-\eta(Z)\,\eta(Y)\,g(h^{2}X,QV)- \eta(X)\,\eta(V)\,g(h^{2}Z,QY)\] \[+g(R_{X,\varphi Z}\widetilde{Q}V,\varphi Y)+g(R_{\varphi X,Z} \varphi V,\widetilde{Q}Y)+(1/2)\,\delta(\varphi X,Z,V,\varphi Y). \tag{69}\] By means of (28) and (22), we have \[g(R_{X,Y}\varphi V,\varphi Z)-g(R_{\varphi V,\varphi Z}\varphi X,\varphi Y)=g(R_{V,Z}\varphi X,\varphi Y)\] \[-g(R_{V,Z}QX,QY)-\eta(X)\,\eta(Z)\,g(h^{2}QY,V)+\eta(X)\,\eta(V) \,g(h^{2}QY,Z)\] \[+\eta(Y)\,\eta(Z)\,g(h^{2}QX,V)-\eta(Y)\,\eta(V)\,g(h^{2}QX,Z)\] \[-(1/2)\,\delta(\varphi X,\varphi Y,Z,V)-(1/2)\,\delta(X,Y,Z,V). 
\tag{70}\] Substituting (62), (63), (69) and (70) into (61), and using Lemma 3.1, we obtain \[g(Q(\nabla_{X}\,\varphi)Z,(\nabla_{V}\,\varphi)Y)-g(Q(\nabla_{X }\,\varphi)Y,(\nabla_{V}\,\varphi)Z)\] \[+\eta(V)\,g(\varphi hZ,(\nabla_{X}\,\varphi)Y)-\eta(V)\,g((\nabla _{X}\,\varphi)Z,\varphi hY)\] \[+\eta(Z)\,g(\varphi hX,(\nabla_{V}\,\varphi)Y)-\eta(Z)\,g(\varphi h V,(\nabla_{X}\,\varphi)Y)\] \[+g(R_{X,Z}\,QV,Y)-g(R_{X,Z}\varphi V,\varphi Y)+g(R_{V,Z}\varphi X,\varphi Y)-g(R_{V,Z}QX,QY)\] \[-g(X,\varphi hZ)\,g(V,\varphi hY)+g(QhX,Z)\,g(hV,Y)\] \[+2\,\eta(V)\,\eta(Z)\,g(h^{2}Y,X)-\eta(X)\,\eta(Z)\,g(h^{2}QY,V)- \eta(Y)\,\eta(V)\,g(h^{2}Z,QX)\] \[-g(hX,Y)\,g(hV,QZ)+g(hX,\varphi Z)\,g(h\varphi V,Y)\] \[=(1/2)\,\delta(X,\varphi Z,\varphi V,Y)+(1/2)\,\delta(\varphi V, \varphi Z,X,Y)+(1/2)\,\delta(\varphi X,\varphi Y,Z,V)\] \[+(1/2)\,\delta(X,Y,Z,V)-(1/2)\,\delta(\varphi X,Z,V,\varphi Y)\] \[-g(R_{X,\varphi Z}\widetilde{Q}V,\varphi Y)-g(R_{\varphi X,Z} \varphi V,\widetilde{Q}Y)-g(R_{X,\varphi Z}\varphi V,\widetilde{Q}Y)\] \[-g(R_{X,Y}\varphi V,\varphi\widetilde{Q}Z)-g(QhX,Z)\,g(hV, \widetilde{Q}Y)-g(\varphi hX,Y)\,g(\varphi hZ,\widetilde{Q}V). \tag{71}\] Adding (71) to (60), we obtain \[2\,g((\nabla_{X}\,\varphi)Z,(\nabla_{V}\,\varphi)Y)+g(\widetilde{Q} (\nabla_{X}\,\varphi)Z,(\nabla_{V}\,\varphi)Y)-g(\widetilde{Q}(\nabla_{X}\, \varphi)Y,(\nabla_{V}\,\varphi)Z)\] \[+\eta(V)\,g(\varphi hZ,(\nabla_{X}\,\varphi)Y)-\eta(V)\,g(\phi hY, (\nabla_{X}\,\varphi)Z)\] \[+\eta(Z)\,g(\phi hX,(\nabla_{V}\,\varphi)Y)-\eta(Z)\,g(\phi hV,( \nabla_{X}\,\varphi)Y)\] \[+2\,g(R_{X,Z}V,Y)-2\,g(R_{X,Z}\varphi V,\varphi Y)+2\,g(hX,Z)\,g( hV,Y)\] \[+2\,\eta(V)\,\eta(Z)\,g(h^{2}Y,X)-\eta(X)\,\eta(Z)\,g(h^{2}QY,V)- \eta(Y)\,\eta(V)\,g(h^{2}Z,QX)\] \[=(1/2)\,\delta(X,Z,V,Y)+(1/2)\,\delta(V,Z,X,Y)\] \[-(1/2)\,\delta(\varphi X,Z,V,\varphi Y)+(1/2)\,\delta(X,\varphi Z,\varphi V,Y)\] \[+(1/2)\,\delta(\varphi V,\varphi Z,X,Y)+(1/2)\,\delta(\varphi X, \varphi Y,Z,V)+(1/2)\,\delta(X,Y,Z,V)\] \[-g(R_{X,\varphi Z}\widetilde{Q}V,\varphi Y)-g(R_{\varphi X,Z} \varphi V,\widetilde{Q}Y)-g(R_{X,\varphi Z}\varphi V,\widetilde{Q}Y)\] \[-g(R_{X,Z}V,\widetilde{Q}Y)-g(R_{X,Z}\,\widetilde{Q}V,Y)-g(R_{X,Y }\varphi V,\varphi\widetilde{Q}Z)\] \[-g(R_{X,Y}V,\widetilde{Q}Z)+g(R_{\widetilde{Q}X,\widetilde{Q}Y}V,Z)+g(R_{\widetilde{Q}X,Y}V,Z)+g(R_{X,\widetilde{Q}Y}V,Z)\] \[-g(hX,QZ)\,g(hV,\widetilde{Q}Y)-g(hX,\widetilde{Q}Z)\,g(hV,Y)\] \[-g(\phi hX,Y)\,g(\phi hZ,\widetilde{Q}V)+g(hX,Y)\,g(hV,\widetilde {Q}Z)-\nabla_{V}\big{(}\eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}. 
\tag{72}\] Swapping \(X\leftrightarrow Z\) and \(W\leftrightarrow Y\) in (72), we obtain \[2\,g((\nabla_{Z}\,\varphi)X,(\nabla_{Y}\,\varphi)V)+g( \widetilde{Q}(\nabla_{Z}\,\varphi)X,(\nabla_{Y}\,\varphi)V)-g(\widetilde{Q}( \nabla_{Z}\,\varphi)V,(\nabla_{Y}\,\varphi)X)\] \[+\eta(Y)\,g(\phi hX,(\nabla_{Z}\,\varphi)V)-\eta(Y)\,g(\phi hV,( \nabla_{Z}\,\varphi)X)\] \[+\eta(X)\,g(\phi hZ,(\nabla_{Y}\,\varphi)V)-\eta(X)\,g(\phi hY, (\nabla_{Z}\,\varphi)V)\] \[+2\,g(R_{Z,X}Y,V)-2\,g(R_{Z,X}\varphi Y,\varphi V)+2\,g(hZ,X)\,g( hY,V)\] \[+2\,\eta(Y)\,\eta(X)\,g(h^{2}V,Z)-\eta(Z)\,\eta(X)\,g(h^{2}QV,Y)- \eta(V)\,\eta(Y)\,g(h^{2}X,QZ)\] \[=(1/2)\delta(Z,X,Y,V)+(1/2)\delta(Y,X,Z,V)\] \[-(1/2)\delta(\varphi Z,X,Y,\varphi V)+(1/2)\delta(Z,\varphi X, \varphi Y,V)\] \[+(1/2)\,\delta(\varphi Y,\varphi X,Z,V)+(1/2)\,\delta(\varphi Z,\varphi V,X,Y)+(1/2)\,\delta(Z,V,X,Y)\] \[-g(R_{Z,\varphi X}\widetilde{Q}Y,\varphi)-g(R_{\varphi Z,X} \varphi Y,\widetilde{Q}V)-g(R_{Z,\varphi X}\varphi Y,\widetilde{Q}V)\] \[-g(R_{Z,X}Y,\widetilde{Q}V)-g(R_{Z,X}\,\widetilde{Q}Y,V)-g(R_{Z, Y}\varphi Y,\varphi\widetilde{Q}X)\] \[-g(R_{Z,V}Y,\widetilde{Q}X)+g(R_{\widetilde{Q}Z,\widetilde{Q}V}Y,X)+g(R_{\widetilde{Q}Z,V}Y,X)+g(R_{Z,\widetilde{Q}V}Y,X)\] \[-g(hZ,QX)\,g(hY,\widetilde{Q}V)-g(hZ,\widetilde{Q}X)\,g(hY,V)\] \[-g(\phi hZ,V)\,g(\phi hX,\widetilde{Q}Y)+g(hZ,V)\,g(hY,\widetilde {Q}X)-\nabla_{Y}\big{(}\eta(X)\,g(hZ,\widetilde{Q}V)\big{)}. \tag{73}\] Then subtracting the gotten equation (73) from (72) and using (4), we get \[\eta(V)\,g(\phi hZ,(\nabla_{X}\,\varphi)Y)-\eta(V)\,g(\phi hY,( \nabla_{X}\,\varphi)Z)\] \[+\eta(Z)\,g(\phi hX,(\nabla_{V}\,\varphi)Y)-\eta(Z)\,g(\phi hV,( \nabla_{X}\,\varphi)Y)\] \[-\eta(Y)\,g(\phi hX,(\nabla_{Z}\,\varphi)V)+\eta(Y)\,g(\phi hV,( \nabla_{Z}\,\varphi)X)\] \[-\eta(X)\,g(\phi hZ,(\nabla_{Y}\,\varphi)V)+\eta(X)\,g(\phi hY,( \nabla_{Z}\,\varphi)V)\] \[+2\,\eta(V)\,\eta(Z)\,g(h^{2}Y,X)-2\,\eta(Y)\,\eta(X)\,g(h^{2}V,Z)\] \[=-\delta(\varphi X,Z,V,\varphi Y)-\delta(X,Y,\varphi V,\varphi Z)+ \delta(X,Y,Z,V)+\delta(X,\varphi Z,\varphi V,Y)\] \[+\delta(\varphi X,\varphi Y,Z,V)-2\,g(R_{X,Y}V,\widetilde{Q}Z)-g( R_{X,\varphi Z}\varphi V,\widetilde{Q}Y)-g(R_{\varphi X,Z}\varphi Y, \widetilde{Q}V)\] \[-g(R_{X,Y}\varphi V,\varphi\widetilde{Q}Z)+g(R_{\widetilde{Q}X,Y}V,Z)+g(R_{QX,\widetilde{Q}V}V,Z)\] \[+g(R_{Z,V}\varphi Y,\varphi\widetilde{Q}X)+g(R_{Z,V}Y,\widetilde{Q} X)-g(R_{QZ,\widetilde{Q}V}Y,X)\] \[-g(\phi hX,Y)\,g(\phi hZ,\widetilde{Q}V)+g(hX,Y)\,g(hV, \widetilde{Q}Z)-\nabla_{V}\big{(}\eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}\] \[+g(\phi hZ,V)\,g(\phi hX,\widetilde{Q}Y)-g(hZ,V)\,g(hY,\widetilde {Q}X)+\nabla_{Y}\big{(}\eta(X)\,g(hZ,\widetilde{Q}V)\big{)}. \tag{74}\] **Step 2**. 
Setting \(V=\xi\) in (74), and using Lemma 3.1, (7), \(\nabla_{\xi}\big{(}\eta(Z)\,g(hX,\widetilde{Q}Y)\big{)}=0\) and Remark 3.1 (that all \(\delta\)-terms vanish), we get \[g((\nabla_{X}\,\varphi)Y,\varphi hZ)-g((\nabla_{X}\,\varphi)Z,\varphi hY)=-\eta(Z)\,g(h^{2}X,Y)\] \[+\eta(Y)\,g(h^{2}X,Z)+\eta(Z)\,g(h^{2}X,\widetilde{Q}Y)+\eta(Y)\,g(h^{2}X,\widetilde{Q}Z)\] \[+2\,g(R_{X,Y}\xi,\widetilde{Q}Z)-g(R_{\widetilde{Q}X,Y}\xi,Z)-g(R_{QX,\widetilde{Q}Y}\xi,Z)-g(R_{Z,\xi}Y,\widetilde{Q}X),\] or, using (28), \[g((\nabla_{X}\,\varphi)Y,\varphi hZ)-g((\nabla_{X}\,\varphi)Z,\varphi hY)=-\eta(Z)\,g(h^{2}X,Y)+\eta(Y)\,g(h^{2}X,Z)\] \[+\eta(X)\,g(h^{2}Z,\widetilde{Q}Y)+\eta(Y)\,g(h^{2}X,\widetilde{Q}Z)+\eta(Z)\,g(h^{2}X,\widetilde{Q}Y), \tag{75}\] from which, using (4), we obtain \[g((\nabla_{Y}\,\varphi)X,\varphi hZ)+g((\nabla_{X}\,\varphi)Z,\varphi hY)=\eta(Y)\,g(hX,hZ)-\eta(Z)\,g(hX,hY)\] \[-\eta(X)\,g(h^{2}Z,\widetilde{Q}Y)-\eta(Y)\,g(h^{2}X,\widetilde{Q}Z)-\eta(Z)\,g(h^{2}X,\widetilde{Q}Y). \tag{76}\] Swapping \(X\) and \(Y\) in (76), we get \[g((\nabla_{X}\,\varphi)Y,\varphi hZ)+g((\nabla_{Y}\,\varphi)Z,\varphi hX)=\eta(X)\,g(hY,hZ)-\eta(Z)\,g(hY,hX)\] \[-\eta(Y)\,g(h^{2}Z,\widetilde{Q}X)-\eta(X)\,g(h^{2}Y,\widetilde{Q}Z)-\eta(Z)\,g(h^{2}Y,\widetilde{Q}X),\] and substituting (76) into the resulting equation, we get \[g((\nabla_{Z}\,\varphi)Y,\varphi hX)+g((\nabla_{Z}\,\varphi)X,\varphi hY)\] \[=2\,\eta(Z)\,g(hX,hY)-\eta(X)\,g(hY,hZ)-\eta(Y)\,g(hX,hZ)\] \[+2\,\eta(Y)\,g(h^{2}Z,\widetilde{Q}X)+2\,\eta(X)\,g(h^{2}Y,\widetilde{Q}Z)+2\,\eta(Z)\,g(h^{2}Y,\widetilde{Q}X). \tag{77}\] By swapping \(Z\) and \(X\) in (77), we get \[g((\nabla_{X}\,\varphi)Y,\varphi hZ)+g((\nabla_{X}\,\varphi)Z,\varphi hY)\] \[=2\,\eta(X)\,g(hZ,hY)-\eta(Z)\,g(hY,hX)-\eta(Y)\,g(hZ,hX)\] \[+2\,\eta(Y)\,g(h^{2}X,\widetilde{Q}Z)+2\,\eta(Z)\,g(h^{2}Y,\widetilde{Q}X)+2\,\eta(X)\,g(h^{2}Y,\widetilde{Q}Z),\] and adding the above equation to (75), we get (23). \(\Box\) ## 6 Conclusions We have shown that the weak nearly cosymplectic structure is useful for studying almost contact metric structures and Killing vector fields. Some results on nearly cosymplectic manifolds (see [4, 6, 13]) were extended to weak nearly cosymplectic manifolds satisfying the conditions (5) and (8), and the splitting theorem was proven. Our conjecture is that the conditions (5) and (8) are also sufficient for a weak nearly Sasakian manifold of dimension greater than five to be Sasakian; this could answer the question in [17] and generalize Theorem 3.3 in [13]. Based on the numerous applications of nearly cosymplectic structures, we expect that certain weak structures will also be useful for differential geometry and physics, for example, in twistor string theory and QFT.
2308.14510
ATMOSPHERIX: I- An open source high resolution transmission spectroscopy pipeline for exoplanets atmospheres with SPIRou
Atmospheric characterisation of exoplanets from the ground is an actively growing field of research. In this context we have created the ATMOSPHERIX consortium: a research project aimed at characterizing exoplanets atmospheres using ground-based high resolution spectroscopy. This paper presents the publicly-available data analysis pipeline and demonstrates the robustness of the recovered planetary parameters from synthetic data. Simulating planetary transits using synthetic transmission spectra of a hot Jupiter that were injected into real SPIRou observations of the non-transiting system Gl 15 A, we show that our pipeline is successful at recovering the planetary signal and input atmospheric parameters. We also introduce a deep learning algorithm to optimise data reduction which proves to be a reliable, alternative tool to the commonly used principal component analysis. We estimate the level of uncertainties and possible biases when retrieving parameters such as temperature and composition and hence the level of confidence in the case of retrieval from real data. Finally, we apply our pipeline onto two real transits of HD~189733 b observed with SPIRou and obtain similar results than in the literature. In summary, we have developed a publicly available and robust pipeline for the forthcoming studies of the targets to be observed in the framework of the ATMOSPHERIX consortium, which can easily be adapted to other high resolution instruments than SPIRou (e.g. VLT-CRIRES, MAROON-X, ELT-ANDES)
B. Klein, F. Debras, J. -F. Donati, T. Hood, C. Moutou, A. Carmona, M. Ould-elkhim, B. Bézard, B. Charnay, P. Fouqué, A. Masson, S. Vinatier, C. Baruteau, I. Boisse, X. Bonfils, A. Chiavassa, X. Delfosse, W. Dethier, G. Hebrard, F. Kiefer, J. Leconte, E. Martioli, V. Parmentier, P. Petit, W. Pluriel, F. Selsis, L. Teinturier, P. Tremblin, M. Turbet, O. Venot, A. Wyttenbach
2023-08-28T11:57:37Z
http://arxiv.org/abs/2308.14510v2
ATMOSPHERIX: I- An open source high resolution transmission spectroscopy pipeline for exoplanets atmospheres with SPIRou ###### Abstract Atmospheric characterisation of exoplanets from the ground is an actively growing field of research. In this context we have created the ATMOSPHERIX consortium: a research project aimed at characterizing exoplanets atmospheres using ground-based high resolution spectroscopy. This paper presents the publicly-available data analysis pipeline and demonstrates the robustness of the recovered planetary parameters from synthetic data. Simulating planetary transits using synthetic transmission spectra of a hot Jupiter that were injected into real SPIRou observations of the non-transiting system Gl 15 A, we show that our pipeline is successful at recovering the planetary signal and input atmospheric parameters. We also introduce a deep learning algorithm to optimise data reduction which proves to be a reliable, alternative tool to the commonly used principal component analysis. We estimate the level of uncertainties and possible biases when retrieving parameters such as temperature and composition and hence the level of confidence in the case of retrieval from real data. Finally, we apply our pipeline onto two real transits of HD 189733 b observed with SPIRou and obtain similar results than in the literature. In summary, we have developed a publicly available and robust pipeline for the forthcoming studies of the targets to be observed in the framework of the ATMOSPHERIX consortium, which can easily be adapted to other high resolution instruments than SPIRou (e.g. VLT-CRIRES, MAROON-X, ELT-ANDES). keywords: exoplanets - planets and satellites: atmospheres - planets and satellites: gaseous planets - techniques: spectroscopic - methods: data analysis ## 1 Introduction More than 5 000 exoplanets were discovered in the last decade, paving the way for statistical exploration of the orbital and physical properties of planetary systems (e.g., Udry and Santos, 2007; Fulton et al., 2017; Debras et al., 2021). One of the main objective for the next decade is now to understand thoroughly the physical nature of individual planets. This necessarily requires an in-depth study of their atmosphere in order to lift degeneracies between seemingly identical planets in terms of mass and radius (e.g., Valencia et al. 2013). JWST and Ariel (Tinetti et al., 2021) space missions will play a key role in that venture by providing high quality observations for a large number of exoplanets. However, space-based observations of planet atmospheres have limits which are best overcome from the ground using high-resolution spectroscopy (HRS) with numerous large telescopes. Through cross-correlation high resolution spectroscopy, relying on the statistical comparison between an absorption or emission spectrum of the planet atmosphere and theoretical models, one can extract the planetary signal which is typically 10 to 100 times weaker than the noise. Since the first successful characterisation of an exoplanet atmosphere with high-resolution spectroscopy, a decade ago by Snellen et al. (2010), this technique has been substantially refined (e.g., Brogi et al., 2012; de Kok et al., 2013; Birkby et al., 2013; Brogi et al., 2016; Alonso-Floriano et al., 2019; Brogi & Line, 2019; Giacobe et al., 2021; Guililuy et al., 2022), and has acquired the necessary maturity to become a reliable complement to forthcoming space-based missions (Brogi et al., 2017; Brogi & Line, 2019; Kasper et al., 2023). 
SPIRou (Donati et al., 2020), a high-resolution near-infrared (nIR) spectropolarimeter mounted at the Canada-France-Hawaii Telescope, is perfectly suited for this task. Thanks to its broad continuous wavelength coverage of the near-infrared, from 0.9 to 2.5 \(\mu\)m, SPIRou has the ability to resolve a high number of molecular lines in the emission or transmission spectra of planetary atmospheres. More specifically, the observations are divided into 50 overlapping diffraction orders spanning the \(Y\),\(J\),\(H\) and \(K\) bands at a resolving power of \(\sim\)70 000 (\(\sim\) 2.28 km.s\({}^{-1}\) velocity bin). SPIRou has already been successfully used to detect water and carbon monoxide by performing emission spectroscopy of \(\tau\) Boo (Pelletier et al., 2021) and transmission spectroscopy during two transits of HD 189733 b (Boucher et al., 2021) as well as to detect Helium on several targets (Allart et al., 2023, Masson et al. in prep.). In most cases, the absorption lines of the planet atmosphere were found to be Doppler shifted compared to theoretical predictions which remains to be understood in the light of atmospheric circulation (see e.g. Flowers et al., 2019). With the aim of optimising the capabilities of SPIRou for the characterisation of the atmosphere of exoplanets, we have gathered a large, French-led community of observers and theoreticians, specializing in exoplanet atmospheres and stellar observations and simulations, under the ATMOSPHERIX program. This program aims at observing a wide range of exoplanets over several years in order to (i) constrain the composition of their atmosphere, (ii) probe the pressure-temperature (PT) profile and the amplitude of atmospheric winds through Doppler spectroscopy, and (iii) characterise the extended atmosphere through the He I metastable triplet at 1083 nm. Additionally, long-term repeated observations of a sample of planets will allow us to better understand variability in exoplanet atmospheres (e.g., Armstrong et al., 2016; Komacek & Showman, 2020; Cho et al., 2021). This paper introduces a series of studies of the atmosphere of transiting planets observed with nIR high-resolution spectrographs as part of the ATMOSPHERIX program. In this study, we present our publicly-available code1 to extract a planet transmission spectrum from a time-series of nIR high-resolution spectra. The extraction pipeline is applied to (i) sequences of synthetic transmission spectra of a hot Jupiter that mimics the properties of HD 189733 b and that was injected into SPIRou observations of the bright quiet M dwarf Gl 15 A and (ii) on the two same transits of HD 189733 b as Boucher et al. (2021). The Gl 15 A input data sets are described in Section 2. Section 3 details the algorithm to extract the planet atmosphere signal from the observed sequence of spectra and infer the planet atmosphere parameters in a statistically-robust way. We then present our retrieval methods on synthetic data in Section 4 and their application on real data in Section 5. We discuss our results and their implications for real targets in Section 6 and conclude in Section 7. A companion paper (Debras et al., submitted to MNRAS) studies the biases and degeneracies in the planet atmosphere parameters retrieved with the pipeline presented here. 
Footnote 1: [https://github.com/baptklein/ATMOSPHERIX_DATA_RED](https://github.com/baptklein/ATMOSPHERIX_DATA_RED) ## 2 Observations and Planet Injection We simulate nIR spectroscopic observations of a planetary transit by injecting a synthetic planet atmosphere spectrum into a sequence of spectra of the bright M dwarf Gl 15 A, collected with SPIRou in October 2020 (see Table 1). We first describe the stellar data before detailing how we injected the planet. ### Input stellar spectra Gl 15 A has been intensively monitored with SPIRou over the last 3 years and does not have any known transiting planet, making it an ideal target to benchmark our data analysis code. Our input observations consist of a series of 192 consecutive spectra collected with SPIRou on October 8, 2020, spanning a total of 5 hours. We divided these 192 spectra into two series of 96 spectra (the odd and even file numbers, respectively) in order to ensure the robustness of our pipeline on two sets of data. The peak signal-to-noise ratio (SNR) per 2.28 km.s\({}^{-1}\) velocity bin ranges from 250 to 320 (median of 291), and the airmass from 1.1 to 1.3 (see panels 2 and 4 of Figure 1). Note that the binary companion, Gl 15 B, is an M 3.5 dwarf located at 146 au from Gl 15 A (Reid et al., 1995). The velocimetric effect of this binary system is neglected in the present analysis given the low acceleration that B induces on A's RV (about 2 m.s\({}^{-1}\) per year; Howard et al., 2014). We also neglect the RV effect of the two recently-detected planets around Gl 15 A (Howard et al., 2014; Pinamonti et al., 2018), inducing respective signatures of 1.68 and 2.5 m.s\({}^{-1}\) modulated with orbital periods of 11.44 and 7600 d, respectively. For this paper, the SPIRou observations were reduced using version 0.6.132 of APERO, the official data reduction software (DRS) of the instrument (Cook et al., 2022). In short, the pipeline applies the optimal extraction method of Horne (1986) to extract each individual exposure from the H4RG detector (Artigau et al., 2018). The wavelength solution is obtained by combining calibration exposures of a UNe hollow-cathode lamp and a thermally-stabilised Fabry-Pérot étalon (Bauer et al., 2015; Hobson et al., 2021). APERO performs a correction of the telluric contamination using a method, summarised in Cook et al. (2022, see Section 8), which will be presented in a forthcoming paper (Artigau et al. in prep.). This technique applies TAPAS (Bertaux et al., 2014) to pre-clean the telluric absorption; the low-level residuals are then removed using a data set of hot-star spectra observed in various atmospheric conditions to build a residual model as a function of a few parameters (optical depths of H\({}_{2}\)O and of dry components). Note that the deepest telluric lines (relative absorption larger than 90%) are masked out by the pipeline, as the low amount of transmitted flux would most likely result in an inaccurate telluric correction. Our input sequence of spectra contains the blaze- and telluric-corrected spectra. ### Planet injection We then inject synthetic planet atmosphere transmission spectra on top of the APERO-provided telluric-corrected spectra of Gl 15 A. We consider the case of a hot Jupiter (based on HD 189733 b, Bouchy et al., 2005). The injected planet spectra are generated using petitRADTRANS (Molliere et al., 2019), which gives the planet radius as a function of wavelength assuming an isothermal planet atmosphere solely containing chosen molecules at a constant volume mixing ratio.
This radius is then transformed into an absorbed flux by multiplying the total flux by the ratio of planetary to stellar radius squared, called transit depth. This model is then convoluted with a Gaussian of half-width 2 SPIRou pixels (i.e. \(\sim\)4.5 km.s\({}^{-1}\) for a resolving power of \(\sim\)70 000 in the nIR Donati et al., 2020) to account for the instrumental broadening. Since Gl 15 A is significantly smaller than HD 189733, some adjustments of the injected planet parameters are needed to keep the simulations realistic. We decided to conserve 4 quantities: (i) the transit depth, (ii) the transit duration, (iii) the ratio between the stellar radius and the atmospheric scale height and (iv) a consistent atmospheric temperature at the limbs. Our synthetic planet therefore has a lower mass, radius and velocimetric amplitude than HD189733 b, but a larger surface gravity. Although not a physical planet (the orbital mass is not consistent with the gravitational mass), the injected planet represents a good observational analog of a hot Jupiter. The stellar properties were left untouched in our simulated data. The parameters adopted for synthetic planet are given in Table 1. The planet orbit is assumed circular with a mid-transit time corresponding to the mean values of our observation times (see the two transits in Figure 1). For each spectrum, we generate a transit curve \(F_{\mathbf{C}}\) using the python module batman(Kreidberg, 2015), assuming an aligned circular planet orbit and using the H-band quadratic limb-darkening coefficients computed in Claret & Bloemen (2011) \begin{table} \begin{tabular}{c c c} \hline \hline **Stellar parameters** & \multicolumn{2}{c}{**G1 15 A**} \\ & Value & Reference \\ \hline Mass (\(M_{\odot}\)) & 0.398 \(\pm\) 0.004 & Cr22 \\ Radius (\(R_{\odot}\)) & 0.388 \(\pm\) 0.013 & Cr22 \\ Effective temperature (K) & 3603 \(\pm\) 60 & Cr22 \\ \(H\) magnitude & 4.476 \(\pm\) 0.2 & Cu03 \\ Systemic velocity [km.s\({}^{-1}\)] & 11.73 \(\pm\) 0.1 & Fo18 \\ Limb Darkening (Quadratic) & 0.0156, 0.313 & Cl11 \\ \hline **Planet parameters** & & \\ \hline Transit depth (\%) & 2.2 \(\pm\) 0.1 & 2.2 & Ad19 \\ Radius (\(R_{J}\)) & 1.142 \(\pm\) 0.040 & 0.57 & Ad19 \\ Mass (\(M_{J}\)) & 1.13 \(\pm\) 0.05 & 0.568 & Ad19 \\ g (m.s\({}^{-2}\)) & 22.45 \(\pm\) 1.5 & 45.29 & From planet mass and radius \\ Orbital period (d) & 2.218579 \(\pm\) 0.000001 & 2.218577 & Ad19 \\ Mid transit time (BDT TBD) & 2458334.90899 \(\pm\) 0.0007 & 2459130.8962180 & Ad19 \\ Inclination (deg) & 85.3 \(\pm\) 0.2 & 90.0 & Ad19 \\ Eccentricity & \(\sim\)0.0 & 0.0 & – \\ Equilibrium temperature (K) & 1203 \(\pm\) 39 & 900 & Ad19 \& Ro21 \\ Orbital semi-amplitude (km.s\({}^{-1}\)) & 151.2 \(\pm\) 4.5 & 120.0 & – \\ Transit duration (h) & 1.84 \(\pm\) 0.04 & 1.84 & Ad19 \\ \hline \hline \end{tabular} \({}^{\dagger}\) To gain some space in the table, we use aliases for the references. Cr22, Cu03, Fo18, Cl11, Ad19 and Ro21 stand respectively for Cristofari et al. (2022), Cutri et al. (2003), Fouque et al. (2018), Claret & Bloemen (2011), Addison et al. (2019) and Rosenthal et al. (2021). \end{table} Table 1: Physical parameters for Gl 15 A, HD189733 b and for the simulated hot Jupiter used in the study. When taken from the literature, the reference for each parameter is indicated in the right-hand column\({}^{\dagger}\). Note that references cited for planet parameters refer to HD189733 b. 
Figure 1: Continuum-normalised transit light-curve of HD 189733 b (top panel), airmass (panel 2), topocentric-to-stellar rest frame RV correction (RV\({}_{\rm{out}}\), panel 3) and peak SNR per velocity bin during the two simulated transits of the HD 189733 b analog (panel 4). Note that RV\({}_{\rm{out}}\) contains the RV contributions of the barycentric Earth motion and of the systemic velocity of the star. On panels 1 and 4, the two different transits are respectively shown as blue dots and pink crosses. The vertical gray band (resp. vertical gray dotted line) indicates the mid-transit primary transit (resp. mid-transit time) of the simulated planet. The horizontal gray dotted line on the bottom panel indicates the average value of the peak SNR for the observed spectra. for GI 15 A's properties (see Table 1). From the resulting transiting curves, we compute transit window functions \(\mathbf{W_{\mathrm{C}}}\) whose values range from 0 (for out-of-transit times) and 1 (at mid-transit time), using \(\mathbf{W_{\mathrm{C}}}=(1-\mathbf{F_{\mathrm{C}}})/\max{(1-\mathbf{F_{\mathrm{C}}})}\). We then build a sequence of planet transmission spectra by applying the following steps to the synthetic planet atmosphere spectrum. First, the simulated planet atmosphere spectrum is multiplied by the transit window function \(\mathbf{W_{\mathrm{C}}}(t)\) at time \(t\). Second, the window-weighted simulated spectrum is shifted in the Earth rest frame by correcting for the Barycentric Earth Radial Velocity \(V_{\mathrm{BS}}(t)\), the stellar systemic velocity \(V_{\mathrm{SYS}}\) and the RV signature \(V_{\mathrm{RV}}(t)\) induced by the injected planet on the host star. We then shift this spectrum by an additional \(30\,\mathrm{km\,s^{-1}}\) corresponding to the planetary shift in velocity during transit plus three times the SPIRou resolution to ensure that the stellar and planetary molecular lines are separated. Note that, as GI 15 A is a M-dwarf star, it contains water and carbon monoxide in its atmosphere which complicates the planetary retrieval. This is discussed further in the companion paper. The resulting synthetic spectrum is then convolved at SPIRou's spectral resolution, and multiplied by the stellar spectrum observed at time \(t\). Our input sequence of spectra is finally built by repeating the steps listed above to all the observed spectra. The final intensity \(I_{\mathrm{I}}\) as a function of time \(t\) and wavelength \(\lambda\) can thus be expressed as follows, \[I_{\mathrm{I}}(t,\lambda)=I_{\mathrm{I}}(t,\lambda)\left(1-\mathbf{W_{\mathrm{C}} }(t)\left(\frac{R_{\mathrm{p}}(\lambda^{\prime}(t))}{R_{\star}}\right)^{2} \right), \tag{1}\] where \(I_{\mathrm{i}}\) is the intial intensity (i.e. the APERO reduced SPIRou observations), \(R_{\star}\) the stellar radius2, \(R_{\mathrm{p}}\) the planetary radius (degraded at SPIRou resolution) which depends on wavelength because of the wavelength-dependent opacity of the planet atmosphere and \(\lambda^{\prime}\) the Doppler-shifted wavelength. As explained above: Footnote 2: Note that the potential wavelength-dependencies of \(R_{\star}\) are expected to be corrected in steps (i) and (ii) of our data cleaning procedure (see Section 3.1) and are therefore neglected in the simulations. We also assume that limb-darkening coefficients are wavelength-independent over SPIRou’s spectral range. 
\[\lambda^{\prime}(t)=\lambda\left(1-\frac{K_{\star}\sin(2\pi\phi(t))+V_{\star} (t)}{c_{0}}\right), \tag{2}\] \[V_{\star}(t)=V_{\mathrm{BE}}(t)-V_{\mathrm{SYS}}+30\mathrm{km\,s^{-1}} \tag{3}\] where \(K_{\star}\) is the orbital RV semi-amplitude of the star, \(\phi\) the planet orbital phase centered on the mid transit, \(V_{\star}(t)\) the non orbital Doppler shift and \(c_{0}\) the speed of light. Finally, we have also created synthetic sequences without the star, where the model is injected into a map of white noise with a variance equal to the SNR of the observations modulated by the blaze function that increases the variance at the edges of the orders. These synthetic models do not need to go through any further step of data analysis, and serve as references to identify the effects of the data analysis on the atmospheric retrieval. ## 3 Data processing ### Data cleaning The extraction of the planetary signal requires to correct for the residual telluric absorption lines, the stellar spectrum and additional correlated noise induced by the instrument and the observing conditions. Following Boucher et al. (2021), we can perform an additional masking of the telluric lines with absorption deeper than 70% of the continuum level3. Diffraction orders 57 to 54 (i.e. \(\sim\)1 300 to \(\sim\)1 500 nm) and 42 to 40 (i.e., \(\sim\)1 800 to \(\sim\)2 000 nm), located within nIR water absorption bands, are discarded in what follows4. We use a data analysis that resembles that of Boucher et al. (2021) albeit with a few differences described below. Our analysis consists of the following steps, independently applied to each of the 42 remaining diffraction orders, and illustrated on a given order in Fig. 2. Footnote 3: This mask extends from the line center until the relative absorption is lower than 5% of the continuum level. Footnote 4: Due to their significant fraction of high-absorption / saturated telluric lines, keeping these orders in the analysis leads systematically to degraded results. 1. We create an out-of-transit stellar reference spectrum, \(\mathbf{I_{\mathrm{ref}}}\), by averaging the out-of-transit spectra interpolated in the stellar rest frame (the star moves by \(\sim\)200 m.s\({}^{-1}\) during the course of the transit). This reference spectrum, shifted back to the Earth rest frame, is linearly adjusted in flux to each observed spectrum \(\mathbf{I_{\mathrm{obs}}}\), and \(\mathbf{I_{\mathrm{obs}}}\) is divided by this best-fitting solution. This operation is then performed once again to the resulting spectra (hereafter reference-corrected spectra), but, this time, the out-of-transit reference spectrum is computed in the Earth rest frame in order to remove residual telluric contamination. Note that, contrary to Boucher et al. (2021), the rest of our data analysis is conducted in the Earth rest frame rather than in the stellar rest frame, so that the interpolation of the noise only affects the master (median) spectra and not individual spectra. Additionally, note that our data reduction enables the user to correct for planet-induced distortions of the stellar line profiles (e.g., center-to-limb variations or the Rossiter-Maclaughlin effect; see Chiavassa & Brogi 2019). At each epoch, the code linearly fits a user-provided distorted stellar spectrum to the data prior to step (i) and normalise the data. 
As planet-induced distortions of the stellar line profiles are not taken into account in our simulations, we will not give further details on its implementation in this paper and redirect the reader to a paper in preparation (Klein et al., in prep). Note that, as shown in panel [b] of Figure 2, low-frequency variations in time and wavelength domains are still identifiable after correcting for the reference spectrum. These residual variations are most likely due to modal noise from the fibers (Oliva et al., 2019) and require additional normalisation. 2. We normalise each residual spectrum by a noise-free continuum estimate, computed using a rolling mean window, and remove outliers in the normalised spectra using a 5-\(\sigma\) clipping procedure. These two steps are repeated until no outlier is flagged in the data. By measuring how the variance of the normalised spectra varies with the size of the rolling window, we find that a minimum width of \(\sim\)23 km.s\({}^{-1}\) (10 SPIRou pixels) is required to reliably average the spectrum noise. The exact size of the window has no more than a marginal impact on the recovered planet signature, provided that it is small enough to correct for the low-frequency structures induced by modal noise in the data. In what follows, we fix the window size to 50 pixels (115 km.s\({}^{-1}\)) and discard the same amount of points at the ends of each diffraction order. 3. At this stage, outliers have been removed in the wavelength space, for each spectrum individually. However, some pixels (i.e. wavelength bins) might still exhibit large temporal variations (e.g. due to telluric contamination). In order to flag and remove these high-variance pixels, we compute the variance in the wavelength space (i.e. for each pixel), and perform an iterative parabolic fit with a 5-\(\sigma\) outlier removal to the variance distribution5. The outliers flagged in the process are masked out in the subsequent steps of the data processing. Footnote 5: Note that, as a result of the blazed grating, the noise at the edge of each diffraction order is larger than in the center, which justifies the choice of a parabolic fit. (iv) Our data processing pipeline features an optional additional filtering step to correct for the variation of residual telluric absorption with airmass. Following Brogi et al. (2018), we fit a second-order polynomial of the log of the intensity as a function of airmass and subtract it (hence divide it out in intensity). However, the airmass is no more than an incomplete proxy of the water telluric absorption, which is expected to change unpredictably on short time scales during the observations. As a consequence, several studies have chosen to bypass this step (e.g., de Kok et al., 2013; Boucher et al., 2021; Giacobbe et al., 2021) in favour of a more statistical approach (often based on PCA). In this study, we have kept the quadratic airmass detrending, but it can easily be bypassed in our pipeline (and the order of the polynomial can also straightforwardly be changed); a compact sketch of steps (i)-(iv) is given below. Figure 2: Time series of spectra in Order 46 (1643 to 1695 nm) at subsequent steps of the data processing as detailed in Sections 3.1 and 3.2. From top to bottom: [a] Blaze- and telluric-corrected APERO-provided spectra (prior to step (i) in Section 3.1); [b] spectra corrected from the master-out template (step (i) in Section 3.1); [c] Normalised spectra (step (ii) in Section 3.1); and [d] PCA-corrected spectra (with 8 components removed; see Section 3.2).
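For readers who prefer a compact view of the cleaning cascade illustrated in Fig. 2, the sketch below summarises steps (i)-(iv) using numpy only. It is a minimal illustration rather than the pipeline's actual code: the helper name `clean_order` is ours, the sigma clipping is done in a single pass instead of iteratively, and high-variance pixels are flagged with a simple threshold rather than the iterative parabolic fit of step (iii).

```python
import numpy as np

def clean_order(flux, airmass, out_transit, window=50, clip=5.0, deg=2):
    """Sketch of the per-order cleaning of Section 3.1 (illustrative only).

    flux        : (n_spectra, n_pix) blaze- and telluric-corrected spectra
    airmass     : (n_spectra,) airmass of each exposure
    out_transit : (n_spectra,) boolean mask of the out-of-transit exposures
    """
    # (i) divide each spectrum by a linear (in flux) fit of the master-out spectrum
    ref = np.nanmedian(flux[out_transit], axis=0)
    res = np.empty_like(flux)
    for k, spec in enumerate(flux):
        good = np.isfinite(spec) & np.isfinite(ref)
        a, b = np.polyfit(ref[good], spec[good], 1)
        res[k] = spec / (a * ref + b)

    # (ii) normalise by a rolling-mean continuum and clip outliers (single pass here)
    kernel = np.ones(window) / window
    cont = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"),
                               1, np.nan_to_num(res, nan=1.0))
    res = res / cont
    res[np.abs(res - np.nanmean(res)) > clip * np.nanstd(res)] = np.nan

    # (iii) mask pixels with anomalously large temporal variance
    var = np.nanvar(res, axis=0)
    res[:, var > np.nanmedian(var) + clip * np.nanstd(var)] = np.nan

    # (iv) optional detrending: quadratic fit of log(flux) versus airmass, per pixel
    #      (NaNs are treated as continuum here for simplicity)
    logr = np.log(res)
    coefs = np.polynomial.polynomial.polyfit(airmass, np.nan_to_num(logr), deg)
    trend = np.polynomial.polynomial.polyval(airmass, coefs).T
    return np.exp(logr - trend)
```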
Note that the time series of normalised spectra cleaned with the auto-encoder, visually similar to the bottom panel, is not shown here. The distribution of the variance in wavelength of the processed spectra is compared to the APERO-provided photon noise in Fig. 3. The dispersion of the processed spectra remains similar to the APERO estimates for the blue half of the spectrum, but is significantly higher in the \(H\) and \(K\) bands. This is most likely due to the fact that modal and thermal noises, stronger at redder wavelengths, are not included in the APERO estimation of the photon noise. To correct for residual correlated noise due to both imperfect corrections of the stellar and Earth absorption spectra and instrumental noise, we apply an additional data-driven procedure described in the following section. ### PCA and Auto-encoder The last step of the data processing is conceptually different from the others in the sense that we aim to remove the correlated variance in time in our data, on which we have no physical priors. Defining a deterministic framework to do so seems impracticable, as we expect this correlated noise to depend strongly on the target and on the observing conditions. This step is therefore necessarily a data-driven approach. In practice, we have developed two different methods that we independently apply to the data. Having two methods to statistically reduce the correlated noise in the data provides additional robustness to any claim of planet atmosphere detection and prevents false positives. We stress that both methods are applied to the \(\log\) of the reduced data, where we can consider at first order that the total spectrum is a linear combination of the planet signal and the noise. Our first method is based on principal component analysis (PCA). PCA is a linear method that recovers the dominant sources of correlated variance from an eigenvector decomposition of the covariance matrix: the principal components are these eigenvectors sorted by decreasing eigenvalues. This technique has been extensively used and discussed in several HRS-based planet atmosphere studies (e.g., de Kok et al., 2013; Damiano et al., 2019; Brogi and Line, 2019; Boucher et al., 2021; Pelletier et al., 2021). Our second method uses a deep-learning approach based on an auto-encoder, and is a new method in the HRS exoplanet community. An auto-encoder is an artificial neural network which aims at reproducing the dominant features of a data set by encoding them into a much lower number of points through subsequent reduction matrices. Initially proposed by Hinton and Salakhutdinov (2006), it is now widely used in many fields of applied mathematics and in some astrophysical works as well (e.g., Yang and Li, 2015; Cotar et al., 2021). In essence, both methods rely on reducing dimensionality by transforming our spectra into smaller sets, but, unlike PCA, our auto-encoder is not linear and we lose information about how the data is coded6. We now give details on the practical implementation of both methods in the next two paragraphs. Footnote 6: See this comparison between the PCA and auto-encoder: [https://towardsdatascience.com/autoencoders-vs-pca-when-to-use-which-73de063f5d7](https://towardsdatascience.com/autoencoders-vs-pca-when-to-use-which-73de063f5d7). #### 3.2.1 PCA implementation In our data analysis pipeline, the PCA-based dimensional reduction is applied independently to each order in the time domain7.
The number of components associated with correlated noise and subsequently discarded for the analysis is tuned using the following procedure, illustrated on a given order in Fig. 4. For each order, we generate 5 to 10 sequences of spectra matching our observed wavelengths and times, but containing only uncorrelated Gaussian noise Figure 3: Variance at the center of each diffraction order after processing the data as described in Section 3.1 before and after the PCA (or auto-encoder) reduction (resp. dark-blue stars and light-blue triangles). As a reference, the photon noise estimate provided by APERO is shown in gold filled circles. Each point and error bars give the mean and standard deviation across all spectra. Orders removed due to strong telluric contamination are indicated by the vertical gray bands. The position of the \(YJHK\) photometric bands is indicated by the horizontal magenda solid lines. of level similar to our empirical photon noise estimate. To account for the larger noise level at the edges of the order, the sequences of noise are amplified by the normalised inverse of the square root of the blaze function. In principle, these sequences are free from correlated noise and can be used as references to tune our PCA. We apply PCA to each sequence of noise and store the largest eigenvalue, \(S_{\mathrm{max}}\). When we apply PCA to our observed sequences of processed spectra, any component associated with an eigenvalue significantly larger than \(S_{\mathrm{max}}\) (e.g. \(2\times S_{\mathrm{max}}\), see the red dotted line in Fig. 4) likely encloses a significant amount of correlated noise, and is discarded. For Gl 15 A, this procedure typically removes 4 PC in the blue part of the spectrum and 8 in the reddest part, which we attribute to the complex structure of the stellar atmosphere and/or modal noise. For the hotter star HD 189733 (see section 5), we typically remove 2 components in the blue part and 5 to 7 in the red part. Note that injecting the synthetic planet signature to the noise maps has a (i) marginal impact on the eigenvalues and (ii) affects all the components by more-or-less the same factor, ensuring the planet atmosphere spectrum is not removed in the process. This effect is further discussed in Section 3.3.3. Finally, note that the weighted PCA framework of Delchambre (2014), is also implemented in our publicly-available data processing code (via the wpca python module). #### 3.2.2 Auto-encoder implementation As in Cotar et al. (2021), our implementation of the auto-encoder relies on 4 layers, which allows us to reduce the data dimensionality from a few thousands pixels (the size of a corrected SPIRou order, which varies from 2000 to 4000 pixels after removal of bad pixels from telluric correction) to 8. Each SPIRou order has a different auto-encoding process, but, in the same order, the spectra have a common encoding and decoding matrix: as for PCA, the auto-encoder takes into account time correlated features. We train the network in the following way: we randomly select 70% of the reduced spectra (panel 3 of Fig. 2), encode them through the four layers which reduce dimensionality (to 1024, then 256, then 64 then 8 pixels) and then reconstruct the original spectra. All these numbers are just optimisation of the auto-encoding process, and can of course be changed (the number of layers as well). This creates an auto-encoder which is then applied on the 30% remaining spectra in order to validate that their reconstruction is reliable as well. 
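To make the component-selection procedure of Section 3.2.1 concrete, here is a minimal numpy sketch of the PCA cleaning of a single order. The function name and signature are ours, and the treatment of the noise maps is simplified (in particular the blaze amplification of the synthetic noise and the weighted-PCA option based on the wpca module are omitted); it is meant as an illustration under those assumptions, not as the pipeline's implementation.

```python
import numpy as np

def pca_clean_order(order_flux, noise_sigma, n_sim=5, margin=2.0, seed=None):
    """Illustrative PCA cleaning of one order (Section 3.2.1).

    order_flux  : (n_spectra, n_pix) processed spectra of a single order
    noise_sigma : (n_spectra, n_pix) white-noise level (relative flux units)
    Components whose eigenvalues exceed `margin` times the largest eigenvalue
    found in pure-noise simulations are flagged as correlated noise.
    """
    rng = np.random.default_rng(seed)
    logf = np.log(order_flux)
    centred = logf - logf.mean(axis=0)

    # largest eigenvalue expected from uncorrelated noise alone
    s_max = 0.0
    for _ in range(n_sim):
        noise = rng.normal(0.0, noise_sigma)
        s = np.linalg.svd(noise - noise.mean(axis=0), compute_uv=False)
        s_max = max(s_max, s[0] ** 2)

    # PCA in the time domain; flag components above the noise threshold
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    n_rm = int(np.sum(s ** 2 > margin * s_max))

    # remove the flagged components and keep their basis U, which is later
    # used to degrade the synthetic models in the same way (Eq. 8)
    u_rm = u[:, :n_rm]
    cleaned = centred - u_rm @ (u_rm.T @ centred)
    return np.exp(cleaned + logf.mean(axis=0)), u_rm, n_rm
```

In this sketch the eigenvalues are simply the squared singular values of the mean-subtracted sequence, so the data and the synthetic noise maps are compared on the same footing.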
After 5000 iterations, we consider that the network has sufficiently learned and use the resulting encoding matrix as our final neural network. We then apply the final auto-encoder on the reduced spectra to create a reconstruction of the dominant feature and remove it from the observations. This is equivalent to the way we apply PCA, although we completely lose the information about the number of components order by order and how they are encoded. The auto-encoder takes much longer to run than PCA (typically 1min per order on a GPU against 0.1s on a CPU for PCA) because of the learning curve. However, once the algorithm has learned and its transformation matrices are defined for a given sequence of spectra, it takes only a few milliseconds to run it on a CPU. ### Uncovering the planetary signature #### 3.3.1 Template matching Once the reduced data have been cleaned through the PCA or auto-encoder, the planetary signal is still largely buried under the noise as can be seen on the last panel of Figure 2. The use of a correlation function between a theoretical model and the reduced data has therefore been proposed since the first successful exoplanet atmosphere characterisation by HRS of Snellen et al. (2010). As is done in the literature, we first create an atmospheric model at extremely high spectral resolution (between 300 000 and 1 million) with petitRADTRANS. We then use this model to build a sequence of synthetic spectra matching the observing epochs and wavelengths. This requires to Doppler-shift the model by the planet RV, \(V_{\mathrm{p}}\), computed at each observed planet orbital phase \(\phi\) using \[V_{\mathrm{p}}(\phi)=K_{\mathrm{p}}\sin(2\pi\phi)+V_{\mathrm{sys}}, \tag{4}\] for different values of the planet velocimetric semi-amplitude (K\({}_{\mathrm{p}}\)) and systemic Doppler shift (V\({}_{\mathrm{sys}}\)), and convoluting the shifted models with a Gaussian at SPIRou's resolving power. The synthetic sequence is then processed with some of the key steps described in the previous sections, as we expect the data analysis to affect the planetary spectra. This is described in Section 3.3.3. Finally, we build sequences of processed synthetic spectra for a range of K\({}_{\mathrm{p}}\) and V\({}_{\mathrm{sys}}\) values, and compute the scalar product between each of these sequences and the observed spectra to create a correlation function (as in Boucher et al. (2021)): \[CCF=\sum_{i}\frac{d_{i}m_{i}}{\sigma_{i}^{2}}, \tag{5}\] where \(d_{i}\), \(m_{i}\) and \(\sigma_{i}\) are respectively the observed flux, the model value and the flux uncertainty at pixel \(i\) (corresponding to time \(t\) and wavelength \(\lambda\)). Our correlation maps typically extend from K\({}_{p}\) = 0 km.s\({}^{-1}\) to twice the theoretical value of K\({}_{\mathrm{p}}\), computed from the masses of the star and planet and the semi-amplitude of the planet-induced stellar RV wobble. For V\({}_{\mathrm{sys}}\), we typically explore a 200 km.s\({}^{-1}\) wide window centered on 0. A detection can be claimed if the maximum of correlation between the reduced data and the model is obtained close to the injected semi-amplitude and Doppler shift. Following Boucher et al. (2021), we define \(\sigma_{i}\) as the standard deviation of the value of the pixel \(i\) weighted by the S/N of each spectrum: Figure 4: Eigenvalues associated to each PCA component of SPIRou échelle orders 77 (–999 nm, light-blue symbols), 45 (\(\sim\) 1707 nm, gold crosses) and 32 (\(\sim\) 2 400 nm, dark-red stars). 
The eigenvalues for each of the three orders have been vertically shifted for better clarity. For each order, larger symbols indicate components which have been flagged, in Sec. 3.2, as dominated by correlated noise and removed in the analysis, i.e. 2, 5 and 7 components for orders 77, 45 and 32, respectively. \[\sigma_{i}^{2}=\sigma^{2}(t,\lambda)=\frac{\sum_{i}\left(d(t,\lambda)-\overline{d(\lambda)}\right)^{2}}{N_{\rm spectra}}\frac{\overline{\rm SNR}}{\overline{\rm SNR}(t)} \tag{6}\] where the bar denotes a time average, \(N_{\rm spectra}\) is the number of spectra and \(d_{i}=d(t,\lambda)\). Finally, in order to convert the correlation values to significance of detection, we divide the former by the standard deviation of the correlation map in regions dominated by white noise (i.e. away from the planetary signal), as frequently done in the literature. Note that the cross-correlation analysis is only used for first-order searches of planet signatures, whereas a more statistically robust (but more time-consuming) exploration of the parameter space is performed in the Bayesian framework described in Section 3.3.2. In terms of speed, we tried to optimise the calculation of this correlation in the public code, and for a low-resolution map (\(50\times 50\) points in \(K_{p}\) and \(V_{\rm sys}\)), it typically takes a couple of minutes per transit over the whole SPIRou domain on one processor. We have not parallelized it as this is sufficiently efficient for the use we make of it, but it would be very straightforward to do so by splitting the calculations for different regions of the (\(K_{\rm p}\),\(V_{\rm sys}\)) map. #### 3.3.2 MCMC and nested sampling Finally, we have the possibility to robustly explore the parameter space in a Bayesian framework. We implemented two methods: a Markov Chain Monte Carlo algorithm (MCMC) based on the python module emcee (Foreman-Mackey et al., 2013) and a nested sampling algorithm based on the python module pymultinest (Buchner et al., 2014; Feroz and Hobson, 2008; Feroz et al., 2009, 2019). The second one is typically 50 times faster than the first one, and, in our tests, we never noticed any difference in the results of these two methods. We therefore present only results using the nested sampling algorithm in the rest of the paper, but having the possibility to use both samplers allows us to have independent avenues to validate the results. Both methods rely on a likelihood \(\mathcal{L}\), defined in Brogi and Line (2019) and Gibson et al. (2020) by \[\mathcal{L}=\prod_{i}\frac{1}{\sqrt{2\pi\zeta_{i}}}\mathrm{exp}\left\{-\frac{[am_{i}-d_{i}]^{2}}{b\zeta_{i}^{2}}\right\}, \tag{7}\] where \(\zeta_{i}\) accounts for the uncertainty of the \(i^{th}\) pixel and \(a\) and \(b\) are scaling factors to account for incorrect modelling and incorrect estimation of the order variance, respectively. In the rest of this paper, \(a\) is set to 1. The main difference between the two approaches is that Brogi and Line (2019) use a unique \(\zeta\) value that does not depend on the pixel. They derive the likelihood relative to \(\zeta\) and select the value that cancels the derivative, hence ensuring a maximum of likelihood, whereas Gibson et al. (2020) allow \(\zeta\) to be defined pixel by pixel. When applying the Brogi and Line (2019) likelihood, \(b\) is set to 1 and we calculate the optimal \(\zeta\) for each spectrum (which is what the authors advise; M. Brogi, private comm.).
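As an illustration of how such a scaled likelihood can be evaluated in practice, the short sketch below computes a per-order log-likelihood in the spirit of Gibson et al. (2020), with \(a=1\), \(\zeta_{i}=\sigma_{i}\) and a single \(b\) per order set to the value that cancels the derivative of the likelihood. It assumes the standard Gaussian normalisation and is only meant to illustrate the idea; the exact convention adopted in the pipeline (per-spectrum, per-order or global \(b\)) may differ.

```python
import numpy as np

def log_like_order(data, model, sigma):
    """Per-order Gaussian log-likelihood with an analytically optimised b
    (a = 1, zeta_i = sigma_i); illustrative sketch only."""
    good = np.isfinite(data) & np.isfinite(model) & np.isfinite(sigma)
    r = (data[good] - model[good]) / sigma[good]
    n = r.size
    chi2 = np.sum(r ** 2)
    b_opt = chi2 / n  # value of b cancelling d(lnL)/db for a variance-scaling term
    return (-0.5 * n * (np.log(2.0 * np.pi) + np.log(b_opt) + 1.0)
            - np.sum(np.log(sigma[good])))

# The total log-likelihood entering the sampler is then the sum over orders
# (and transits), with the model sequence degraded as in Section 3.3.3.
```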
Essentially, this amounts to saying that our log-likelihood is the sum of the log-likelihoods of the individual spectra, with a different \(\zeta\) optimised for each spectrum. With the Gibson likelihood, on the other hand, the values of \(\zeta_{i}\) are defined from prior information: in this paper we choose \(\zeta_{i}=\sigma_{i}\), as defined in Eq. (6). The \(b\) value is then obtained in a similar manner to the \(\zeta\) of Brogi and Line (2019): we choose the \(b\) that cancels the derivative of the likelihood, as explained in Gibson et al. (2020). We have the freedom to optimise this \(b\) value for (i) each spectrum, (ii) each order, (iii) each transit, or (iv) globally. We found that, for the simple tests presented in this paper, the four options provided very similar results. The typical time for the nested sampling algorithm to converge for a model with 4 parameters (\(K_{p}\), \(V_{\rm sys}\), temperature and water mass mixing ratio) and 384 live points, which are the nested sampling equivalent of walkers in a usual MCMC framework, is 2-3 hours on 36 processors. This gets drastically longer with more molecules as we face a memory issue, which is inherent to petitRADTRANS for now (P. Molliere, private discussion). This problem could be overcome by precomputing a grid of models and interpolating in this grid rather than calculating a model at each iteration (which is our choice here), but that becomes prohibitively complicated with too large a number of molecules (typically \(\gtrsim 3\)).

#### 3.3.3 Degrading the model

Although the PCA or the auto-encoder mainly remove planet-unrelated noise, they do affect the planetary signature in the data. Much work has been devoted in the literature to reproducing, as faithfully as possible, the degradation induced by PCA on the synthetic model, so as to optimise the template matching function and/or the likelihood calculations (Brogi and Line, 2019; Gibson et al., 2020; Boucher et al., 2021; Pelletier et al., 2021). Skipping this step leads to significant errors in the retrieved atmospheric parameters (see the discussion in Section 6.3). To be usable within the nested sampling algorithm, this degradation must be as fast as possible numerically. We have therefore implemented the fastest of these methods for PCA (i.e. that of Gibson et al. (2022)), which we detail below. We have not yet found an equivalently fast method for the auto-encoder, because of non-linearities in the process, and we therefore limit the statistical exploration of the parameter space to PCA-reduced data. However, using PCA we have realised that _not_ degrading the model is not an issue for molecular detection when performing template matching: it only reduces the significance marginally.

Figure 5: Zoom on a planet atmosphere model degraded with Gibson et al. (2022)'s framework (see Section 3.3.3) when removing 0, 1, 3 or 5 principal components (PC).

Our use of the auto-encoder can therefore be applied to molecular detection through template matching, but is not yet suited for parameter retrieval. Our implementation of the Gibson et al. (2022) method for PCA works as follows. During the data reduction, we store the removed eigenvectors (i.e. those associated with correlated noise) for each order into a matrix U (calculated in log-space). We then multiply U by its pseudo-inverse U\({}^{\dagger}\) to create an orthogonal projector onto the vector space defined by these eigenvectors. We then project the logarithm of our synthetic sequence model M onto this vector space, and remove the projection from M.
Our final degraded sequence M\({}^{\prime}\) is therefore given by \[M^{\prime}=\exp\left(\log M-UU^{\dagger}\log M\right), \tag{8}\] We stress again that \(U\) changes from one order to the other. As in Gibson et al. (2022), we do not need to take the weights into account as they can be naturally implemented in the weighted PCA algorithm. Fig. 5 shows the effect of such a degradation on an isothermal model of our HD189733 b-analog, containing only water, when 1, 3 and 5 PCA components are removed of the data (for order 52 here). ### Including rotation and winds Brogi et al. (2016) defined a framework to include the effect of rotation and winds on a 1D transmission spectra in a phase dependent manner. This method is however quite time consuming and hence not suitable for a large parameter space exploration. In Appendix A, we show that the inclusion of rotation can be expressed as a double convolution at mid transit. This provides a very fast first-order calculation of the effects of rotation (and eventually winds, see the appendix), albeit not as accurate as the framework of Brogi et al. (2016), since it does not take the phase dependence or limb darkening effects into account. In what follows, we rely on the equations presented in Appendix A to recover planetary rotation at first order in our nested sampling algorithm. In particular, note that by separating Eq. 27 into its blueshifted and redshifted components, one could straightforwardly create a transmission spectrum where both hemisphere have different physical parameters. This hemispheric dichotomy is notably applied in a forthcoming work of the ATMO-SPHERIX consortium (Hood et al., in prep.). ## 4 Application on simulated data ### Simple isothermal model Following the process described in Section 2.2, we first inject a simple, isothermal atmospheric model, containing only water with a volume mixing ratio of \(10^{-2}\) and a temperature of 900K, in the APERO-provided telluric-corrected spectra of GI 15 A. As a first step, we run the cross-correlation analysis to the data processed using the different steps described in Sec. 3.1, but prior PCA (or auto-encoder) cleaning. Unsurprisingly, strong signatures at low semi-amplitudes and velocity shifts dominate the correlation map, confirming that residuals water lines from GI 15 A and the Earth atmosphere are still prominent in the reduced spectra. Note that detrending the data with airmass is not sufficient to uncover the injected signal, which confirms that a PCA / auto-encoder treatment is needed. In contrast, fair detections of the injected signals are obtained when the data are cleaned with PCA or auto-encoder. In terms of cross-correlation, the signal was found at a signal-to-noise ratio of about 6, as shown in Fig. 6 and 7. Both the auto-encoder- and the PCA-based treatments yield similar level of signal detection, thereby confirming that the auto-encoder is a reliable, robust approach to PCA, which would gain at being developed further. The relatively large significance of detection (of the order of what can be found on hot Jupiters; see Line et al.2021) can be explained by the fact that the planet absorption template used in the cross-correlation is the same as the injected model. In terms of parameter retrieval from the nested sampling algorithm, Fig. 8 shows the corner plot of our results. Our MCMC process has converged towards Gaussian-looking posterior densities, roughly matching the injected parameters. 
Temperature and water content are, unsurprisingly, degenerate, but we do recover the injected values within the 1-\(\sigma\) ellipse of the posteriors. Note that changing the white noise realisation (by fitting the other synthetic transit) or the likelihood definition (see Section 3.3.2) only marginally affects the retrieved parameters, which remain consistent within \(\sim\)1\(\sigma\). This confirms that our analysis and parameter estimation process do not introduce strong biases in the retrieval. Additionally, we tested the effects of changes in the input water content and did not find systematic trends in the parameters recovered using different likelihood definitions: the same likelihood can overestimate the water content for one synthetic model and underestimate it for another. On real data, we therefore recommend gathering results from different likelihoods to define conservative error bars on the retrieved parameters.

Figure 6: Cross-correlation significance as a function of Doppler velocity and orbital semi-amplitude for the sequence of GI 15 A spectra, in which an isothermal planet atmosphere model has been injected (see Section 4.1), and reduced using the procedure described in Section 3.1 and PCA-cleaning (Section 3.2).

Figure 7: Same as Fig. 6, this time for the auto-encoder-reduced data.

### Including rotation

We first tried to recover rotation from a model that did not include any, and we found that our nested sampling algorithm could not differentiate between a non-rotating model and models with equatorial rotation lower than 1 km.s\({}^{-1}\). We then created a model with a planetary rotation of 3 km.s\({}^{-1}\) at the equator, following Appendix A (hence assuming the rotation axis is perpendicular to the orbit). When we try to recover this model with a non-rotating model, the best-fit value of the water mixing ratio is decreased by a factor of \(\sim\) 30 and we also obtain lower values for the temperature. This is due to the fact that rotation decreases the strength of the absorption lines by spreading them over a larger width (see Fig. 9), whereas temperature and water content typically increase the line contrasts. As we are mostly sensitive to line amplitude and not shape, this creates a degeneracy between rotation and temperature/composition. When we try to retrieve a model with rotation, we recover the correct parameters in the posteriors, as shown in Fig. 10, but the mean recovered values for water and temperature are 3 and 2\(\sigma\) away from the maximum of posterior probability, respectively. No matter the rotation speed, we always obtained too low a water content and too high a temperature, showing that this bias for large rotation rates is intrinsic to the analysis. As demonstrated in Appendix B, we have performed a wide range of tests which show that it most likely comes from a lack of model degradation. Indeed, although our model is degraded consistently with the PCA applied to the data, the first phases of the data analysis (division by the average stellar spectrum, moving-average normalisation, and airmass detrending, which worsens this effect when included) are not applied to the models during retrievals. This mainly affects models that are almost constant with time: strongly broadened (hence fast rotating) spectra or planets with low semi-amplitudes 8. We are working on finding the most efficient way to include this additional model degradation in the nested sampling algorithm without increasing the numerical cost too much.
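For reference, the first-order broadening used in these tests (Section 3.4) can be sketched as a double convolution of the mid-transit template with a rotation kernel and the instrumental profile. The kernel below is a simple half-ellipse of half-width \(v_{\rm eq}\), used purely as a stand-in since Appendix A, where the actual kernel is derived, is not reproduced here; the resolving power is set to SPIRou's nominal \(R\approx 70\,000\).

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def gaussian_kernel(v_grid, resolving_power):
    """Instrumental profile: Gaussian with FWHM = c / R in velocity space."""
    fwhm = C_KMS / resolving_power
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * (v_grid / sig) ** 2)
    return g / g.sum()

def rotation_kernel(v_grid, v_eq):
    """Assumed rotational kernel: half-ellipse of half-width v_eq (a stand-in
    for the kernel of Appendix A, which is not reproduced here)."""
    k = np.zeros_like(v_grid)
    inside = np.abs(v_grid) < v_eq
    k[inside] = np.sqrt(1.0 - (v_grid[inside] / v_eq) ** 2)
    if k.sum() == 0.0:              # v_eq below the velocity sampling
        k[k.size // 2] = 1.0
        return k
    return k / k.sum()

def broaden(template, wave, v_eq, resolving_power=70000.0):
    """Double convolution at mid-transit: rotation first, then the instrument."""
    # velocity step of the (assumed roughly constant-in-velocity) pixel grid
    dv = C_KMS * np.median(np.diff(wave) / wave[:-1])
    n = max(1, int(np.ceil(3.0 * max(v_eq, C_KMS / resolving_power) / dv)))
    v_grid = dv * np.arange(-n, n + 1)
    spec = np.convolve(template, rotation_kernel(v_grid, v_eq), mode="same")
    return np.convolve(spec, gaussian_kernel(v_grid, resolving_power), mode="same")
```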
Until then, our analysis is biased to lower molecular content for models with large rotation rate. Footnote 8: For real planet, this effect is probably smaller because of the variability of the planet spectra, due to intrinsic variability and 3D geometric projections ### Including clouds Our next test was to study the influence of clouds in our data analysis which are a major limitation for atmospheric analysis (e.g., Kreidberg et al. (2014)). HRS has the potentiality to see above the clouds when they are deep enough (Gandhi et al., 2020; Hood et al., 2020) and we first tried to recover synthetic planets with gray cloud deck at different pressure levels. When the cloud deck was below 0.1 bar, we recover the model roughly at the same amplitude than the non cloudy model. This is consistent with the fact that we expect to probe pressure levels around 10-1000 Pa through water absorption. When we move the clouds higher up in the atmosphere, we typically lose one point of signal to noise detection per order of magnitude in pressure, until 0.1 mbar where the SNR becomes lower than 2. However, some absorption lines are still theoretically observable (Gandhi et al., 2020) and combining several observations might allow to push this limit upwards. We have then ran a retrieval including clouds on a model with no clouds and obtained that, even if clear models exhibit higher likelihood, models with clouds are not excluded by our analysis: we obtain a degeneracy between water content and cloud coverage. This is not surprising as HRS is only sensitive to the variations of the atmospheric absorption, and not its absolute value. Additional constraints, such as LRS or fixed temperature can lift this degeneracy, as we will see in the application to real data on section 5. Finally, we have created a model with a gray cloud deck at 10mbar and applied our multisensor algorithm which results are shown on Fig.11. Globally, the fit is poorer which is expected as the amplitude of planetary lines is reduced. We recover a very tight degeneracy between water and cloud top pressure spanning almost 5 orders of magnitude in water, showing that we lose our capability to obtain a precise water mixing ratio in that case without further constraints. This will be discussed again in section 5 where the change in the temperature profile allows to lift part of this degeneracy. This confirms our test with the non cloudy model: the use of HRS alone does not allow to lift the cloud-composition-temperature degeneracy, and additional information must be added. However, we note that our model does recover the injected model at the 1\(\sigma\) level, which validates our pipeline for (simple) cloudy models. ## 5 Application on real data ### Short description of the data In order to validate entirely our pipeline, we have applied it on real data with already published result for comparison. We have therefore used the two transits of HD 189733 to observed by SPIRou as in Boucher et al. (2021) (hereafter B21). We shortly detail these data here, a larger discussion can be found in B21. The physical parameter of the planet are referenced in Table 1. Two transits of HD 189733 b were observed as part of the Spirou Legacy Survey (SLS, PI: J.-F. Donati). The first transit was observed on UT 2018 September 22 (hereafter Sep18), as part of SPIRou commissioning observations, and the second on UT 2019 June 15 (hereafter Jun19). 
The first data set consists of 2.5 hr, divided into 36 exposures, where the first 21 are in transit and the remaining 15 are out-of-transit. The second data set consists of 50 exposures in total, where 24 are in transit, 12 before, and 14 after transit, for a total of \(\sim\)3.5 hr. The data were reduced using APERO version 0.6.132. In both observations, the airmass remains below 1.3 and even below 1.15 during the transit. The mean signal to noise ratio per order ranges from 50 in the telluric contaminated region and in the bluest part of the instrument to 250 in the center of the H band. Conditions were photometric for both transit sequences, with an average seeing of around 0\({}^{\circ}\).82 as estimated from the guiding images. ### Data analysis and retrieval We have therefore applied our pipeline on these two transits of HD 189733 b in order to retrieve atmospheric signatures and compare with B21. In order to be as consistent as possible with their methods, we have used the additional telluric correction of B21 (i.e. masking telluric lines deeper than 70% from the continuum level, as described in Section 3.1), and the detrending with airmass was only performed through PCA (see Section 3). However, as presented in Section 3, our pipeline has some intrinsic differences with B21, notably (i) we interpolate only the reference spectrum, and not individual ones, and (ii) the number of PCA components to remove is decided automati cally by the pipeline. In both data sets, the number of PCA removed ranges from 1 in the bluest orders to 4/5 in the reddest orders. In the template matching algorithm, we have used the exact same model than the best model of B21. The resulting cross-correlation map is shown in Fig. 12 to be compared with their Figure 5. The comparison is excellent: the maximum is recovered at the expected theoretical semi-amplitude (151 km.s\({}^{-1}\)) and the recovered Doppler shift is comparable within B21 with less than 500m.s\({}^{-1}\) of difference. We obtain a slightly higher maximum of correlation (SNR of 4.6 compared to 4 for B21) with a lower amplitude for a same non planetary peak obtained in both our works at \(K_{p}\sim 270\) km.s\({}^{-1}\) and \(V_{\rm sys}\approx-75\) km.s\({}^{-1}\), showing that our pipeline corrects better for spurious signatures. We also applied our autoencoder on these data and found that the detection was slightly lower (SNR of 4.2) but exactly at the same position and the secondary spurious peak disappeared, confirming that it is not a physical signature. Additionally, the negative maximum of correlation next to the positive maximum, which is often recovered in studies using PCA disappears with the autoen Figure 8: Posterior densities resulting from a **pymultinest** retrieval of the datacontaining the injected planet atmosphere signal described in Section 4.1, with the simple isothermal model described in Section 4.1. The blue squares with error bars show the best-fit values, with the \(\pm 1\sigma\) uncertainties, and the red line and crosses indicate the injected values coder. This is promising towards a more global use of this technique: it shows that coupling PCA and autoencoder can allow to disentangle between numerical and physical signatures which will be of real added values for planets with lower atmospheric detectability. We have then used our nested sampling framework to retrieve parameters and compare with Fig. 11 of B21. 
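For completeness, the skeleton below illustrates how such a retrieval can be driven, combining the model degradation of Eq. (8) with a pymultinest run over (\(K_{p}\), \(V_{\rm sys}\), temperature, water mass mixing ratio). The functions `build_model` and `log_likelihood`, the `orders` container and the prior bounds are hypothetical placeholders standing in for the petitRADTRANS model generation, the likelihood of Section 3.3.2 and the actual prior choices.

```python
import numpy as np
import pymultinest

def degrade(model_seq, U):
    """PCA degradation of Eq. (8): remove from log(M) its projection onto the
    space spanned by the discarded eigenvectors stored in U (one U per order)."""
    logm = np.log(model_seq)
    proj = U @ (np.linalg.pinv(U) @ logm)
    return np.exp(logm - proj)

# illustrative prior bounds: Kp, Vsys [km/s], T [K], log10 water MMR
lo = np.array([100.0, -50.0, 500.0, -6.0])
hi = np.array([200.0,  50.0, 1500.0, -1.0])

def prior_transform(cube, ndim, nparams):
    for i in range(ndim):
        cube[i] = lo[i] + (hi[i] - lo[i]) * cube[i]

def loglike(cube, ndim, nparams):
    kp, vsys, temp, log_mmr = (cube[i] for i in range(4))
    total = 0.0
    for order in orders:                       # hypothetical per-order container
        m = build_model(order, kp, vsys, temp, log_mmr)   # hypothetical helper
        m = degrade(m, order.U)                           # Eq. (8)
        total += log_likelihood(order.data, m)            # Section 3.3.2
    return total

pymultinest.run(loglike, prior_transform, n_dims=4,
                n_live_points=384, outputfiles_basename="retrieval_",
                resume=False, verbose=True)
```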
We created models of HD 189733 b using petitRADTRANS with the water line list POKAZATEL from Exomol Polyansky et al. (2018); Tennyson et al. (2016) and a temperature profile from Guillot (2010) as in B21. The resulting posterior distributions is shown in Fig.13 with the red crosses being the mean recovered values of B21. We see that we are perfectly consistent with their recovered parameters although we recover a higher temperature (which is more consistent with physical expectation of the temperature at the limbs of HD 189733 b (e.g., Drummond et al. (2018))), a higher water content and deeper cloud top pressure. Comparing to our test on synthetic data, it is striking how well the deep cloud top pressure is recovered. This surprising good retrieval led us to consider an isothermal profile as shown in Fig.C1. As we expected, our algorithm then does not distinguish between a high cloud deck-high water content and deep cloud deck-low water content. We indeed see two gaussian distributions in the water posterior density, as in Fig. 11. This shows how different variables are intricated, as we explore further in our companion paper, and how adding information on the temperature can lift the degeneracies in other parameters. As already mentioned, a combination with low resolution spectroscopy would also allow to solve this discrepancy: the observation of the slope of the continuum favours clear atmospheres (Sing et al., 2016; Barstow, 2020). In Appendix C, we show two other posterior distributions when including rotation: one where the rotation speed is imposed as the expected tidally locked value for HD 189733 b (2.6 km.s\({}^{-1}\) at the equator) and one where the rotation speed is left as a free parameter. In both cases, we did not include clouds as they only increase complexity and are disfavoured, as mentioned above. For the first one, Fig. C2, we obtain similar results than in Section 4.2: we recover a lower water MMR and a higher temperature with larger error bars. This globally confirms that the 1D, non rotated water MMR is a good estimate as this is coherent with our analysis on simulated data. We also recover a higher \(K_{p}\) and the systemic velocity is changed by 500m.s\({}^{-1}\), being then perfectly in line with B21. In the second one, Figure 9: Top: rotation kernel (arbitrary units) in plain orange line and instrumental profile in dashed line as a function of velocity similar to Brogi et al. (2016). Bottom: zoom on the absorption lines of a model containing only water, broadened at SPIRou resolution, and the same model when applying a rotation with an equatorial velocity of 3000 m.s\({}^{-1}\). Fig.C3, the error bars are expectedly larger as rotation adds another degeneracy but we still recover consistent parameters and show that the data are consistent with tidally locked rotation. In summary, this application to real data confirms the validity of our methods globally. ## 6 Discussion ### Errors on K\({}_{p}\) In our simulations (and in general HRS planet observations), the absorption lines of the planet atmosphere are largely drowned in the noise, which limits the achievable precision of our recovered parameters. From our simulated data, e.g., Fig. 8, we see that the 3 \(\sigma\) error on the velocity is of the order of a third of a SPIRou pixel, and the 3 \(\sigma\) error on \(K_{p}\) leads to a shift of half a pixel at the beginning and end of the transit. 
These are extremely simplified cases as the injected Figure 10: Corner plot showing the results of a pymultinest retrieval with a model containing only water with a mass mixing ratio of \(10^{-2}\) and a rotation with equatorial velocity of 3 km s\({}^{-1}\). The different figure elements are the same as in Fig. 8. and recovered model are very similar and it is therefore expected that, for real data, the error on \(\mathrm{K_{p}}\) can be a factor of a few larger. It does however provide a good understanding of the precision of the method: being limited at the half pixel precision points towards the fact that we predominantly recover the center of the lines and not their shapes. Another interesting aspect is that, although we have performed a lot of simulations, we never recovered a mean \(\mathrm{K_{p}}\) value that was lower than the injected \(\mathrm{K_{p}}\) in synthetic data. In contrast, the broadening induced by the rotation can lead to a recovered mean \(\mathrm{K_{p}}\) more than \(20\,\mathrm{km.s^{-1}}\) higher than the injected value. We attribute this effect to the data reduction process: the division by the median as well as the PCA/autoencoder aims at suppressing signals that are almost constant in time over the whole sequence, hence that have low \(\mathrm{K_{p}}\) values. The PCA is also applied to the models, which reduce this effects, but not the first steps of the data analysis which have a lower but non zero impact on the model. This artificially enhances the recovered \(\mathrm{K_{p}}\) and is coherent with the fact that this trend gets worse with higher rotation rates: the broadening of the lines make them more sensitive to the data analysis. The Doppler shift between the Figure 11: Corner plot showing the results of a pymultinest retrieval with a model containing only water with a mass mixing ratio of \(10^{-2}\) and gray cloud deck at 10mbar. The different figure elements are the same as in Fig. 8 with \(P_{\mathrm{clouds}}\) being the recovered cloud top pressure. beginning and end of transit in our fiducial sequence is about 12 km.s\({}^{-1}\): a line broadened by a rotation kernel of a few km.s\({}^{-1}\) will be more affected by the pipeline than non-broadened lines. Finally, we note that V\({}_{\rm sys}\) is well recovered by our model, confirming that blueshifts (or redshifts) in observed data will most likely be of physical origin (i.e. atmosphere dynamics such as winds). Globally, the fact that this technique performs better at high K\({}_{\rm p}\) was known and expected: a better separation between planetary, stellar and telluric lines during the course of the transit increases the level of detection. This will be a challenge for e.g., PLATO targets (Rauer et al., 2014) which will have no detectable Doppler variation between the star and planet during transit. Through our consortium, we have obtained part of the transit of HIP 41378f, an inflated Neptune mass planet on a 1.5 year orbit (see e.g. Alam et al. (2022)), that shift by less than a meter per second from the stellar line over the course of the transit. It will be a good test to optimize our method for planets with low K\({}_{\rm p}\), or a proof of the absolute limitation of this method in the context of slowly displacing planets. ### Water-temperature degeneracy As we expected, there is a degeneracy between composition and temperature. 
Although their physical effect on the atmosphere is different, both these parameters affect strongly the amplitude of the lines: temperature through a change in pressure scale height and composition through a change in opacity (and to a lesser extent to the pressure scale height as well). It is however important to notice that our analysis is not biased: although the maximum of likelihood does not necessarily correspond to the injected model, the injected parameters do lie in the ellipse of recovered parameters. A naive way to reduce the impact of this degeneracy is to provide informative priors but one obviously has to be careful about their physical motivation. Essentially, we need other diagnostics to lift the degeneracies. Two main diagnostics come into mind: (i) visible observations, where the strength of the absorption lines such as Ca+, Na or FeI is so much larger than it can provide tighter constraint on the rotation or temperature through the line shape, and (ii) combination with low resolution spectroscopy (LRS). LRS can provide a reference value for the strength of the absorption line, hence a tighter constraint on the temperature-composition degeneracy. This in turn will provide a more appropriate estimate of dynamical mechanisms in the planet. ### Degrading the model As we discussed in section 1, we degrade the model because of the PCA treatment accordingly to the fast method developed by Gibson et al. (2022). However, as mentioned in section 4.2 and discussed in Appendix B, we do not degrade the model from the data analysis steps prior to PCA which leads to errors for broadened models. This is work in progress and is not discussed further here. Regarding the PCA degradation, we have made several tests to try to understand how this step was important. As can be seen on Fig. 5, the degradation first leads to reduce the depth of several absorption lines. When we do not degrade the model, we therefore expect to recover lower mixing ratio or lower temperature to mimic this effect. This is exactly what we obtained, and when we tried to recover only the water content the error could reach a factor of 20 with a non degraded model. Degrading the model correctly is therefore of primordial importance. Essentially, without this step of model degradation, our recovered Figure 12: Top: cross-correlation map between the combined transits of HD 189733 b analyzed with PCA and the atmospheric model used in B21. The white dashed line show the theoretical position with no atmospheric Doppler shift. Middle: Same with the auto-encoder. Bottom: cross-correlation significance from the PCA-reduced data for individual transits and both transits and an orbital semi amplitude equal to the planetary semi amplitude (151.2 km.s\({}^{-1}\)). The black dashed line is the 0 Doppler shift. values were much further away to the real data and sometimes incompatible at the 3\(\sigma\) level. The correction provided by Gibson et al. (2022) allows to correct these effects, while only costing a small amount of calculation time. We are looking for such an efficient method to apply with the auto-encoder. ### Perspectives of exploration As the goal of this paper is mainly to present the pipeline, we have focused on a few examples but we could obviously not cover all the issues in the analysis of planetary atmospheres. Our companion paper, Debras et al. submitted, tackles the importance of biases and degeneracies in a few different situations. 
There are however many aspects that we did not include in these first ATMOSPHERIX papers. Probably the most important is that we have not mentioned at all 3D effects, although they are known to impact the retrieval of atmospheric parameters (Flowers et al., 2019; Caldas et al., 2019; Pluriel et al., 2020, 2022; Falco et al., 2022). It would have been too large a task to explore these effects for a benchmark study, and we dedicate it to individual planet studies with the forthcoming works of the ATMOSPHERIX consortium. As we mention in the introduction, the way forward in our understanding of planetary atmospheres is the combination of low- and high-resolution spectroscopy. The method to combine these obser Figure 13: Posterior density for our nested sampling algorithms applied on the two transits of HD 189733 b with a temperature profile from Guillot (2010). The red crosses are the B21 recovered values. The water quantity is in mass mixing ratio, contrary to B21 in volume mixing ratio. vations efficiently has been presented and discussed in Brogi et al. (2017), and an application with SPIRou data has already been performed (Boucher et al., 2023). We are therefore working on a similar benchmark paper with the combination of space and ground based data, notably in the goal of exploiting at best the JWST and Ariel capabilities. Finally, we chose to focus on infrared transmission spectroscopy with SPIRou but this pipeline could in theory be easily adapted to emission/reflection spectroscopy or data from another instrument/wavelength range. Emission spectroscopy has already been performed with SPIRou observations (Pelletier et al., 2021) with similar methods and we have gathered and reduced visible data from MAROON-X with our pipeline, requiring marginal changes. Therefore, our pipeline is straightforwardly applicable to a much broader range of observations, which will be presented in the future. ## 7 Conclusion In this paper, we have presented our publicly-available pipeline to (i) generate synthetic SPIRou transmission spectroscopy data, (ii) reduce the data with state-of-the-art methods including PCA and the use of an auto-encoder and (iii) analyse the data to recover the injected planetary signal either through cross-correlation or statistical exploration of the parameter space in a Bayesian framework. We have also included a fast way to include and retrieve (super-) rotation in planetary atmospheres. By creating synthetic sequences, we demonstrated the validity of our pipeline and explored its limitations. We have first confirmed that the auto-encoder was a working, independent method to combine with PCA to recover planetary signals. We have shown that our method is unbiased for simple 1D models, but some issues remain in the retrieval for models with large rotation rates. We have explored the impact of clouds, showing that they can also bias the results and require additional constraints to be properly recovered. Importantly, we have confirmed that degrading the model was needed to ensure a proper retrieval. We have implemented the Gibson et al. (2022) method for PCA and are still working on a fast method for the auto-encoder and for the non-PCA steps of the data analysis as well. When the model is not degraded, the retrieved value of the mixing ratio in our tests could be more than 1 dex away from the input value, and the temperature about 200K wrong. 
Finally, we have applied our pipeline on real SPIRou observations of HD 189733 b and obtained slightly better results than the literature for the detection and characterisation of the atmosphere. We recover water at a SNR greater than 4.5 with a volume mixing ratio in line with the literature and a temperature profile consistent with physical priors. We also show that we are consistent with a tidally locked rotation rate, an important result for hot Jupiters. In conclusion, we have benchmarked our pipeline for atmospheric observations of exoplanets. A companion paper (Debras et al., submitted) tests its limitations for non isothermal and multi-species models. With the ongoing observations of the ATMOSPHERIX consortium, we have proven to be ready for the challenge of atmospheric detection and characterisation and will present results from SPIRou observations in a suite of papers to come. ## Acknowledgements Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The observations at the Canada-France-Hawaii Telescope were performed with care and respect from the summit of Maunakea which is a significant cultural and historic site. BK acknowledges funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 865624, GPRV). FD thanks the CNRS/INSU Programme National de Planetologie (PNP) and Programme National de Physique Stellaire (PNPS) for funding support. This work was supported by the Action Specifique Numerique of CNRS/INSU. This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2021-P21021. JFD, C, M, I, B. X. B, A.C., X.D.,G.H., F.K acknowledge funding from Agence Nationale pour la Recherche (ANR, project ANR-18-CE31-0019 SPIaSH). AM, BC, BB, SV acknowledge funding from Programme National de Planetologie (PNP) and the Scientific Council of the Paris Observatory. X.D. and A.C. acknowledge funding by the French National Research Agency in the framework of the Investisements d'Avenir program (ANR-15-IDEX-02), through the funding of the "Origin of Life" project of the Universite de Grenoble Alpes.JL acknowledges funding from the European Research Council (ERC) and the European Union's Horizon 2020 research and innovation programme (grant agreement No. 679030/WHIPLASH), and from the french state: CNES, Programme National de Planetologie (PNP), the ANR (ANR-20-CE49-0009: SOUND). JFD and BK acknowledge funding from the European Research Council (ERC) under the H2020 research & innovation programme (grant agreement #740651 NewWorlds). Part of the work has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). WP acknowledge financial support from the SNSF for project 200021_200726. PT acknowledges supports by the European Research Council under Grant Agreement ATMO 757858. E.M. acknowledges funding from FAPEMIG under project number APQ-02493-22 and research productivity grant number 309829/2022-4 awarded by the CNPq, Brazil. Finally, we thank the anonymous referee for valuable comments and suggestions throughout the paper. 
## Data Availability The data are available upon request to the authors, and the code used to analyze them is publicly available.
2303.01509
EPAM: A Predictive Energy Model for Mobile AI
Artificial intelligence (AI) has enabled a new paradigm of smart applications -- changing our way of living entirely. Many of these AI-enabled applications have very stringent latency requirements, especially for applications on mobile devices (e.g., smartphones, wearable devices, and vehicles). Hence, smaller and quantized deep neural network (DNN) models are developed for mobile devices, which provide faster and more energy-efficient computation for mobile AI applications. However, how AI models consume energy in a mobile device is still unexplored. Predicting the energy consumption of these models, along with their different applications, such as vision and non-vision, requires a thorough investigation of their behavior using various processing sources. In this paper, we introduce a comprehensive study of mobile AI applications considering different DNN models and processing sources, focusing on computational resource utilization, delay, and energy consumption. We measure the latency, energy consumption, and memory usage of all the models using four processing sources through extensive experiments. We explain the challenges in such investigations and how we propose to overcome them. Our study highlights important insights, such as how mobile AI behaves in different applications (vision and non-vision) using CPU, GPU, and NNAPI. Finally, we propose a novel Gaussian process regression-based general predictive energy model based on DNN structures, computation resources, and processors, which can predict the energy for each complete application cycle irrespective of device configuration and application. This study provides crucial facts and an energy prediction mechanism to the AI research community to help bring energy efficiency to mobile AI applications.
Anik Mallik, Haoxin Wang, Jiang Xie, Dawei Chen, Kyungtae Han
2023-03-02T09:11:23Z
http://arxiv.org/abs/2303.01509v1
# EPAM: A Predictive Energy Model for Mobile AI ###### Abstract Artificial intelligence (AI) has enabled a new paradigm of smart applications - changing our way of living entirely. Many of these AI-enabled applications have very stringent latency requirements, especially for applications on mobile devices (e.g., smartphones, wearable devices, and vehicles). Hence, smaller and quantized deep neural network (DNN) models are developed for mobile devices, which provide faster and more energy-efficient computation for mobile AI applications. However, how AI models consume energy in a mobile device is still unexplored. Predicting the energy consumption of these models, along with their different applications, such as vision and non-vision, requires a thorough investigation of their behavior using various processing sources. In this paper, we introduce a comprehensive study of mobile AI applications considering different DNN models and processing sources, focusing on computational resource utilization, delay, and energy consumption. We measure the latency, energy consumption, and memory usage of all the models using four processing sources through extensive experiments. We explain the challenges in such investigations and how we propose to overcome them. Our study highlights important insights, such as how mobile AI behaves in different applications (vision and non-vision) using CPU, GPU, and NNAPI. Finally, we propose a novel Gaussian process regression-based general predictive energy model based on DNN structures, computation resources, and processors, which can predict the energy for each complete application cycle irrespective of device configuration and application. This study provides crucial facts and an energy prediction mechanism to the AI research community to help bring energy efficiency to mobile AI applications. mobile AI, predictive energy model, energy improvement, latency reduction, DNN ## I Introduction Artificial intelligence (AI) is shaping every aspect of human lives nowadays. Furthermore, mobile devices, i.e., smartphones, tablets, wearable devices, and autonomous and unmanned aerial vehicles, are heavily invested in AI applications, having cellular networks, edge, and cloud computing in the backbone. AI applications consume considerably high energy and memory of these devices. How AI uses these resources defines a device's potential to interact with wireless networks. Therefore, it is crucial to understand the characteristics of AI applications running on a mobile device, which pushes back to the question -- how can we accurately predict the energy consumption of mobile AI irrespective of device configurations to ensure better service and user experience? AI applications' energy consumption may depend on various properties of a system. First, the AI models that are crafted in specific ways to fit mobile devices due to the models' high computation and energy requirements, impact the applications' behaviors. Research works suggest accelerating the processing time of deep neural networks (DNNs) by quantizing [1], which is a compression technique run on DNN models that can reduce the model size by converting some tensor operations to integers from floating points or reducing the weights or parameters in a model, but at the cost of degraded accuracy. Quantized DNN (Q-DNN) models are generally investigated for vision-based applications, the most thriving areas of AI. Second, mobile AI is not limited to vision applications only. 
Modern-day mobile devices are rigged with non-vision applications as well, such as intelligent recommendations, natural language processing (NLP), smart reply, speech recognition, and speech-to-text conversion. While most of the research focuses on applications based on computer vision, acquiring a thorough knowledge of mobile AI is only possible by including non-vision applications. Third, the processing source used to run the AI models affects their performance. Besides central processing units (CPUs) with high processing speeds, some devices are now equipped with graphics processing units (GPUs), which enables DNN models to run faster than ever, especially for vision applications [2]. In addition, neural network application programming interfaces (NNAPI) are also developed to make the processing of DNN models faster using CPUs, GPUs, or neural processing units (NPUs) [3]. These state-of-the-art technologies are researched for mobile AI only to improve inference latency. Lastly, the hardware configuration of mobile devices is distinctive and contributes to energy consumption with a unique signature. The system-on-chip (SoC), CPU/GPU parameters, and memory dictate how an AI application runs on a specific device. In this paper, we argue that a predictive energy model for a mobile AI application requires considering all of the parameters mentioned above. Without collecting accurate and precise latency, energy, and memory consumption data, it is not possible to design a predictive energy model which is applicable to all AI applications with different model sizes and device configurations. This paper presents the measurement data of AI applications collected through experiments and proposes a novel model of Energy Prediction for AI in Mobile devices (EPAM), which can provide a highly accurate prediction of the energy consumption of a mobile AI application irrespective of device configuration and AI models, and thus contribute to improving the overall performance. **Motivations:** While mobile AI is often concluded as "no one-size-fits-all solution" [4], it is the responsibility of the research community to provide the developers with precise measurement data and a way to predict energy consumption. Our research shows that the power varies for the same device with the change in processing sources (Fig. 1(a)). The granularity of power consumption over a unit period of time needs to be measured to develop a predictive energy model, which is not provided by the current works. Battery profilers provided by third-party applications do not support precise energy data collection [5]. Hence, the use of an external power measurement tool becomes necessary [6]. Moreover, DNN models with different sizes and layers do not have a similar impact on the latency, energy, and memory usage, which is presented in Fig. 1(b), where it is evident that the correlation among latency, energy, and memory is not linear at all. An interesting observation here is that the Quantized EfficientNet model causes high latency and energy despite using the lowest memory, due to its compatibility issues with NNAPI, which is described in detail in section V-A. This motivates us to collect data from a physical testbed to validate this correlation before proposing a predictive energy model. **Challenges:** Designing a predictive energy consumption model for mobile AI is not straightforward. 
First, a general energy prediction model is challenging to develop _due to different categorical and numerical variables involved in the non-parametric behavior of the energy consumption of AI applications_. The regression model cannot be linear since all the parameters do not have the same weight in all applications and configurations. Second, measuring mobile AI parameters is challenging due to _complicated power terminal design in the latest mobile devices_. Synchronizing the timestamps of latency and energy data brings further difficulties as the retrieved log files have different formats. However, these parameters must be measured since they are required for training the regression model. Finally, the _experiments should be controllable and repeatable_ for enthusiastic researchers. Therefore, the environment must be chosen wisely so that all the experiments can be carried out in a similar condition. **Our contributions:** Our contributions in this paper are summarized as follows: * **Experimental research and analysis of different mobile AI applications:** We set up an experimental testbed with four different smartphones (Table I) and use a vision application (image classification) and two non-vision applications (NLP and speech recognition) with seven different DNN models (Table II). The testbed is described in detail in Section IV. We investigate different mobile AI parameters through an extensive experimental study. The latency, power consumption, and memory usage of individual segments of the pipelines of three AI applications are measured for different applications using single- and multi-threads CPU, GPU, and NNAPI and for different DNN models. Our experiment shows that the total energy consumption of a mobile AI application is related to the device configuration, AI model, latency, and memory. * **Predictive energy model for mobile AI:** We propose a novel Gaussian process regression-based general predictive energy model for mobile AI (EPAM) based on DNN structure, memory usage, and processing sources to predict the energy consumption of mobile AI applications irrespective of device configurations (Section III). EPAM requires offline training with past datasets. The trained model can be used to predict the overall energy consumption which reduces the necessity for further energy measurement and helps the developers design energy-efficient mobile AI applications. Finally, we evaluate the performance of our proposed predictive energy model EPAM with our experimental data (Section V-D). The evaluation shows that EPAM provides highly accurate energy prediction of vision and non-vision AI applications for different DNN models on unique mobile devices. ## II Related Work **Vision and non-vision mobile AI with float and quantized models:** Floating point and quantized models are investigated for vision applications, e.g., image classification, segmentation, super-resolution, and object detection, to create benchmarks using inference latency for mobile devices [7]. Quantized models are introduced in [8] to lower the energy consumption as well. In addition, non-vision AI applications are also researched to achieve high accuracy and low latency [9]. Nevertheless, a predictive energy model for mobile AI requires analysis of complete behaviors of vision and non-vision mobile AI applications using floating point and quantized models, which are not yet explored. 
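As an illustration only, this posterior predictive step can be reproduced with an off-the-shelf Gaussian process regressor. The snippet below uses scikit-learn's GaussianProcessRegressor with a per-dimension (ARD) squared-exponential kernel, which corresponds to the kernel introduced in Eq. (3) below; the feature matrix is a synthetic placeholder standing in for the suitably encoded device, model, processor, latency and memory descriptors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# placeholder observations: D-dimensional feature vectors X and measured energy E
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
E = 2.0 + X[:, 3] + 0.5 * X[:, 4] + 0.1 * rng.normal(size=200)

# ARD squared-exponential kernel (one length scale per dimension) plus a
# white-noise term accounting for the observation noise sigma_D^2
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1])) \
         + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X, E)            # hyper-parameters optimised by maximising the log-likelihood

X_new = rng.normal(size=(5, 6))                        # new data points X_*
E_mean, E_std = gpr.predict(X_new, return_std=True)    # posterior mean and std, cf. Eq. (2)
print(gpr.log_marginal_likelihood_value_)              # cf. Eq. (4)
```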
**Latency and energy in different processors:** Mobile AI applications behave differently in terms of latency and accuracy based on the processing sources [4, 7]. Research works are done on maximizing CPU threads [10] and hardware acceleration for DNN models. The use of GPU is also studied for improving the training and inference time for mobile AI [2]. NPU architectures are explored as well to expedite neural network operations [3, 11]. However, there is no fundamental framework to describe the impact of individual processing sources on energy consumption for different mobile AI applications with disparate DNN models. **Energy modeling for mobile AI and prediction:** Energy measurement is necessary to describe mobile AI applications' detailed behaviors. Eprof [12] and E-Tester [13] are proposed to measure and test the battery drain of mobile devices, which use a finite state machine to measure the energy. However, these methods lack in providing granular and precise energy data since they only act on system call traces. Researchers have proposed different energy models for vision [14] and non-vision [15] applications. Furthermore, predictive energy models are developed for devices, and sensors [16]. Nonetheless, developing accurate predictive energy models general to all mobile AI applications requires knowledge of all the environmental parameters such as network and model size, memory usage, and the hardware accessed to run the AI application. Fig. 1: (a) Power consumption by different processors for the same time interval for MoibileNet Float and (b) mean inference latency, energy, and memory usage for float and quantized DNN models on Huawei Mate40Pro. ## III EPAM: Overview of the Predictive Model The energy prediction of mobile AI involves a high dimension of influencing variables, making it a non-parametric model. Let us assume that the set of input data points is \(X^{1:D}\), where D is the total number of dimensions. If we consider this a noisy observation, then we find the posterior distribution as \[P(E(X)\propto P(E(X)|\Lambda^{1:D})/P(\Lambda^{1:D}|E(X)), \tag{1}\] where \(E(X)\) is the observed energy at data points \(X^{1:D}\) and \(\Lambda^{1:D}=\{X^{1:D},E\}\) is observation points. Using Gaussian process [17], \(E(X)\) can be described as \(E(X)\sim\mathcal{N}(\mu,K)\), where \(\mu=[mean(X^{1}),\dots,mean(X^{D})]\) is the mean and \(K_{ij}=k(x_{i},x_{j})\) is the covariance or Kernel function, where \(x_{i}\) and \(x_{j}\) are distinct data points. As new data points \(X_{*}\) are provided, the posterior distribution of predicted energy \(E(X_{*})\) can be modeled as \[P(E(X_{*})|\Lambda^{1:D})\sim\mathcal{N}(\mu(X_{*}),K(X_{*})) \tag{2}\] _The kernel must be chosen carefully_ as there exists a clear link between kernel functions and predictions [18], which contribute to the hyper-parameter optimization. From our experimental data, we observe _the influencing parameters on total energy consumption are sparse and vary over a broad range including both numerical and categorical variables_. Hence, we choose the automatic relevance determination (ARD) exponential squared kernel for our predictive model, which automatically puts different weights on the parameters with differential scales assessing their significance to the model. 
Hence, our kernel equation becomes: \[K(x_{i},x_{j})=\sigma_{f}^{2}\exp[(-\frac{1}{2})\sum_{m=1}^{D}\frac{(x_{im}-x _{jm})^{2}}{\sigma_{m}^{2}}], \tag{3}\] where \(\sigma_{f}^{2}\) is the hyper-parameter to be optimized and \(\sigma_{m}^{2}\) is the covariance of the \(m^{th}\) dimension. Finally, the log-likelihood of the trained model can be expressed as \[\begin{split}\log P(E(X)|X^{1:D})&=-\frac{1}{2}E( X)^{T}(K+\sigma_{D}^{2}I)^{-1}E(X)\\ &-\frac{1}{2}\log det(K+\sigma_{D}^{2}I)-\frac{D}{2}\log 2 \pi,\end{split} \tag{4}\] where \(I\) is an identity matrix. EPAM is first trained offline with the observation data points, then is run with an application alongside. The prediction is done either simultaneously or at the end of an application. In this research, we train the model with a dataset containing \(85,500\) data, validate with \(19,496\), and test with \(10,000\) data. ## IV Experimental Setup **a) AI applications**: Three mobile AI applications are used in this research: image classification, NLP, and speech recognition. In image classification, as shown in Fig. 2(a), first, the image is captured by the camera sensor, which then goes through a Bayer filter and image signal processor, and, then is stored in an image buffer. The image frame is then scaled and cropped to be previewed while simultaneously going to an image reader, converted from YUV color format to RGB, and cropped according to the input size of the DNN model. Then the converted and cropped frame is taken as the DNN input, generating the classification results to display. The NLP question-answer application takes both the paragraph input and the question input from the keyboard (Fig. 2(b). The paragraph is then represented with token, segment, and position embeddings. The keyboard input goes through character, basic, and word piece tokenizer. These embeddings and tokens are passed to a feature converter providing input to the DNN model. The model finds the answer to the question input and highlights it in the paragraph. Speech recognition application records, converts, and decodes the audio input. The decoded audio signal is converted to a spectrogram by running a short-time Fourier transform (STFT) along with the calculation of the Mel frequency cepstral coefficients (MFCCs). The spectrogram and MFCC are passed to the DNN model. The predicted word is then displayed on the phone as depicted in Fig. 2(c). **b) Testbed:** We implement the applications mentioned above on four Android OS-based smartphones from different manufacturers with distinct configurations to make the measurement study robust with a wide range of parameters. Table I shows the specifications of the smartphones used in the experiment. However, the intended thorough investigation of mobile AI brings several challenges during the experiment. Android Studio, along with other third-party contributors, provides developers with memory and battery profilers, which cannot generate the data necessary to measure memory usage and power consumption precisely. In this experiment, we collect latency timestamp data of each segment of a mobile AI pipeline along with their corresponding memory usage. To measure the energy consumption, we use an external power measurement tool "Monsoon Power Monitor" that provides data sampled at every 0.2 ms interval. However, due to the delicate design of power input terminals, the latest smartphones need to be heated and opened to remove the battery, and then are connected to the power monitor. 
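To fix ideas, the energy of one pipeline segment is obtained by integrating the sampled power over that segment's latency window. The sketch below assumes the power samples have already been loaded as (time, power) arrays on a common clock with the latency timestamps, and that a base power has been measured separately and can simply be subtracted, both of which are simplifications of the procedure described here.

```python
import numpy as np

def segment_energy(t_s, p_w, t_start, t_end, base_power_w=0.0):
    """Integrate power samples (trapezoidal rule) over one latency window.

    t_s: sample timestamps in seconds (the Monsoon monitor samples every 0.2 ms)
    p_w: instantaneous power in watts; base_power_w is subtracted so that only
         the AI application's contribution is counted.
    Returns the segment energy in joules.
    """
    mask = (t_s >= t_start) & (t_s <= t_end)
    return np.trapz(p_w[mask] - base_power_w, t_s[mask])

# example: energy of the inference segment of one application cycle
# (the window boundaries are placeholders for the logged segment timestamps)
t = np.arange(0.0, 2.0, 2e-4)              # 0.2 ms sampling
p = 1.5 + 0.8 * ((t > 0.6) & (t < 1.4))    # synthetic power trace in watts
print(segment_energy(t, p, t_start=0.6, t_end=1.4, base_power_w=1.5))
```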
After careful measurement of power data, Fig. 2: Pipelines of the mobile AI applications studied in this research. they are matched with the corresponding latency timestamps. To make the experiment environment controllable, we carry out all the experiments in a similar condition, e.g., brightness, camera focus, image resolution, background applications, processing sources, and test dataset. We use \(640\times 480\) pixels as the image resolution, and TensorFlow Lite Delegate to control the processing sources. The 2017 COCO test dataset, WH-questions, and fixed single words are used for testing the classification, NLP, and speech recognition, respectively. In addition, even without any applications running in the background, there is always a minimal power consumption - which we call the _base power_. To distinguish the mobile AI power from the base power, an additional layer is used before the actual AI application. **c) AI models:** In this research, we use seven DNN models for three different applications. In Table II, the details of each model, including the input size, number of layers, and the trained model size (occupied storage space) are shown. **d) Performance metric:** We evaluate all the AI applications' performances in terms of their latency, energy consumption, and memory usage. The total energy consumption is controlled by latency and memory usage, as well as the category of AI applications, processing sources, model types (float and quantized), and DNN structure and model size. ## V Results and Discussion We conduct experiments with all the devices listed in Table I and models listed in Table II by switching to different processing sources, such as CPU thread 1 and thread 4, GPU, and NNAPI. Models 1 to 5 are for vision-based AI, and models 6 and 7 are for non-vision-based AI applications. It is to be noted that models 2, 4, 6, and 7 do not support GPU processing due to a lack of TensorFlow Lite optimization. In general, the applications have input data processing (combining image generation and conversion in classification) and inference tasks. In this paper, we show some of the interesting findings due to space constraints. ### _Latency and energy consumption of mobile AI_ The end-to-end latency and energy consumption per cycle for all the models with different processing sources are shown in Fig. 3. First, we can see that quantized models decrease the inference latency (\(13\%\)) and energy consumption (\(25\%\)) from their respective float models. Additionally, there is a reduction in the overall latency of \(4\%\) when switching to a 4-thread from a single-thread CPU. However, in quantized models, the multi-thread CPU processing slightly increases the total energy consumption (\(3\%\) on average). The use of GPU even lowers the end-to-end latency and energy consumption compared to the use of single-thread CPU (\(8\%\) and \(27\%\) respectively on average) and 4-thread CPU (\(7\%\) and \(25\%\) respectively on average). On the contrary, NNAPI behaves differently than the other three processing sources on different devices. For models 4 and 5, NNAPI increases latency and energy considerably. Our insight here is that NNAPI can perform better with sufficient hardware support from the manufacturers. An interesting fact about the NLP application is that the text processing step shows an entirely different latency pattern. This segment takes user input which does not take uniform time, i.e., it varies with user habits of typing and thinking of the question. 
## V Results and Discussion

We conduct experiments with all the devices listed in Table I and the models listed in Table II by switching between different processing sources, namely single-thread CPU, 4-thread CPU, GPU, and NNAPI. Models 1 to 5 are for vision-based AI, and models 6 and 7 are for non-vision-based AI applications. It is to be noted that models 2, 4, 6, and 7 do not support GPU processing due to a lack of TensorFlow Lite optimization. In general, the applications have input data processing (combining image generation and conversion in classification) and inference tasks. In this paper, we show some of the interesting findings due to space constraints.

### _Latency and energy consumption of mobile AI_

The end-to-end latency and energy consumption per cycle for all the models with different processing sources are shown in Fig. 3. First, we can see that quantized models decrease the inference latency (\(13\%\)) and energy consumption (\(25\%\)) compared with their respective float models. Additionally, there is a reduction in the overall latency of \(4\%\) when switching from a single-thread to a 4-thread CPU. However, in quantized models, multi-thread CPU processing slightly increases the total energy consumption (\(3\%\) on average). The use of the GPU lowers the end-to-end latency and energy consumption even further compared to the single-thread CPU (\(8\%\) and \(27\%\), respectively, on average) and the 4-thread CPU (\(7\%\) and \(25\%\), respectively, on average). On the contrary, NNAPI behaves differently than the other three processing sources on different devices. For models 4 and 5, NNAPI increases latency and energy considerably. Our insight here is that NNAPI can perform better with sufficient hardware support from the manufacturers. An interesting fact about the NLP application is that the text processing step shows an entirely different latency pattern. This segment takes user input, which does not take uniform time, i.e., it varies with user habits of typing and thinking about the question. Hence, the processing stage here is completely unpredictable for different users. In NLP, each input consumes around \(5.7\) J, whereas the other non-vision application, speech recognition, takes around \(161.85\) mJ to process one speech input sampled at a rate of \(16\) kHz using a single-thread CPU. NNAPI incurs the lowest latency and energy for speech recognition. In addition, we examine the power consumption charts of different applications and processing sources (Fig. 4). We observe a slight initiation delay for every application (marked with red arrows in Fig. 4), which varies with different processors and applications. This delay spans from the initiation of the application interface to the activity-start point and mainly originates from the different hardware components accessed at the beginning of an AI application, such as the camera, keyboard, speaker, and microphone. Besides, different processor delegations (e.g., GPU and NNAPI) are also performed during this period.

**Highlights:**_Non-vision applications cannot be generalized for latency and energy like vision-based ones. GPU processing is not supported by non-vision applications, which should be explored widely. The initiation delay (i.e., the delay between the activity trigger and start point) varies with AI models, processing sources, and applications, and is caused by mobile AI applications accessing different hardware components._

### _DNN structures and their inference latency and energy_

DNN structures define the way inference activities work in a mobile AI application. The behavior of DNN structures varies across different kinds of applications as well, e.g., vision and non-vision AI. For instance, a smaller DNN structure for vision applications can incur higher latency and energy than a larger non-vision DNN structure. Inference latency and energy consumption per cycle are shown in Fig. 5 for the DNN models with single-thread CPU processing. We observe that model 5 takes longer inference time and more energy due to its larger structure compared with the other vision-based AI models. The longest latency and highest energy are evident in model 6 (a complex structure comprising 2541 layers).

**Highlights:**_DNN structures influence inference latency and energy significantly, but the relationship is not linear at all. Generally, larger DNN structures are responsible for higher latency and energy in a mobile AI application._

### _DNN model size, memory usage, and inference energy_

DNN model size (i.e., the storage space occupied by the model) impacts memory usage and energy consumption during inference. From our experiment, we observe that model 7 has the lowest model size, hence causing the lowest memory and energy consumption, whereas model 6 has the highest size, memory, and energy consumption. This is more evident from Fig. 6, which shows a comparison among all the models' sizes, inference memory, and energy consumption for single-thread CPU processing.

**Highlights:**_Lower memory usage by mobile AI applications leaves computation resources and energy available for other mobile device activities. From this perspective, quantized and smaller DNN models are best suited for mobile AI. The larger the storage occupied by a DNN model, the higher the memory and energy consumption._
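The float-versus-quantized comparisons above rely on quantized variants of the same networks. One common way to obtain such variants is TensorFlow Lite post-training quantization, sketched below; the placeholder Keras network and file names are assumptions and not the exact models of Table II.

```python
# Sketch: shrink a float model with TensorFlow Lite post-training quantization,
# one way to obtain the "quantized" variants compared in this study.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # placeholder network

float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite = float_converter.convert()

quant_converter = tf.lite.TFLiteConverter.from_keras_model(model)
quant_converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
quant_tflite = quant_converter.convert()

print(f"float model:     {len(float_tflite) / 1e6:.1f} MB")
print(f"quantized model: {len(quant_tflite) / 1e6:.1f} MB")

with open("model_quant.tflite", "wb") as f:
    f.write(quant_tflite)
```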
Fig. 3: End-to-end mean latency and energy consumption per cycle of vision-based models 1–5 for (a) single- and (b) multi-thread CPU, (c) GPU, and (d) NNAPI, and non-vision-based (e) model 6 and (f) model 7.

Fig. 4: Power consumption pattern for (a) classification, (b) NLP, and (c) speech recognition.

Fig. 5: Inference latency and energy consumption per cycle by DNN models.

Fig. 6: Comparison of DNN model size, inference memory usage, and inference energy consumption.

### _Performance evaluation of EPAM_

We develop and train the Gaussian process regression-based predictive energy model, EPAM, with each device's SoC, CPU frequency, number of cores, memory size, processing source, number of threads, application type, DNN model, DNN structure, memory usage, processing latency, and inference latency from the large experimental dataset collected in this research to predict the total energy consumption per application cycle (data processing and inference for each input). We use an empty basis function and the ARD squared exponential kernel function for hyper-parameter optimization. We use devices 1, 2, and 4 for training and validation, and device 3 for one-step-ahead prediction testing. Due to page limitations, we show only a few prediction results in Fig. 7. We observe that EPAM's energy prediction per cycle is highly accurate for all the models. The overall root mean squared error (RMSE) is \(0.075\) (\(3.06\%\)), and the marginal log-likelihood value is \(-1.449\times 10^{2}\), which show that the trained model is a good fit for the prediction. The prediction latency depends on the machine used to run the model.

**Highlights:**_EPAM further helps developers and users to perceive the performance of individual AI applications in terms of energy with high accuracy - which is the primary motivation of this research work. The larger and more diverse the training dataset, the higher the prediction accuracy._

## VI Conclusion

In this paper, we presented a comprehensive study of mobile AI applications with different processing sources and AI models. Overcoming the challenges of measurement, we conducted experiments to assess the performance of different AI models, processing sources, and devices. Our measurement work shows that latency, energy consumption, and memory usage vary based on DNN models and processing sources. Mobile AI systems' latency and energy consumption improve substantially when quantized models are used instead of floating-point models. Another important finding is that the storage space occupied by DNN models influences the memory and energy consumed during inference almost linearly. Additionally, non-vision applications follow a different trend of latency and energy consumption than vision-based AI since their input processing techniques differ from vision applications. Every AI application has an initiation delay caused by accessing various hardware components of mobile devices, which varies for different models and configurations. Moreover, the latency, memory, AI model, and device configuration impact the total energy consumption for a complete application cycle, albeit with different correlations. These non-linear correlations motivated our proposed predictive energy model, EPAM, based on non-parametric Gaussian process regression. Finally, we trained and validated EPAM with the vast dataset obtained from our experiment. The evaluation of EPAM shows high accuracy with an overall RMSE of 0.075 (\(3.06\%\)). Developers can use EPAM to predict the energy consumption of their mobile AI applications without measuring the energy externally, improving the comprehensive user experience.
To summarize, this novel predictive energy model, EPAM, will help the mobile AI research community design energy-efficient applications by considering all the control factors and parameters that can reduce energy requirements, enabling better service for smartphones, wearable devices, and autonomous vehicles.
2308.04703
Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay
Data visualizations and narratives are often integrated to convey data stories effectively. Among various data storytelling formats, data videos have been garnering increasing attention. These videos provide an intuitive interpretation of data charts while vividly articulating the underlying data insights. However, the production of data videos demands a diverse set of professional skills and considerable manual labor, including understanding narratives, linking visual elements with narration segments, designing and crafting animations, recording audio narrations, and synchronizing audio with visual animations. To simplify this process, our paper introduces a novel method, referred to as Data Player, capable of automatically generating dynamic data videos with narration-animation interplay. This approach lowers the technical barriers associated with creating data videos rich in narration. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. Specifically, it first extracts data into tables from the visualizations. Subsequently, it utilizes large language models to form semantic connections between text and visuals. Finally, Data Player encodes animation design knowledge as computational low-level constraints, allowing for the recommendation of suitable animation presets that align with the audio narration produced by text-to-speech technologies. We assessed Data Player's efficacy through an example gallery, a user study, and expert interviews. The evaluation results demonstrated that Data Player can generate high-quality data videos that are comparable to human-composed ones.
Leixian Shen, Yizhi Zhang, Haidong Zhang, Yun Wang
2023-08-09T04:49:14Z
http://arxiv.org/abs/2308.04703v1
# Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay

###### Abstract

Data visualizations and narratives are often integrated to convey data stories effectively. Among various data storytelling formats, data videos have been garnering increasing attention. These videos provide an intuitive interpretation of data charts while vividly articulating the underlying data insights. However, the production of data videos demands a diverse set of professional skills and considerable manual labor, including understanding narratives, linking visual elements with narration segments, designing and crafting animations, recording audio narrations, and synchronizing audio with visual animations. To simplify this process, our paper introduces a novel method, referred to as Data Player, capable of automatically generating dynamic data videos with narration-animation interplay. This approach lowers the technical barriers associated with creating data videos rich in narration. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. Specifically, it first extracts data into tables from the visualizations. Subsequently, it utilizes large language models to form semantic connections between text and visuals. Finally, Data Player encodes animation design knowledge as computational low-level constraints, allowing for the recommendation of suitable animation presets that align with the audio narration produced by text-to-speech technologies. We assessed Data Player's efficacy through an example gallery, a user study, and expert interviews. The evaluation results demonstrated that Data Player can generate high-quality data videos that are comparable to human-composed ones.

Visualization, Narration-animation interplay, Data video, Human-AI collaboration

## 1 Introduction

Visualization and corresponding descriptions often work together for data storytelling. Combining data visualization and narratives, data videos have become popular among practitioners as a visual storytelling form in fields such as journalism, marketing, and education [5, 43]. Over more than a decade of research, data videos have demonstrated their ability to deliver condensed information, increase audience engagement, and support comprehension and memorization of data facts in data communication [6, 12, 52]. In a data video, rich information is usually packed compactly and delivered through the coordination of audio narrations and animated graphics. As indicated by the dual-coding theory, human cognition can process verbal and visual objects simultaneously, and both of them play an essential role [15]. However, creating such a data video requires a variety of skills in multiple areas, including understanding narratives, visual animation design, narration scripting, and time alignment of audio and animations, which are usually difficult for novices to perform without instruction. To help users overcome these barriers, various technologies have been developed for different aspects of data video creation. For example, the visualization community has developed animation-specific technologies for data visualization to facilitate the animation creation process, such as declarative specification grammars [19, 68, 24], authoring systems [7, 18, 57], and automated algorithms [51, 59, 25]. However, they neglect the importance of narration-animation interplay. In a recent study, Cheng et al. [12] investigated the role of narrations and animations in data videos.
They found that users usually have static visual designs and text descriptions at hand for storytelling, and narration-animation interplay can effectively enhance liveliness compared to static forms. There are also a set of works that link static text and visualization together, using text-visual interplay to enhance readability in the form of interactive documents [29, 36], visualization annotations [27], etc. However, they do not fully exploit the potential of data animation to model how the data changes over time or space. In conclusion, existing authoring tools lack features for integrating narration with data animations in data videos for engaging storytelling. To address this gap, this paper targets to design an intuitive and powerful approach that enables the automatic creation of informative data videos with narration-animation interplay from static visualizations and accompanying descriptive text. To achieve this, we first conducted a formative study to understand users' process of crafting data videos and explore the key design considerations of data video creation in their prior experience. From the study, we derived a set of high-level design constraints. The interviewees also expressed the need for support in understanding narratives, linking text and visuals, generating animations, and aligning the timeline of audio and animations. In response to the feedback, we take the first step towards automating the generation of data videos with narration-animation interplay and design Data Player, which automates the four stages above, lowering the technical barriers of creating data videos, especially for novices. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. We first extract data from input visualizations so that we convert the text-visual linking challenge into a matching problem between data table rows and narration segments. Subsequently, Data Player leverages the powerful natural language understanding ability of Large Language Models (LLMs) to associate narrative words with related data table rows, thereby establishing links between textual and visual elements. Data Player further produces a sequence of animation by modeling animation design as a Constraint-Satisfaction Problem (CSP). In detail, text-to-speech technologies are adopted to automatically generate audio narrations with timestamps of each word. Data Player then encodes design knowledge learned from the formative study and existing literature into computational low-level constraints, which are further fed into the constraint solver to generate suitable animation sequence with a pre-defined animation library. Finally, the audio and animations are rendered into a data video with narration-animation interplay. To evaluate the liveliness of data videos produced by Data Player, we curated an example gallery and conducted a user study and an expert interview. The results showed that the automatic-generated data videos are comparable to the human-composed ones, suggesting that Data Player can effectively produce data videos with narration-animation interplay, conveying the intended information while engaging the audience. The main contributions of this paper are as follows: * A formative study to understand users' processes and key design considerations of data video creation, leading to a set of high-level design constraints for the automatic coordination of narration and animation in data videos. 
* Data Player, an innovative approach that takes the first step towards the automatic generation of vivid data videos with narration-animation interplay from a static visualization and its description. Data Player leverages LLMs to understand narratives and establish text-visual links. It further uses constraint programming to recommend suitable animation sequences. * An example gallery, a user study, and an expert interview to evaluate the effectiveness of Data Player. The results demonstrated that Data Player can automatically produce human-comparable data videos with narration-animation interplay. ## 2 Related Work Data Player draws upon prior efforts in data video creation, visualization-text interplay, and constraint-based generation approach. ### _Data Videos Creation_ Data video is one of creative data presentation genres [43], which uses animations and audio in addition to static data to provide additional channels of communication for information transformation [6]. Prior studies have contributed insights into the comprehension and creation of animated data visualizations. For example, Tversky et al. [58] first suggested two design principles, i.e., Congruence and Apprehension, which was followed by freer and Robertson [20], who proposed ten specific guidelines that focus on animated transitions in statistical data visualizations based on the two initial overarching principles. Amini et al. [5] conducted a systematic analysis of 50 data videos, identifying the most commonly used visualization types and attention cues, as well as high-level narrative structures. Their findings confirmed that animation in data videos has a positive effect on viewer engagement [6]. Thompson et al. [56] analyzed design primitives of animated data-driven graphics from four perspectives: object, graphics, data, and timing. Further examining the animated visual narratives in data videos, Shi et al. [52] developed a design space for motion design. Furthermore, numerous authoring and programming tools have been created and are being continually developed to facilitate the production of animations and data videos. These tools are intended to enable creators to bring their ideas to life in a more efficient and effective manner [10]. General video creation tools (e.g., Adobe After Effects and Premier) provide fine-grained control of videos, but they require a high level of expertise and manual effort and are not tailor-made for data videos. In the visualization community, animation-specific grammars (e.g., Canis [19], Gemini [24], and Animated Vega-Lite [68]) have been developed to provide high-level specification languages for implementing keyframe-based animated transitions in data graphics. However, it requires programming skills and can be laborious for the authors. To ease the process, interactive user interfaces have emerged to enable novices to create their own data videos. DataClips [7] allows novices to create data videos by selecting and concatenating pre-defined animations from a comprehensive library, which includes clips that are suitable for visualizing different types of data. Based on the library, Kinethatarts [28] enhances users' emotional engagement by improving the storytelling aspect of data presentation without compromising users' comprehension of the data. CAST [18] and Data Animator [57] support recommendations for auto-completion so that users only need to provide keyframes. Researchers have also developed automatic approaches to further reduce time-consuming manual operations. 
Gemini2 [25] improves Gemini [24] by providing keyframe suggestions to help users create visually appealing animations. InfoMotion [59] enables the recommendation of animations of infographics based on their graphical properties and information structures. AutoClips [51] allows users to easily input a sequence of data facts, which are then automatically transformed into a polished data video. However, these works have neglected an important channel, narration, when applying these techniques to data videos. Cheng _et al._[12] recently investigated the role and interplay of narrations and animations and identified close links between the two perspectives. Following the study, our work serves as the first step towards the automatic generation of data videos with narration-animation interplay. ### _Visualization-Text Interplay_ The interplay between visualization and text plays an important role in data storytelling [46]. Recent studies have shown that the separation of text and charts may cause a split-attention effect and introduce cognitive burden for users [29]. By contrast, linking visualization and text can promote the communication of data facts [53], support the interpretation of machine learning models [21], and enhance readers' comprehension and engagement [67, 29]. Given these benefits, researchers have actively integrated visualizations and text for interactive purposes in data presentations. For example, Vis-Annotator [27] automatically presents annotated charts according to the text description. Kong et al. [26] proposed a crowdsourcing method to collect high-quality annotations for the references of charts and text. Subsequently, automatic techniques are proposed to link text and charts with rule-based algorithms [38]. Latif et al. [29] further proposed a framework to construct references between text and charts in data documents by explicitly declaring the links. The study from Kim et al. [23] found that text-table linking in documents can support readers to pursue content better with highlighted cells. And the interactive data articles enhanced with widgets such as "stepper" and "scroller" also enable the control for users to be more autonomous during their reading [37, 67]. In addition, CrossData [11] leverages text-data links to interactively author data documents. To further ease the process of creating text-chart connections and support chart highlighting, the following studies have developed programming language [16], authoring tools [54], and interactive approaches [9, 36]. Different from static charts and documents, the dynamic changes with time progressing in videos grant it its own narrative structures [43]. Hence, the visualization-text linking in static data stories needs to be extended to narration-animation interplay in data videos [12]. To further unleash the power of integrating oral narration and visual animation in data videos, our work steps towards the automatic transformation of static text and visualizations into engaging data videos with narration-animation interplay. ### Constraint-Based Generation Approach Constraint-based approaches have been widely applied to generate visualizations [39, 49], interface alternatives [65, 55], and short videos [13, 14]. For example, URL2Video [14] captures quality materials and design styles extracted from a web page and converts them into a short video given temporal and visual constraints. 
While not taking narratives into consideration and are not data-oriented, it inspires us to extract design elements from static visualizations and arrange them in the animation timeline based on pre-defined constraints. However, the scenario of data video introduces new challenges for the design of narration-animation interplay [12]. On the other hand, Moritz et al. [39] demonstrated that theoretical design knowledge can be expressed in a concrete, extensible, and testable form by modeling them as a collection of constraints. Therefore, we adopt a constraint-based method to model our derived design knowledge about narration-animation interplay and incorporate them into an automatic creation workflow. The resulting approach can recommend data video designs satisfying different aspects of guidelines and further facilitate designers' crafting. ## 3 Formative Study The goals of the formative study are to (1) understand practitioners' process of video crafting, and (2) explore the key design considerations of narration-animation interplay in their previous design experiences. ### Participants To achieve the above goals, we recruited 10 participants from both academia and industry with diverse backgrounds, including professional designers of video, motion graphics, animation, film post-producer, and visualization researchers. They have all acquired professional training or degrees, including three Ph.D.s, five M.S.s, and two B.S.s. All of them have experience in data video crafting through professional tools (e.g., Adobe After Effects and Premiere) or other simplified video creation tools (e.g., Microsoft PowerPoint and iMovie), with a self-reporting level of familiarity with this area (\(M=4.12\), \(SD=0.83\), range = 3-5 with \(1=\) "No Experience" and \(5=\) "Expert"). They were aged 24-32 years (5 females and 5 males, \(M=27\), \(SD=3.10\)). We recruited them through online advertising and word-of-mouth. ### Study Setup The study procedure consists of two sessions with retrospective analysis and semi-structured interviews, respectively. First, we conducted a retrospective analysis, which has been proven to be an effective method for reconstructing participants' behaviors, rationales, and emotions for previous events [42]. Participants were asked to provide and show 2-3 examples from their prior data storytelling works. To promote reflection, they were required to demonstrate the creation process and explain the rationale for their design decisions. Finally, 25 videos were presented, covering 8 common chart types (e.g., maps, bars, lines, etc.). After the retrospective analysis, we held one-on-one semi-structured interviews with the participants. The questions focused on concrete examples of narration-animation interplay in the works shown, allowing participants to recall more details of their designs and provide more useful information. We also explored the participants' views on design principles of narration-animation interplay by asking about the role of narratives and motions in their projects and the relationship between them. Finally, participants shared their difficulties in crafting data videos, particularly in aligning narrations and animations, providing insights for automatic workflows. The entire process lasted about 90 minutes and was recorded for subsequent analysis. The participants were compensated $15 for their time. 
### Data Video Creation Process Participants normally prepare materials including visualization vector graphics and narrative scripts before crafting data videos in professional software such as Adobe After Effects, Cinema 4D, and Blender. During this preparation phase, they focus on the aesthetic design of their graphics and descriptive narrative writing, with little attention to the dynamic interplay between them and the motion effects of output videos. In some cases, graphics and texts may also be given to the creator by other collaborators such as graphic participants or screenwriters. After that, our findings identified four distinct design stages in terms of video crafting, which is of interest to our research. **Stage 1: Refining Narration Text.** At this stage, participants try to collect the text descriptions for the charts and compose the narration text of the video to produce audio narration. If the text description is not created by them, they need to understand the intents that the text authors or storytellers would like to convey to the audience. In this stage, data video creators usually decompose the messages in the text narration and formulate the messages to convey to prepare for their further design of animation in the videos. **Stage 2: Building Visual References.** Based on the formulated messages, participants match them to the visual references in the visualizations. Visual references are graphic elements in charts, such as the specific line in a line chart or the related rectangle in a bar chart. Participants build visual references associated with the messages in the narratives. Sometimes they may group them if there are some relationships between the visual elements. For example, one interviewee told us that he usually grouped two comparative data elements so that he designs animations for them in the subsequent motion design phase. **Stage 3: Animation Design with Semantic Metaphors.** After preparing the text narration and visualizations, participants design animations with semantic metaphors in line with the intent of storytelling. For example, if the storyteller wants to express a rising trend for a line chart, participants would make a dynamic growth curve from the lowest point to the highest point. When emphasizing the visual elements, participants may modify transparency, saturation, contours, or other visual properties as animation cues to highlight information. Additionally, three types of animation effects are most commonly adopted: entering, exiting, and emphasizing. **Stage 4: Coordination of Audio Narration and Animations.** Finally, participants align the created series of animations with the audio created by the voiceover artist on the timeline. They often take the approach of manually adjusting the keyframes, setting the start frame and the end frame of the animation at the corresponding time points in the narrative audio. Most interviewees (8/10) reported that this process was very time-consuming and laborious because they needed to listen to the audio and watch the animation repeatedly to check for out-sync. ### Design Constraints Through retrospective analysis and interviews, we found that participants were concerned about the echoes of narratives and animations in four dimensions: visual structure, data facts, semantics, and temporality. Further, we derived a set of design constraints from interviewees' considerations collected from the study, as well as existing literature [12, 52, 58, 6, 20]. 
First, the participants emphasized the importance of visual constraints in creating an organized and logical presentation. They suggested grouping relevant visual elements associated with the same data fact, establishing a sense of hierarchy and facilitating audience comprehension. Additionally, they recommended introducing unrelated background elements, such as titles and axes, at the beginning of the video to provide context. Maintaining consistent animation effects within groups of similar elements further enhances clarity. The participants mentioned the importance of data interplay in enhancing audience understanding. They advocated for echoing narrations and animations by reiterating conveyed information, as well as selecting and animating visual elements relevant to the data facts in the narrative script. This approach helps emphasize key points and maintain the audience's attention. Moreover, incorporating animations into data facts, rather than other narratives, can improve comprehension. The participants also highlighted the role of semantic rules in conveying the intended message accurately, which refers to the implicit interactions that arise from the meanings and intentions of visual elements and their narrations in voiceover. They underscored the need to align the semantic intents of narrations and animations to avoid confusion. Furthermore, they suggested supplementing missing narrative information with annotations, such as labels, explanations, and context, to provide a more comprehensive representation of data. Finally, temporal interplay emerged as another critical aspect. The participants stressed the importance of coordinating animation sequences with narrative structures to preserve the meaning of the information, synchronizing narrations and animations for consistency, and adapting the timing of animations to match narrations, ensuring that the information remains manageable for the audience. We regard these design constraints as high-level because they cannot be directly translated into actionable computational programs, but lay the foundation for the subsequent formalization of low-level design constraints, which will be discussed in detail in Section 4.4.2. ## 4 Data Player In this section, we first introduce the conceptual model of data video with a set of design variables. Then we give an overview of Data Player, and further discuss the two important modules: text-visual linking and animation sequence generation. ### Data Video Modeling Data videos consist of three design elements: visualizations, narrations, and animations. \[\mathit{video}:=(\mathit{visualization},\mathit{narration},\mathit{animation}) \tag{1}\] #### 4.1.1 Visual Elements Overall, visualizations are composed of a set of visual elements or element groups in a given static graphic. \[\mathit{visualization}:=\mathit{visual\ element}|\mathit{groups} \tag{2}\] \[\mathit{group}:=\mathit{visual\ elements} \tag{3}\] The visual elements can be marks, axis, legends, annotations, etc. Each is defined as a tuple: \[\mathit{visual\ element}:=(\mathit{id},\mathit{type},\mathit{data}) \tag{4}\] where \(\mathit{id}\) is the unique identifier, \(\mathit{type}\) indicates the graphic shapes and visualization structure of the element, and \(\mathit{data}\) is embedded in each visual element like dSVG in Canis [19]. For each element, \(\mathit{data}\) can be null and can also correspond to multiple data items. 
#### 4.1.2 Narration Entities Static narration text will be converted into audio speech, and each entity will be an audio unit with time; thus, a conceptual model of the narration text is: \[\mathit{narration}:=\mathit{narration\ entities} \tag{5}\] \[\mathit{narration\ entity}:=(\mathit{audio},\mathit{time}) \tag{6}\] \[\mathit{time}:=(\mathit{start},\mathit{duration}) \tag{7}\] where \(\mathit{start}\) refers to the start timestamp of an audio unit and \(\mathit{duration}\) refers to the time span that an audio unit lasts. #### 4.1.3 Animation Elements The animation sequence applied in data videos is a series of animation \(\mathit{units}\). The animation is defined as: \[\mathit{animation}:=\mathit{animation\ units} \tag{8}\] \[\mathit{animation\ unit}:=(\mathit{visual\ elements},\mathit{time},\mathit{ action},\mathit{effect}) \tag{9}\] where each animation unit targets a visual element or a group of elements and declares a start time and a duration. Additionally, the \(\mathit{action}\) specifies which kind of animation _effect_ can be applied. ### Overview Prior research has pinpointed specific features of narration-animation interplay in high-quality data videos [12]. Additionally, the formative study identified four common stages in the process of creating data videos, each of which requires considerable time and manual effort from users. We aim to automate the process of creating data videos with narration-animation interplay, making them more accessible and user-friendly for novice users. To this end, we design Data Player that consists of two modules: (1) _Understand input narration text and visualization, and semantically link them_. Narration text frequently captures the central messages of data stories, incorporating data facts and insights associated with the visualization. The module automatically parses the text to extract the data facts presented in the narration. Further, it establishes connections between narration segments and visual elements within the visualization, which can be further leveraged to create audio and animations in the data video. (2) _Recommend animation sequence and synchronize audio narration and visual animations_. Using visual cues corresponding to the words spoken in the narration is crucial in data video creation, and animations can make data more engaging and memorable for the viewer. To avoid any confusion or misleading the audience, the module automatically generates an appropriate animation sequence that serves the same purpose and intent as the storytelling, and synchronizes the audio narration with the visual animation, ensuring that the viewer receives the information clearly and coherently. We propose an automatic pipeline to guide the design of Data Player, as shown in Figure 2. First, a static visualization and corresponding narration text are inputted in the form of Scalable Vector Graphics (SVGs) and plain text (a), respectively, so that they can be decoupled into multiple visual elements and narration segments (b). Then, Text-To-Speech (TTS) techniques are used to generate audio voiceovers and return timestamps of each word, which also act as the timeline of the data video (c). Furthermore, the Large Language Model (LLM) is adopted to establish the semantic links between visual components (one or a group of graphic elements) and narrative entities (one or more words) based on the data facts to be told (d). 
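Before turning to the individual modules, the conceptual model of Section 4.1 can be read as a handful of record types. The sketch below restates Eqs. (1)-(9) as Python dataclasses; it is a paraphrase for illustration, not code from the Data Player implementation.

```python
# Sketch of the Section 4.1 data video model as plain record types.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VisualElement:                 # Eq. (4): (id, type, data)
    id: str
    type: str                        # e.g. "mark", "axis", "legend", "annotation"
    data: Optional[List[dict]] = None  # zero, one, or several linked data items

@dataclass
class Group:                         # Eq. (3): a group of visual elements
    elements: List[VisualElement] = field(default_factory=list)

@dataclass
class Time:                          # Eq. (7): (start, duration), in seconds
    start: float
    duration: float

@dataclass
class NarrationEntity:               # Eq. (6): an audio unit with timing
    audio: str                       # the spoken words of this unit
    time: Time

@dataclass
class AnimationUnit:                 # Eq. (9): what is animated, when, and how
    targets: List[VisualElement]
    time: Time
    action: str                      # "enter" | "exit" | "emphasize"
    effect: str                      # e.g. "fade", "grow", "zoom"

@dataclass
class DataVideo:                     # Eq. (1): (visualization, narration, animation)
    visualization: List[Group]
    narration: List[NarrationEntity]
    animation: List[AnimationUnit]
```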
To be specific, the linking module identifies the visual elements of the visualization inputs that can be animated, extracts data facts from them into tables, and links the table rows with semantic entities in the narration. After that, the animation generation module encodes collected design knowledge about narration-animation interplay into computational constraint programs and leverages the constraint solver to generate a suitable animation sequence with pre-designed animation press based on the established text-visual links (e). Moreover, the module seeks to automatically organize animation sets in alignment with the generated audio timeline. It makes temporal decisions to allocate using constraint-based programming. As a result, a sequence of audio-animation packs is specified, which are further rendered into the data video (f). ### _Text-Visual Linking_ To generate data videos with narration-animation interplay, it is crucial to understand the narration text and its relations with the visual elements. We propose an LLM-based approach, shown in Figure 3, to generate these semantic links for animation. By extracting visual candidates that can be semantically linked in the visualization, LLM is then used to match these candidates with relevant narration segments. We illustrate this below with an example of a 15-day PM2.5 value in Beijing. #### 4.3.1 Data Extraction As described in Section 4.1.1, each visual element has a _data_ property that contains semantic information. To effectively organize and utilize this information, we transfer the semantics into data tables and group elements based on the data items they contain. Visual candidates in the visualization can be divided into two categories: basic graphical representations (e.g., marks, axes, legends, etc.) and annotations [40] (Figure 3-a). First, as demonstrated in Figure 3-b, our method consolides the data information in the SVG into a basic data table, which includes all values represented by graphical marks and axes. We also maintain a map that correlates each data table row to the corresponding visual elements. For example, the first row of data (Day: 1st, PM2.5 Value: 54.8) corresponds to both a bar mark (id is bar-0) and an x-label (id is x-label-0). While the data table captures most of the information present in the static chart, there may be missing information, particularly in regard to annotations, which play an important role in information communication. Referring to the annotation design space proposed by Ren et al. [40], we divide annotations into text, graphics (including shapes and images), and their combinations. These elements contain valuable semantic information, such as the text ("hazardous") and the red rule annotation in Figure 3-a, both of which express information about the hazardous threshold (300). Therefore, we extract a separate data table and group the corresponding elements. Overall, each input data visualization will derive one data table for the chart marks and optionally one or more data tables for the annotations. #### 4.3.2 Linking by an LLM We have extracted semantics from static visualization graphics, as well as the mapping relationship between semantics and visual elements. To link narration segments (one or more words) and visual components (one or a group of graphical elements), the next step is to detect the occurrences of similar entities of these semantics in narration text. 
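To illustrate the extraction step described above, the sketch below pulls per-element data out of an SVG into table rows together with a row-to-element map. It assumes the chart embeds its bound data in a JSON attribute on each element (in the spirit of the dSVG format mentioned earlier); the attribute name, field names, and example file are illustrative assumptions.

```python
# Sketch: pull embedded data out of an SVG into a table plus a row -> element map.
# The attribute names (id, data-datum) are assumptions about the input format.
import json
import xml.etree.ElementTree as ET

def extract_table(svg_path):
    tree = ET.parse(svg_path)
    rows, row_to_elements = [], {}
    for elem in tree.iter():
        datum = elem.get("data-datum")      # per-element JSON payload, if any
        if datum is None:
            continue
        record = json.loads(datum)          # e.g. {"Day": "1st", "PM2.5 Value": 54.8}
        # Elements sharing the same datum (bar mark + x-label) map to the same row.
        key = tuple(sorted(record.items()))
        if key not in row_to_elements:
            row_to_elements[key] = []
            rows.append(record)
        row_to_elements[key].append(elem.get("id"))
    return rows, row_to_elements

rows, mapping = extract_table("pm25_bar_chart.svg")
for i, row in enumerate(rows):
    print(f"R{i}", row)
```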
We further model this linking problem as a matching problem, semantically matching narration segments (Figure 3-c) with data table rows (Figure 3-b), and then mapping the data table rows to visual elements. Specifically, we leverage the powerful natural language understanding ability of LLMs (we use the OpenAI gpt-3.5-turbo model in our work) to link the two perspectives, as shown in Figure 3-d. The prompt engineering aims to ask the LLM to accept data tables and narration words as input, and output semantic links as "(_narration segments_)[table \(x\): \(R_{i},...\)]", where \(x\) is the table index and \(R_{i}\) is the data table row index (see Figure 3-b). Inspired by existing successful prompt engineering experiences [1, 34], the prompt includes few-shot pre-defined examples for a better illustration of our task. The LLM output is shown in Figure 3-e. Finally, having links between table rows and narration segments in hand, we further obtain text-visual links with the help of the mapping relationship between table rows and visual elements. In addition, we also fine-tune and deduplicate the text-visual links to avoid unnecessary animations in the subsequent steps. Our approach has several advantages over existing works on establishing references between text and charts [27, 8, 29]. First, we formulate the problem of text-visual linking into a problem of matching data table rows and narration words, which allows us to capture the semantic relationships between text and visuals more effectively. For instance, interpreting the phrase "the following day" requires an understanding of the temporal context of the data table, which is difficult to achieve with traditional rule-based linking methods. Second, we leverage LLMs, state-of-the-art language models, to perform similarity matching between data table rows and narration words with high accuracy due to larger knowledge support and better natural language understanding ability compared to prior NLP packages. However, traditional methods outperform the LLM-based method in terms of real-time performance.
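A compact sketch of how such a linking request could be issued and parsed is given below. The prompt wording, the few-shot example, and the client usage (openai>=1.0) are assumptions; only the "(narration segment)[table x: Ri, ...]" output convention follows the format described above.

```python
# Sketch: ask an LLM to link narration segments to data table rows and parse
# the "(segment)[table x: Ri, ...]" convention; the prompt text is illustrative.
import re
from openai import OpenAI  # openai>=1.0 style client

client = OpenAI()

SYSTEM = (
    "You link narration to chart data. Given data tables (rows R0, R1, ...) and a "
    "narration, wrap each narration segment that refers to table rows as "
    "(segment)[table x: Ri, Rj]. Leave all other words unchanged."
)
FEW_SHOT_USER = (
    "Table 0:\nR0 Day=1st, PM2.5=54.8\nR1 Day=2nd, PM2.5=160.2\n"
    "Narration: PM2.5 rose sharply on the second day."
)
FEW_SHOT_ANSWER = "PM2.5 (rose sharply on the second day)[table 0: R1]."

def link(tables_text, narration):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": FEW_SHOT_USER},
            {"role": "assistant", "content": FEW_SHOT_ANSWER},
            {"role": "user", "content": f"{tables_text}\nNarration: {narration}"},
        ],
        temperature=0,
    )
    annotated = resp.choices[0].message.content
    # Each match yields (narration segment, table index, row list).
    return re.findall(r"\(([^)]*)\)\[table (\d+): ([^\]]*)\]", annotated)
```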
Fig. 2: The pipeline of automatic generation of data videos with narration-animation interplay.

### _Animation Sequence Generation_

In this subsection, we introduce the animation recommendation module, which encodes collected high-level design knowledge into low-level constraints to automatically generate a suitable animation sequence.

#### 4.4.1 Animation Modeling

According to the formative study and existing literature [12], designers concern themselves with three types of semantics when implementing appropriate animation effects. Specifically, they distinguish the semantic beginning and end of the narrative description of one data fact, as well as the emphasis intent in it and the information that complements the chart. Therefore, based on the definition in Section 4.1.3, we specify three animation actions: "enter" animations are applied when an object appears on the canvas and "exit" animations are applied when an object disappears from the canvas, while "emphasis" animations are applied to draw attention to an object that is already on the canvas. Each action includes several commonly seen changes in visual channels (e.g., "fade", "grow", "zoom", etc.). The timing and duration of the animations can also be adjusted to suit specific needs.

#### 4.4.2 Constraint Encoding

Prior work has demonstrated methods to generate designs from a set of design constraints [14, 39, 55], which motivates us to formulate narration-animation interplay design as a Constraint-Satisfaction Problem (CSP). In detail, we model the design elements discussed in Section 4.1 (i.e., visualization, narration, and animation) with encoded variables. Each variable has a domain. For example, animation actions include "enter", "exit", and "emphasize" (Section 4.4.1), and each action corresponds to a set of animation effects. The variable domains of visual elements and narration entities are derived from the generated text-visual links (Section 4.3.2). To generate the animation sequence, Data Player assigns concrete values to specific variables and leverages the CSP solver to explore numerous combination alternatives in the large search space. Specifically, we encode high-level design knowledge summarized from the formative study and existing literature [20, 52, 58, 6, 52] as computational low-level constraints. All constraints are formalized as equations and fed into the Z3 [17] CSP solver. The solver outputs suitable animations. For instance, the animation sequence specified for the example in Figure 3 is shown in Figure 4 (top). Ultimately, the audio narration and visual animations are rendered into a dynamic data video (.mp4 format) with narration-animation interplay with the FFmpeg multimedia framework [2]. The low-level constraint encoding is detailed as follows:

First, to ensure basic visual design quality, we use the established text-visual links to generate visual structure and data facts constraints, matching textual and visual entities and grouping visual elements. Specifically, we design linking constraints that allow only the visual elements that are linked to specific narration segments to be animated. The integrity constraints ensure that all elements involved in text-visual links need to be animated. We design group constraints to group the visual elements that are related to data facts in the text-visual links. Meanwhile, we design the association constraints to ensure that if one element is linked to narration, itself and other elements in the same data group can be animated. In addition, our consistency constraints specify that elements from different groups that are visually consistent should apply the same animation.

Second, we encode sets of temporal constraints to time-align animations and narration. Each group of constraints specifies how different elements of the data video should be timed and arranged on the timeline according to their type, effect, and relation to the narration. First, narration text inherently contains a chronological relationship between words. We further use Microsoft Azure Text-to-Speech services [4] to automatically generate audio narration and obtain the timestamps of each word in the audio, which also acts as the timeline along which animation effects applied to the visual elements are arranged. On this basis, we encode a duration constraint to determine that the animation effects are triggered by the onset of the first word in each linked narration segment, and last for the duration of the corresponding text span in the audio. The last frame of the previous animation will be retained for the time period when no animation is applied. The conflict constraints enforce the inherent logical order of animation actions. For example, visual elements can only be emphasized or disappear after they appear, and elements cannot be emphasized after they disappear. And on-screen constraints determine when an element appears or disappears from the canvas based on the "_on_screen_" variable assigned to each element. If "_on_screen_" is true at time \(t\), then the corresponding element is visible at that time. Otherwise, it is hidden. By assigning different values of "_on_screen_" to each element at different timestamps, we can create a table that shows which elements are on the canvas at any given moment. The table can help us control the animation actions to avoid overlapping or conflicting movements. For instance, visual elements that have an enter animation applied will not appear on the canvas until the animation is triggered. Elements that have an exit animation applied will disappear from the canvas after the animation. Elements that do not have any animation applied will appear on the canvas by default. In addition, a set of order constraints defines an optional logical sequence of elements such as background, title, axes, and data items, and the synchronization constraints ensure that elements in the same data group activate together.

Third, different animations can produce different effects and serve different purposes. To align the semantic intents between the linked narration segments and animated visual elements, we encode a variety of constraints to assign appropriate animations from the pre-defined library to visual elements based on the data facts being presented, the visual structure, and the desired audience engagement. We also specify constraints on animated annotations to avoid cluttering the canvas. Furthermore, we define a series of implicit mappings with priorities between the involved visualization structures and the appropriate animation combinations based on long-term practical experience. These mappings are effective for defining the animations of the elements within a group. For instance, in a pie chart, the sector and its corresponding legend elements (e.g., symbol and label) are usually bound into a group. So we define a new animation called "pie-wheel-and-legend-fly-in", which means that the pie chart's sector will wheel clockwise and the legend-related elements will fly in at the same time, as shown in Figure 4 (middle). As a result, we can apply only one animation to multiple elements, avoiding specifying animations for each element individually. On this basis, we define an objective function to minimize the number of animations used: \(\min\sum_{i=1}^{m}A_{i}\), where \(A_{i}\) is the number of animations applied to the \(i\)-th text-visual link and \(m\) is the number of text-visual links. This function ensures that the module uses our predefined animation combinations as much as possible to maintain narrative coherence.

Figure 3: A walkthrough example illustrating the text-visual linking workflow. Data information behind the visualization (a) is extracted into data tables (b). The LLM with appropriate prompts (d) accepts the narration text (c) and data tables (b) as input and outputs semantic links between data table rows and narration segments (e).
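To give a flavor of how such rules look once encoded, the fragment below expresses a duration constraint and a conflict constraint with the Z3 Python bindings. The two-animation setup, variable names, and timestamps are illustrative; the actual encoding in Data Player covers many more variables and constraint families.

```python
# Sketch: a duration constraint and an enter-before-emphasize conflict constraint
# expressed with the Z3 Python bindings; names and values are illustrative.
from z3 import Real, Solver, sat

# Word timestamps from the TTS service (seconds) for two linked narration segments.
seg1_onset, seg1_span = 0.8, 1.2   # e.g. "PM2.5 value kept rising"
seg2_onset, seg2_span = 3.5, 0.9   # e.g. "reaching a hazardous level"

enter_start, enter_dur = Real("enter_start"), Real("enter_dur")
emph_start, emph_dur = Real("emph_start"), Real("emph_dur")

s = Solver()
# Duration constraints: each animation starts at the onset of the first word of
# its narration segment and lasts for the segment's audio span.
s.add(enter_start == seg1_onset, enter_dur == seg1_span)
s.add(emph_start == seg2_onset, emph_dur == seg2_span)
# Conflict constraint: an element may only be emphasized after it has entered.
s.add(emph_start >= enter_start + enter_dur)

if s.check() == sat:
    m = s.model()
    print("enter at", m[enter_start], "for", m[enter_dur])
    print("emphasize at", m[emph_start], "for", m[emph_dur])
```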
#### 4.4.3 Animation Presets

A comprehensive library of animation effects for each action and mappings between semantics and effects can enable a wide range of designs. However, constructing such a large-scale library requires significant development costs. Thus, we utilize a small set of pre-designed animation effects based on the GSAP animation platform [3] for different actions as a technology probe and proof-of-concept to explore our main research concern [61]. For instance, "fade-in" and "wipe" for entrance, "zoom-out" and "fade-out" for exiting, and "shine" and "change-color" for emphasis, etc.
Additionally, depending on different chart types and element orientations, the configurations of one animation effect are adjusted, such as the "grow" effect for bar marks, "wipe" for lines, and "wheel" for circular marks. In the future, we will explore more vivid animations to enrich the library. ## 5 Evaluation To evaluate the liveliness of data videos generated by Data Player, we (1) built an example gallery from real-world data storyelling practices, (2) conducted a user study to compare automatic-generated videos with those created by novices and designers, and (3) performed expert interviews to further understand the difference between automatic-generated data videos and human-composed ones. ### _Example Gallery_ To demonstrate the expressiveness of the automatic approach, we generate a variety of example data videos based on a set of public design files. These examples cover a wide range of visualisation types (e.g., bar, pie, line, etc.) and narrative themes (e.g., PM2.5, tourism, stock price, tax payment, etc.). Figure 1 and Figure 4 show a subset of cases, more data video examples can be found in [https://datavideos.github.io/Data_Player/](https://datavideos.github.io/Data_Player/). ### _User Study_ In this study, we aim to understand the quality of data videos produced by Data Player, by comparing them with data videos produced by novices and designers. #### 5.2.1 Dataset To prepare data videos for the user study, we collected six sets of static charts and their descriptions from real-world data storytelling practices, including a line chart with stroked point markers that shows spending on outbound tourism of Chinese tourists (short as "Chinese Tourists", Figure 4 (bottom)), a pie chart that describes the future outlook of the tourism sector over the next year (short as "Tourism Sector", Figure 4 (middle)), an annotated bar chart that depicts the PM 2.5 value of Beijing observed in 15 days (short as "PM 2.5", Figure 4 (top)), a stacked bar chart that describes America's tax system (short as "Tax Payment", Figure 1), a diverging stacked bar chart for sentiments towards a set of eight questions with a 5-point Likert scale (short as "Likert Scale"), and a multi-series line chart that shows the stock prices for five high-tech companies (short as "Stock Prices"). #### 5.2.2 Procedure **Pre-experimental preparation.** We first invited four participants to manually create videos, including two novices and two professional designers. The novices were unfamiliar with video creation and only had experience using visualization to present data insights and using MS PowerPoint to create animations. The two designers' daily work involved using professional video creation tools like Adobe After Effects. The two designers also participated in our prior formative study (Section 3). Based on the static visualizations and descriptions collected from the internet (Section 5.2.1), each participant was asked to create three videos. In order to control the conditions of comparison, we implemented an interactive authoring tool [61] that allows participants to manually specify the links between narration segments and visual elements, apply appropriate animations, generate audio from text, Fig. 4: Automatic-generated data videos by Data Player. Each example includes a sequence of animation. The animation will be triggered when the audio narration reaches the corresponding segment. The snapshot images show the effect after the animation has been triggered. 
fine-tune the timeline, and preview the data video. They can ask us any questions they have during the manual creation process to ensure they can master the tools without compromising the quality of their creative data videos. Finally, we confirmed with them that the data videos they produced were representative of their level of design. For each piece of material, we also obtained an automatically generated version using Data Player. In total, we collected six novice-composed data videos, six designer-created data videos, and six automatic-generated videos. **During experiment.** We recruited another 10 participants (denoted as P1 to P10) who have data analysis and visualization needs in their daily work, including data analysts, ML researchers, software engineers, and visualization researchers. Each participant viewed six sets of videos, each set including three versions: novice-created, automatic-generated, and designer-composed. The sequence of the three videos and the six sets was shuffled. We also provided the textual descriptions that were used to create the data videos. The participants can watch the video repeatedly as they like and were asked to rate the overall quality of each video in terms of expressiveness and liveliness using a 5-point Likert scale. They were encouraged to leave reasons for the decisions and speak of any comments on any aspect of the videos. After all data videos have been viewed and evaluated, we asked participants to identify the automatic-generated one in each set. The experiment lasted about 60 minutes. #### 5.2.3 Results **Subjective Satisfaction:** The results of the participants' ratings are shown in Figure 5. From left to right are novice-produced, automatic-generated, and designer-created data videos. Overall, participants had a positive sentiment towards all the data videos. The majority of participants rated the videos as "agree" or "strongly agree", with mean values all greater than or equal to 3.0. In detail, the results showed that the videos automatically produced by Data Player were generally well-received by the participants, with a mean score of 4.4 (_SD_=0.70) for "PM2.5", 4.3 (_SD_=0.82) for "Chinese Tourists", 4.1 (_SD_=0.57) for "Tax Payment", 4.8 (_SD_=0.42) for "Stock Prices", 4.5 (_SD_=0.53) for "Likert Scale", and 4.8 (_SD_=0.42) for "Tourism Sector". These scores were higher than the mean ratings given to the novice-produced data videos and comparable to the mean ratings given to the designer-created videos. In terms of the specific topics of the data videos, the automated data videos received the highest mean ratings for "PM2.5" and "Tourism Sector", with means of 4.4 and 4.8, respectively. The designer-composed videos had slightly higher mean ratings than automatic-generated ones for other topics. However, the paired t-test results showed that there is no significant difference in ratings between automatic-generated and designer-created data videos for all topics. We also found that for visualizations with relatively simple structures and patterns, there is little variation in the user ratings of different types of videos. For example, all three types of videos received the same ratings for "Tourism Sector" (both 4.8) and the paired t-test results show that the difference is not significant between ratings for "Chinese Tourists" and "Stock Prices". 
However, when the chart structure is more complex (e.g., "Tax Payment" and "Likert Scale"), especially when the narrative expresses some in-depth content, novices often have difficulty telling the story coherently, while the designers' experience helps them handle these situations better. The automatic algorithm was able to effectively communicate the information in a clear and engaging way, at a level between that of novices and designers. Participants identified the automatically generated video among the three versions with an average accuracy of only 31.7%. This means that the automatically generated videos were close to the human-composed ones. Moreover, we found that 45% of the misjudged videos were actually composed by novices, and 23.3% were actually made by designers. This suggests that the level of skill and creativity of the human composers also affected the perception of the evaluators. **Feedback:** All participants agreed that narration-animation interplay can enhance the efficiency and vividness of data insight communication compared to static forms. They also praised the data videos' expressiveness, liveliness, and overall quality, and noted that they would consider using the automatic method in their own work. P2 said, "_the videos dynamically present information while ensuring completeness. I was so surprised that the wonderful data videos were generated automatically._" They also expressed a preference for some aspects that were emphasized in our design knowledge. For example, all participants appreciated the use of animation in the right context. P4 also commented, "_the videos were visually consistent overall, as the same animations were used for similar visual structures._" We also learned some lessons about users' preferences. Some participants criticized the visual effects and style of the videos and hoped that the algorithm could better meet individualized needs. For example, P1 did not like the "Bounce" animation effect of bar marks, and P3 did not expect the "Change Color" effect to always be red. Regarding legends, P3 preferred to have the graphical elements and their corresponding legends presented together in sequence so that the audience can get an immediate understanding of the visual elements, while P1 preferred to see the legend first to get a general impression of the context, with the marks and the narration then appearing in sequence. This indicated that a user interface is needed to extend the existing automation pipeline and incorporate humans into the workflow. ### Expert Interview To further compare the differences between automatically generated videos and human-composed ones, we invited six experts to provide feedback through interviews, including four designers (denoted as D1 to D4), of whom D1 and D2 helped us create designer-composed videos in the user study (Section 5.2.2), and two visualization researchers (denoted as V1 and V2), who have more than five years of experience conducting visualization research and publishing visualization papers in major conferences (e.g., IEEE VIS and ACM CHI). Moreover, all of them joined our prior formative study (Section 3) and provided valuable feedback. They were asked to watch the six sets of data videos used in the user study, and they were informed of the respective versions in advance. They were then asked to provide specific feedback comparing the different versions. In addition, they were also asked to comment on the method's strengths and weaknesses, its possible application scenarios, and the future outlook.
Figure 5: User study results with a 5-point Likert scale. From left to right are novice-produced, automatically generated, and designer-created videos. Overall, all the participants agreed that all data videos effectively coordinated narration and animation. When comparing the three data video versions, the participants generally thought that there was no obvious difference between them overall. However, D1 and D2 found that humans (especially designers) did a better job fine-tuning the timeline. In the creation of data videos, the animation duration is dependent on the length of the narration segment in the text-visual link. This may result in animations that are too short, such as those corresponding to only one word, which can lead to user confusion, particularly for animations intended to emphasize certain points. They (D1 and D2) also noted that they spent a certain amount of time previewing and iteratively refining the animations for the designer-created videos, including adjusting the trigger timestamps and durations of animations, which allowed them to mitigate this issue. Participants (D1, D3, D4, V1, and V2) also pointed out the animation-intensive nature of Data Player. D1 stated, "_the automatic algorithm feels like it wants to add animations to every sentence,_" while V2 agreed, "_this level of animation density can be tiring for viewers,_" and V1 added, "_I often found it hard to keep track of multiple animated visual elements at the same time, especially when they move or change in different ways._" The participants also provided valuable insights into the potential application scenarios and future improvement directions. All of them agreed that Data Player is an effective tool for empowering novices to create data videos. V1 and V2 also suggested that Data Player could be integrated as a module into larger systems to prototype more complex videos, automate slide design, etc. D3 commented that our automatic technique can be further extended to enable other creative ways of storytelling, such as scrollytelling [44] and interactive data videos [22], which require a more interactive and exploratory experience. D1 and D4 expected that professional video production software (e.g., Adobe After Effects) could integrate the animation recommendation module. Even though all participants praised the convenience of Data Player, some of them (V1, V2, D2, D4) also suggested further automatically generating static visualizations and narration text for users. More importantly, during the process, the system should allow users to fine-tune the generated results at each stage of data video generation to provide an interactive human-in-the-loop experience. ## 6 Discussion **Automation vs. Personalization.** Creating narration-enriched data videos is a highly specialized and time-consuming task. Data Player can help users automate this process based on the input static visualization and description. It enables rapid exploration of design alternatives, thus increasing efficiency for data insight presentation. However, in our user study, we found that participants' personal preferences (e.g., animation effects and visual styles) may affect their ratings of data videos. These highly personalized needs are difficult to satisfy thoroughly with a fully automatic algorithm [62]. We therefore plan to design a user interface and develop Human-AI collaboration methods [30].
First, the automated method can prototype data videos (including automatic generation of visualizations and corresponding narration text), and then users can further modify them with fine-grained control on the interface. Next, the system can allow users to input their preferences at different stages of data video generation, and progressively generate data videos. Moreover, the system can provide various animation examples for users to select and adapt to their own designs [48], and the system can also automatically learn personalized needs from the user's interaction history through multi-modal interactive task learning [41]. Another interesting approach would be to help users maintain a personalized knowledge base. Based on existing professional design knowledge, users can continuously expand and update their personalized rules. **Application of LLMs.** We applied LLMs to match narration segments and visual elements. Before this, we tried traditional NLP packages and BERT-based n-gram similarity matching schemes, which were somewhat mechanical and rigid. Recently, LLMs (e.g., ChatGPT and GPT-4) have demonstrated remarkable capabilities in generating and understanding natural language. After using LLMs, the matching module achieved better accuracy and flexibility. However, existing LLMs still have some inherent limitations, such as weakness at complex computational tasks, inconsistent outputs across rounds, long generation times, and hallucination. These factors affect the accuracy, timeliness, and practicality of our method to some extent, but we believe that future research will address these issues. In the future, we will mainly explore three directions based on LLMs: The first is how to better improve the accuracy of text-visual linking with LLMs, such as exploring more accurate prompts and integrating with other interactive tools [63]. The second is to further expand the existing automatic pipeline based on LLMs, such as helping users interactively generate and modify narrations [45] and automatically create and update visualizations [60] based on the data table. The third is to fit the use of LLMs within human-in-the-loop scenarios. For example, LLMs can extract insights from data. Users can also choose the insights of interest based on their analytic tasks [47], and further leverage LLMs to automatically generate targeted narration text, visualizations, and chart annotations. Furthermore, given some specific material (e.g., a visualization or a narration segment), LLMs can be used for similarity searching to obtain more material to aid storytelling [31, 35]. **The completeness of design knowledge.** Design knowledge plays an important role in the design of narration-animation interplay. In this paper, we have explored several key design constraints from the formative study and existing literature. However, it is important to note that these constraints are not exhaustive and we only consider them as a minimal set to assist data video creators and researchers in crafting narration-animation interplay for data videos. There may be other factors that have an impact on the narration-animation interplay in different contexts and domains (e.g., spatial alignment, visual complexity, and cognitive load). Therefore, we encourage further research to investigate more design guidelines to assist data video creators in creating more effective and engaging narration-animation interplay.
Additionally, the target function in our current animation recommendation module can be further improved. In the future, we can also consider setting hard (must be satisfied) and soft (will be penalized if not satisfied) constraints [39] to enable more flexible recommendations. **Understanding emotions in the narration.** Our automatic pipeline primarily focuses on the semantic matching of narration segments and visual elements, as well as the rationality of animations. We also use text-to-speech technology to automatically generate audio narration. This allows us to generate reasonable and vivid data videos, but we do not further understand the emotions from the narration. In the future, it would be interesting to investigate the emotional design space in narration-animation interplay and express emotions in the narration with appropriate emotional animations, tones, and visual cues [28, 64]. Additionally, in order to draw attention to significant events and changes in animation, it is important to use sound effects, which are also not considered in our pipeline. Furthermore, having background elements appear at the beginning can also serve to set the tone and mood of the video. For example, if the video is presenting visualization related to a serious topic, such as a disease outbreak, having a somber and serious title card at the beginning can help to establish the tone of the video. **Limitations and Future Work.** In our user study, we found that the quality of the input visualization and narration itself can influence users' judgments. In the future, we can use users' own materials to create videos and then ask them to compare the videos with their previous data presentation forms. Another limitation of our study was the relatively small number of participants. It would be interesting to conduct a larger crowdsourced user study to further verify the quality of the automated data videos and to investigate the factors that contribute to their perceived quality. Additionally, Data Player currently only supports data video generation with a single chart and structured data. In the future, it can be extended to more visualization scenarios (e.g., multiple charts or dashboard [33], glyph [66], and infographics [59]) and data types (e.g., geographic [32], graph [50], and word cloud [64]) to tell more complex stories. ## 7 Conclusion To streamline the complex process of crafting narration-animation interplay for data videos, this paper proposes Data Player, which enables the automatic transformation of static text and visualizations into engaging data videos, enabling more novices to share their insights and research findings using data videos. Data Player leverages advanced LLMs to extract data facts and establish text-visual links, and uses constraint programming to recommend animations for the links and time-align audio narrations and visual animations. The results of the user study indicated that the data videos automatically produced by Data Player were well-received by participants and comparable in quality to human-composed ones. We hope that the approach can help people with little video production experience quickly create high-quality data videos, and inspire future research about narration-animation interplay in data storytelling.
2301.08432
Search of the pair echo signatures in the high-energy light curve of GRB190114C
A model of the time delayed electromagnetic cascade "echo" is applied to the bright gamma-ray burst GRB190114C - the first gamma-ray burst to be contemporaneously detected in high and very high energy gamma-ray bands. It is shown that the internal spread of the cascade in the absence of the intervening magnetic fields dilutes the "echo" emission over $10^3-10^5$ seconds depending on the energy. Accounting for the measured source flux in the $0.3-1$ TeV gamma-ray band, the prediction of the "echo" model is shown to match the detected lower-energy gamma-ray emission $10^4$ seconds after the burst. However, the "echo" emission remains indistinguishable from the intrinsic GRB190114C flux within the measurement uncertainties. Implications of this in the context of the intergalactic magnetic field measurement are discussed.
Ievgen Vovk
2023-01-20T06:10:34Z
http://arxiv.org/abs/2301.08432v2
# Search of the pair echo signatures in the high-energy light curve of GRB190114C ###### Abstract A model of the time delayed electromagnetic cascade "echo" is applied to the bright gamma-ray burst GRB190114C - the first gamma-ray burst to be contemporaneously detected in high and very high energy gamma-ray bands. It is shown that the internal spread of the cascade in the absence of the intervening magnetic fields dilutes the "echo" emission over \(10^{3}-10^{5}\) seconds depending on the energy. Accounting for the measured source flux in the \(0.3-1\) TeV gamma-ray band, the prediction of the "echo" model is shown to match the detected lower-energy gamma-ray emission \(10^{4}\) seconds after the burst. However, the "echo" emission remains indistinguishable from the intrinsic GRB190114C flux within the measurement uncertainties. Implications of this in the context of the intergalactic magnetic field measurement are discussed. ## I Introduction Propagation of the very-high-energy (VHE; \(\gtrsim 100\) GeV) gamma rays over cosmological distances inevitably leads to their partial absorption due to interaction with the extragalactic microwave and infrared/optical photon fields [1; 2; 3]. The power absorbed is transferred to the electromagnetic cascades initiated in the process, eventually transforming the initial multi-TeV photons into lower, GeV energy secondary gamma-ray "pair echo" emission [4; 5; 6]. Development of these cascades, spanning over \(\sim 100\) Mpc distances, is sensitive to the intervening magnetic field, providing a potential opportunity to measure the extremely weak magnetic field in the intergalactic space [6; 7; 8]. In the presence of the non-negligible intergalactic magnetic field (IGMF), the cascade and the corresponding secondary gamma ray emission are spread both in time and in angle, reducing the observable secondary flux. Consequently, non-detection of such secondary emission from several hard-spectrum blazars has been used to set a lower limit on the strength of IGMF [9; 10; 11; 12; 13; 14; 15; 16; 17] at redshift \(z\sim 0.1\). The nature of this IGMF remains, however, uncertain. An opportunity to distinguish the astrophysical and cosmological origins of IGMF would open if its redshift evolution is traced to the redshifts of \(z\sim 1-2\), where the field of an astrophysical origin should not have yet developed [18]. Obtaining IGMF constraints at such redshifts is, however, challenging due to the progressively increasing with redshift gamma-ray absorption, limiting the number of detectable persistent gamma ray emitters. Several transient sources, though - such as gamma-ray bursts (GRB) and flaring active galactic nuclei (AGN) - have been reported detected at TeV energies at redshifts \(z\sim 0.5-1\)[19; 20; 21; 22], potentially allowing to expand the redshift range of IGMF measurements / constraints. Indeed, HE (high-energy; \(\gtrsim 100\) MeV) and VHE observations of such transients have been discussed earlier as a viable tool to constrain IGMF [23; 24; 25; 26; 27]. Detectability of the secondary emission from such transients, required for IGMF measurement, depends crucially on the intrinsic time delay of the cascade emission, caused by its internal scatter due to the angular spreads of the electron-positron pair production and consequent inverse Compton (IC) emission. 
Such effects can be accounted for with the recently developed general-purpose Monte Carlo (MC) codes ELMAG, CRPopa and CRBeam [28; 29; 30], where the time delay can estimated as a difference between the propagation duration of the source primary and cascade secondary photons. However, the required accuracy of \(\epsilon\simeq c\Delta t/d\sim 10^{-17}\) (for \(\Delta t=10\) s and \(d\approx 10^{28}\) cm at redshift of \(z=1\)) may be challenging to achieve with them1, calling for specialized calculations of cascade light curves of transients sources. Semi-analytical models, neglecting the exact angular dependencies of these processes, predict that at GeV energies this time delay may be as large as \(\Delta t\sim 10^{2}-10^{4}\) s [e.g. 6; 26], potentially exceeding the duration of GRBs or even short AGN flares [e.g. 32; 33]. Footnote 1: E.g. the commonly used double-precision floating-point format has the precision of \(\epsilon=2^{-53}\approx 10^{-16}\)[31] Perhaps, the first opportunity to search for such cascade emission at redshifts larger than \(z\sim 0.1\) has presented itself with contemporaneous HE and VHE observations of a bright GRB190114C at redshift \(z\approx 0.42\) with Fermi/LAT and MAGIC telescopes [34; 22]. In this manuscript the measured VHE light curve of GRB190114C is used to predict the lower energy cascade emission, that is then tested against the HE measurements. To this end, a refined description of the cascade time delayed light curve, accounting for the exact energy and angular dependencies of pair production and IC emission, is developed and presented below. Calculation of the pair echo emission Here it is assumed that the cascade starts with the absorption of the initial VHE gamma ray in interaction with the extragalactic background light photon field (EBL), modelled following [35]. Electrons and positrons, born in the process, emit secondary photons in IC scattering of the cosmic microwave background (CMB) (given that CMB photon number density exceeds that of EBL by more than two orders of magnitude, additional IC scattering on EBL has a minor effect). Only the first generation of the pairs and their IC emission is included in the calculations. For the initial source spectra not exceeding few TeVs (case for the measured VHE emission of GRB190114C), secondary gamma rays of the first generation do not exceed few tens of GeV in energy. The emission from the subsequent generations thus falls to MeV energies, outside of the energy range Fermi/LAT [36]. In the absence of IGMF, the angular distribution of the pairs, responsible for internal cascade scatter, is set by the corresponding pair production cross section, a back-reaction from inverse Compton emission [e.g. 24] and the angular profile of the initial VHE emission. However, for TeV energies considered here the pairs angular spread due to IC back-reaction is \(\sqrt{\langle\theta_{IC}\rangle}\approx 2\times 10^{-8}\gamma_{6}^{-1/2}\) rad [24] (where \(\gamma_{6}\) is the electron/positron Lorentz factor in units of \(10^{6}\)) - much smaller than the typical \(\sim 10^{-6}\gamma_{6}^{-1}\) rad spread of the pair production or IC emission itself and may be neglected. Therefore only angular (and energy) distributions of the pair production and IC emission are considered here. 
With a single cascade generation considered, the calculation of the secondary emission is performed in the hybrid semi-analytical / Monte Carlo manner in the following steps: (a) calculation of the radial (redshift) distribution of the \(e^{+}/e^{-}\) pairs injected following the initial VHE photons absorption, (b) calculation of their angular and energy distributions, (c) random sampling of the pairs following these distributions, (d) calculation of the secondary gamma-ray emission via IC scattering, accounting for the corresponding particle energy loss and (e) calculation of the time delay for every point of the electron / positron trajectories. These are outlined below. ### Radial distribution of the electron-positron pairs Radial distribution of the electron-positron pairs resulting from the initial gamma rays of energy \(E_{\gamma}\) emitted at the redshift \(z_{e}\) is set by the optical depth \(\tau(E_{\gamma},z_{e})\) to the absorption of such photons on the extragalactic background light, that in the differential form can be written as [35] \[\frac{d\tau(E_{\gamma},\mu)}{d\mu dz}=c\frac{1-\mu}{2}\frac{dt}{dz}\int_{ \frac{2m_{e}^{2}c^{4}}{E_{\gamma}\gamma(z-\mu)(1+z)}}^{\infty}d\epsilon\frac {dn(\epsilon,z)}{d\epsilon}\sigma_{\gamma\gamma} \tag{1}\] where \(\mu=\cos\theta\) is the cosine of the angle between the interacting gamma ray and background photons. The cumulative probability of the electron-positron pair creation by the redshift \(z\) is thus simply \[P(z,z_{e})=e^{-\int_{-1}^{1}d\mu\int_{z}^{z_{e}}dz\frac{dz\tau(E_{\gamma},\mu )}{d\mu dz}} \tag{2}\] which gives the required distribution. However, since the energy and angular distribution of the generated pairs depends on \(\mu\) (see below), it is convenient to consider the differential distribution \(P(z,z_{e},\mu)\) integrated only over \(z\) when generating the pairs in the Monte Carlo approach employed here. ### Angular dependence of the photon-photon pair production Differential cross section of the photon-photon pair production for two photons with energies \(E_{\gamma}\) and \(\epsilon\) may be written as [37] \[\frac{d\sigma_{\gamma\gamma}}{dx}=\sigma_{T}\frac{3}{4}\frac{m_{ e}^{2}c^{4}}{s}\left[\frac{x}{1-x}+\frac{1-x}{x}+\frac{1-\beta_{cm}^{2}}{x(1-x)}\right.\] \[\left.-\frac{\left(1-\beta_{cm}^{2}\right)^{2}}{4x^{2}(1-x)^{2}}\right] \tag{3}\] where \(x=E_{e}/E_{\gamma}\) and \(E_{e}\) is the generated electron (or positron) energy, \(s=2E_{\gamma}\epsilon(1-\cos\theta)\) is the squared center of momentum energy and \(\beta_{cm}=\sqrt{1-4m_{e}^{2}c^{4}/s}\) is the velocity of the resulting electron in the CM reference frame. The range of \(x\) is restricted to \((1-\beta_{cm})/2\leq x\leq(1+\beta_{cm})/2\). 
As both the generated electron and positron in the CM frame have equal, oppositely directed velocities, their angular distribution may be obtained from the Lorentz transformations between the laboratory and CM frames following [38; 39] \[\mu=\frac{\mu^{\prime}+\beta_{c}}{1+\beta_{c}\mu^{\prime}}=\frac{\frac{2x\omega_{1}}{\omega_{1}+\omega_{2}}+\beta_{c}^{2}\beta_{cm}-1}{\beta_{c}\left(\frac{2x\omega_{1}}{\omega_{1}+\omega_{2}}+\beta_{cm}-1\right)} \tag{4}\] where \(\mu=\cos\theta\) is the cosine of the electron (positron) movement direction with respect to the direction of the CM frame movement in the laboratory frame, \(\mu^{\prime}=\left(2\gamma/(\omega_{1}+\omega_{2})-1\right)/\beta_{c}\beta_{cm}\) is that in the CM frame, \(\gamma=E_{e}/m_{e}c^{2}=x\omega_{1}\) is the electron (positron) Lorentz factor in the laboratory frame, \(\gamma_{cm}=\sqrt{s/4m_{e}^{2}c^{4}}\) is that in the CM frame, and \(\beta_{c}=\sqrt{1-4\gamma_{cm}^{2}/(\omega_{1}+\omega_{2})^{2}}\) is the velocity of the CM frame in the laboratory one, with the additional notations \(\omega_{1}=E_{\gamma}/m_{e}c^{2}\) and \(\omega_{2}=\epsilon/m_{e}c^{2}\) introduced for compactness. The pair production differential cross section as a function of the generation angle can then be expressed as \[\frac{d\sigma_{\gamma\gamma}}{d\mu}=\frac{d\sigma_{\gamma\gamma}}{dx} \frac{dx}{d\mu}\frac{d\mu}{d\mu^{\prime}}\] \[= \frac{d\sigma_{\gamma\gamma}}{dx}\frac{\beta_{c}}{2(1-\beta_{c}^{2})}\frac{[2x\omega_{1}+(\beta_{cm}-1)(\omega_{1}+\omega_{2})]^{2}}{\beta_{cm}\omega_{1}(\omega_{1}+\omega_{2})} \tag{5}\] For an isotropic target photon distribution the cross section dependency on electron energy can be found by integrating Eq. 5 over all incident angles \(\theta\). An example of such integration for the case of a \(E_{\gamma}=1\) TeV gamma ray interacting with the EBL at redshift \(z=0\) is shown in Fig. 1. One can see that for most of the generated particle energy range, the particles are injected into a hollow cone with an opening of \(\theta\sim 2/\gamma\) with respect to the CM frame direction, as a result of both the particle and the CM frame motions. The CM frame movement direction in general does not coincide with that of the most energetic of the interacting photons. Assuming \(\omega_{1}\gg\omega_{2}\) and accounting for the pair production threshold of \(\omega_{1}\omega_{2}\geq 1\), the corresponding maximal offset angle can be estimated as \(\theta_{CM}\approx 2\omega_{2}/\omega_{1}\leq 2\omega_{2}^{2}\). For the cosmic microwave radiation and extragalactic background light target fields with \(\omega_{2}\lesssim 1\) eV the CM frame offset is \(\theta_{CM}\lesssim 10^{-12}\) rad, which is several orders of magnitude smaller than the typical \(1/\gamma\gtrsim 10^{-7}\) rad direction spread of the \(E_{e}<10\) TeV electrons considered below. Due to this, the corresponding coordinate frame rotation is neglected here. Figure 1: Differential pair production rate for a \(E_{\gamma}=1\) TeV gamma ray as a function of the generated electron (positron) energy and motion direction offset angle with respect to that of the gamma ray. Calculations performed for the EBL photon field at redshift \(z=0\). Apparent layering at \(E_{e}\gtrsim 0.6\) TeV is a numerical artifact coming from the energy binning of the used EBL model [35]. Figure 2: Inverse Compton emission profiles of a single electron with the Lorentz factor \(\gamma=10^{6}\), scattering the isotropic black body photon field with the temperature \(T=2.725\) K, evaluated at multiples of the mean scattered photon energy \(\langle\epsilon_{1}\rangle=3.6\gamma^{2}kT\approx 0.85\) GeV.
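To make the pair-production kinematics above concrete, the minimal Python sketch below evaluates the CM-frame quantities and the laboratory-frame cosine \(\mu\) of Eq. 4 for a chosen energy fraction \(x\); the function name, the head-on collision geometry, and the example numbers are illustrative assumptions, not code from the paper.

```python
import numpy as np

def pair_kinematics(omega1, omega2, x, cos_theta=-1.0):
    """CM-frame quantities and lab-frame cosine mu (Eq. 4) for gamma + gamma -> e+ e-.

    omega1, omega2 : photon energies in units of m_e c^2
    x              : electron energy fraction E_e / E_gamma
    cos_theta      : cosine of the angle between the two photons (head-on = -1)
    """
    s = 2.0 * omega1 * omega2 * (1.0 - cos_theta)   # squared CM energy in units of (m_e c^2)^2
    if s < 4.0:
        raise ValueError("below the pair-production threshold")
    beta_cm = np.sqrt(1.0 - 4.0 / s)                # electron velocity in the CM frame
    gamma_cm = np.sqrt(s / 4.0)                     # electron Lorentz factor in the CM frame
    beta_c = np.sqrt(1.0 - 4.0 * gamma_cm**2 / (omega1 + omega2) ** 2)  # CM-frame velocity
    gamma = x * omega1                              # electron Lorentz factor in the lab frame
    a = 2.0 * x * omega1 / (omega1 + omega2)
    mu = (a + beta_c**2 * beta_cm - 1.0) / (beta_c * (a + beta_cm - 1.0))  # Eq. 4
    return gamma, beta_cm, beta_c, mu

# Example: a 1 TeV photon meeting a ~1 eV EBL photon head-on, electron taking 70% of E_gamma
m_e_c2_eV = 0.511e6
gamma, _, _, mu = pair_kinematics(1e12 / m_e_c2_eV, 1.0 / m_e_c2_eV, x=0.7)
theta_e = np.arccos(np.clip(mu, -1.0, 1.0))
print(f"gamma = {gamma:.2e}, offset angle ~ {theta_e:.1e} rad (order 1/gamma = {1/gamma:.1e})")
```

As expected from the discussion above, the resulting offset angle is of the order of \(1/\gamma\), i.e. well below a microradian for the electrons relevant here.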
### Angular dependence of the Inverse Compton emission Inverse Compton emissivity for a single electron or positron on a monochromatic background photon distribution with energy \(\epsilon_{0}\) may be written following [40] as \[j(\Omega_{sc},\epsilon_{1})=\frac{r_{0}^{2}cn}{2\gamma^{2}}\frac {\epsilon_{1}}{L_{1}}\delta(\epsilon-\epsilon_{0})\left[\left(1+\frac{ \epsilon_{1}}{mc^{2}}\frac{\mathbf{km}-1}{\gamma L}\right)^{-1}\right.\] \[\left.+\frac{\epsilon_{1}}{mc^{2}}\frac{\mathbf{km}-1}{\gamma L}+ \left(1+\frac{\mathbf{km}-1}{\gamma^{2}LL_{1}}\right)^{2}\right] \tag{6}\] with \(L=1-\beta\mathbf{k}\mathbf{e}\) and \(L_{1}=1-\beta\mathbf{m}\mathbf{e}\) where \(\mathbf{e}\), \(\mathbf{k}\) and \(\mathbf{m}\) are the direction vectors of the electron, incoming and scattered photons correspondingly. Directions of the incoming background photons that may be scattered from the energy \(\epsilon\) up to \(\epsilon_{1}\), are given by the relation \[\mathbf{km}=1+\frac{mc^{2}}{\epsilon\epsilon_{1}}\left(\epsilon_{1}\gamma L_{ 1}-\epsilon\gamma L\right) \tag{7}\] The angular distributions of the photons resulting from the scattering a non-monochromatic background distribution can be found integrating Eq. 6 over the corresponding radiation spectrum. For the specific case of the thermal background with the temperature \(T=2.725\) K, corresponding to the cosmic microwave background (CMB) at redshift \(z=0\), resulting distributions from an electron with the Lorentz factor \(\gamma=10^{6}\) at several energies are shown in Fig. 2. As expected, most of the emitted energy flux is concentrated within the narrow \(1/\gamma\) cone. The emission spectra of the same electron in several directions are shown in Fig. 3. Figure 1: Differential pair production rate for \(E_{\gamma}=1\) TeV gamma ray as function of the generated electron (positron) energy and motion direction offset angle with respect to that of the gamma ray. Calculations performed for the EBL photon field at redshift \(z=0\). Apparent layering at \(E_{e}\gtrsim 0.6\) TeV is a numerical artifact coming from the energy binning of the used EBL model [35]. Figure 2: Inverse Compton emission profiles of a single electron with the Lorentz factor \(\gamma=10^{6}\), scattering the isotropic black body photon field with the temperature \(T=2.725\) K, evaluated at multiples of the mean scattered photon energy \(\langle\epsilon_{1}\rangle=3.6\gamma^{2}kT\approx 0.85\) GeV. ### Time delay from the geometrical path difference Geometry of the secondary emission time delay problem is depicted in Fig. 4. The resulting time delay may be written as a sum of the delays originating in the triangles formed by the sides \((r_{0},d_{e},r_{e})\) on one hand side and \((r_{s},r_{e},r_{t})\) on the other. For the first triangle the time delay is simply \[\Delta t_{1}^{\prime}=t_{e}+(r_{0}-r_{e})/c\approx t_{e}-\frac{d_{e}}{c}\left( 1-\frac{r_{0}}{d_{e}+r_{0}}\frac{\theta_{e}^{2}}{2}\right) \tag{8}\] where \(t_{e}\) is the exact time the electron (positron) required to travel over \(d_{e}\) accounting for its gradual slowing down due to cooling. 
For the second triangle the delay is \[\Delta t_{2}^{\prime}=\frac{r_{s}-r_{e}}{c}\left[\sqrt{1+\frac{2 r_{s}r_{e}(1-\cos\alpha_{s})}{(r_{s}-r_{e})^{2}}}-1\right]\] \[\approx\frac{1}{c}\frac{r_{s}r_{e}}{r_{s}-r_{e}}\frac{\alpha_{s}^ {2}}{2} \tag{9}\] with \(\alpha_{s}\approx\alpha^{2}+\alpha_{e}^{2}+2\alpha\alpha_{e}\cos\phi_{e}\), where \(\phi_{e}\) is the positional angle of the electron (positron) motion direction and \(\alpha_{e}\approx\frac{d_{e}}{r_{e}}\theta_{e}\). The total time delay thus is \[\Delta t=(1+z)(\Delta t_{1}^{\prime}+\Delta t_{2}^{\prime})\] \[\approx(1+z)\left[t_{e}-\frac{d_{e}}{c}\left(1-\frac{r_{0}}{d_{e} +r_{0}}\frac{\theta_{e}^{2}}{2}\right)\right.\] \[\left.+\frac{1}{c}\frac{r_{s}r_{e}}{r_{s}-r_{e}}\frac{\alpha_{s}^ {2}}{2}\right] \tag{10}\] Neglecting the difference between \(t_{e}\) and \(\frac{d_{e}}{c}\) and assuming that the electron cooling distance is much smaller than the initial photon mean free path (i.e. \(d_{e}\ll r_{0}\)) while the source distance is much larger than it (i.e. \(r_{s}\gg r_{0}\sim r_{e}\)), one finds \[\Delta t\approx(1+z)\frac{1}{c}\left(d_{e}\frac{\theta_{e}^{2}}{2}+r_{0}\frac {\alpha_{s}^{2}}{2}\right) \tag{11}\] The suitable range of \(\theta_{e}\lesssim 2/\gamma\) here is defined by the electron/positron pair production angular scatter (see Sect. II.2), whereas for \(\alpha_{s}\) it is set by the requirement that the corresponding scattering angle with respect to the electron direction of motion is \(\theta_{sc}\lesssim 1/\gamma\). This angle can be found from the same triangles \[\theta_{sc}^{2}\approx\frac{r_{s}^{2}}{R^{2}}\alpha^{2}+\frac{2r _{s}(r_{s}-r_{0})}{R^{2}}\alpha\theta_{e}\cos\phi_{e}\] \[+\frac{(r_{s}-r_{0})^{2}}{R^{2}}\theta_{e}^{2} \tag{12}\] where \(R=r_{s}-(r_{0}+d_{e})\). It defines the scalar product \(\mathbf{m}\mathbf{e}\), required to evaluate the inverse Compton emissivity in Eq. 6. ### Total light curve and spectral energy distribution To get the resulting light curve and spectrum of the secondary emission, the IC emissivity in Eq. 6 needs to be integrated over the spatial and energy distribution of the generated electron-positron pairs. A Monte Carlo approach is used for this purpose, where \(10^{6}\) source photons are generated in the energy range \(0.01-10\) TeV. Since most of the astrophysical sources of gamma-ray emission are likely much smaller than the \(D_{\gamma}\simeq 800(E_{\gamma}/1\ \text{TeV})^{-1}\) Mpc mean free path of the initial gamma rays [6], the emitting source is assumed to be point-like. The generated photons are taken uniformly distributed within the cone with half-opening angle \(\alpha_{max}=10^{-5}\) rad, chosen to adequately cover the angular spread of the electrons with the Lorentz factor \(\gamma\gtrsim 10^{6}\), mostly responsible for the emission in the target \(0.1-1\) GeV energy range. Figure 3: Inverse Compton emission spectra of a single electron with the Lorentz factor \(\gamma=10^{6}\), scattering the isotropic black body photon field with the temperature \(T=2.725\) K, evaluated at several offset angles with respect to the electron motion direction. Figure 4: Sketch of the secondary emission problem. Observer (on the left) is separated from the source (on the right) by the distance \(r_{s}\). An photon emitted by the source at an angle \(\alpha\) is absorbed having travelled over the distance \(r_{0}\) and generates an electron (positron) at the relative angle \(\theta_{e}\). 
The latter travels over the distance \(d_{e}\) before emitting the secondary photon reaching the observer after crossing the distance of \(r_{t}\). For each photon, an absorption probability on EBL is evaluated from Eq. 2. In case of absorption, the corresponding absorption redshift, EBL photon energy and interaction angle, randomly sampled from Eq. 1, are used to calculate the energy and propagation direction of the created electron and positron using Eqs. 3 and 4. For each of those, the generated IC emission is calculated integrating Eq. 6 over the isotropic thermal distribution of the CMB photons with the temperature \(T=2.725(1+z)\) K, where the scattering angle is defined by Eq. 12. At every point of their respective trajectories, the particle energies are adjusted according to the total energy loss due to IC emission. Finally, the corresponding emission time delay is evaluated using Eq. 10 for every point of the particle trajectory. The cascade emission kernel \(K(\epsilon_{1},E_{\gamma},\Delta t)\) obtained in this way relates primary source emission at energy \(E_{\gamma}\) to the generated secondary flux at energy \(\epsilon_{1}\), arriving with the time delay \(\Delta t\). The total secondary emission flux at an arbitrary moment \(t\) is straightforwardly obtained as: \[F_{echo}(\epsilon_{1},t)=\int_{-\infty}^{t}F(E_{\gamma},t^{\prime})K(\epsilon_{1},E_{\gamma},t-t^{\prime})dt^{\prime} \tag{13}\] where \(F(E_{\gamma},t^{\prime})\) is the assumed intrinsic light curve of the source. The energy and time dependence of the cascade emission kernel are illustrated in Figs. 5 and 6 and can be understood from the relations outlined in Sect. II. The smallest time delays correspond to the smallest electron direction offsets \(\theta_{e}\), which during pair production are achieved for the highest-energy electrons (see Fig. 1). This results in a hard emission spectrum peaking above a few GeV. With the increase of the time delay a larger part of the initial emission cone becomes visible, though at the cost of a larger offset angle \(\theta_{e}\) and/or scattering angle \(\theta_{sc}\). As efficient IC emission takes place only within \(\theta_{sc}\lesssim 1/\gamma\), the required increase in the offset angle \(\theta_{e}\) lowers the maximal energy of the emitting particles, leading to an eventual gradual decrease of the generated energy flux starting from the highest energies. ## III GRB190114C delayed emission modelling HE and VHE gamma-ray emission from GRB190114C has been contemporaneously measured in the \(0.1-1\) GeV and \(0.3-1\) TeV energy ranges [34]. Though it is possible that the VHE emission is of secondary origin, cascading down from \(E\gtrsim 20\) TeV energies, the mean free path of such energetic primary photons with respect to pair production on EBL is at least 20 times shorter than that of the VHE photons in question [6]. The additional time delay associated with such high energy secondaries does not exceed a few seconds [6; 26]. The possible secondary origin of the VHE emission thus does not substantially modify the spatial structure of the cascade and has a minor impact on the secondary emission in the HE band, where time delays of the order of \(\Delta t\sim 10^{2}-10^{5}\) s are expected (see Fig. 5). Due to this, the assumption that the measured VHE emission represents the intrinsic source radiation does not have a notable impact on the conclusions regarding the delayed emission in the HE band.
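As a concrete illustration of how Eq. 13 can be evaluated in practice, a possible discretization on a regular time grid is sketched below, with the integration over the primary photon energy written as an explicit sum; the array layout and the simple rectangle-rule integration are assumptions made for illustration only.

```python
import numpy as np

def echo_flux(F_src, K, dt):
    """Discretized version of Eq. 13.

    F_src : array (n_E, n_t)        intrinsic flux F(E_gamma, t') on a regular time grid
    K     : array (n_eps, n_E, n_t) cascade kernel K(eps_1, E_gamma, Delta t) on the same grid
    dt    : float                   time step of the grid

    Returns F_echo of shape (n_eps, n_t): the secondary flux at energy eps_1 and time t.
    """
    n_eps, n_E, n_t = K.shape
    F_echo = np.zeros((n_eps, n_t))
    for i in range(n_t):            # observation time index t
        for j in range(i + 1):      # emission time index t' <= t
            # sum over primary energies of F(E_gamma, t') * K(eps_1, E_gamma, t - t')
            F_echo[:, i] += K[:, :, i - j] @ F_src[:, j] * dt
    return F_echo
```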
The model of the GRB190114C delayed emission used here thus constituted a prediction of the secondary flux in the \(0.1-1\) GeV energy range given the measured flux in the \(0.3-1\) TeV range as if it were of intrinsic origin. It is worth noting that, assuming the initial photon energy is distributed equally between the generated electron and positron, the energies of the primary \(E_{\gamma}\) and secondary \(\epsilon_{1}\) gamma-ray photons are related as \(\epsilon_{1}\approx 1(E_{\gamma}/1\ {\rm TeV})^{2}\) GeV, implying that the HE and VHE windows used for GRB190114C observations are well-suited for the secondary emission search. Figure 5: Time dependency of the cascade kernel \(K(\epsilon_{1},E_{\gamma},\Delta t)\), integrated over the initial photon energy \(E_{\gamma}\), evaluated at several emission energies \(\epsilon_{1}\). Calculation was performed for a putative source at the redshift of GRB190114C (\(z=0.42\)) with an exponentially cut off power law spectrum with the index \(\Gamma=-2\), normalization at \(E_{0}=100\) GeV of \(N=10^{-22}\) cm\({}^{-2}\) s\({}^{-1}\) eV\({}^{-1}\), and the cut off energy \(E_{c}=5\) TeV. Figure 6: Energy dependency of the cascade kernel \(K(\epsilon_{1},E_{\gamma},\Delta t)\), integrated over the initial photon energy \(E_{\gamma}\) and evaluated at several values of time delay \(\Delta t\). The assumed emission source is the same as in Fig. 5. Time-delayed cascade emission from GRB190114C was modelled using the procedure outlined in Sect. II.5. As the VHE spectrum of the source is consistent with the power law model, it was chosen as the intrinsic spectral shape in the model. The model parameters and their uncertainties were found from a \(\chi^{2}\) fit to the reported spectral points in each of the 5 time bins spanning from \(T=T_{0}+68\) s to \(T=T_{0}+2400\) s, where such points were reported [34]. The uncertainties were further propagated to the delayed flux estimate using a toy MC, where the cascade flux was re-calculated for 100 random realizations of the spectral parameters, sampled from the multi-variate normal distribution with the mean values and covariance matrix taken from the spectral fit. The resulting total time-delayed cascade flux is shown in Fig. 7 along with the corresponding HE/VHE measurements from [34]. Though the maximal energy of the GRB190114C VHE emission is not constrained by VHE data, an artificial limit of the initial photon energy of \(E_{\gamma}^{max}=10\) TeV was assumed here for numerical reasons. At the same time, the relation between the primary and secondary photon energies mentioned above suggests that the bulk of the cascade emission, resulting from primary photons with \(E_{\gamma}>1\) TeV, will have energy \(\epsilon_{1}\gtrsim 1\) GeV and thus would not contribute to the flux in the \(0.1-1\) GeV window considered here. To verify this assumption directly, the time-delayed cascade emission was re-calculated for a power law spectrum with an exponential cut-off at 1 TeV (also displayed in Fig. 7). The resulting \(\lesssim 30\%\) flux decrease at \(T-T_{0}\gtrsim 10^{3}\) s supports the assumption of the sub-dominant contribution of intrinsic \(E_{\gamma}\gtrsim 1\) TeV emission to the predicted cascade. It should be noted that the cascade flux estimate shown in Fig. 7 is conservative in the sense that it does not include any potential "echo" flux from \(T-T_{0}<68\) s, before MAGIC observations started. Contribution of this early-time emission, however, is not decisive.
Assuming the source flux evolves as \(F(t)\propto t^{-\alpha}\) with \(\alpha\approx 1.5-1.6\)[22; 34] while the spectral shape is the same as in the \(T-T_{0}=[68;110]\) s interval (the earliest for which VHE measurement were published), relative contribution from interval \(T-T_{0}=[25;68]\) s, starting at the approximate end of the GRB prompt emission phase, is around \(\simeq 20\%\) from the estimated cascade flux; extrapolation of the VHE emission to the prompt phase down to \(T-T_{0}=5\) s leads to \(\simeq 80\%\) larger cascade flux, which is still within the uncertainties of the Fermi/LAT measurement. ## IV Discussion As one can see from Fig. 7, the expected cascade flux from GRB190114C is compatible with the Fermi/LAT measurement at \(T-T_{0}\approx 10^{4}\) s, so that, in principle, the registered flux may be composed of the "echo" entirely. Though afterglow origin of this emission is possible [34], this demonstrates that in the absence of IGMF the cascade emission from bright GRBs similar to GRB190114C is detectable with the current generation of gamma-ray telescopes despite the strong dilution of the "echo" flux in time - in contrast with earlier findings [41], focusing on the GRB190114C HE emission at later times \(T-T_{0}>2\times 10^{4}\) s. Temporal evolution of the pair echo flux, however, is distinctly different from that of the intrinsic GRB emission (see Fig. 7). This opens a possibility to distinguish the echo emission if the GRB light curve in the HE gamma-ray band would be measured for at least \(10^{4}\) s without interruptions and / or theoretical arguments can be given for extrapolation of the intrinsic GRB flux from \(T\approx T_{0}+10^{2}\) s on to later times. In a particular case of GRB190114C, extrapolation of the GeV band flux after the prompt GRB phase using the power law scaling \(F(t)\propto t^{-\alpha}\) with \(\alpha\approx 1.5-1.6\) measured in VHE band [22; 34] suggests the intrinsic source flux is below the Fermi/LAT measurement, indicating the detection of the "echo" emission at \(T-T_{0}\approx 10^{4}\) s. Such a detection would imply a low IGMF with \(B\lesssim 10^{-21}\) G at the GRB190114C redshift of \(z\approx 0.42\), due to the requirement that IGMF-induced deflections of the electron-positron pairs do not exceed their intrinsic scatter [6]. With measurement using hard-spectrum gamma-ray loud AGNs suggesting \(B\gtrsim 10^{-17}\) G at \(z\sim 0.1\)[9; 10; 11; 12; 13; 14; 15; 16; 17], such a detection would indicate a fast evolution of IGMF with redshift, thus strongly disfavoring the cos Figure 7: Expected secondary (cascade) flux from GRB190114C compared to the actual measurements in TeV and GeV bands [34]. Cascade flux is estimated from the power law primary source emission in the \(T-T_{0}=[68;2400]\) s time window, where the VHE emission was measured. An estimate assuming an exponential energy cut-off at \(E_{c}=1\) TeV is shown with the dashed orange line. Extrapolation of the initial source flux assuming the \(F(t)\propto t^{-1.5}\) scaling found in [34] is shown with the gray dash-dotted line. Cascade resulting from this extrapolation down to \(T-T_{0}=5\) s is depicted with the dotted orange line. mological origin of IGMF, where the field strength scales with redshift as \(B(z)\propto(1+z)^{2}\). It may, however, also present challenges for galactic-origin models of IGMF, where the field is expected to reach its present value around \(z\sim 1\)[18]. 
The latter tension may be, in principle, alleviated accounting for the highly inhomogeneous structure of IGMF, expected both for the primordial field, frozen into the plasma and following the matter density fluctuations [e.g. 42], and that originating from the galactic outflows [43] (though a small effect from the outflow-driven magnetic field bubbles was reported on average [44]). A detection of the secondary emission would be also indicative of the sub-dominant role of the plasma instabilities in cooling the injected electrons and positrons, that presently remains uncertain [45; 46; 47; 48; 49; 50]. It is interesting to note that if the detected emission at \(T-T_{0}\approx 10^{4}\) s indeed comes from the delayed cascade "echo", the VHE flux from GRB190114C during the prompt phase can not exceed much the prediction from the \(F(t)\propto t^{-1.5}\) extrapolation to avoid tension with the measurements. Worthy to note, that the GRB afterglow emission itself may contain time-delayed components mimicking the cascade "echo". Structured GRB jets may result in flares and plateaus in the light curves, that are sometimes observed in X-ray band [e.g. 51; 52]; the evolving ratio between the different spectral components of the GRB emission may lead to a change in the light curve slope, similar to that from the "echo" on-set [34]. Though temporal evolution of these phenomena in general differs from that of the "echo", the available data on GRB190114C are insufficient to tell them apart. If more than 80% of the measured flux at this time is indeed due to the afterglow, the predicted cascade emission would be excluded, imposing a \(B\gtrsim 10^{-21}\) G limit on IGMF strength at \(z\approx 0.4\). Clearly, a more careful assessment of the possible intrinsic GRB190114C emission contribution to the measured flux at \(T-T_{0}\approx 10^{4}\) s is required to assess the reliability of the "echo" emission detection assumption. Such an "echo" emission can be as well searched for from other flaring gamma-ray sources at \(z\gtrsim 0.1\). Several other GRBs have also been detected in VHE band at redshifts ranging from \(z\approx 0.08\) to \(1.1\)[53; 54; 55]. With the measured VHE fluxes much lower than in the case of GRB190114C, the expected pair echo flux for these GRBs would fall below the Fermi/LAT detection limit. However these detections demonstrate that a substantial number of GRBs may feature hard VHE spectra with the power law index \(\Gamma\gtrsim-2\) and emission longer than \(10^{2}-10^{3}\) s, required for a detectable pair echo in the absence of IGMF. The emerging population of such sources may be crucial for IGMF measurements at redshift \(z\gtrsim 1\). **ACKNOWLEDGMENTS** Author gratefully acknowledges the support of the Institute for Cosmic Ray Research (ICRR), the University of Tokyo in realization of this study and that of the CTA-North computing center at La Palma, Spain, for providing the necessary computational resources.
2310.11622
High-Resolution Building and Road Detection from Sentinel-2
Mapping buildings and roads automatically with remote sensing typically requires high-resolution imagery, which is expensive to obtain and often sparsely available. In this work we demonstrate how multiple 10 m resolution Sentinel-2 images can be used to generate 50 cm resolution building and road segmentation masks. This is done by training a `student' model with access to Sentinel-2 images to reproduce the predictions of a `teacher' model which has access to corresponding high-resolution imagery. While the predictions do not have all the fine detail of the teacher model, we find that we are able to retain much of the performance: for building segmentation we achieve 79.0\% mIoU, compared to the high-resolution teacher model accuracy of 85.5\% mIoU. We also describe two related methods that work on Sentinel-2 imagery: one for counting individual buildings which achieves $R^2 = 0.91$ against true counts and one for predicting building height with 1.5 meter mean absolute error. This work opens up new possibilities for using freely available Sentinel-2 imagery for a range of tasks that previously could only be done with high-resolution satellite imagery.
Wojciech Sirko, Emmanuel Asiedu Brempong, Juliana T. C. Marcos, Abigail Annkah, Abel Korme, Mohammed Alewi Hassen, Krishna Sapkota, Tomer Shekel, Abdoulaye Diack, Sella Nevo, Jason Hickey, John Quinn
2023-10-17T23:20:36Z
http://arxiv.org/abs/2310.11622v3
# High-resolution building and road ###### Abstract Mapping buildings and roads automatically with remote sensing typically requires high-resolution imagery, which is expensive to obtain and often sparsely available. In this work we demonstrate how multiple 10 m resolution Sentinel-2 images can be used to generate 50 cm resolution building and road segmentation masks. This is done by training a'student' model with access to Sentinel-2 images to reproduce the predictions of a 'teacher' model which has access to corresponding high-resolution imagery. While the predictions do not have all the fine detail of the teacher model, we find that we are able to retain much of the performance: for building segmentation we achieve 78.3% mIoU, compared to the high-resolution teacher model accuracy of 85.3% mIoU. We also describe a related method for counting individual buildings in a Sentinel-2 patch which achieves \(R^{2}=0.91\) against true counts. This work opens up new possibilities for using freely available Sentinel-2 imagery for a range of tasks that previously could only be done with high-resolution satellite imagery. ## 1 Introduction Buildings and roads are important to map for a range of practical applications. Models that do this automatically using high-resolution (50 cm or better) satellite imagery are increasingly effective. However, this type of imagery is difficult to obtain: there is little control over revisit times, the cost can be prohibitive for larger analyses, and there may be limited or no historical imagery for a particular area. This rules out certain analyses, such as spatially comprehensive surveys of buildings and roads, or systematic study of changes over time, e.g. for studying urbanisation, economic or environmental changes. Figure 1: Example operation of our model, where multiple frames of low-resolution Sentinel-2 imagery are used to make a single frame of high-resolution predictions for a variety of output types. A high-resolution image of the same scene is shown for comparison. The Sentinel-2 earth observation missions collect imagery globally every 2-5 days and depending on the band at up to 10 m ground resolution. This freely available data source is commonly used for the adjacent task of land cover mapping, and recent work on super-resolution with Sentinel-2 imagery has shown that a higher level of detail can be obtained than the native image resolution would suggest. This is intuitively possible because of the small variation in spatial position across successive Sentinel-2 image frames, caused by atmospheric disturbances, and even across bands in the same image due to sensor layout. A slightly different \(10^{2}\) m\({}^{2}\) area is being captured each time, and hence a sequence of such images can be used to reconstruct finer detail than exists in any one frame. We review related work in Section 2, which includes a number of experiments on obtaining 2.5m super-resolution from Sentinel-2. In our work, we attempt to extend the limit of fine detail that can be recreated from a set of Sentinel-2 images, assessing the quality of building and road presence predictions made at 50 cm resolution. To do this, we use a teacher model with access to 50 cm resolution imagery to create training labels for a large dataset of worldwide imagery. We then train a student model to reconstruct these labels given only a stack of Sentinel-2 images from the corresponding places and times (see Figure 1). 
We find that we are able to retain much - though not all - of the accuracy of the high-resolution teacher model: our Sentinel-2-based building segmentation has 78.3% mIoU, compared to the high-resolution-based teacher performance of 85.3% mIoU. We found that this accuracy level was equivalent to what could be achieved by a high-resolution model using a single frame of 4 m resolution imagery (see Table 5), though also noting through visual inspection that the segmentation quality sometimes far exceeds that (see Figure 2 and Section 10). Finally, we describe a method using our framework for counting the number of individual buildings in a patch, based on predicting the locations of building centroids. We can obtain \(R^{2}=0.91\) compared to true building counts, again Figure 3: Estimation of the number of buildings in a tile, based on predicting building centroids (left: high resolution image for comparison, centre: top of Sentinel-2 stack input to the model; right: predicted centroid mask). This method can obtain \(R^{2}=0.91\) with respect to true counts even though individual buildings cannot be discerned in the source imagery. Figure 2: Examples of building and road detection from Sentinel-2 imagery, each covering an area of \(192^{2}\) m\({}^{2}\). The panels on the left show high-resolution satellite imagery of the scene for comparison; although Sentinel-2 imagery has much lower level of detail in each frame, we are able to predict fine-scale features of buildings and roads. capturing much of the performance of the high-resolution teacher model (\(R^{2}=0.95\)). The predicted counts are surprisingly accurate even when buildings are very small relative to the raw Sentinel-2 resolution, or close together. Some examples are shown in Figures 3 and 15. This work extends the range of analysis tasks that can be carried out with freely available Sentinel-2 data. As well as using it to improve the Open Buildings dataset2, we plan to make model resources available for social benefit purposes in the near future. Footnote 2: [https://sites.research.google/open-buildings](https://sites.research.google/open-buildings) ## 2 Related work Several researchers have noted the potential of using freely available but relatively low-resolution remote sensing imagery, such as Sentinel-2, to obtain insights about buildings and other features with previously unobtainable spatial and temporal scope. As well as experimental results, practical data is already being produced from such systems: for example, Sentinel-2 super-resolution from 10 m to 2.5 m has been used to create buildings data for 35 cities across China [1]. Super-resolution, the task of reconstructing a high-resolution image from one or more low-resolution images, has been widely applied to photographic images, commonly with generative models such as GANs. There is existing work on applying this type of model in remote sensing imagery [2], although there is a risk of increasing resolution at the expense of introducing spurious details. Certain characteristics of remote sensing imagery, and Sentinel-2 in particular, can be exploited to aid hallucination-free super-resolution. Alias and shift between sensor channels are undesirable for the purposes of visualization, but provide a useful signal for super-resolution [3; 4; 2]. It is also possible to exploit aspects of the physical design of the Sentinel-2 satellites, to obtain 5 m super-resolved training data for specific areas with detector overlap [5]. 
Other results in the literature indicate that it is not necessary to explicitly model these alias and shift effects to obtain hallucination-free super-resolution, and that this can be learned directly from data with standard semantic segmentation models. U-Net is popular in remote sensing analysis and has been successfully applied in this setting. Simply up-sampling medium resolution imagery and trying to extract building detections gives increased effective resolution, which can then be used for building detection [6]. Super-resolution followed by Mask-RCNN was used to detect buildings in locations across Japan [7]. Another two stage model, SRBuildingSeg, carries out super-resolution followed by building segmentation [8]. A similar two stage setup based on U-Net was used to detect buildings using Sentinel-2 in Spain [9]. Another principle useful for remote sensing super-resolution is to take multiple images to generate a single high-resolution output. Physical features such as buildings and roads change slowly in comparison to the Sentinel-2 revisit time of 5 days, so an image stack is likely (though not guaranteed) to show the same scene. HighResNet [10] is an architecture for fusing a temporal sequence of lower-resolution remote sensing images to predict a single, higher-resolution image. This works with a convolutional architecture to fuse the images together, and a loss function which can account for differences in alignment between the low-resolution input and high-resolution training labels. PIUNet (permutation invariance and uncertainty in multitemporal image super-resolution) is an architecture for multiple-image super-resolution, used to increase Proba-V resolution from 300 m to 100 m [11]. Most of this existing work is based on fully convolutional architectures, however transformer-based models have also been used [12]. The approach in our work is also to use multiple images to predict high-resolution features, although we do not have an intermediate super-resolution step and instead train end-to-end. ## 3 Training setup We propose a method for semantic segmentation from a stack of low-resolution Sentinel-2 images at a much higher resolution than the input resolution. Our setup consists of a teacher model and a student model, with inputs and outputs as shown in Figure 4. The teacher model is trained on high-resolution (50 cm) satellite imagery and outputs high-resolution semantic segmentation confidence masks for buildings, building centroids, and roads. The student model tries to mimic the output of the teacher model using only a stack of low-resolution imagery obtained from Sentinel-2 of the same location. Our method is an end-to-end super-resolution segmentation model, so that instead of first carrying out super-resolution on the image and then running semantic segmentation on the output, we make the model predict a high-resolution semantic segmentation mask directly from low-resolution inputs. The input to the model is a set of low-resolution Sentinel-2 frames \(\mathrm{LR}_{i,t},\in\mathbb{R}^{H\times W\times C}\), arranged in a stack of time frames from \(t=1,\ldots,32\) (where \(H\times W\) are spatial dimensions and \(C\) is the number of input channels). The input channels include both imagery and metadata, as described in Section 4. The output of the model is \(\mathrm{SR}_{i}\in\mathbb{R}^{\gamma H\times\gamma W\times C^{\prime}}\), where \(\gamma\) is the upscaling factor and \(C^{\prime}\) is the number of output channels. 
We denote the label by \(\mathrm{HR}_{i}\in\mathbb{R}^{(\gamma H+\Delta y_{max})\times(\gamma W+\Delta x _{max})\times C^{\prime}}\), where \(\Delta x_{max}\) and \(\Delta y_{max}\) are margins corresponding to the maximum allowed translation in \(x\) and \(y\) direction respectively (Section 6). Each output channel can potentially represent a separate task. We train models with up to four output channels: building semantic segmentation, roads semantic segmentation, building centroids, and super-resolution grayscale image, as shown in Figure 4. We use the building centroid predictions with one extra step to calculate building counts. We predict a super-resolution grayscale image as one of the output channels as this helps with image registration during training and evaluation. At a high level, our model employs an encoder and a decoder. The encoder encodes each low-resolution image independently. The decoder takes a fused representation of these encodings and applies successive upsampling to it to output at target resolution. The model architecture is described in more detail in Section 5. ## 4 Dataset The dataset for training and evaluation consists of low-resolution image stacks and high-resolution label pairs \(\{\mathrm{LR}_{i,t=1:32},\mathrm{HR}_{i}\}\). To generate labels for the temporal stack we fetch Sentinel-2 imagery stacks at locations where we also have high-resolution imagery available. We fetch the stack of Sentinel-2 imagery such that the high-resolution image corresponds to the middle of the low-resolution stack, i.e. between 16th and 17th time frame in a 32 frame stack (see Figure 5). Labels for the primary tasks of buildings and roads semantic segmentation are generated as follows: labels for the training split are per pixel building and roads presence confidence scores output by the teacher model, whereas labels for the evaluation split are binary segmentation masks obtained from human drawn polygons of buildings and roads on high-resolution images (see Figure 6). Since we generate training labels using a teacher model we can generate a large number of training examples, only limited by the amount of high-resolution imagery available and computational constraints. The training split of the dataset consists of about 10 million stacks of Sentinel-2 imagery sampled globally. Examples are sampled randomly at uniform first and then subsampled by building presence using the high-resolution teacher model so that approximately 90% of the examples have buildings. The geographical distribution of examples is shown in Figure 8. A consequence of our sampling strategy is that the distribution of our dataset is roughly correlated with population density. The validation split consists of 1165 Sentinel-2 imagery stacks, which are all within the continent of Africa and chosen to include a range of urban and rural examples of different densities, as well as settings of humanitarian significance such as refugee settlements. Training examples close to validation examples (within some radius) are discarded. For simplicity we do not perform any sampling based on roads. Often roads tend to be present in the vicinity of buildings, but this is not always true. Figure 4: Teacher-student setup in this work. The student model is trained to reproduce the same outputs as a high-resolution model using 50 cm resolution imagery, but using only a stack of Sentinel-2 images at 10 m resolution. 
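As an illustration of how a training example could be assembled so that the high-resolution acquisition time falls between the 16th and 17th Sentinel-2 frames, consider the sketch below. The function and variable names are our own; the actual data pipeline is not specified at this level of detail in the text.

```python
from datetime import datetime
from typing import Any, List, Tuple

def assemble_stack(frames: List[Tuple[datetime, Any]],
                   hr_time: datetime,
                   n_frames: int = 32) -> List[Tuple[datetime, Any]]:
    """Pick n_frames (time, image) pairs so that hr_time falls between the
    16th and 17th frame of the returned, time-sorted stack."""
    frames = sorted(frames, key=lambda f: f[0])
    past = [f for f in frames if f[0] <= hr_time]    # frames acquired before the HR image
    future = [f for f in frames if f[0] > hr_time]   # frames acquired after the HR image
    half = n_frames // 2
    stack = past[-half:] + future[:half]
    if len(stack) < n_frames:
        raise ValueError("not enough usable frames around the high-resolution acquisition time")
    return stack
```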
Sentinel-2 imagery is fetched from the Harmonized Sentinel-2 Top-Of-Atmosphere (TOA) Level-1C collection [13] on Earth Engine. Each Sentinel-2 timeframe consists of 13 bands, each upsampled to 4 meters per pixel regardless of their native resolution. Additionally, we add metadata for each frame, as described in Section 4.5 in more detail. ### Teacher labels The teacher model which is used to label the large worldwide training set is the same as used to generate the Open Buildings dataset, trained along the lines described in [14], with a few refinements for improved accuracy: these include the collection of extra training data in areas where earlier models had poor performance, ensembling of models trained on different partitions of the training data (distilled back to a single model) and use of the HRNet [15] architecture instead of U-Net. The geographical distribution of human-labelled training data for buildings is shown in Figure 7, reflecting the current Open Buildings coverage across Africa, South and Southeast Asia, and Latin America. Training is done in a similar way for road detection as for building detection. ### Registration Neither low-resolution Sentinel-2 images nor the high-resolution image are registered to any reference frame, and as such, both the input imagery stack as well as the labels are potentially misaligned relative to each other. Given the resolution of Sentinel-2 imagery, misalignments of a few pixels in image space amount to tens of meters on the ground. We rely on the model being able to implicitly align the low-resolution input frames and to facilitate such implicit alignment, inspired by [10], we tried pairing each input frame with the frame closest in time to the high-resolution image used to derive labels, i.e. the 17th frame. The model output however can still be misaligned relative to the labels, and in order to not penalise the model for that we manually align labels to model output during loss computation, as described in Section 6. We also do similar alignment in evaluation. Figure 5: Generation of training and evaluation data for the low-res student model. The low-resolution stack is constructed so that the time of the high-resolution image falls between the 16th and 17th frames. Figure 6: Comparison of the model-derived teacher label (centre) and human-derived label (right) for one evaluation example. ### Clouds To filter cloudy images we discard Sentinel-2 images that have one or more pixels in the opaque cloud mask set. An opaque cloud mask is derived from the QA60 band in the Sentinel-2 TOA collection (10th bit). QA60 band however is inaccurate and does not always capture all clouds (see example in Figure 9). We retain Sentinel-2 images even if QA60 indicates they have cirrus clouds, because often these images still looked useful. Moreover, cloudy high-resolution images are especially problematic since they introduce noise into the labels. Therefore both the training and evaluation datasets are filtered to have cloud-free high-resolution imagery. During inference, we only use the QA60 band to filter out timeframes with opaque clouds. ### Orthorectification Label transfer from high resolution to low resolution works well only if both the satellites have the same viewing angle or both the imagery collections are orthorectified. The high-resolution imagery collection we use is however not orthorectified (see example in Figure 10) and as such, labels generated using such imagery do not always align with Sentinel-2 imagery. 
Particularly problematic are areas with tall buildings where building roofs (which is what our teacher model is trained to detect) appear in different positions depending on the satellite viewing angle. Orthorectification of satellite imagery however requires an accurate elevation model - something that is not available globally. In the absence of orthorectification, labels tends to be more accurate for rural areas than for urban areas with taller buildings. ### Sentinel-2 data processing Due to the way imagery assets are tiled in the Harmonized Sentinel-2 Top-Of-Atmosphere (TOA) Level-1C collection on Earth Engine, the imagery stack fetched for a given location can potentially contain duplicate imagery [16]. We therefore perform deduplication of the imagery stack as follows: we group images by datatake ids and for each datatake id we take the highest processing baseline. Datatakes correspond to a swath of imagery taken by the Sentinel-2 satellite and often cover a very large surface area of the earth (up to 15,000 square kilometers). The processing baseline is effectively a set of configurations used to post process raw imagery acquired from the satellites and are updated regularly by the European Space Agency (ESA). Figure 8: Geographical distribution of teacher-labelled training data used to train the student model. Counts per level 5 S2 cell. Figure 7: Geographical distributions of high-resolution, human labelled data on which the teacher model was trained, and human-labelled evaluation data on which we report all evaluation metrics. Counts per level 5 S2 cell. We feed in metadata associated with each frame to the model using a simple approach where scalar value for each metadata item is broadcast to image spatial dimensions and then appended to the image in the channel dimension. As such, each scalar metadata value adds an additional channel to the input. The following metadata are used for each frame: normalized time relative to the 17th time frame, mean incidence azimuth angle, mean incidence zenith angle, mean solar azimuth angle, mean solar zenith angle, latitude, and longitude. The time of image acquisition is normalised by dividing the time duration relative to the 17th frame by ten years in seconds, and scale the other features to the [0, 1] range. We do not perform any data augmentation other than random cropping, which we found is sufficient to prevent overfitting due to the very large dataset size. To account for the fact that, in certain cloudy parts of the world it is not always possible to acquire a stack of 32 Sentinel-2 images in a given time span, during training we randomly truncate and pad the image stack on both ends. More formally, for both halves of the stack, with some probability \(p\) we generate a padding of length \(l\) distributed uniformly in \([1,16]\). This helps to make the model robust to missing frames at inference time. ## 5 Model At a high level our student model consists of a shared encoder module that encodes each LR frame separately, a simple mean based encoding fusion and a decoder module that up-samples the fused encoding (Figure 13). Below we describe encoder and decoder modules in detail. ### Encoder We employ the HRNet [15] architecture (see Figure 11) as an encoder without making any significant modifications, except for the root block. The original root block in HRNet downsamples the input image by a factor of 4, which we find to be too aggressive for LR input images that are already quite low resolution. 
To address this, we remove the 4x downsampling by reducing stride from 2 to 1 on the two 3x3 convolutions in the first block of HRNet. As a result, the encoder outputs features at the same spatial resolution as the input. We pre-train the HRNet encoder on ImageNet. To adapt filter weights of the first convolution trained on 3 channel RGB ImageNet input to more channels, for each filter we take the mean of the weights across channels and replicate the mean across the target channel dimensions (=13+7, corresponding to all bands in a Sentinel-2 image and all metadata we pass in). Figure 11: HRNet architecture, as it would be applied to a single frame of imagery. The network consists of 4 different stages and each stage consists of blocks that correspond to features at different resolutions. Each Sentinel-2 input has 13 image channels and 7 metadata channels. ### Cross-time information fusion In order to make the model learn not just spatial features but also the temporal relationship between spatial features we experimented with cross-time information fusion as features go through the various stages of HRNet encoder. For each stage and block (corresponding to features at various depths and resolutions), spatial features are fused across time using a depth-wise convolution in a residual fashion (Figure 12). For a given block and stage, features for pixel \((u,v)\) across all 32 time steps are passed through a \(1\times 32\) depth-wise convolution (with depth multiplier \(32\)) to transform a \(1\times 32\times c\) feature tensor to \(1\times 1\times 32c\). The tensor is then reshaped to \(1\times 32\times c\) before being passed to \(1\times 1\) point-wise convolution to obtain \(1\times 32\times c\) fused features. We reshape the tensor before the point-wise convolution to reduce the number of convolution parameters. Otherwise, this setup is equivalent to a 1D depth-wise separable convolution across time. Note that this fusion approach is much more efficient than 3D convolution or ConvLSTM in terms of the number of parameters. This is because our approach only fuses features across time for a single pixel, and using depth-wise convolution (with reshaping before point-wise convolution) helps to reduce the number of parameters even further. The original features are then added to the fused features as a residual connection. More details about pairing schemes and experimental results are given in Section 7.3. ### Decoder The decoder module in our model consists of a number of decoder blocks that each upsample the input by a factor of 2. We use the same decoder block as in our previous technical report [14] which consists of x2 (batch normalization, ReLU, convolution) followed by another (batch normalization, ReLU), fusion with residual connection to the input and finally an upconvolution, as illustrated in Figure 14. We use transposed convolution or deconvolution for up convolution. The fused encoding is passed through 3 decoder blocks with widths 360, 180 and 90 respectively. Finally the upsampled features are passed through a (convolution, batch normalization, ReLU) block before being passed through a final convolution that outputs the desired number of output channels. There is one output channel for each task: building segmentation, road segmentation, centroid detection and image super-resolution. Since the input Sentinel-2 frames are the same for each of these tasks, a single model can be trained in a multi task fashion to perform these tasks simultaneously. 
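The PyTorch sketch below illustrates this overall encode, fuse and decode structure at a high level. It is not the actual implementation: the HRNet encoder is replaced by a small stand-in CNN, the cross-time fusion and pairing schemes are omitted in favour of simple mean fusion over time, and layer widths other than the stated decoder widths of 360, 180 and 90 are illustrative.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Simplified stand-in for the residual decoder block: refine features, then 2x upsample."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(), nn.Conv2d(in_ch, in_ch, 3, padding=1),
            nn.BatchNorm2d(in_ch), nn.ReLU(), nn.Conv2d(in_ch, in_ch, 3, padding=1),
        )
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(x + self.refine(x))   # residual connection, then 2x upsampling

class StudentSketch(nn.Module):
    """Toy encode -> mean-fuse over time -> decode model (HRNet replaced by a small CNN)."""
    def __init__(self, in_ch: int = 20, out_ch: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                        # shared across all timeframes
            nn.Conv2d(in_ch, 360, 3, padding=1), nn.ReLU(),
            nn.Conv2d(360, 360, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                        # three 2x blocks -> 8x upsampling
            DecoderBlock(360, 180), DecoderBlock(180, 90), DecoderBlock(90, 90),
        )
        self.head = nn.Conv2d(90, out_ch, 1)                 # one output channel per task

    def forward(self, stack):                                # stack: (B, T, C, H, W)
        b, t, c, h, w = stack.shape
        feats = self.encoder(stack.reshape(b * t, c, h, w)).reshape(b, t, -1, h, w)
        fused = feats.mean(dim=1)                            # collapse the time dimension
        return self.head(self.decoder(fused))                # (B, out_ch, 8H, 8W)
```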
Apart from reducing cost and time, the different tasks could possibly serve as regularization for the other tasks to produce better results. ### Building counts via centroid prediction In some practical situations, people need building counts in an area rather than a segmentation mask of building presence. We investigated various ways of trying to estimate the count of buildings in an image tile from the Sentinel-2 image stack. The most effective method we found is based on first predicting the positions of building centroids, inspired by [17] and other work on object counting e.g. for crowds. For the centroid detection task, the labels are Gaussian'splats', as shown in Figure 15, where each splat is the same size regardless of the size of the building. This means that we expect a roughly constant response for each building, and we can derive an estimate count by simply summing the model output over the spatial dimensions and then dividing by a constant scaling factor. Figure 12: cross-time fusion of features. For each stage and block (corresponding to features at various depths and resolutions) spatial features for pixel \((u,v)\) across all 32 time steps are fused using depth-wise convolution in a residual fashion. We therefore derive the count without requiring any post-processing such as peak detection or non-maxima suppression. For a given tile \(i\), the building count, \(\hat{n}_{i}\) can be estimated as follows: \[\hat{n}_{i}=\frac{1}{K}\sum_{u,v}C^{i}_{u,v}\, \tag{1}\] where \(C^{i}_{u,v}\) is model output at pixel \(u,v\) (for the channel corresponding to centroid labels) and \(K\) is the scaling factor. \(K\) is derived from the training split by least squares regression. The centroid detection task is incorporated into the overall model as one of the output channels. Building instances from the teacher model, above a certain score threshold, are used to generate centroid labels. See Figure 15 for example labels and model output. We detail some of the results obtained with this approach in Section 7.7. ## 6 Loss function We employ per pixel Kullback-Leibler Divergence (KLD) loss between the label and the prediction, defined as follows: \[KLD(y_{i},\hat{y}_{i})=\left(y_{i}\log\left(\frac{y_{i}}{\hat{y}_{i}}\right)+ \epsilon\right)^{\gamma}\, \tag{2}\] where \(y_{i}\) and \(\hat{y_{i}}\) are the label and prediction for pixel \(i\) respectively and \(\gamma\) is the focal term. Both \(y_{i}\) and \(\hat{y_{i}}\), originally \(\in[0,1]\), are clipped to \([\epsilon,1-\epsilon]\), where \(\epsilon=1e-7\), to avoid division by zero errors. The focal term \(\gamma\) varies the Figure 14: Residual Decoder block with width \(n\) that upsamples the input by a factor of 2. Figure 13: Overall model architecture. Each of the input images is encoded separately using a shared encoder. Encodings are then averaged together (which collapses the time dimension) and passed on to the decoder that upsamples the fused encoding by a factor of 8. importance given to hard (misclassified) examples. Following a hyper-parameter sweep we set \(\gamma\) to 0.25. We also add \(\epsilon\) to KLD before exponentiation by \(\gamma\) to guard against an undefined gradient. Since the label and the model output could be misaligned, and in order to not penalize the model due to misalignment, we do an exhaustive neighborhood search based registration between the label and the prediction. 
For this we assume a translation model and find the \((\Delta x,\Delta y)\) translation that minimizes the mean squared error (MSE) between the label and prediction. The label is then shifted by \((\Delta x,\Delta y)\) to register it with the model output, before the loss in (2) is computed. This exhaustive neighborhood search is however computationally expensive. A possible alternative would be to also learn registration as proposed in [10], where a separate registration module takes in (prediction, label) pair as input and predicts a \(k\times k\) kernel which when convolved with the model output aligns it with the label. We crop the label such that we keep a margin around it whose size is set to the maximum translation \((\Delta x_{max},\Delta y_{max})\) allowed in a given direction (Figure 16). Once the label is shifted relative to the model output we crop the label to the output dimension so that both the label and model output have the same size. To make alignment more robust, we also make the model predict a super-resolved grayscale image. The label for that task is the 50 cm image converted to grayscale. We use similar registration logic in both training and evaluation. Figure 16: Alignment at loss: Label with margin whose size is equal to the maximum allowed translation along a given image dimension. Figure 15: From the left to the right high-resolution image, the first images of the corresponding Sentinel-2 stack, the instance masks derived from teacher model, corresponding centroid labels, and model output. ## 7 Experiments In our experiments, we report the mean Intersection over Union (mIoU) for a binary segmentation setup, where the foreground is either buildings or roads. Instead of using a fixed threshold (such as 0.5) to convert the per-pixel confidence values output by the model into a segmentation mask, we use a threshold in the range \([0,1]\) and a dilation kernel size that maximizes the mIoU. We then report the maximum mIoU obtained. For the auxiliary task of tile level building count, we report mean absolute error (MAE) and the coefficient of determination, \(\mathbb{R}^{2}\). The numbers between different tables are not necessarily comparable due to slight differences in configurations. Our evaluations focus on the building detection task and we do not report metrics for the road detection and image super-resolution tasks. ### Training Details Our models are trained with the Adam optimizer [18] using a constant learning rate of \(3\times 10^{-4}\) for 500,000 steps with batch size 256. During training images are cropped randomly to size of 512\(\times\)512. For evaluation, the random crop is replaced by a center crop but in the end metrics are calculated on a further center crop of 384\(\times\)384. All the models are initialized with the checkpoints of a model trained on ImageNet [19]. ### Handling of multiple timeframes We explored two options for handling multiple Sentinel-2 timeframes in the model. In the first one we concatenated all timeframes in the channel dimension. In the second approach we passed each timeframe through a shared encoder (see Section 5). The second approach seems to require cross-time information fusion (see Section 5.2) and/or a pairing scheme for timeframes (see Section 7.3) to exceed the performance of the first approach (see Table 1). Both approaches used slightly modified off-the-shelf HRNet [15] and U-Net [20]. See Section 5.1 for a description of how HRNet was modified, in case of U-Net the modifications were equivalent. 
Unless stated otherwise, in all remaining experiments we use the second approach with cross-time information fusion and no pairing scheme. ### Pairing schemes Inspired by HighResNet [10], where each low-resolution frame is paired with the per-pixel median of the stack, we experimented with various pairing schemes. We find that pairing each timeframe with the 17th timeframe (that is closest in time to the teacher label) provides a significant boost in performance over model without pairing. In comparison, model trained with no pairing but with cross-time communication (Section 5.2) performs slightly worse (see Table 1). Surprisingly enough, model trained with both pairing (17th frame) and cross-time communication performs worse than that with just pairing. We also experimented with pairing with either the median (as is done in HighResNet), mean or second darkest timeframe and find none of these pairing schemes to be better than pairing with the 17th frame. ### Resolution sensitivity analysis To understand the influence of input, output and target resolutions on model performance we carry out the following resolution sensitivity experiments: \begin{table} \begin{tabular}{l c} \hline \hline Model & mIoU (Buildings) \\ \hline U-Net concatenate timeframes in channel dimension & 72.7 \\ HRNet concatenate timeframes in channel dimension & 73.8 \\ HRNet separate timeframe encoding & 70.7 \\ HRNet separate timeframe encoding + cross-time fusion & 76.7 \\ HRNet separate timeframe encoding + Pairing (17th frame) & **76.9** \\ HRNet separate timeframe encoding + Cross-time fusion + Pairing (17th frame) & 76.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of two approaches to handling multiple timeframes: concatenating the timeframes in the channel dimension and separate encoding using a shared encoder. The second approach requires additional modifications to exceed the performance of the first approach. Input resolutionWe altered the resolution of the input while keeping the model output and label at 50 cm resolutions. The effective resolution of the Sentinel-2 input is 10 m or lower depending on the specific band, but we find that upsampling it with a simple image resize has non-trivial impact on model performance. For example, training on 4 meter inputs provides a sizable improvement over training on 8 meter inputs (Table 2). However, increasing the input resolution to 2 meters does not provide similar benefit. The models in Table 2 are trained on eight Sentinel-2 timeframes because of the computational constraints associated with training at 2 meter input resolution. For training on the stack of 32 images, we only explored training on 8 and 4 meter inputs. The performance improvement associated with training on 4 meter inputs is also observed in this case. Output resolutionFor output resolution sensitivity experiments we alter the effective resolution of the output while keeping the model input at 4 m and the label at 50 cm resolutions. As described in Section 5, to produce super-resolved outputs we add residual upsampling blocks akin to the decoder blocks used in U-Net to progressively upsample the input. We investigate the usefulness of these blocks by progressively replacing them with a bi-linear resize operation. Although the output of the model is at the same resolution as the label, the effective resolution is actually much lower since the resize operation does not add any new details. 
Table 3 shows that using the upsampling block to increase the effective resolution leads to an increase in the performance of the model. At 4 meters, no upsampling blocks are used, resulting in lower performance compared to other models that make use of them. Label resolutionFor label resolution sensitivity experiment we alter the effective resolution of the label while keeping the model input at 4 m and output at 50 cm resolutions. Unsurprisingly, we see that the model performs better when trained on higher resolution labels, as shown in Table 4. Comparison to single-frame detection with varying image resolutionWe wished to understand the performance of our model in comparison to a high-resolution building detection model operating on a single frame of imagery. That is, if only one image was available, what resolution would it need to be in order to get the same accuracy of detection as we obtain with 32 Sentinel-2 frames. \begin{table} \begin{tabular}{l r} \hline \hline Effective label resolution & mIoU (Buildings) \\ \hline 4 meters & 76.2 \\ 2 meters & 75.6 \\ 50 centimeters & 76.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Sensitivity analysis of label resolution (32 timeframes). \begin{table} \begin{tabular}{l r} \hline \hline Input resolution & mIoU (Buildings) \\ \hline 8 meters & 74.3 \\ 4 meters & 75.6 \\ 2 meters & 75.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Sensitivity analysis of the Sentinel-2 input resolution (8 timeframes). \begin{table} \begin{tabular}{l r} \hline \hline Effective output resolution & mIoU (Buildings) \\ \hline 4 meters & 76.1 \\ 2 meters & 76.6 \\ 1 meter & 76.8 \\ 50 centimeters & 77.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Sensitivity analysis of the Sentinel-2 output resolution demonstrating usefulness of upsampling blocks (32 timeframes). To do this, we trained a model similar to our teacher model on downsampled high-resolution images. The images are downsampled to a target lower resolution and resized back to the original image dimensions. Thus the dimensions of the images are the same but the information content is reduced. We find that at 4 meter resolution, this model's performance matches the performance of the Sentinel-2 super-resolution model (Table 5). The 50 cm model has comparable metrics to the model that was used to generate teacher labels for training the Sentinel-2 models. ### Training set size To quantify the impact of dataset size on model performance we trained our model on subsets of training data of different sizes. Table 6 shows monotonic improvement in performance as training data size increases. The model trained on the full set of 10 million images outperforms the model trained on 10% of the training set by 2 percentage points. ### Number and position of Sentinel-2 timeframes Each training example contains 32 Sentinel-2 timeframes with the corresponding high-resolution image located temporally somewhere between the 16th and 17th timeframes. The timeframes are sorted by time. This means that the model gets to see 16 frames in the past and 16 in the future around the high-resolution image time. We carry out a series of experiments to determine the relationship between the number of timeframes and the performance of the model. We find that increasing the number of timeframes leads to a monotonic increase in the performance of the model. In Table 7, the model trained on all 32 timeframes outperforms the model trained on a single timeframe by 5 percentage points. 
As a corollary to this, we train a model on a single timeframe duplicated 32 timeframes. This model achieves a mean intersection over union of 71.7% compared to 76.7% of the model trained on all timeframes. This shows that the additional timeframes provide useful information to the model. The conditions at the time a snapshot is taken is different for the 32 timeframes, thus there must be features that are captured in some frames but absent in others. Using multiple frames allows the model to draw from the information available in each time frame to make a single prediction. The use of multiple frames can also be understood as producing a 'dither' effect. Since noise should be randomly distributed across the image, shifts between consecutive frames allow the model to isolate the signal from the noise to produce a super-resolved output. \begin{table} \begin{tabular}{l c} \hline \hline Fraction of training data & mIoU (Buildings) \\ \hline 1\% & 69.1 \\ 5\% & 73.4 \\ 10\% & 74.7 \\ 100\% & **76.6** \\ \hline \hline \end{tabular} \end{table} Table 6: Building segmentation performance as a function of fraction of training data size used (32 timeframes). \begin{table} \begin{tabular}{l c} \hline \hline Effective image resolution (cm) & mIoU (Buildings) \\ \hline 1000 & 67.4 \\ 500 & 75.8 \\ **400** & **78.1** \\ 300 & 80.5 \\ 200 & 82.9 \\ 100 & 84.8 \\ 50 & 85.3 \\ \hline \hline Best Sentinel-2 model & **78.3** \\ \hline \hline \end{tabular} \end{table} Table 5: Resolution sensitivity for a model trained on a single high-resolution image. For comparison, the last row represents our best Sentinel-2 super-resolution segmentation model. It has comparable mIoU to the 4 meter model. We also explore how sensitive the model is to having access to future or past timeframes by training the model on only past or future frames. In Table 8, the model that only gets to see 16 past timeframes is better than a model that sees only 16 future timeframes, and roughly comparable to one that sees 8 past and 8 future timeframes. ### Building counts via centroid prediction On the auxiliary task of tile level count prediction via centroid prediction as outlined in Section 5.4 we observe a very high correlation between predicted building count and the true count (see Figure 17). We were able to achieve a coefficient of determination (\(R^{2}\)) of 0.912 and MAE of 5.67 on human labels. For comparison, model trained to do the same but using 50 cm imagery was able to achieve \(R^{2}\) of 0.955 and MAE of 4.42 on human labels. ## 8 Discussion In this work, we present an end-to-end super-resolution segmentation framework to segment buildings and roads from Sentinel-2 imagery at a much higher effective resolution than the input imagery. To this end we demonstrate label transfer from a high-resolution satellite to a low-resolution satellite that both image the same surface of the earth. We show that such label transfer not only allows generation of more accurate and fine detailed labels but also enables automatic label generation through a teacher model trained to perform the same task but using imagery from a high-resolution satellite. This significantly reduces the amount of high-resolution imagery needed for certain analysis tasks, such as large scale building mapping. Unfortunately, our approach has some limitations. 
First, it still assumes that one has access to a good amount of high-resolution images to first train a teacher high-resolution model and then also to generate a large dataset to train a student model as described in Section 4. However, this is a one-time cost. Additionally, with the advancement in deep learning, models can now be trained in a much more data efficient manner [21]. Furthermore, there are many \begin{table} \begin{tabular}{l c} \hline \hline Number of timeframes & mIoU (Buildings) \\ \hline 1 & 71.7 \\ 2 & 73.2 \\ 4 & 74.5 \\ 8 & 75.8 \\ 16 & 76.1 \\ 32 & **76.7** \\ \hline \hline \end{tabular} \end{table} Table 7: Model performance as a function of the number of timeframes in the Sentinel-2 stack. Figure 17: Regression plots (in linear and logarithmic scale) and metrics for the buildings count from just a stack of low-resolution images. Counts are made on tiles of size \(192^{2}\) m\({}^{2}\) on the ground. publicly available datasets and models that can be used to bootstrap different tasks [22]. Second, it assumes that for a given location one can assemble a deep stack (i.e. 32) of cloud-free Sentinel-2 images. This actually can be quite problematic if the stack has to be centered timewise around a fairly old or recent date, and especially if the location is cloudy; in humid locations such as Equatorial Guinea we noticed that for some very cloudy locations a 32 timeframe stack can span more than 2 years. The results we present in this technical report are preliminary and we believe that there is still room for improvement. For example, we did not evaluate the model on change detection, and do not have metrics on how many Sentinel-2 timeframes with change does the model need to see to detect these changes consistently. Additionally, recent advances in super-resolution using generative AI models such as GANs [2] and diffusion models [23] have shown potential. These approaches have however been limited to single image super resolution of natural images and have not been extensively used for remote sensing tasks where multiple low-resolution images of the same location are available. As such, a promising area of future research could be exploration of use of some of these approaches for multi-frame super-resolution segmentation tasks in remote sensing. ### Social and ethical considerations We believe that timely and accurate building information, particularly in areas which have few mapping resources already, are critically important for disaster response, service delivery planning and many other beneficial applications. However, there are potential issues with improvements in remote sensing analysis, both in terms of unintended consequences and from malicious use. Where such a model is used as a source about information on human population centres, for example during emergency response in a poorly mapped and inaccessible area, any false negatives could lead to settlements being neglected, and false positives lead to resources being wasted. Particular risks for the kind of Sentinel-2 based analysis we describe include settlements consisting of very small buildings made of natural materials, and buildings in deserts, both of which are challenging. ## 9 Acknowledgements We thank Sergii Kashubin, Maxim Neumann and Daniel Keysers for feedback which helped to improve this paper.
2305.04927
Detecting and Reasoning of Deleted Tweets before they are Posted
Social media platforms empower us in several ways, from information dissemination to consumption. While these platforms are useful in promoting citizen journalism, public awareness etc., they have misuse potentials. Malicious users use them to disseminate hate-speech, offensive content, rumor etc. to gain social and political agendas or to harm individuals, entities and organizations. Often times, general users unconsciously share information without verifying it, or unintentionally post harmful messages. Some of such content often get deleted either by the platform due to the violation of terms and policies, or users themselves for different reasons, e.g., regrets. There is a wide range of studies in characterizing, understanding and predicting deleted content. However, studies which aims to identify the fine-grained reasons (e.g., posts are offensive, hate speech or no identifiable reason) behind deleted content, are limited. In this study we address this gap, by identifying deleted tweets, particularly within the Arabic context, and labeling them with a corresponding fine-grained disinformation category. We then develop models that can predict the potentiality of tweets getting deleted, as well as the potential reasons behind deletion. Such models can help in moderating social media posts before even posting.
Hamdy Mubarak, Samir Abdaljalil, Azza Nassar, Firoj Alam
2023-05-05T08:25:07Z
http://arxiv.org/abs/2305.04927v1
# Detecting and Reasoning of Deleted Tweets before they are Posted

###### Abstract

Social media platforms empower us in several ways, from information dissemination to consumption. While these platforms are useful in promoting citizen journalism, public awareness etc., they have misuse potentials. Malicious users use them to disseminate hate-speech, offensive content, rumor etc. to gain social and political agendas or to harm individuals, entities and organizations. Often times, general users unconsciously share information without verifying it, or unintentionally post harmful messages. Some of such content often get deleted either by the platform due to the violation of terms and policies, or users themselves for different reasons, e.g., regrets. There is a wide range of studies in characterizing, understanding and predicting deleted content. However, studies which aims to identify the fine-grained reasons (e.g., posts are offensive, hate speech or no identifiable reason) behind deleted content, are limited. In this study we address this gap, by identifying deleted tweets, particularly within the Arabic context, and labeling them with a corresponding fine-grained disinformation category. We then develop models that can predict the potentiality of tweets getting deleted, as well as the potential reasons behind deletion. Such models can help in moderating social media posts before even posting.

Disinformation, Deleted Tweets, Twitter

## 1 Introduction

In the last decade, social media has become one of the predominant communication channels for freely sharing content online. The interactions on social media platforms enable public discussions online, such as those related to local issues and politics. Feelings of intolerance in media platforms usually generate and spread hate speech and offensive content through various communication channels. Such content can inflame tensions between different groups and ignite violence among their members. Malicious users intentionally and unintentionally use media platforms to impact people's thoughts, disseminate hate speech, sway public opinions, attack the human subconscious, spread offensive content, fabricate truths, etc. The misuse of social media platforms has turned them into potential grounds for sharing inappropriate posts, misinformation, and disinformation (Zhou et al., 2016; Alam et al., 2022). One type of inappropriate posts is **regrettable posts**, those that contain regrettable content, which can make the author feel guilty or can cause the intended audience to be harmed (Zhou et al., 2016; Sleeper et al., 2013). **Misinformation** is defined as "_unintentional mistakes such as inaccurate photo captions, dates, statistics, translations, or taking satire seriously_". **Disinformation** is "_a fabricated or deliberately manipulated text/speech/visual context, and intentionally created conspiracy theories or rumors_". While **malinformation** is "_defined as true information deliberately shared to cause harm_" (Ireton and Posetti, 2018; Alam et al., 2022). Such posts often get deleted for different reasons: _(i)_ users themselves delete the posts, or _(ii)_ social media platforms delete them due to a breach of community guidelines (Almuhimedi et al., 2013; Wang et al., 2011). Sleeper et al. (2013) examined regrets within in-person and virtual conversations. They found that Twitter users tend to delete tweets or sometimes apologize once they realize their regret. 
The potential reasons behind tweets' deletion can be hate speech, offensive language, rumors, and/or spam that might violate community guidelines. In such cases, tweets get deleted, and users' accounts could get suspended as well. 12 Footnote 1: [https://help.twitter.com/en/rules-and-policies/twitter-rules](https://help.twitter.com/en/rules-and-policies/twitter-rules) Footnote 2: [https://help.twitter.com/en/managing-your-account/suspended-twitter-accounts](https://help.twitter.com/en/managing-your-account/suspended-twitter-accounts) Bhattacharya and Ganguly (2016) stated that around 11% of tweets are eventually deleted. Although deleted tweets are not accessible once they are deleted, understanding the potential reasons behind their deletion motivates several researchers to understand and identify the content of regrettable tweets or tweets of suspended accounts (Zhou et al., 2016; Wang et al., 2011). The importance of understanding the content of deleted tweets is the extraction of meaningful data of harmful content, and detecting and empowering users by sending warnings and suggestions before posts get shared on platforms. Prior studies have investigated detecting deleted tweets, spam accounts and their behaviors (Stringhini et al., 2010; Lee et al., 2010), analyzing regrets in bullying tweets (Xu et al., 2013), and identifying factors for undesirable behavior such as spamming, negative sentiment, hate speech, and misinformation spread from deleted or suspended user accounts (Toraman et al., 2022). Most of such studies are limited to English language or distant supervision approach of labeling and fine-grained analysis. In this study, we investigate the following research questions: _RQ1:_ What are the potential reasons (e.g., hate speech, offensive language) behind tweets' deletion? _RQ2:_ Are deleted tweets a good way to collect different kinds of harmful content without imposing biases (ex: vs using keywords)? _RQ3:_ How does Twitter deal with users who post disinformative content? _RQ4:_ Can we detect potentiality of deletion of tweets and the corresponding reasons before they are posted? Figure 1: Examples of disinformative and not-disinformative tweets. Not-disinfo: Not disinformative, HS: Hate speech, Off: Offensive. *WARNING: Some examples have offensive language and hate speech, which may be disturbing to the reader To address these questions, we collected 40K deleted and non-deleted _Arabic_ tweets, and randomly selected a sample of 20K deleted and 2K non-deleted tweets. We then manually labeled them with fine-grained disinformative categories as shown in Figure 2 (See Section 3). Using the labeled dataset, we trained models using classical algorithms (i.e., SVM, RF) and transformer that can detect the potentiality of tweets getting deleted and the reasons of deletion. From our manual analysis, we found disinformative tweets with a proportion of 20% and 7% in deleted and non-deleted tweets, respectively. This clearly answers the question of deleted tweets being a good way to collect different kinds of harmful content, which can help in developing datasets and models to address disinformative content identification. Our contributions and findings are summarized as the following: * We developed a manually labeled dataset consisting of binary labels (deleted vs. non-deleted) and fine-grained disinformative categories. Our data collection method is generic and can be potentially applied to other languages and topics. 
* Our proposed _'detection and reasoning of deleted tweets'_ approach can empower users by providing feedback before tweets are posted, which can also serve as a prevention mechanism while consciously and unconsciously producing and sharing disinformative posts. * Our data can be shared privately. 3 Footnote 3: Note that we can only share text, like, share and annotated labels of the data, no information related to the user, which we deleted. * We report insightful characteristics of deleted tweets' users by extracting their current activity status. * Our findings demonstrate that deleted tweets contain more disinformation than non-deleted ones. ## 2 Related Work Many research investigations have been conducted in the field of regretted and deleted social media data. However, what the literature lacks is the value deleted tweets could have if used as a source of data for essential NLP tasks such as disinformation detection. Starting with a set of 292K unique Twitter users, Almuhimedi et al. (2013) extracted all public tweets and/or retweets posted by users, as well as any replies to their posts alongside all relevant metadata of each tweet. Through the API, the authors could identify whether a tweet has been deleted, as "a deletion notice was sent via the API containing identifiers for the user and the specific tweet" (Almuhimedi et al., 2013). By doing so, they collected a total of 67.2M tweets, 65.6M of them were undeleted, and the other 1.6M were deleted. Through further analysis of the tweets, two of the reasons of deletion, the authors deemed as'superficial,' were due to typos and spam which made up 17% and 1% of the deleted tweets, respectively. Overall, the authors' analysis identified some common reasons of tweets' deletion. They also found that deleted and undeleted tweets share many common characteristics including the topics discussed within those tweets. Taking it a step further, Bhattacharya and Ganguly (2016) investigated the personality of users on Twitter by comparing users who deleted their tweets with the ones who did not. They started by randomly selecting 250K Twitter users and collected their corresponding tweets throughout August, 2015, as well as their corresponding deletion statuses. Current literature suggests that deleted tweets are more likely to have aggressive and negative emotions. Torres-Lugo et al. (2022) analyze 'abusive' deletion behavior on Twitter. Using the Compliance Firehose Stream provided by Twitter, they extracted users who had more than 10 deletions over a one month period, which amounted to approximately 11 million users. They analyzed abusive deletion behaviour by extracting deletion volume, as well as frequency and life-span of deleted tweets. They found that 'abusive' deleters tend to make use of this feature in order to manipulate the current 2,400 tweets a day limit set by Twitter. Other abusive deleters tend to continuously like and dislike a tweet in order to coordinate which tweets are to be more noticed by other users before deleting them. Boyd and Marwick (2011), on the other hand, focused on teenagers' deletion antics on social media. They suggested that teenagers tend to use deletion as a'structural' strategy to avoid receiving any judgements from their followers regarding any of their interests that they might express through social media posts. Other researchers analyzed features and characteristics of deleted tweets with the goal of training models to predict the likelihood of deletion based on a number of features. Potash et al. 
(2016) made use of topic modelling and word embeddings to predict whether a tweet is likely to be deleted or not, focusing on spam content. Using features such as tweet length, # of links, ratio of upper-case text, hashtags, etc., they trained multiple classifiers, and tested them on a variety of datasets, resulting in a precision of approximately 81%. Similarly, Bagdouri and Oard (2015) investigated the likelihood of a tweet getting deleted within 24 hours of being posted. By analyzing features of both the deleted tweets and the corresponding users, they determined that tweets' features play a significant role in determining the likelihood of deletion. They specifically found that the device used to post the tweet is an important factor in determining the potentiality of deletion: for instance, tweets posted using smartphones were more likely to get deleted than those posted via computers. Furthermore, Gazizullina and Mazzara (2019) utilized Recurrent Neural Networks (RNNs) to predict a tweet's likelihood of deletion using features about the text itself, as well as the metadata of tweets and users. Using post-processed word embeddings, they proposed a 'Slingshot Net Model' which achieved an F1 score of 0.755. Although a good number of researchers have investigated deleted tweets and their characteristics, as far as we know, little work has been done in analyzing the role that disinformation plays in deleting tweets, specifically in the Arabic context. Therefore, we are inspired to contribute to the previous literature and conduct an investigation using Arabic deleted tweets to analyze the characteristics of deleted tweets, and identify different types of disinformation that could be shared within those tweets.

## 3 Dataset

### Data Collection

We used the Twarc package4 to collect Arabic tweets having the word Corona in Arabic in February and March 2020. As mentioned in Mubarak et al. (2022), this word is widely used by many people in all Arab countries, news media, and official organizations (e.g., the World Health Organization (WHO)) as opposed to the term COVID in Arabic. The collection includes 18.8M tweets, from which we took a random sample of 100K and checked their existence on Twitter in June 2022. We found that 64K tweets were still active, and 36K tweets were unavailable. The reasons for tweets' unavailability might be that _(i)_ users deleted their tweets, _(ii)_ accounts were deleted, _(iii)_ accounts were suspended, or _(iv)_ accounts became private. Note that accounts' deletion and suspension could also happen due to content violation of Twitter's policies.

Footnote 4: [https://github.com/DocNow/twarc](https://github.com/DocNow/twarc)

We selected samples of tweets for annotation in two phases, covering deleted and non-deleted tweets, respectively. In the _first phase_, a random sample of 20K deleted tweets was selected for the manual annotation with fine-grained disinformative categories (see the following section). In the _second phase_, we selected another 20K non-deleted tweets. From this set, we manually annotated a random sample of only 2K tweets with fine-grained disinformative categories. The reason for this two-phase annotation of both deleted and non-deleted tweets was to see whether there are similar proportions of disinformative categories in both sets. This also resulted in a balanced sample of 40K deleted and non-deleted tweets, which we used for the classification. 
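A minimal sketch of this selection process is given below. The availability check is left as a hypothetical helper (`is_tweet_available`), standing in for a lookup against the Twitter API, and the sample sizes follow the numbers reported above.

```python
import random

def partition_by_availability(tweet_ids, is_tweet_available):
    """Split previously collected tweet IDs into still-available and unavailable sets.
    `is_tweet_available` is a hypothetical callable that checks whether a tweet can
    still be fetched (e.g., via a Twitter API lookup)."""
    available, unavailable = [], []
    for tid in tweet_ids:
        (available if is_tweet_available(tid) else unavailable).append(tid)
    return available, unavailable

def two_phase_sample(available, unavailable, seed=0):
    """Phase 1: 20K deleted (now unavailable) tweets for manual annotation.
    Phase 2: 20K non-deleted tweets, of which 2K are manually annotated."""
    rng = random.Random(seed)
    deleted_for_annotation = rng.sample(unavailable, 20_000)
    non_deleted = rng.sample(available, 20_000)
    non_deleted_manual = rng.sample(non_deleted, 2_000)
    return deleted_for_annotation, non_deleted, non_deleted_manual
```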
### Annotation For the annotation, we selected major harmful categories (i.e., hate speech, offensive) discussed (Alam et al., 2022; Sharma et al., 2022). Additionally, we selected rumor and spam categories as such content is posted on social media. Note that the intention behind rumors is not always harmful; however, due to the spread of false rumors on social media, they can turn into harmful content (Jung et al., 2020). According to Twitter policies,5 these types of content are considered as platform manipulation content ("bulk, aggressive, or deceptive activity that misleads others and/or disrupts their experience"). Footnote 5: [https://help.twitter.com/en/rules-and-policies/platform-manipulation](https://help.twitter.com/en/rules-and-policies/platform-manipulation) We use the term "disinformative" to refer to _hate speech (HS), offensive (Off), rumor and spam_. Worthy to be mentioned that not all categories directly fall under disinformation; however, we use this term to distinguish such categories from not-disinformative ones. As for the annotation instructions, we follow the definition of these categories discussed in prior studies hate speech (Zampieri et al., 2020), offensive (Alam et al., 2022; Sharma et al., 2022), rumors (Jung et al., 2020), spam (Mubarak et al., 2020; Rao et al., 2021). We asked annotators to select _not-disinformative_ label if a tweet cannot be labeled as any of the disinformative categories we used in this study. The annotation process consists of several iterations of training by an expert annotator, followed by final annotation. Given that tweets are in Arabic, we selected an Arabic fluent annotator of many Arabic dialects, with an educational qualification of Undergraduate and Master's degree. As mentioned earlier, in the _first phase_ we selected and manually annotated 20K deleted tweets. In the _second phase_, we manually annotated 2K non-deleted tweets and rest of the 18K tweets of this phase are weakly labeled as _not-disinformative_. To ensure the quality of the annotations, during the first phase, two annotators annotated a randomly selected sample of 500 tweets (250 not-disinformative and 250 fine-grained disinformative tweets), then computed annotation agreement (see the next Section). Given that the annotation process is a costly procedure, we did not use more than one annotator for the rest of the tweet annotation. ### Annotation Agreement We assessed the quality of the annotations by computing inter-annotator agreement from the annotation of three annotators. We computed Fleiss \(\kappa\) and average observed agreement (AoE) (Fleiss et al., 2013) which resulted in an agreement of 0.75 and 0.84, respectively. Based on the values, we reached _substantial_ agreement in the \(\kappa\) measurement and _perfect_ agreement in the AoE measurement.6 Footnote 6: Note that, in the Kappa measurement, the values of ranges 0.41-0.60, 0.61-0.80, and 0.81-1 refers to the moderate, substantial, and perfect agreement, respectively (Landis and Koch, 1977). ### Statistics In Table 1, we report the distribution of annotated tweets (deleted vs. non-deleted tweets). As mentioned earlier, for non-deleted tweets, we manually annotated 2K tweets, and the rest of them are weakly labeled as not-disinformative. From the table (phase 1 column), we observe that the distribution of disinformative tweets is relatively low compared to non-disinformative tweets, 19.7%, and 80.3%, respectively. 
From the given sample, 2K non-deleted manual annotated tweets (3rd column), we observe that the distribution between disinformative vs. non-disinformative tweets is 7.3% and 92.7%, respectively. Such a distribution clearly shows us that the distribution of disinformative tweets is more in deleted tweets than non-deleted tweets. This answers the first two questions (RQ1 and RQ2). In the 4th column, we show the total number of tweets manually and weakly labeled from non-deleted tweets. ## 4 Analysis We present an in-depth analysis of the deleted tweets dataset to gain a better understanding of the topics and entities being tweeted about, in relation to COVID-19, and the users who authored those tweets. This includes identifying _(i)_ most common rumors discussed about COVID-19 within this dataset; _(ii)_ the most common hate-speech targets within the dataset; _(iii)_ the current activity status of the users to analyze the potential role that could have been played in the deletion of their tweets; and other metadata such as the distribution of different attributes (e.g., hashtags, user mentions) and, retweet and follower counts. ### Rumors When doing the manual annotation, we kept track the frequent rumors based on the semantic meaning.7 The most common rumors were regarding finding potential cures and/or medication to battle COVID-19, while other rumors are related to conspiracies regarding the long-term effects of COVID-19 on humans, as well as potential preventative measures to minimize the spread of the virus. In table 2, we list the most frequent rumors shared by users included within the dataset, ordered by descending order of frequency. Footnote 7: There are no duplicate tweets; we removed them at the beginning. \begin{table} \begin{tabular}{l r r r} \hline \multicolumn{3}{c}{**Examples**} \\ \multicolumn{1}{c}{**1.** A number of drugs, including Malaria, Influenza,} \\ \multicolumn{1}{c}{and AIDS drugs help coronavirus patients improve.} \\ \multicolumn{1}{c}{**2.** Coronavirus is an American invention.} \\ \multicolumn{1}{c}{**3.** Coronavirus is a biological warfare weapon,} \\ \multicolumn{1}{c}{and many people and novels predicted the virus ahead of time.} \\ \multicolumn{1}{c}{**4.** Coronavirus damages organs of the human body} \\ \multicolumn{1}{c}{such as the brain and genitals as it causes male infertility.} \\ \multicolumn{1}{c}{**5.** Having certain foods such as tea, maamoul and gum} \\ \multicolumn{1}{c}{prevents the infection of Coronavirus.} \\ \multicolumn{1}{c}{**6.** Religious rituals such as wearing niqab, burning incense,} \\ \multicolumn{1}{c}{being Muslim, and ablution prevents the infection.} \\ \end{tabular} \end{table} Table 2: Most frequent rumors. Translated forms of Arabic tweets. ### Hate Speech Targets We wanted to understand if hate speech is targeted toward any entities, countries, or organizations. During the manual annotation, we identified targets to which hate speech has been targeted. We then identified the most frequent entities mentioned throughout tweets classified as hate speech. Countries, political parties, and religion seem to be the most common entities found in tweets that include hate speech words/phrases. In Figure 2, we report most frequent hate speech targets. ### User Status We wanted to understand if there are any association of disinformative categories and current Twitter users' status. The goal was to understand whether the current status of a given account is a major factor of deleting tweets. 
If an account is deleted or suspended, the tweets of that account are deleted as well. Using the information provided by the Twitter API, we determined the current status of all unique users who posted at least one disinformative tweet; in total, there were 3,677 such users. Each of these users was classified under one of four categories: suspended (removed by Twitter), deleted (initiated by the user), active-private (user is active but private, blocking public access to any of their tweets), and active-public (user is active, and their tweets are publicly available). In Figure 3, we present the number of users' accounts for each disinformative category. From the figure, we observe that the distribution of hate speech is higher than that of the other categories. An interesting point to note is that almost 40% (1,419) of all users with at least one disinformative post were suspended by Twitter. Twitter was particularly efficient at identifying and disabling spam users: it suspended 423 accounts of users who shared at least one spam tweet, which amounts to more than 62% of the accounts that posted any spam content. As for hate speech posters, Twitter identified and suspended over 34% (696) of them. Of the remaining accounts, approximately 24% (893) were deleted by the users themselves, 6% (216) are currently active but set to private, and the remaining 33% (1,224) are still active and public. This analysis answers RQ3, as it shows that Twitter is able to identify some users who post disinformative content and ultimately suspend the whole account. As a result, user status is an important factor to take into consideration when analyzing and characterizing the deletion of tweets, as tweets may disappear because their corresponding accounts no longer exist, whether as a result of Twitter suspension, user deactivation, or the user setting the account to private. Figure 2: Word cloud for most frequent hate-speech targeted topics/entities. ### Other Metadata In Table 3, we report the distributions of some attributes in the non-deleted, deleted, and the associated disinformative tweets. There are minor differences between the non-deleted and deleted tweets. However, the subset of the deleted tweets that is labeled as disinformative has different distributions. For example, disinformative tweets have twice as many URLs and more replies than the other sets, and they are roughly one seventh as likely to be retweeted (12% vs. 77% or 82%). From this dataset, we also observe that the percentage of hate speech is higher than that of the other categories, which might be due to the topic of interest, i.e., COVID-19. Similar findings are reported in Mubarak and Hassan (2020), which suggest that tweets about COVID-19 have a higher percentage of hate speech (7%), as it is a polarized topic, e.g., attacking some countries for spreading the virus. This is typically different from random collections of Arabic tweets: Mubarak et al. (2021) reported that the percentage of offensive language in random collections is between 1% and 2%, and the hate speech ratio is even lower. We hypothesize that many of the deleted tweets contain more harmful content than normal (e.g., 10.9% hate speech, 3.8% spam), and Twitter deleted them because they violate its community standards, or the users themselves deleted them because they regretted posting offensive content or rumors. 
This also answers our first two research questions. ## 5 Experiments and Results In Figure 4, we present our proposed pipeline for detecting, at posting time, whether a post will be deleted and for what reasons. While a tweet is being posted, the deletion detection model can predict whether it will be deleted, and the fine-grained disinformation model can then detect whether it belongs to one of the disinformation categories (e.g., in this case, hate speech). Figure 3: Distribution of users' account status corresponding to each disinformative category. This status is based on the time of our analysis period (August, 2022). Our goal is to empower users while posting and/or sharing content and to reduce the spread of misleading and harmful content. In the following sections, we describe the details of the proposed models and the results. ### Experiment Settings We have conducted different classification experiments with a focus on detecting, before posting, whether a tweet will be deleted and what the possible reasons could be. We train three different classifiers as follows: _(i)_ a binary classifier to detect whether a tweet will be deleted, using the labels _deleted_ vs. _non-deleted_, which covers 40K tweets; _(ii)_ a binary classifier to detect whether a tweet is disinformative or not-disinformative; and _(iii)_ a multiclass classifier to detect the fine-grained disinformative categories. For the latter two classifiers, we used the 22K manually labeled tweets. Note that we did not use all 40K tweets for the latter two sets of experiments, given that 18K of them are only weakly labeled as not-disinformative. This could be part of our future study. \begin{table} \begin{tabular}{l c c c} \hline **Attributes** & **Non-Deleted** & **Deleted** & **Disinformative** \\ \hline **Hashtags** & 57\% & 55\% & **63\%** \\ **URLs** & 29\% & 25\% & **51\%** \\ **User Mentions** & 82\% & **87\%** & 24\% \\ **Replies** & 05\% & 05\% & **09\%** \\ **Retweets** & 77\% & **82\%** & 12\% \\ \hline \end{tabular} \end{table} Table 3: Percentages of tweets having different attributes. Figure 4: A pipeline of our proposed system to detect and warn users while posting – what can happen and why. **Translation (HS*):**_Why is Iran considered the most dangerous spot in the world for spreading Corona?_ ### Data Splits and Preprocessing To conduct the experiments, we split our dataset into three subsets with a 70-10-20 setting for the train, dev and test sets, respectively. The class distributions within each subset are shown in Table 4. The second split (ii) in the table is a subset of the first, whereas the third split (iii) covers only the fine-grained disinformative categories of the second split (ii). #### 5.2.1 Preprocessing: Given that social media texts are normally noisy, we applied preprocessing to the dataset before any classification experiments. The preprocessing includes the removal of hash symbols and non-alphanumeric symbols, URL replacement with a "URL" token, and username replacement with a "USER" token. ### Models We experimented, in both binary and multiclass settings, with the classical and deep learning algorithms discussed below. The classical models include _(i)_ Random Forest (RF) (Breiman, 2001) and _(ii)_ Support Vector Machines (SVM) (Platt, 1998), which are among the most widely reported in the literature. Another reason for choosing these algorithms is that they are computationally efficient and useful in many production systems. 
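To make these choices concrete, the sketch below implements the preprocessing of Section 5.2.1 and one classical baseline in Python with scikit-learn. The character n-gram TF-IDF features and the `LinearSVC` classifier are our own illustrative assumptions (the paper does not specify the SVM features), and `train_texts`, `train_labels`, `test_texts`, `test_labels` are placeholders for the splits in Table 4.

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def preprocess(text):
    """Preprocessing described in Section 5.2.1."""
    text = re.sub(r"https?://\S+", "URL", text)    # replace URLs with a "URL" token
    text = re.sub(r"@\w+", "USER", text)           # replace usernames with a "USER" token
    text = text.replace("#", " ")                  # remove hash symbols, keep hashtag text
    text = re.sub(r"[^\w\s]", " ", text)           # remove non-alphanumeric symbols
    return re.sub(r"\s+", " ", text).strip()


def train_svm_baseline(train_texts, train_labels, test_texts, test_labels):
    """Fit an assumed TF-IDF + linear SVM baseline and print per-class and weighted scores."""
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),  # assumed feature choice
        LinearSVC(),
    )
    model.fit([preprocess(t) for t in train_texts], train_labels)
    preds = model.predict([preprocess(t) for t in test_texts])
    print(classification_report(test_labels, preds, digits=3))
    return model
```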
Given that large-scale pretrained Transformer models have achieved state-of-the-art performance on several NLP tasks, we used, as deep learning algorithms, deep contextualized text representations based on such pretrained transformer models. We used AraBERT (Antoun et al., 2020) and multilingual transformers such as XLM-R (Conneau et al., 2019). For the Transformer models, we used the Transformer toolkit (Wolf et al., 2019). We fine-tuned each model using the default settings for ten epochs as described in Devlin et al. (2018). We performed ten reruns for each experiment using different random seeds, and selected the model that performed best on the development set. \begin{table} \begin{tabular}{l r r r r} \hline **Class label** & **Train** & **Dev** & **Test** & **Total** \\ \hline \multicolumn{5}{c}{**(i) Binary: Deleted vs. Non-deleted**} \\ \hline Deleted & 14,012 & 2,020 & 3,968 & 20,000 \\ Not-deleted & 13,988 & 1,980 & 4,032 & 20,000 \\ \hline **Total** & **28,000** & **4,000** & **8,000** & **40,000** \\ \hline \multicolumn{5}{c}{**(ii) Binary: Disinfo vs. Non-disinfo**} \\ \hline Disinformation & 2,879 & 394 & 807 & 4,080 \\ Not-Disinfo & 12,521 & 1,806 & 3,593 & 17,920 \\ \hline **Total** & **15,400** & **2,200** & **4,400** & **22,000** \\ \hline \multicolumn{5}{c}{**(iii) Multiclass: Fine-grained disinfo labels**} \\ \hline HS & 1,563 & 227 & 448 & 2,238 \\ Off & 554 & 83 & 161 & 798 \\ Rumor & 189 & 31 & 61 & 281 \\ Spam & 550 & 67 & 146 & 763 \\ \hline **Total** & **2,856** & **408** & **816** & **4,080** \\ \hline \end{tabular} \end{table} Table 4: Distribution of the dataset for different experimental settings for train, dev and test sets. ### Results We report accuracy (Acc), weighted precision (P), recall (R), and F1 scores, which take into account the class imbalance in our dataset. We use the majority class as a baseline. In Table 5, we report the classification results for all the different settings. From the table, we can see that all models outperform the majority class baseline. Among the classical algorithms, SVM outperforms RF in two out of three settings. Comparing monolingual and multilingual transformer models, we observe that AraBERT performs best in detecting deleted tweets, while XLM-R performs best in classifying whether the text of a tweet is disinformative or not. For classifying the fine-grained disinformative categories, AraBERT outperforms all other models. Our results clearly answer _RQ4_, in that we can detect the potential deletion of tweets and the corresponding reasons with reasonable accuracy. #### 5.4.1 Error Analysis We analyzed all rumor and offensive tweets that were misclassified as hate speech (n=243). We found annotation errors in 18% of the cases, and 5% of the errors are due to sarcasm, negation, or tweets containing rumors and hate speech at the same time. In the remaining cases, the model predicted hate speech because it is the dominant class, as shown in the statistics in Table 1. Looking at the per-class performance for the disinformative categories, we observe that spam and hate speech are the best-performing labels (F1=0.940 and F1=0.779, respectively). The offensive label has the lowest performance (F1=0.513), which is due to its frequent misclassification as hate speech. ## 6 Conclusion and Future Work We presented a large manually annotated dataset that consists of deleted and non-deleted Arabic tweets with fine-grained disinformative categories. 
We proposed classification models that can help detect whether a tweet will be deleted, even before it is posted, and the possible reasons for the deletion. We also reported the common characteristics of the users whose tweets were deleted. Our findings suggest that deleted tweets can be used in developing annotated datasets of misinformative and disinformative categories. Future work will include more fine-grained harmful categories (e.g., racist content) and will investigate further reasons for tweet deletion, which can empower social media users. In addition, we plan to explore a multitask learning setup that can reduce computational cost and may boost the performance of the model. Also, future explorations of this topic will need a larger dataset of deleted tweets that takes into consideration factors such as the account being suspended as opposed to the individual tweet being deleted. \begin{table} \begin{tabular}{l c c c c} \hline **Model** & **Acc** & **P** & **R** & **F1** \\ \hline \multicolumn{5}{c}{**(i) Binary: Deleted vs. Non-deleted**} \\ \hline Majority & 0.496 & 0.246 & 0.496 & 0.329 \\ RF & 0.896 & 0.882 & 0.896 & 0.854 \\ SVM & 0.852 & 0.851 & 0.852 & 0.850 \\ AraBERT & 0.910 & 0.896 & 0.910 & **0.902** \\ XLM-R & 0.886 & 0.784 & 0.886 & 0.832 \\ \hline \multicolumn{5}{c}{**(ii) Binary: Disinfo vs. Non-disinfo**} \\ \hline Majority & 0.817 & 0.667 & 0.817 & 0.734 \\ RF & 0.853 & 0.871 & 0.853 & 0.812 \\ SVM & 0.837 & 0.838 & 0.837 & 0.837 \\ AraBERT & 0.888 & 0.882 & 0.888 & 0.884 \\ XLM-R & 0.897 & 0.894 & 0.897 & **0.895** \\ \hline \multicolumn{5}{c}{**(iii) Multiclass: Fine-grained disinfo labels**} \\ \hline Majority & 0.537 & 0.288 & 0.537 & 0.375 \\ RF & 0.696 & 0.760 & 0.696 & 0.622 \\ SVM & 0.669 & 0.677 & 0.669 & 0.665 \\ AraBERT & 0.755 & 0.757 & 0.755 & **0.752** \\ XLM-R & 0.762 & 0.747 & 0.762 & 0.745 \\ \hline \end{tabular} \end{table} Table 5: Classification results for different settings that can detect tweet deletion and possible fine-grained reasons. XLM-R: XLM-RoBERTa ## 7 Limitations Our dataset consists of tweets extracted from Twitter only. Additionally, further exploration is required to understand whether our models will work on datasets from other social media platforms. It is important to note that although this exploration looks into the likelihood of tweet deletion based on an annotated dataset, the moderation techniques employed by social media networks such as Twitter require further analysis to gain insight into potential reasons for user suspension and/or tweet deletion. ## Ethical Consideration and Broader Impact Our dataset was collected from Twitter, following its terms of service. It can enable an analysis of social media content that may be an area of interest to social media platforms and users. Our models can help reduce the intentional and unintentional posting of content that can mislead and/or harm social media users. Regarding reproducibility, we aim to share the dataset privately, which may limit wide access to it. However, we are looking into the ethical issues of whether even private sharing is allowed.
2301.11163
Convolutional Learning on Simplicial Complexes
We propose a simplicial complex convolutional neural network (SCCNN) to learn data representations on simplicial complexes. It performs convolutions based on the multi-hop simplicial adjacencies via common faces and cofaces independently and captures the inter-simplicial couplings, generalizing state-of-the-art. Upon studying symmetries of the simplicial domain and the data space, it is shown to be permutation and orientation equivariant, thus, incorporating such inductive biases. Based on the Hodge theory, we perform a spectral analysis to understand how SCCNNs regulate data in different frequencies, showing that the convolutions via faces and cofaces operate in two orthogonal data spaces. Lastly, we study the stability of SCCNNs to domain deformations and examine the effects of various factors. Empirical results show the benefits of higher-order convolutions and inter-simplicial couplings in simplex prediction and trajectory prediction.
Maosheng Yang, Elvin Isufi
2023-01-26T15:08:11Z
http://arxiv.org/abs/2301.11163v1
# Convolutional Learning on Simplicial Complexes ###### Abstract We propose a simplicial complex convolutional neural network (SCCNN) to learn data representations on simplicial complexes. It performs convolutions based on the multi-hop simplicial adjacencies via common faces and cofaces independently and captures the inter-simplicial couplings, generalizing state-of-the-art. Upon studying symmetries of the simplicial domain and the data space, it is shown to be permutation and orientation equivariant, thus, incorporating such inductive biases. Based on the Hodge theory, we perform a spectral analysis to understand how SCCNNs regulate data in different frequencies, showing that the convolutions via faces and cofaces operate in two orthogonal data spaces. Lastly, we study the stability of SCCNNs to domain deformations and examine the effects of various factors. Empirical results show the benefits of higher-order convolutions and inter-simplicial couplings in simplex prediction and trajectory prediction. Machine Learning, ICML ## 1 Introduction Graphs are commonly used to represent the support of networked data as nodes and capture their pairwise relations as edges. Graph neural networks (GNNs) have emerged as a learning model that leverages this topology information as an inductive bias (Battaglia et al., 2018; Bronstein et al., 2021). However, this bias can be erroneous when the topology structure of the data involves polyadic or higher-order relations, which often arise in real-world problems. For example, in social networks, people often interact in social groups, not just in pairs (Newman et al., 2002). In gene regulatory networks, a collection of molecular regulators interact with each other (Masoomy et al., 2021). In coauthorship networks, collaborations form between several authors rather than just two (Benson et al., 2018; Bick et al., 2021). Moreover, GNNs are often used to learn representations from data defined on nodes (Kipf and Welling, 2017; Defferrard et al., 2016; Gilmer et al., 2017). However, we also have data defined on higher-order structures in a network. For example, water flows in a water distribution system (Money et al., 2022) and traffic flows in a road network (Jia et al., 2019); such flow-type data are naturally supported on the edges of a network. In coauthorship networks, data supported on a multi-set of \(k\) nodes can be the frequency or citation of the collaboration between \(k\) people (Benson et al., 2018). Capturing the coupling between the data and these higher-order network structures is key to overcoming the limitations of GNNs. As a higher-order network model, simplicial complexes (SCs) support an entity of multiple elements as a simplex, and the relations between simplices can be mediated through their common faces and cofaces, referred to as lower and upper (simplicial) adjacencies. In analogy to graph Laplacians, Hodge Laplacians provide an algebraic representation of an SC to encode such adjacencies. This allows for a principled extension of processing and learning techniques from graphs to SCs. For example, Barbarossa and Sardellitti (2020) proposed a spectral signal processing framework in SCs, followed by simplicial convolutional filters (SCFs) (Yang et al., 2021, 2022). Neural networks on SCs include, among others, Ebli et al. (2020); Roddenberry et al. (2021); Yang et al. (2022); Bodnar et al. (2021); Bunch et al. (2020). 
But they either focus on simplices of the same order, not exploiting the couplings between different orders or apply a message passing scheme based on direct simplicial adjacencies. With a comprehensive framework to capture both higher-order simplicial adjacencies and inter-simplicial couplings, we conduct a convolution-based study for learning on SCs: 1) _Simplicial complex convolutional neural network:_ we propose an SCCNN to propagate information across simplices of the same order via lower and upper adjacencies independently in a multi-hop way while leveraging the inter-simplicial couplings. It generalizes state-of-the-art and admits intra- and extended inter-simplicial localities with a linear computational complexity. 2) _Symmetries:_ Based on group theory, we show that there exhibit a permutation symmetry in SCs and an orientation symmetry in the SC data. SCCNNs can be built equivariant to both symmetries to incorporate such inductive biases. 3) _Spectral analysis:_ Based on tools from Barbarossa and Sardellitti (2020); Yang et al. (2022b), we study how each component of the SCCNN regulates the data from the spectral perspective. This analysis generalizes to state-of-the-art. 4) _Stability analysis:_ We prove that SCCNNs are stable to domain deformations when the convolutional filters are integral Lipschitz and show how the inter-simplicial couplings propagate the deformations across the SC. ## 2 Background We briefly introduce the SC and its algebraic representation, together with the data defined on SCs. **Simplicial Complex.** Given a finite set of vertices \(\mathcal{V}:=\{1,\dots,N_{0}\}\), a \(k\)-simplex \(s^{k}\) is a subset of \(\mathcal{V}\) with cardinality \(k+1\). A _face_ of \(s^{k}\) is a subset with cardinality \(k\). A _coface_ of \(s^{k}\) is a \((k+1)\)-simplex that has \(s^{k}\) as a face. Nodes, edges and (filled) triangles are geometric realizations of 0-, 1- and 2-simplices. An SC \(\mathcal{S}\) of order \(K\) is a collection of \(k\)-simplices \(s^{k}\), \(k=[K]:=0,\dots,K\), with the inclusion property: \(s^{k-1}\in\mathcal{S}\) if \(s^{k-1}\subset s^{k}\) for \(s^{k}\in\mathcal{S}\), e.g., Figure 0(a). A graph is also an SC of order one including nodes and edges. We collect the \(k\)-simplices in \(\mathcal{S}\) in set \(\mathcal{S}^{k}=\{s^{k}_{i}|i=1,\dots,N_{k}\}\) with \(N_{k}=|\mathcal{S}^{k}|\), therefore \(\mathcal{S}=\bigcup_{t=0}^{K}\mathcal{S}^{k}\). To facilitate computations, an orientation of a simplex is chosen as an ordering of its vertices, which is an equivalence class that two orderings are equivalent if they differ by an even permutation; otherwise anti-aligned (Munkres, 2018; Lim, 2020). We fix an orientation for a simplex according to the lexicographical ordering of its vertices, \(s^{k}=[1,\dots,k+1]\), e.g., a triangle \(s^{2}=\{i,j,k\}\) is oriented as \([i,j,k]\) with \(i<j<k\) and a node has a trivial orientation. **Simplicial Adjacency.** For \(s^{k}_{i}\), we define its _lower (upper) neighborhood_\(\mathcal{N}^{k}_{i,\mathrm{d}}\) (\(\mathcal{N}^{k}_{i,\mathrm{u}}\)) as the set of \(k\)-simplices which share a common face (coface) with it. If \(s^{k}_{j}\in\mathcal{N}^{k}_{i,\mathrm{d}}(\mathcal{N}^{k}_{i,\mathrm{u}})\), we say \(s^{k}_{j}\) is _lower (upper) adjacent_ to \(s^{k}_{i}\). In Figure 0(a), we have \(\mathcal{N}^{1}_{1,\mathrm{d}}=\{e_{2},e_{3},e_{4},e_{5}\}\) and \(\mathcal{N}^{1}_{1,\mathrm{u}}=\{e_{2},e_{4}\}\) for \(e_{1}\). 
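For concreteness, the following minimal Python sketch computes such lower and upper neighborhoods. It uses a small toy complex of our own (four nodes, five edges, one filled triangle), not the complex of Figure 1, and represents every simplex by its sorted vertex tuple.

```python
from itertools import combinations

# A toy simplicial complex: simplices are sorted vertex tuples.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]                     # the triangle {1, 2, 3} is left open


def faces(simplex):
    """All faces of a simplex, i.e., subsets with one vertex removed."""
    return set(combinations(simplex, len(simplex) - 1))


def lower_neighbors(simplex, k_simplices):
    """k-simplices sharing a common face with `simplex`."""
    return [s for s in k_simplices if s != simplex and faces(s) & faces(simplex)]


def upper_neighbors(simplex, k_simplices, cofaces):
    """k-simplices that are, together with `simplex`, faces of a common coface."""
    return [s for s in k_simplices if s != simplex and
            any(set(simplex) <= set(c) and set(s) <= set(c) for c in cofaces)]


print(lower_neighbors((0, 1), edges))              # edges sharing node 0 or node 1
print(upper_neighbors((0, 1), edges, triangles))   # edges in a common filled triangle
```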
**Algebraic Representations.** We use incidence matrices \(\mathbf{B}_{k},k=[K]\) to describe the incidence relations in an SC, where \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\) are the node-to-edge and edge-to-triangle incidence matrices, respectively. Note that \(\mathbf{B}_{0}\) is not defined. See Appendix A for those of SC in Figure 0(a). By definition, we have \(\mathbf{B}_{k}\mathbf{B}_{k+1}=\mathbf{0}\)(Lim, 2020). In an SC of order \(K\), the Hodge Laplacians are defined as \[\mathbf{L}_{k}=\mathbf{B}_{k}^{\top}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B }_{k+1}^{\top},k=[K]\] with the _lower Laplacian_\(\mathbf{L}_{k,\mathrm{d}}=\mathbf{B}_{k}^{\top}\mathbf{B}_{k}\) and the _upper Laplacian_\(\mathbf{L}_{k,\mathrm{u}}=\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{\top}\), and the graph Laplacian \(\mathbf{L}_{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{\top}\) and \(\mathbf{L}_{K}=\mathbf{B}_{K}^{\top}\mathbf{B}_{K}\). Matrices \(\mathbf{L}_{k,\mathrm{d}}\) and \(\mathbf{L}_{k,\mathrm{u}}\) encode the lower and upper adjacencies of \(k\)-simplices, respectively. In particular, \(\mathbf{L}_{1,\mathrm{d}}\) and \(\mathbf{L}_{1,\mathrm{u}}\) encode the edge-to-edge adjacencies through nodes and triangles, respectively. **Simplicial Signals.** In an SC, we define _\(k\)-simplicial signals (or data features)_\(\mathbf{x}_{k}=[x_{k,1},\dots,x_{k,N_{k}}]^{\top},k=[K]\) by an _alternating_ map \(f_{k}:\mathcal{S}^{k}\rightarrow\mathcal{X}^{N_{k}}\) which assigns a signal \(x_{k,i}\) to the \(i\)th simplex \(s^{k}_{i}\). The alternating map restricts that if the orientation of the simplex is anti-aligned with the reference orientation, denoted by \(\overline{s}^{k}_{i}=-s^{k}_{i}\), then the sign of the signal value will be changed, \(f_{k}(\overline{s}^{k}_{i})=-f_{k}(s^{k}_{i})\). If the signal value \(x^{k}_{i}\) is negative, then the signal is anti-aligned with the reference (Lim, 2020; Schaub et al., 2021). An \(F\)-feature simplicial signal \(\mathbf{X}_{k}\in\mathbb{R}^{N_{k}\times F}\) can be defined. ## 3 SCCNNs We introduce the SCCNN to learn from data defined on SCs. Then, we discuss its intra- and inter-simplicial localities, followed by its complexity, and related works. 
An \(L\)-layer SCCNN defined in an SC \(\mathcal{S}\) of order \(K\) computes the output \(\mathbf{x}^{l}_{k}\) at layer \(l\) as a nonlinear function of the outputs \(\mathbf{x}^{l-1}_{k-1},\mathbf{x}^{l-1}_{k}\) and \(\mathbf{x}^{l-1}_{k+1}\) at the previous layer \(l-1\) \[\text{SCCNN}^{l}_{k}:\{\mathbf{x}^{l-1}_{k-1},\mathbf{x}^{l-1}_{k},\mathbf{x}^{ l-1}_{k+1}\}\rightarrow\mathbf{x}^{l}_{k},\] for \(k=[K]\) and \(l=1,\dots,L\), and admits a detailed form \[\mathbf{x}^{l}_{k}=\sigma(\mathbf{H}^{l}_{k,\mathrm{d}}\mathbf{x}^{l-1}_{k, \mathrm{d}}+\mathbf{H}^{l}_{k}\mathbf{x}^{l-1}_{k}+\mathbf{H}^{l}_{k,\mathrm{ u}}\mathbf{x}^{l-1}_{k,\mathrm{u}}) \tag{1}\] with \(\mathbf{x}^{l-1}_{k,\mathrm{d}}=\mathbf{B}_{k}^{\top}\mathbf{x}^{l-1}_{k-1}\) and \(\mathbf{x}^{l-1}_{k,\mathrm{u}}=\mathbf{B}_{k+1}\mathbf{x}^{l-1}_{k+1}\), which can be understood as follows: 1) The previous output \(\mathbf{x}^{l-1}_{k}\) is passed through a simplicial convolution filter (SCF) \(\mathbf{H}^{l}_{k}\)(Yang et al., 2022b), given by \[\mathbf{H}^{l}_{k}:=\mathbf{H}^{l}_{k}(\mathbf{L}_{k,\mathrm{d}},\mathbf{L}_{k, \mathrm{u}})=\sum_{t=0}^{T_{\mathrm{d}}}w^{l}_{k,\mathrm{d},t}\mathbf{L}^{t}_{k, \mathrm{d}}+\sum_{t=0}^{T_{\mathrm{u}}}w^{l}_{k,\mathrm{u},t}\mathbf{L}^{t}_{k, \mathrm{u}},\] which is a sum of two matrix polynomials of \(\mathbf{L}_{k,\mathrm{d}}\) and \(\mathbf{L}_{k,\mathrm{u}}\) with trainable filter coefficients \(\{w^{l}_{k,\mathrm{d},t},w^{l}_{k,\mathrm{u},t}\}\) and filter orders \(T_{\mathrm{d}},T_{\mathrm{u}}\). Operator \(\mathbf{H}^{l}_{k}\) performs simplicial convolutions relying on the lower and upper adjacencies independently. 2) \(\mathbf{x}^{l-1}_{k,\mathrm{d}}\) and \(\mathbf{x}^{l-1}_{k,\mathrm{u}}\) are the lower and upper projections from the lower and upper adjacent simplices (i.e., faces and cofaces) to \(k\)-simplices via incidence structures \(\mathbf{B}^{\top}_{k}\) and \(\mathbf{B}_{k+1}\), respectively. \(\mathbf{x}^{l-1}_{0,\mathrm{d}}\) and \(\mathbf{x}^{l-1}_{K,\mathrm{u}}\) are not defined. 3) The lower projection \(\mathbf{x}^{l-1}_{k,\mathrm{d}}\) is passed through another SCF, but it reduces to \(\mathbf{H}^{l}_{k,\mathrm{d}}:=\sum_{t=0}^{T_{\mathrm{d}}}w^{l}_{k,\mathrm{d},t} \mathbf{L}^{t}_{k,\mathrm{d}}\) since \(\mathbf{L}_{k,\mathrm{u}}\mathbf{B}^{\top}_{k}=\mathbf{0}\). That is, the lower projection cannot propagate via the upper adjacency. Likewise, the upper projection \(\mathbf{x}^{l-1}_{k,\mathrm{u}}\) is passed through an upper SCF \(\mathbf{H}^{l}_{k,\mathrm{u}}:=\sum_{t=0}^{T_{\mathrm{u}}}w^{l}_{k,\mathrm{u},t} \mathbf{L}^{t}_{k,\mathrm{u}}\), only accounting for the upper adjacency. 4) The sum of the three SCF outputs is passed by an elementwise nonlinearity \(\sigma(\cdot)\). A multi-feature variant of SCCNNs can also be defined (see Appendix B.1). **Localities.** Consider the output of an SCF on a \(k\)-simplicial signal, i.e., \(\mathbf{H}_{k}\mathbf{x}_{k}\) (with layer index \(l\) omitted). 
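To fix ideas, here is a minimal NumPy sketch of the SCF and of one edge-level (\(k=1\)) layer of (1). The toy complex (the same one as in the earlier sketch), the orientation convention (vertices in increasing order) and the hand-picked filter coefficients are our own illustrative assumptions; in practice the coefficients are trainable parameters.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]


def incidence_matrices(num_nodes, edges, triangles):
    """Node-to-edge (B1) and edge-to-triangle (B2) incidence matrices."""
    B1 = np.zeros((num_nodes, len(edges)))
    for e, (i, j) in enumerate(edges):           # edge oriented from i to j, with i < j
        B1[i, e], B1[j, e] = -1.0, 1.0
    B2 = np.zeros((len(edges), len(triangles)))
    for t, (i, j, k) in enumerate(triangles):    # triangle oriented as [i, j, k]
        B2[edges.index((i, j)), t] = 1.0
        B2[edges.index((j, k)), t] = 1.0
        B2[edges.index((i, k)), t] = -1.0
    return B1, B2


def scf(L_d, L_u, w_d, w_u, x):
    """Simplicial convolutional filter: sum_t w_d[t] L_d^t x + sum_t w_u[t] L_u^t x."""
    out = np.zeros_like(x)
    for t, w in enumerate(w_d):
        out += w * np.linalg.matrix_power(L_d, t) @ x
    for t, w in enumerate(w_u):
        out += w * np.linalg.matrix_power(L_u, t) @ x
    return out


def sccnn_edge_layer(B1, B2, x0, x1, x2, weights, sigma=np.tanh):
    """One SCCNN layer (1) for k = 1."""
    L1d, L1u = B1.T @ B1, B2 @ B2.T
    x1_down, x1_up = B1.T @ x0, B2 @ x2                    # projections from faces / cofaces
    y = (scf(L1d, L1u, weights["down"], [], x1_down)       # H_{1,d}: lower SCF only
         + scf(L1d, L1u, weights["d"], weights["u"], x1)   # H_1: lower and upper SCFs
         + scf(L1d, L1u, [], weights["up"], x1_up))        # H_{1,u}: upper SCF only
    return sigma(y)


B1, B2 = incidence_matrices(4, edges, triangles)
rng = np.random.default_rng(0)
x0, x1, x2 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(1)
weights = {"down": [0.5, 0.1], "d": [0.5, 0.1, 0.02], "u": [0.4, 0.1, 0.01], "up": [0.3, 0.2]}
print(sccnn_edge_layer(B1, B2, x0, x1, x2, weights))
```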
We have _simplicial shiftings_\(\mathbf{L}_{k,\mathrm{d}}\mathbf{x}_{k}\) and \(\mathbf{L}_{k,\mathrm{u}}\mathbf{x}_{k}\) on simplex \(s_{i}^{k}\) as \[[\mathbf{L}_{k,\mathrm{d}}\mathbf{x}_{k}]_{i} =\sum_{j\in\mathcal{N}_{i,\mathrm{d}}^{k}\cup\{i\}}[\mathbf{L}_{k,\mathrm{d}}]_{ij}[\mathbf{x}_{k}]_{j}, \tag{2}\] \[[\mathbf{L}_{k,\mathrm{u}}\mathbf{x}_{k}]_{i} =\sum_{j\in\mathcal{N}_{i,\mathrm{u}}^{k}\cup\{i\}}[\mathbf{L}_{k,\mathrm{u}}]_{ij}[\mathbf{x}_{k}]_{j},\] where \(s_{i}^{k}\) aggregates signals from its lower and upper neighbors in \(\mathcal{N}_{i,\mathrm{d}}^{k}\) and \(\mathcal{N}_{i,\mathrm{u}}^{k}\) based on the corresponding adjacencies. We can compute the \(t\)-step shifting recursively as \(\mathbf{L}_{k,\mathrm{d}}^{t}\mathbf{x}_{k}=\mathbf{L}_{\mathrm{d}}(\mathbf{L }_{k,\mathrm{d}}^{t-1}\mathbf{x}_{k})\), a one-step shifting of the \((t-1)\)-shift result; likewise for \(\mathbf{L}_{k,\mathrm{u}}^{t}\mathbf{x}_{k}\). An SCF linearly combines such multi-step simplicial shiftings based on lower and upper adjacencies. Thus, the output \(\mathbf{H}_{k}\mathbf{x}_{k}\) is localized in \(T_{\mathrm{d}}\)-hop lower and \(T_{\mathrm{u}}\)-hop upper \(k\)-simplicial neighborhoods (Yang et al., 2022b). SCCNNs preserve such _intra-simplicial locality_ as the elementwise nonlinearity does not alter the information locality, shown in Figures 0(b) and 0(c). An SCCNN takes the data on \(k\)- and \((k\pm 1)\)-simplices at layer \(l-1\) to compute \(\mathbf{x}_{k}^{l}\), causing interactions between \(k\)-simplices and their (co)faces when all SCFs are identity. In turn, \(\mathbf{x}_{k-1}^{l-1}\) contains information on \((k-2)\)-simplices from layer \(l-2\). Likewise for \(\mathbf{x}_{k+1}^{l-1}\), thus, \(\mathbf{x}_{k}^{l}\) also contains information up to \((k\pm 2)\)-simplices if \(L\geq 2\), because \(\mathbf{B}_{k}\sigma(\mathbf{B}_{k+1})\neq\mathbf{0}\) (see Appendix B.3). Accordingly, this _inter-simplicial locality_ extends to the whole SC if \(L\geq K\), unlike linear filters in an SC where the locality happens up to the adjacent simplices (Isufi and Yang, 2022; Schaub et al., 2021). This locality is further coupled with the intra-locality through three SCFs such that a node not only interacts with its cofaces (direct edges) and direct triangles including it, but also edges and triangles further hops away which contribute to the neighboring nodes, as shown in Figure 0(d). **Complexity.** For an SCCNN layer, the parameter complexity is of order \(\mathcal{O}(T_{\mathrm{d}}+T_{\mathrm{u}})\). Denote the maximum of the number of neighbors for \(k\)-simplices by \(M_{k}:=\max\{|\mathcal{N}_{i,\mathrm{d}}^{k}|,|\mathcal{N}_{i,\mathrm{u}}^{k} |\}_{i=1}^{N_{k}}\). The computational complexity is of order \(\mathcal{O}(k(N_{k}+N_{k+1})+N_{k}M_{k}(T_{\mathrm{d}}+T_{\mathrm{u}}))\), discussed in Appendix B.2, which is linear to the simplex dimensions. ### Related Works The related works of this paper concern the following. **Signal Processing on SCs.** Recent works on processing SC signals started on edge flows, which intrinsically follow properties like divergence-free, curl-free or harmonic (Jiang et al., 2011; Schaub and Segarra, 2018; Jia et al., 2019). In Barbarossa and Sardellitti (2020); Schaub et al. (2021), a better understanding of simplicial signals was approached via Hodge theory (Lim, 2020). Yang et al. (2021, 2022b) proposed an SCF, providing a spectral analysis of simplicial signals based on the spectrum of Hodge Laplacians. 
SCF was further extended to a joint filtering of signals on simplices of different orders by Isufi and Yang (2022). These concepts are key to understand the SCCNN spectrally. **NNs on SCs.** Roddenberry and Segarra (2019) first used edge-Laplacian \(\mathbf{L}_{1,\mathrm{d}}\) to build NNs where edge convolution only considers the lower adjacency. Ebli et al. (2020) built an SNN based on a convolution via Hodge Laplacians, jointly relying on the lower and upper adjacencies. Yang et al. (2022a) discussed the limitations of this strategy and proposed separate simplicial convolutions based on the SCF. A one-step simplicial shifting separately by \(\mathbf{L}_{k,\mathrm{d}}\) and \(\mathbf{L}_{k,\mathrm{u}}\) was Figure 1: (a) An SC where arrows indicate the reference orientations of edges and triangles. 2-simplices are (filled) triangles shaded in green and open triangle \(\{1,3,4\}\) is not in the SC. (b) Lower convolution via \(\mathbf{H}_{1}\) and \(\mathbf{H}_{1,\mathrm{d}}\) on edge \(e_{1}\): SCF \(\mathbf{H}_{1}\) aggregates the information from its direct lower neighbors (edges in blue) and two-hop lower neighbors (edges in purple) to \(e_{1}\) (in black) if \(T_{\mathrm{d}}=2\); and lower SCF \(\mathbf{H}_{1,\mathrm{d}}\) aggregates the projected information from nodes to edges likewise (denoted by the arrows in blue and purple from nodes to edges). (c) Upper convolution via \(\mathbf{H}_{1}\) and \(\mathbf{H}_{1,\mathrm{u}}\) on \(e_{1}\): \(\mathbf{H}_{1}\) aggregates the information from direct upper neighbors (edges in red) and two-hop upper neighbors (edges in orange) to \(e_{1}\) (in black); and upper SCF \(\mathbf{H}_{1,\mathrm{u}}\) aggregates the projected information from triangles to edges likewise (denoted by double arrows in red and orange from triangle centers to edges). (d) Node 1 (in black) contains information from its neighbors \(\{2,3,4\}\) (nodes in red), and projected information from edges which contribute to these neighbors (denoted by arrows in red from edges to nodes), and from triangles \(\{t_{1},t_{2},t_{3}\}\) which contribute to those edges (denoted by double arrows in red from triangle centers to edges). This interaction is the coupling between the intra- and the extended inter-simplicial locality. proposed by Roddenberry et al. (2021). An attention scheme was applied to the previous two by Giusti et al. (2022); Goh et al. (2022). Information from simplices of adjacent orders was added by Bunch et al. (2020) and Yang et al. (2022c). Instead, Bodnar et al. (2021b) and Hajij et al. (2021) used a message passing scheme to collect such information, in analogy to the graph case (Gilmer et al., 2017). Chen et al. (2022b) combined graph shifting of node features and simplicial shifting of edge features in link predictions. As listed in Table 1 and further discussed in Appendix B.4, most of these solutions can be subsumed into the SCCNNs. **Graph Neural Networks.** NNs on SCs return to GNNs when the SC is a graph. Most GNNs vary in terms of the graph convolutions, a shift-and-sum operation via graph shift operators such as graph adjacency and Laplacian matrices, e.g., a one-step graph shifting was performed in Kipf and Welling (2017) in contrast to a general graph convolution in (Defferrard et al., 2016; Gama et al., 2019, 2020b), which can be obtained as \(\mathbf{x}_{0}^{l}\) without the upper projection \(\mathbf{x}_{0,\text{u}}\). 
## 4 Simplicial Complex Symmetry Machine learning models rely on _symmetries_ of the object domain, which are transformations that keep invariant certain object properties. Leveraging such symmetries of the data and the underlying domain imposes inductive biases, allowing the model to learn effective data representations (Bronstein et al., 2021). For example, GNNs leverage the permutation symmetry group to learn from graphs (Hamilton, 2020; Ruiz et al., 2021). Here we study the symmetries of the SC domain and the simplicial signal space, and show that SCCNNs preserve such symmetries, ultimately, extending the approach of Bronstein et al. (2021) to the SCs. **Permutation Symmetry.** In an SC \(\mathcal{S}\), the labeling of the \(k\)-simplices in \(\mathcal{S}^{k}=\{s^{k}_{\mathfrak{p}_{k}(1)},\ldots,s^{k}_{\mathfrak{p}_{k}( N_{k})}\}\) is a permutation \(\mathfrak{p}_{k}\) of the indices \(\{1,\ldots,N_{k}\}\). These permutations form a _permutation group_\(\mathfrak{P}_{k}\) with \(N_{k}!\) elements based on the group axioms (see Appendix C.1): they are associative, every permutation has an identity permutation and an inverse, and every two permutations form another permutation. A permutation \(\mathfrak{p}_{k}\in\mathfrak{P}_{k}\) can be represented by an orthogonal permutation matrix \(\mathbf{P}_{k}\in\{0,1\}^{N_{k}\times N_{k}}\) with entry \([\mathbf{P}_{k}]_{ij}=1\) if \(i=\mathfrak{p}_{k}(j)\) and \([\mathbf{P}_{k}]_{ij}=0\) otherwise. Thus, the labeling of simplices in \(\mathcal{S}\) form a set \(\{\mathfrak{P}_{k}:k=[K]\}\) whose elements are permutation groups. This set can be represented by a set of matrices, \(\mathbb{P}=\{\mathbb{P}_{k}:k=[K]\}\) with \(\mathbb{P}_{k}=\{\mathbf{P}_{k,i}\in\{0,1\}^{N_{k}\times N_{k}}:\mathbf{P}_{k,i}\mathbf{1}=\mathbf{1},\mathbf{P}_{k,i}^{\top}\mathbf{1}=\mathbf{1},i=1, \ldots,N_{k}!\}\) representing the permutation group \(\mathfrak{P}_{k}\). We then study how a permutation (labeling) of simplices affects the domain SC and the simplicial signals. **Proposition 4.1** (Permutation Symmetry).: _Consider an SC \(\mathcal{S}\) with \(\mathbf{B}_{k}\) and \(\mathbf{L}_{k}\) for \(k=[K]\). Let \(\{\mathbf{P}_{k}:k=[K]\}\in\mathbb{P}\) represent a sequence of permutations \(\{\mathfrak{p}_{k}\in\mathfrak{P}_{k}:k=[K]\}\). Denote the permuted incidence matrices and Hodge Laplacians by \(\overline{\mathbf{B}}_{k}\) and \(\overline{\mathbf{L}}_{k}\), for \(k=[K]\). Then, we have_ i)__\(\overline{\mathbf{B}}_{k}=\mathbf{P}_{k-1}\mathbf{B}_{k}\mathbf{P}_{k}^{\top}\) _with entries_ \([\overline{\mathbf{B}}_{k}]_{i^{\prime}j^{\prime}}=[\mathbf{B}_{k}]_{ij}\) _for_ \(i^{\prime}=\mathfrak{p}_{k-1}(i),j^{\prime}=\mathfrak{p}_{k}(j)\)_; ii)__\(\overline{\mathbf{L}}_{k}=\mathbf{P}_{k}\mathbf{L}_{k}\mathbf{P}_{k}^{\top}\) _with entries_ \([\overline{\mathbf{L}}_{k}]_{i^{\prime}j^{\prime}}=[\mathbf{L}_{k}]_{ij}\) _for_ \(i^{\prime}=\mathfrak{p}_{k}(i),j^{\prime}=\mathfrak{p}_{k}(j)\)_; and iii) the spectral property of the SC remains equivariant._ See proof in Appendix C.2. This states that an SC is unaffected by the labeling of simplices, as well as its simplicial adjacencies, illustrated in Figure 1(a). Its algebraic representations remain equivariant to permutations, i.e., they are a rearrangement of the rows and columns of the original ones. The spectral property of algebraic representations remain equivariant as well. 
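As a quick numerical check of Proposition 4.1, one can relabel the toy complex of the layer sketch above (reusing `B1` and `B2` built there) and verify that the permuted Hodge Laplacian is a row/column rearrangement of the original with an unchanged spectrum; the random relabelings below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
P0 = np.eye(B1.shape[0])[rng.permutation(B1.shape[0])]   # node relabeling
P1 = np.eye(B1.shape[1])[rng.permutation(B1.shape[1])]   # edge relabeling
P2 = np.eye(B2.shape[1])[rng.permutation(B2.shape[1])]   # triangle relabeling

B1_perm, B2_perm = P0 @ B1 @ P1.T, P1 @ B2 @ P2.T        # permuted incidence matrices (i)
L1 = B1.T @ B1 + B2 @ B2.T
L1_perm = B1_perm.T @ B1_perm + B2_perm @ B2_perm.T

assert np.allclose(L1_perm, P1 @ L1 @ P1.T)              # Proposition 4.1 (ii)
assert np.allclose(np.linalg.eigvalsh(L1_perm),
                   np.linalg.eigvalsh(L1))               # the spectrum is unchanged (iii)
```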
Furthermore, the permutation of \(k\)-simplices does not affect \(\mathbf{B}_{j}\) for \(j\neq k,k+1\), nor \(\mathbf{L}_{j}\) for \(j\neq k\). Lastly, a \(k\)-simplicial signal \(\mathbf{x}^{k}\) changes into \(\overline{\mathbf{x}}_{k}=\mathbf{P}_{k}\mathbf{x}_{k}\) according to the permutation of \(k\)-simplices. **Orientation Symmetry.** In an oriented SC, the equivalence of an orientation defines that a simplex \(s^{k}=[0,\ldots,k]\) and its reoriented version \(\overline{s}^{k}=[\pi(0),\ldots,\pi(k)]\) have the same orientation if \(\pi\) is an even permutation of \(\{0,\ldots,k\}\); and they are anti-aligned \(\overline{s}^{k}=-s^{k}\), if \(\pi\) is an odd permutation. These two orientations of a \(k\)-simplex form an _orientation group_\(\mathfrak{D}_{k}=\{\mathfrak{e},\mathfrak{o}_{-}\}\) with two elements. They can be obtained by a group homomorphism which maps all the even permutations of \(\{0,\ldots,k\}\) to the identity orientation \(\mathfrak{e}\) and all the odd permutations of \([k]\) to the reverse orientation \(\mathfrak{o}_{-}\). Note that \(\mathfrak{D}_{0}=\{\mathfrak{e}\}\) since a node has one trivial orientation and we assume \(k\neq 0\) here. Thus, in an oriented SC \(\mathcal{S}\) we have a set whose elements \begin{table} \begin{tabular}{l c} \hline \hline Methods & Parameters (n.d. denotes “not defined”) \\ \hline Ebli et al. (2020) & \(w^{l}_{k,\text{d},t}=w^{l}_{k,\text{u},t},\mathbf{H}^{l}_{k,\text{d}}, \mathbf{H}^{l}_{k,\text{u}}\) n.d. \\ Roddenberry et al. (2021) & \(T_{\text{d}}=T_{\text{u}}=1,\mathbf{H}^{l}_{k,\text{d}},\mathbf{H}^{l}_{k,\text{u}}\) n.d. \\ Yang et al. (2022a) & \(\mathbf{H}^{l}_{k,\text{d}},\mathbf{H}^{l}_{k,\text{u}}\) n.d. \\ Bunch et al. (2020) & \(T_{\text{d}}=T_{\text{u}}=1,\mathbf{H}^{l}_{k,\text{d}}=\mathbf{H}^{l}_{k, \text{u}}=\mathbf{I}\) \\ Bodnar et al. (2021b) & \(T_{\text{d}}=T_{\text{u}}=1,\mathbf{H}^{l}_{k,\text{d}}=\mathbf{H}^{l}_{k, \text{u}}=\mathbf{I}\) \\ \hline \hline \end{tabular} \end{table} Table 1: SCCNNs generalize several related works. Figure 2: (a) Simplex relabling (relabled nodes, edges and triangles indicated by orange, blue and red) does not alter the SC in Figure 0(a). (b) Reorienting simplices (edges in blue, triangles in red) does not alter the chain (simplicial signal) on them. are orientation groups \(\{\mathfrak{O}_{k,i}:k=[K],i=1,\ldots,N_{k}\}\) with group \(\mathfrak{O}_{k,i}\) for the \(i\)th \(k\)-simplex \(s_{i}^{k}\). The subset \(\{\mathfrak{O}_{k,i}:i=1,\ldots,N_{k}\}\) admits a diagonal matrix representation \(\mathbf{D}_{k}\in\{-1,1\}^{N_{k}\times N_{k}}\) with \([\mathbf{D}_{k}]_{ii}=1\) if \(\mathfrak{e}\) is applied to \(s_{i}^{k}\), and \([\mathbf{D}_{k}]_{ii}=-1\) if \(\mathfrak{o}_{-}\) is applied. Then, we have a set of orientation matrices for \(\mathcal{S}\) as \(\mathbb{D}=\{\mathbf{D}_{k}:k=[K]\}\). A simplicial signal \(\mathbf{x}_{k}\) by definition remains unchanged w.r.t. the underlying simplices after an orientation change. To see this conveniently, we introduce the \(k\)_-chain_ space \(\mathcal{C}_{k}\) with a chain \(c_{k}=\sum_{i=1}^{N_{k}}x_{k,i}s_{i}^{k}\) that is a linear combination of \(k\)-simplices weighted by the supported signals \(x_{k,i}\). With the basis \(\{s_{i}^{k}:i=1,\ldots,N_{k}\}\), a \(k\)-chain \(c_{k}\) can be represented by the \(k\)-simplicial signal vector \(\mathbf{x}_{k}=[x_{k,1},\ldots,x_{k,N_{k}}]^{\top}\). 
By imposing the alternating property to \(\mathcal{C}_{k}\) that if \(s_{i}^{k}\) is reversed, the weight \(x_{k,i}\) changes its sign, a \(k\)-chain space \(\mathcal{C}_{k}\) is then isomorphic to the \(k\)-simplicial signal space \(\mathcal{X}_{k}\) (Munkres, 2018). Unlike permutation groups, orientation groups do not form a symmetry group in an oriented SC but in the chain space, as stated by the following proposition. **Proposition 4.2** (Orientation Symmetry).: _Consider an oriented SC \(\mathcal{S}\) with \(\mathbf{B}_{k}\) and \(\mathbf{L}_{k}\) for \(k=[K]\). Let \(\{\mathbf{D}_{k}:k=[K]\}\in\mathbb{D}\) represent a sequence of orientation changes \(\{\mathfrak{O}_{i}^{k}:k=[K],i=1,\ldots,N_{k}\}\) of simplices in \(\mathcal{S}\). In the reoriented SC, incidence matrices become \(\overline{\mathbf{B}}_{k}=\mathbf{D}_{k}\mathbf{B}_{k}\mathbf{D}_{k+1}\) and Hodge Laplacians become \(\overline{\mathbf{L}}_{k}=\mathbf{D}_{k}\mathbf{L}_{k}\mathbf{D}_{k}\). Moreover, a \(k\)-simplicial signal \(\mathbf{x}_{k}\) becomes \(\overline{\mathbf{x}}_{k}=\mathbf{D}_{k}\mathbf{x}_{k}\) and its underlying \(k\)-chain remains unchanged._ See the proof in Appendix C.3. This states that the incidence relations and the simplicial adjacencies in an oriented SC are altered when the orientations are reversed, whereas the \(k\)-chain remains invariant to this transformation and the \(k\)-simplicial signal \(\mathbf{x}_{k}\) is equivariant in terms of the basis. **Equivariance of SCCNNs.** Upon seeing that permutations form a symmetry group in an SC and orientations form a symmetry group in the simplicial signal space, we show that SCCNNs in (1) are equivariant to the two symmetries. **Proposition 4.3** (Permutation Equivariance).: \(\textit{SCCNN}_{k}^{l}:\{\mathbf{x}_{k-1}^{l-1},\mathbf{x}_{k}^{l-1},\mathbf{x}_{k+1}^{l-1}\}\rightarrow\mathbf{x}_{k}^{l}\) _in (1) is equivariant to permutations. In the \(\{\mathfrak{P}_{k}:k=[K]\}\)-permuted SC, it follows_ \[\{\mathbf{P}_{k-1}\mathbf{x}_{k-1}^{l-1},\mathbf{P}_{k}\mathbf{x}_{k}^{l-1},\mathbf{P}_{k+1}\mathbf{x}_{k+1}^{l-1}\}\rightarrow\mathbf{P}_{k}\mathbf{x}_{k}^{l},\] _with matrix representation \(\mathbf{P}_{k}\) of \(\mathfrak{P}_{k}\). Thus, permutations on the SC and the input affect the output in the same way._ **Proposition 4.4** (Orientation Equivariance).: \(\textit{SCCNN}_{k}^{l}:\{\mathbf{x}_{k-1}^{l-1},\mathbf{x}_{k}^{l-1},\mathbf{x}_{k+1}^{l-1}\}\rightarrow\mathbf{x}_{k}^{l}\) _in (1) with an odd nonlinearity \(\sigma(\cdot)\) is equivariant to orientations. In a reoriented SC by \(\{\mathfrak{O}_{k,i}:k=[K],i=1,\ldots,N_{k}\}\), it follows_ \[\{\mathbf{D}_{k-1}\mathbf{x}_{k-1}^{l-1},\mathbf{D}_{k}\mathbf{x}_{k}^{l-1},\mathbf{D}_{k+1}\mathbf{x}_{k+1}^{l-1}\}\rightarrow\mathbf{D}_{k}\mathbf{x}_{k}^{l},\] _with matrix representation \(\mathbf{D}_{k}\) of \(\{\mathfrak{O}_{k,i}:i=1,\ldots,N_{k}\}\), i.e., orientations of the SC affect the output in the same way._ Propositions 4.3 and 4.4 show that SCCNNs incorporate the inductive biases imposed by the symmetries of the SC and the signal space. If we relabel the SC, the output of an SCCNN on \(k\)-simplices will be relabeled according to the labeling of \(k\)-simplices and remain unaffected by the labeling of \(j\)-simplices with \(j\neq k\). If the orientation of a simplex is reversed, the output of an SCCNN on this simplex changes its sign. 
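Proposition 4.4 can also be checked numerically on the toy complex of the earlier layer sketch (reusing `edges`, `B1`, `B2`, the inputs and `sccnn_edge_layer` defined there): with an odd nonlinearity such as `tanh`, reversing one edge orientation only flips the sign of that edge's output, whereas ReLU breaks this property.

```python
import numpy as np

D1 = np.eye(len(edges))
D1[2, 2] = -1.0                                   # reverse the orientation of the third edge
B1_flip, B2_flip = B1 @ D1, D1 @ B2               # D0 and D2 are identities here

out = sccnn_edge_layer(B1, B2, x0, x1, x2, weights, sigma=np.tanh)
out_flip = sccnn_edge_layer(B1_flip, B2_flip, x0, D1 @ x1, x2, weights, sigma=np.tanh)
assert np.allclose(out_flip, D1 @ out)            # sign-equivariant with an odd nonlinearity

relu = lambda z: np.maximum(z, 0.0)
out_relu = sccnn_edge_layer(B1_flip, B2_flip, x0, D1 @ x1, x2, weights, sigma=relu)
print(np.allclose(out_relu, D1 @ sccnn_edge_layer(B1, B2, x0, x1, x2, weights, sigma=relu)))
# typically False: ReLU does not commute with the sign flip
```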
The latter however requires an odd nonlinearity, thus, if ReLU or its alternatives are used, the SCCNN will not leverage the orientation symmetry of the data. ## 5 Spectral Analysis We use the spectrum of Hodge Laplacians and the Hodge decomposition (Hodge, 1989) to perform a spectral analysis for the SCCNN, which were also used to analyze SCFs and the NN in Yang et al. (2022). This analysis also reveals the spectral mechanisms of other NNs in Table 1. **Theorem 5.1** (Hodge Decomposition).: _In an SC \(\mathcal{S}\) with incidence matrices \(\mathbf{B}_{k}\) and Hodge Laplacians \(\mathbf{L}_{k}\), we have_ \[\mathcal{X}_{k}=\operatorname{im}(\mathbf{B}_{k}^{\top})\oplus\ker(\mathbf{L}_{ k})\oplus\operatorname{im}(\mathbf{B}_{k+1})\] _for \(k=[K]\), where \(\oplus\) is direct sum operation. Any \(\mathbf{x}_{k}\) can be expressed as a sum of three orthogonal components \(\mathbf{x}_{k}=\mathbf{x}_{k,\mathrm{G}}+\mathbf{x}_{k,\mathrm{H}}+\mathbf{x}_ {k,\mathrm{C}}\) with \(\mathbf{x}_{k,\mathrm{G}}=\mathbf{B}_{k}^{\top}\mathbf{x}_{k-1}\) and \(\mathbf{x}_{k,\mathrm{C}}=\mathbf{B}_{k+1}\mathbf{x}_{k+1}\), for some \(\mathbf{x}_{k-1}\) and \(\mathbf{x}_{k+1}\), and \(\mathbf{L}_{k}\mathbf{x}_{k,\mathrm{H}}=\mathbf{0}\)._ For \(k=1\), \(\operatorname{im}(\mathbf{B}_{1}^{\top})\) is the _gradient_ space collecting edge flows as the gradient of some node signal; \(\operatorname{im}(\mathbf{B}_{2})\) is the _curl_ space where flows circulate within triangles; and \(\ker(\mathbf{L}_{1})\) contains _harmonic_ flows which are divergence-free (zero netflow at nodes) and curl-free (zero circulation in triangles) (Barbarossa & Sardellitti, 2020; Schaub et al., 2021). This decomposition implies that \(\mathbf{x}_{k}\) is a sum of \(\mathbf{x}_{k,\mathrm{G}}\) via lower incident relations from some \(\mathbf{x}_{k-1}\), \(\mathbf{x}_{k,\mathrm{C}}\) via upper incident relations from some \(\mathbf{x}_{k+1}\), and \(\mathbf{x}_{k,\mathrm{H}}\) which cannot be diffused to other simplices. This motivates the input \(\mathbf{x}_{k-1}\) and \(\mathbf{x}_{k+1}\) of an SCCNN layer as they also contain information that contributes to the \(k\)-simplicial signal space. **Definition 5.2** (Simplicial Fourier Transform).: The SFT of \(\mathbf{x}_{k}\) is \(\tilde{\mathbf{x}}_{k}=\mathbf{U}_{k}^{\top}\mathbf{x}_{k}\) where SFT basis \(\mathbf{U}_{k}\) is the eigenbasis of \(\mathbf{L}_{k}=\mathbf{U}_{k}\mathbf{\Lambda}_{k}\mathbf{U}_{k}^{\top}\), and the inverse SFT is \(\mathbf{x}_{k}=\mathbf{U}_{k}\tilde{\mathbf{x}}_{k}\)(Barbarossa & Sardellitti, 2020). **Proposition 5.3** (Yang et al. (2022)).: _The SFT basis \(\mathbf{U}_{k}\) can be found as \(\mathbf{U}_{k}=[\mathbf{U}_{k,\mathrm{H}}\ \mathbf{U}_{k,\mathrm{G}}\ \mathbf{U}_{k,\mathrm{C}}]\) where 1) \(\mathbf{U}_{k,\mathrm{H}}\), associated with \(N_{k,\mathrm{H}}\) zero eigenvalues of \(\mathbf{L}_{k}\), spans \(\ker(\mathbf{L}_{k})\), and \(\dim(\ker(\mathbf{L}_{k}))=N_{k,\mathrm{H}}\); 2) \(\mathbf{U}_{k,\mathrm{G}}\), associated with nonzero eigenvalues \(\{\lambda_{k,\mathrm{G},i}\}_{i=1}^{N_{k,\mathrm{G}}}\) of \(\mathbf{L}_{k,\mathrm{d}}\), referred to as gradient frequencies, spans \(\operatorname{im}(\mathbf{B}_{k}^{\top})\), and \(\dim(\operatorname{im}(\mathbf{B}_{k}^{\top}))=N_{k,\mathrm{G}}\ The eigenvalues of \(\mathbf{L}_{k}\) carry two types of simplicial frequencies, which measure the \(k\)-simplicial signal variations in terms of faces and cofaces. 
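Continuing the toy example from the earlier sketches (reusing `B1`, `B2` and the edge signal `x1`), the SFT basis and the two frequency types can be computed directly from the eigendecomposition of \(\mathbf{L}_{1}\); the tolerance and the tagging rule below are our own implementation choices.

```python
import numpy as np

L1d, L1u = B1.T @ B1, B2 @ B2.T
lam, U1 = np.linalg.eigh(L1d + L1u)               # SFT basis and simplicial frequencies


def freq_type(u, lam_i, tol=1e-8):
    """Tag an eigenvector as harmonic, gradient (lower) or curl (upper)."""
    if lam_i < tol:
        return "harmonic"
    return "gradient" if np.linalg.norm(L1u @ u) < tol else "curl"


types = [freq_type(U1[:, i], lam[i]) for i in range(len(lam))]
x1_tilde = U1.T @ x1                              # simplicial Fourier transform of the edge flow
for name, freq, emb in zip(types, lam, x1_tilde):
    print(f"{name:8s}  frequency {freq:5.2f}  embedding {emb:6.2f}")
```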
For \(k=1\), gradient frequencies measure the edge flow "smoothness" in terms of nodal variations, i.e., the total divergence. Curl frequencies measure the smoothness in terms of rotational variations, i.e., the total curl. For \(k=0\), curl frequencies are the graph frequencies in graph signal processing while gradient frequencies do not exist. We refer to Appendix D.3 for more details. We can now analyze the input of an SCCNN layer in spectral domain. First, the SFT of \(\mathbf{x}_{k}\) is given by \[\tilde{\mathbf{x}}_{k}=[\tilde{\mathbf{x}}_{k,\mathrm{H}}^{\top},\tilde{ \mathbf{x}}_{k,\mathrm{G}}^{\top},\tilde{\mathbf{x}}_{k,\mathrm{C}}^{\top}]^{\top} \tag{3}\] with the _harmonic embedding_\(\tilde{\mathbf{x}}_{k,\mathrm{H}}=\mathbf{U}_{k,\mathrm{H}}^{\top}\mathbf{x}_ {k}=\mathbf{U}_{k,\mathrm{H}}^{\top}\mathbf{x}_{k,\mathrm{H}}\) in the zero frequencies, the _gradient embedding_\(\tilde{\mathbf{x}}_{k,\mathrm{G}}=\mathbf{U}_{k,\mathrm{G}}^{\top}\mathbf{x }_{k}=\mathbf{U}_{k,\mathrm{G}}^{\top}\mathbf{x}_{k,\mathrm{G}}\) in the gradient frequencies, and the _curl embedding_\(\tilde{\mathbf{x}}_{k,\mathrm{C}}=\mathbf{U}_{k,\mathrm{C}}^{\top}\mathbf{x }_{k}=\mathbf{U}_{k,\mathrm{C}}^{\top}\mathbf{x}_{k,\mathrm{C}}\) in the curl frequencies. Second, the lower projection \(\mathbf{x}_{k,\mathrm{d}}\in\mathrm{im}(\mathbf{B}_{k}^{\top})\) has only a nonzero gradient embedding \(\tilde{\mathbf{x}}_{k,\mathrm{d}}=\mathbf{U}_{k,\mathrm{G}}^{\top}\mathbf{x }_{k,\mathrm{d}}\). The upper projection \(\mathbf{x}_{k,\mathrm{u}}\in\mathrm{im}(\mathbf{B}_{k+1})\) contains only a nonzero curl embedding \(\tilde{\mathbf{x}}_{k,\mathrm{u}}=\mathbf{U}_{k,\mathrm{C}}^{\top}\mathbf{x }_{k,\mathrm{u}}\). **Corollary 5.4**.: \(\mathbf{L}_{k,\mathrm{d}}\) _and \(\mathbf{L}_{k,\mathrm{u}}\) admit diagonalizations by \(\mathbf{U}_{k}\). Thus, the simplicial shifting in (2) can be expressed as_ \[\begin{split}\mathbf{L}_{k,\mathrm{d}}\mathbf{x}_{k}& =\mathbf{U}_{k,\mathrm{G}}(\mathbf{\lambda}_{k,\mathrm{G}}\odot \tilde{\mathbf{x}}_{k,\mathrm{G}})\in\mathrm{im}(\mathbf{B}_{k}^{\top})\\ \mathbf{L}_{k,\mathrm{u}}\mathbf{x}_{k}&=\mathbf{U}_ {k,\mathrm{C}}(\mathbf{\lambda}_{k,\mathrm{C}}\odot\tilde{\mathbf{x}}_{k, \mathrm{C}})\in\mathrm{im}(\mathbf{B}_{k+1})\end{split} \tag{4}\] _with the Hadamard product \(\odot\) and column vectors \(\boldsymbol{\lambda}_{k,\mathrm{G}}\) and \(\boldsymbol{\lambda}_{k,\mathrm{C}}\) collecting gradient and curl frequencies, respectively._ See Appendix D.4 for the proof. (4) implies that a lower shifting of \(\mathbf{x}_{k}\) results a signal living in the gradient space and an upper one results in the curl space. This limits a linear relation between the output and input in terms of the corresponding frequencies as in Roddenberry et al. (2021). By diagonalizing an SCF \(\mathbf{H}_{k}\) with \(\mathbf{U}_{k}\), we can further express the simplicial convolution as \[\mathbf{H}_{k}\mathbf{x}_{k}=\mathbf{U}_{k}\tilde{\mathbf{H}}_{k}\mathbf{U}_ {k}^{\top}\mathbf{x}_{k}=\mathbf{U}_{k}(\tilde{\mathbf{h}}_{k}\odot\tilde{ \mathbf{x}}_{k}) \tag{5}\] where \(\tilde{\mathbf{H}}_{k}=\mathrm{diag}(\tilde{\mathbf{h}}_{k})\). 
Here, \(\tilde{\mathbf{h}}_{k}=[\tilde{\mathbf{h}}_{k,\mathrm{H}}^{\top},\tilde{ \mathbf{h}}_{k,\mathrm{G}}^{\top},\tilde{\mathbf{h}}_{k,\mathrm{C}}^{\top}]^ {\top}\) is the _filter frequency response_, given by \[\begin{cases}\text{harmonic response}:\tilde{\mathbf{h}}_{k,\mathrm{H}}=(w _{k,\mathrm{d},0}+w_{k,\mathrm{u},0})\mathbf{1},\\ \text{gradient response}:\tilde{\mathbf{h}}_{k,\mathrm{G}}=\sum_{t=0}^{T_{ \mathrm{d}}}w_{k,\mathrm{d},t}\boldsymbol{\lambda}_{k,\mathrm{G}}^{\oplus t} +w_{k,\mathrm{u},0}\mathbf{1},\\ \text{curl response}:\tilde{\mathbf{h}}_{k,\mathrm{C}}=\sum_{t=0}^{T_{\mathrm{u} }}w_{k,\mathrm{u},t}\boldsymbol{\lambda}_{k,\mathrm{C}}^{\oplus t}+w_{k, \mathrm{d},0}\mathbf{1},\end{cases}\] with \((\cdot)^{\oplus t}\) the elementwise \(t\)th power of a vector. Furthermore, we can express \(\tilde{\mathbf{h}}_{k}\odot\tilde{\mathbf{x}}_{k}\) as \[[(\tilde{\mathbf{h}}_{k,\mathrm{H}}\odot\tilde{\mathbf{x}}_{k,\mathrm{H}})^ {\top},(\tilde{\mathbf{h}}_{k,\mathrm{G}}\odot\tilde{\mathbf{x}}_{k,\mathrm{G }})^{\top},(\tilde{\mathbf{h}}_{k,\mathrm{C}}\odot\tilde{\mathbf{x}}_{k, \mathrm{C}})^{\top}]^{\top}. \tag{6}\] Therefore, the simplicial convolution corresponds to a pointwise multiplication of the SFT of a simplicial signal by the filter frequency response in the spectral domain. Specifically, the frequency response \(\tilde{\mathbf{h}}_{k,\mathrm{H}}\) at the zero frequency is determined by the coefficients of the SCF on the identity matrix. The coefficients \(\{w_{k,\mathrm{d},t}\}_{t=1}^{T_{\mathrm{d}}}\) on \(\mathbf{L}_{k,\mathrm{d}}\) and its powers contribute to \(\tilde{\mathbf{h}}_{k,\mathrm{G}}\), acting in the gradient frequencies and gradient space, while the coefficients \(\{w_{k,\mathrm{u},t}\}_{t=1}^{T_{\mathrm{u}}}\) on \(\mathbf{L}_{k,\mathrm{u}}\) and its powers contribute to \(\tilde{\mathbf{h}}_{k,\mathrm{C}}\), acting in the curl frequencies and curl space. This is a direct result of Corollary 5.4. Unlike (6), the SCF in Ebli et al. (2020) has the same gradient and curl responses which prohibits different processing in the gradient and curl spaces. The lower SCF \(\mathbf{H}_{k,\mathrm{d}}\) has \(\tilde{\mathbf{h}}_{k,\mathrm{d}}=\sum_{t=0}^{T_{\mathrm{d}}}w_{k,\mathrm{d },t}^{\prime}\boldsymbol{\lambda}_{k,\mathrm{G}}^{\oplus t}\) as the frequency response that modulates the gradient embedding of \(\mathbf{x}_{k,\mathrm{d}}\) and the upper SCF \(\mathbf{H}_{k,\mathrm{u}}\) has \(\tilde{\mathbf{h}}_{k,\mathrm{u}}=\sum_{t=0}^{T_{\mathrm{u}}}w_{k,\mathrm{u}, t}^{\prime}\boldsymbol{\lambda}_{k,\mathrm{C}}^{\oplus t}\) as the frequency response that modulates the curl embedding of \(\mathbf{x}_{k,\mathrm{u}}\). Now, consider the output after the linear operation in an SCCNN layer \(\mathbf{y}_{k}=\mathbf{H}_{k,\mathrm{d}}\mathbf{x}_{k,\mathrm{d}}+\mathbf{H}_{k} \mathbf{x}_{k}+\mathbf{H}_{k,\mathrm{u}}\mathbf{x}_{k,\mathrm{u}}\). 
Its three spectral embeddings are given by \[\begin{cases}\tilde{\mathbf{y}}_{k,\mathrm{H}}=\tilde{\mathbf{h}}_{k,\mathrm{H}}\odot\tilde{\mathbf{x}}_{k,\mathrm{H}},\\ \tilde{\mathbf{y}}_{k,\mathrm{G}}=\tilde{\mathbf{h}}_{k,\mathrm{d}}\odot\tilde{\mathbf{x}}_{k,\mathrm{d}}+\tilde{\mathbf{h}}_{k,\mathrm{G}}\odot\tilde{\mathbf{x}}_{k,\mathrm{G}},\\ \tilde{\mathbf{y}}_{k,\mathrm{C}}=\tilde{\mathbf{h}}_{k,\mathrm{C}}\odot\tilde{\mathbf{x}}_{k,\mathrm{C}}+\tilde{\mathbf{h}}_{k,\mathrm{u}}\odot\tilde{\mathbf{x}}_{k,\mathrm{u}}.\end{cases} \tag{7}\] This spectral relation shows how SCCNNs regulate the three inputs coming from simplices of different order and enable a flexible processing of inputs in different signal spaces, owing to the fact that different coefficients are used in the SCFs. The nonlinearity induces information spillage (Gama et al., 2020) such that one type of spectral embedding can be spread over other types of frequencies. That is, \(\sigma(\tilde{\mathbf{y}}_{k,\mathrm{G}})\) could contain information in zero or curl frequencies. For example, a gradient flow projected from a node input could have information spillage in curl frequencies after \(\sigma(\cdot)\). This spilled information further contributes to a triangle signal via the projection \(\mathbf{B}_{2}^{\top}\). Thus, the triangle output of SCCNNs contains information from nodes. This is the spectral perspective of the extended inter-simplicial locality. ## 6 Stability Analysis Characterizing the stability of NNs to domain perturbations is key to understanding their learning abilities from data (Bruna and Mallat, 2013; Bronstein et al., 2021). The analysis by Gama et al. (2020) showed that GNNs could be both stable and selective, in contrast to graph convolution filters. We here perform a stability analysis of SCCNNs to understand the effect of various factors on the output of different simplices, with a focus on the roles of lower and upper simplicial adjacencies and inter-simplicial couplings. Domain perturbations could occur in a weighted SC as a result of misestimated simplicial weights. Denote by \(\mathbf{M}_{k}\) a weight matrix of the \(k\)-simplices. A weighted lower Laplacian is defined as \(\mathbf{L}_{k,\mathrm{d}}=f_{k,\mathrm{d}}(\mathbf{B}_{k},\mathbf{M}_{k-1},\mathbf{M}_{k})\), a function of the incidence matrix \(\mathbf{B}_{k}\) and the weights \(\mathbf{M}_{k-1},\mathbf{M}_{k}\), and likewise for the upper one \(\mathbf{L}_{k,\mathrm{u}}=f_{k,\mathrm{u}}(\mathbf{B}_{k+1},\mathbf{M}_{k},\mathbf{M}_{k+1})\). The projections in SCCNNs are performed by the lower and upper projection matrices in place of \(\mathbf{B}_{k}^{\top}\) and \(\mathbf{B}_{k+1}\), defined as \(\mathbf{R}_{k,\mathrm{d}}=f^{\prime}_{k,\mathrm{d}}(\mathbf{B}_{k},\mathbf{M}_{k-1},\mathbf{M}_{k})\) and \(\mathbf{R}_{k,\mathrm{u}}=f^{\prime}_{k,\mathrm{u}}(\mathbf{B}_{k+1},\mathbf{M}_{k},\mathbf{M}_{k+1})\), functions of the incidence matrices and weights. See Appendix E.1 for some explicit forms by Grady and Polimeni (2010); Schaub et al. (2020). The misestimation of these weights can be viewed as a relative perturbation on the Hodge Laplacians and projection matrices. **Definition 6.1** (Relative Perturbation).: Consider a weighted SC \(\mathcal{S}\) with projection matrices \(\mathbf{R}_{k,\mathrm{d}},\mathbf{R}_{k,\mathrm{u}}\) and Hodge Laplacians \(\mathbf{L}_{k,\mathrm{d}}\), \(\mathbf{L}_{k,\mathrm{u}}\), \(k=[K]\). 
A relatively perturbed SC \(\widehat{\mathcal{S}}\) has \[\widehat{\mathbf{R}}_{k,\mathrm{d}}=\mathbf{R}_{k,\mathrm{d}}+\mathbf{J}_{k,\mathrm{d}}\mathbf{R}_{k,\mathrm{d}},\ \widehat{\mathbf{R}}_{k,\mathrm{u}}=\mathbf{R}_{k,\mathrm{u}}+\mathbf{J}_{k,\mathrm{u}}\mathbf{R}_{k,\mathrm{u}},\] \[\widehat{\mathbf{L}}_{k,\mathrm{d}}=\mathbf{L}_{k,\mathrm{d}}+\mathbf{E}_{k,\mathrm{d}}\mathbf{L}_{k,\mathrm{d}}+\mathbf{L}_{k,\mathrm{d}}\mathbf{E}_{k,\mathrm{d}},\] \[\widehat{\mathbf{L}}_{k,\mathrm{u}}=\mathbf{L}_{k,\mathrm{u}}+\mathbf{E}_{k,\mathrm{u}}\mathbf{L}_{k,\mathrm{u}}+\mathbf{L}_{k,\mathrm{u}}\mathbf{E}_{k,\mathrm{u}},\] where the small perturbation matrices satisfy \(\|\mathbf{E}_{k,\mathrm{d}}\|\leq\epsilon_{k,\mathrm{d}}\), \(\|\mathbf{J}_{k,\mathrm{d}}\|\leq\varepsilon_{k,\mathrm{d}}\), \(\|\mathbf{E}_{k,\mathrm{u}}\|\leq\epsilon_{k,\mathrm{u}}\), and \(\|\mathbf{J}_{k,\mathrm{u}}\|\leq\varepsilon_{k,\mathrm{u}}\), with \(\|\cdot\|\) the spectral radius. This model generalizes the graph perturbation model in Gama et al. (2019, 2020); Parada-Mayorga et al. (2022) and implies that the same degree of perturbation affects stronger and weaker simplicial adjacencies differently. We further describe an SCF by its integral Lipschitz property.

**Definition 6.2** (Integral Lipschitz SCF).: An SCF \(\mathbf{H}_{k}\) is integral Lipschitz with constants \(C_{k,\mathrm{d}}\) and \(C_{k,\mathrm{u}}\) if \[|\lambda\tilde{h}^{\prime}_{k,\mathrm{G}}(\lambda)|\leq C_{k,\mathrm{d}}\ \text{and}\ |\lambda\tilde{h}^{\prime}_{k,\mathrm{C}}(\lambda)|\leq C_{k,\mathrm{u}}, \tag{8}\] with \(\tilde{h}^{\prime}_{k,\mathrm{G}}(\lambda)\) and \(\tilde{h}^{\prime}_{k,\mathrm{C}}(\lambda)\) the derivatives of the gradient and curl frequency response functions [cf. (5)], respectively.

Integral Lipschitz SCFs can have a large variability at low simplicial frequencies \(\lambda\to 0\), thus good selectivity at the price of low stability, while at large frequencies they tend to be flat, with better stability at the cost of selectivity. This tradeoff holds independently for the gradient and curl frequencies. See Appendix E.2 for more details. Owing to the polynomial nature of the frequency responses, all SCFs of an SCCNN are integral Lipschitz. We also denote the constant for the lower SCFs \(\mathbf{H}_{k,\mathrm{d}}\) by \(C_{k,\mathrm{d}}\) and for the upper SCFs \(\mathbf{H}_{k,\mathrm{u}}\) by \(C_{k,\mathrm{u}}\).

**Assumption 6.3**.: The SCFs \(\mathbf{H}_{k}\) of an SCCNN have a normalized bounded frequency response \(|\tilde{h}_{k}(\lambda)|\leq 1\), likewise for \(\mathbf{H}_{k,\mathrm{d}}\) and \(\mathbf{H}_{k,\mathrm{u}}\), where we assume a bound of one for simplicity.

**Assumption 6.4**.: The lower and upper projections are finite with bounded norms \(\|\mathbf{R}_{k,\mathrm{d}}\|\leq r_{k,\mathrm{d}}\) and \(\|\mathbf{R}_{k,\mathrm{u}}\|\leq r_{k,\mathrm{u}}\).

**Assumption 6.5**.: The initial input \(\mathbf{x}_{k}^{0}\) is finite with limited energy \(\|\mathbf{x}_{k}^{0}\|\leq\beta_{k}\), \(k=[K]\), collected in \(\boldsymbol{\beta}=[\beta_{0},\ldots,\beta_{K}]^{\top}\).

**Assumption 6.6**.: The nonlinearity \(\sigma(\cdot)\) is \(C_{\sigma}\)-Lipschitz, i.e., \(|\sigma(b)-\sigma(a)|\leq C_{\sigma}|b-a|\).

**Theorem 6.7** (Stability).: _Let \(\mathbf{x}_{k}^{L}\) be the output of an \(L\)-layer SCCNN defined on a weighted SC. Let \(\tilde{\mathbf{x}}_{k}^{L}\) be the output of the same SCCNN but on a relatively perturbed SC according to Definition 6.1.
Under Assumptions 6.3 to 6.6, the Euclidean distance between the two outputs is finite and upper-bounded by \(\|\tilde{\mathbf{x}}_{k}^{L}-\mathbf{x}_{k}^{L}\|\leq[\mathbf{d}]_{k}\) with_ \[\mathbf{d}=C_{\sigma}^{L}\sum_{l=1}^{L}\widehat{\mathbf{Z}}^{l-1}\mathbf{T}\mathbf{Z}^{L-l}\boldsymbol{\beta}. \tag{9}\] _Here, matrices \(\mathbf{T}\) and \(\mathbf{Z}\) are tridiagonal, e.g., for \(K=2\),_ \[\mathbf{T}=\begin{bmatrix}t_{0}&t_{0,\mathrm{u}}&\\ t_{1,\mathrm{d}}&t_{1}&t_{1,\mathrm{u}}\\ &t_{2,\mathrm{d}}&t_{2}\end{bmatrix}\ \text{and}\ \mathbf{Z}=\begin{bmatrix}1&r_{0,\mathrm{u}}&\\ r_{1,\mathrm{d}}&1&r_{1,\mathrm{u}}\\ &r_{2,\mathrm{d}}&1\end{bmatrix},\] _with constants \(t_{k,\mathrm{d}}=r_{k,\mathrm{d}}\varepsilon_{k,\mathrm{d}}+C_{k,\mathrm{d}}\Delta_{k,\mathrm{d}}\epsilon_{k,\mathrm{d}}r_{k,\mathrm{d}}\), \(t_{k,\mathrm{u}}=r_{k,\mathrm{u}}\varepsilon_{k,\mathrm{u}}+C_{k,\mathrm{u}}\Delta_{k,\mathrm{u}}\epsilon_{k,\mathrm{u}}r_{k,\mathrm{u}}\) and \(t_{k}=C_{k,\mathrm{d}}\Delta_{k,\mathrm{d}}\epsilon_{k,\mathrm{d}}+C_{k,\mathrm{u}}\Delta_{k,\mathrm{u}}\epsilon_{k,\mathrm{u}}\), where \(\Delta_{k,\mathrm{d}}\) and \(\Delta_{k,\mathrm{u}}\) capture the eigenvector misalignment between the respective Hodge Laplacians and their perturbations, scaled by \(\sqrt{N_{k}}\). Matrix \(\widehat{\mathbf{Z}}\) is defined as \(\mathbf{Z}\) but with off-diagonal entries \(\hat{r}_{k,\mathrm{d}}=r_{k,\mathrm{d}}(1+\varepsilon_{k,\mathrm{d}})\) and \(\hat{r}_{k,\mathrm{u}}=r_{k,\mathrm{u}}(1+\varepsilon_{k,\mathrm{u}})\)._

The proof can be found in Appendix E.3. This result shows that SCCNNs are stable to relative domain perturbations, which can be analyzed from two perspectives. First, the stability of the \(k\)-simplicial output depends not only on factors of \(k\)-simplices, but also on simplices of other orders due to the inter-simplicial couplings. When \(L=1\), the node output bound \(d_{0}\) depends on \(\beta_{0}\) via \(t_{0}\), and on \(\beta_{1}\) via \(t_{0,\mathrm{u}}\), where the node perturbation \(\mathbf{E}_{0,\mathrm{u}}\) (described by \(\Delta_{0,\mathrm{u}}\) and \(\epsilon_{0,\mathrm{u}}\)), node SCFs (by \(C_{0,\mathrm{u}}\)) and projections from edges to nodes (by \(r_{0,\mathrm{u}}\) and \(\varepsilon_{0,\mathrm{u}}\)) play a role. Likewise, \(d_{1}\) depends on \(\beta_{0},\beta_{1}\) and \(\beta_{2}\) via factors of perturbations, SCFs and projections in the edge space. When \(L=2\), the bound \(d_{0}\) is also affected by \(t_{1,\mathrm{d}},t_{1}\) and \(t_{1,\mathrm{u}}\), which contain factors in the edge space. As \(L\) increases, factors in the triangle space (and higher-order simplicial spaces) will appear in \(d_{0}\), as illustrated in Figure 3. Thus, while leveraging information from adjacent simplices may be beneficial, it may severely affect the stability when the SC is perturbed. This can be mitigated by using fewer layers and higher-order SCFs, imposed by stronger integral Lipschitz properties, to maintain the expressive power.

Figure 3: Euclidean distances of node, edge and triangle outputs of an SCCNN with one _(Left)_ and two _(Right)_ layers under perturbations on triangle weights. Edge output is influenced after one layer, while node output is influenced after two layers.

Second, the integral Lipschitz property \(C_{k,\mathrm{d}}\) in gradient frequencies plays no role in the stability against upper perturbations, and vice-versa.
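To make the first of these points concrete, the following minimal numpy sketch (with purely hypothetical constants) evaluates the bound (9) for \(K=2\) when the perturbation is confined to the triangle-weight terms: the node entry of \(\mathbf{d}\) is zero after one layer but positive after two, in line with Figure 3.

```python
import numpy as np

# Hypothetical constants: all projection norms set to 1, projection
# perturbations set to 0 (so Zhat = Z), and only the terms involving the
# perturbed triangle weights (t_1 upper part, t_{1,u}, t_{2,d}, t_2) nonzero.
def stability_bound(L, T, Z, Zhat, beta, C_sigma=1.0):
    d = sum(np.linalg.matrix_power(Zhat, l - 1) @ T
            @ np.linalg.matrix_power(Z, L - l) @ beta for l in range(1, L + 1))
    return C_sigma**L * d

r, t = 1.0, 0.1
Z = np.array([[1, r, 0], [r, 1, r], [0, r, 1]], dtype=float)
T = np.array([[0, 0, 0], [0, t, t], [0, t, t]], dtype=float)
beta = np.ones(3)

print(stability_bound(1, T, Z, Z, beta))   # node bound d_0 = 0 for L = 1
print(stability_bound(2, T, Z, Z, beta))   # node bound d_0 > 0 for L = 2
```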
As a consequence of this second observation, if there are only triangle perturbations, the SCFs in the edge space need not be strictly integral Lipschitz in the gradient frequencies, where SCCNNs can thus be more selective while preserving stability. This is a direct benefit of using different parameter spaces in the gradient and curl spaces, unlike in Ebli et al. (2020).

## 7 Experiments

**Simplex Prediction.** We consider the task of simplex prediction: _given all the \((k-1)\)-simplices in a set of \(k+1\) nodes, to predict if this set will be closed to form a \(k\)-simplex,_ which is an extension of link prediction in graphs (Zhang and Chen, 2018). Our approach is to first learn features of lower-order simplices and then use an MLP to identify if a simplex is closed or open. With the coauthorship data in Ammar et al. (2018) we built an SC as in Ebli et al. (2020), where nodes are authors and collaborations between \(k\) authors are modeled as \((k-1)\)-simplices. The simplicial signals are the numbers of citations, e.g., \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are the citations of dyadic and triadic collaborations. Thus, 2-simplex prediction in this SC amounts to predicting triadic collaborations given the pairwise collaborations in the triads. In an SC of order two, for an open triangle, we use an SCCNN to learn features of nodes and edges. Then, an MLP is used to predict whether this triangle should be closed based on its three node or edge features. We also perform a 3-simplex prediction, which amounts to predicting tetradic collaborations. Table 2 reports the AUC results. We include the experiment details and an ablation study in Appendix F.1. We see that SC solutions achieve better results than using only graphs, validating SCs as an inductive model. SCCNN performs best in both tasks, since it exploits both intra- and extended inter-simplicial localities. First, SCCNN leverages the lower and upper adjacencies individually in a multi-hop fashion to perform convolutions, such that it performs better than Bunch. For the same reason, SCNN performs better than SNN and PSNN. Second, SCCNN considers information from adjacent simplices, such that it gives better results than methods without the inter-simplicial locality, such as GNN and SCNN, or CF-SC with a limited locality.

**Trajectory Prediction.** In the trajectory prediction task, a trajectory is represented as an edge flow and the goal is to predict the next node based on this representation (Roddenberry et al., 2021). We consider trajectories in a synthetic SC and ocean drifter trajectories localized around Madagascar (Schaub et al., 2020). The experiment details can be found in Appendix F.3. From the results of different methods in Table 3, we see that SCCNN performs better than Bunch due to the use of higher-order convolutions, and likewise, SCNN and SNN give better predictions than PSNN. Also, differentiating the parameter spaces for lower and upper convolutions improves the performance of SCNN compared to SNN. As zero inputs on nodes and triangles are applied, SCCNN does not perform better than SCNN. Like other NNs, SCCNNs do not deteriorate in the reverse task owing to the orientation equivariance, and they show good generalization ability to unseen data as well.

## 8 Conclusion

We proposed an SCCNN for learning on SCs, which performs a simplicial convolution with an intra-simplicial locality and leverages multi-hop information from adjacent simplices with an extended inter-simplicial locality. We provided a thorough theoretical study of the proposed architecture from different viewpoints.
First, we study the symmetries in an SC and simplicial signal space and show the SCCNN can be built equivariant to permutations and orientations of simplices. We then study its spectral behavior and understand how the learned convolutional filters perform in the different simplicial frequencies, i.e., in different simplicial signal spaces. Finally, we study the stability of the SCCNN, showing that it is stable to domain perturbations and how the inter-simplicial locality affects the performance. We corroborate these results with numerical experiments achieving a comparable performance with state-of-the-art alternatives. \begin{table} \begin{tabular}{l c c} \hline \hline Methods & 2-Simplex & 3-Simplex \\ \hline Harm. Mean (Benson et al., 2018) & 62.8\(\pm\)2.7 & 63.6\(\pm\)1.6 \\ MLP & 68.5\(\pm\)1.6 & 69.0\(\pm\)2.2 \\ GF (Sandryhaila Moura, 2013) & 78.7\(\pm\)1.2 & 83.9\(\pm\)2.3 \\ SCF (Yang et al., 2022b) & 92.6\(\pm\)1.8 & 94.9\(\pm\)1.0 \\ CF-SC (Isufi and Yang, 2022) & 96.9\(\pm\)0.8 & 97.9\(\pm\)0.7 \\ GNN (Gama et al., 2020a) & 93.9\(\pm\)1.0 & 96.6\(\pm\)0.5 \\ SNN (Ebli et al., 2020) & 92.0\(\pm\)1.8 & 95.1\(\pm\)1.2 \\ PSNN (Roddenberry et al., 2021) & 95.6\(\pm\)1.3 & 98.1\(\pm\)0.5 \\ SCNN (Yang et al., 2022a) & 96.5\(\pm\)1.5 & 98.3\(\pm\)0.4 \\ Bunch (Bunch et al., 2020) & 98.0\(\pm\)0.5 & 98.5\(\pm\)0.5 \\ **SCCNN (ours)** & 98.4\(\pm\)0.5 & 99.4\(\pm\)0.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Simplex prediction AUC (\(\%\), area under the curve) results for ten runs. The _first_ and _second_ best results are in _red_ and _blue_. \begin{table} \begin{tabular}{l c c c c} \hline \hline Methods & Standard & Reverse & Gen. & Ocean \\ \hline PSNN & 63.1\(\pm\)3.1 & 58.4\(\pm\)3.9 & 55.3\(\pm\)2.5 & 49.0\(\pm\)8.0 \\ SCNN & 67.7\(\pm\)1.7 & 55.3\(\pm\)5.3 & 61.2\(\pm\)3.2 & 53.0\(\pm\)7.8 \\ SNN & 65.5\(\pm\)2.4 & 53.6\(\pm\)6.1 & 59.5\(\pm\)3.7 & **52.5\(\pm\)6.0** \\ Bunch & 62.3\(\pm\)4.0 & 59.6\(\pm\)6.1 & 53.9\(\pm\)3.1 & 4.6\(\pm\)0.6 \\ **SCCNN (ours)** & **65.2\(\pm\)4.1** & 58.9\(\pm\)4.1 & 56.8\(\pm\)2.4 & 54.5\(\pm\)7.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Prediction accuracy of the synthetic data in standard, reverse and generalization tasks, and ocean drifters (Last Column).
2310.08373
Chrono: A Peer-to-Peer Network with Verifiable Causality
Logical clocks are a fundamental tool to establish causal ordering of events in a distributed system. They have been used as the building block in weakly consistent storage systems, causally ordered broadcast, distributed snapshots, deadlock detection, and distributed system debugging. However, prior logical clock constructs fail to work in a permissionless setting with Byzantine participants. In this work, we introduce Chrono, a novel logical clock system that targets an open and decentralized network. Chrono introduces a new logical clock construct, the Decaying Onion Bloom Clock (DOBC), that scales independently to the size of the network. To tolerate Byzantine behaviors, Chrono leverages non-uniform incrementally verifiable computation (IVC) to efficiently prove and verify the construction of DOBC clocks. We have applied Chrono to build two decentralized applications, a weakly consistent key-value store and an anti-censorship social network, demonstrating the power of scalable, verifiable causality in a decentralized network.
Michael Hu Yiqing, Guangda Sun, Arun Fu, Akasha Zhu, Jialin Li
2023-10-12T14:44:39Z
http://arxiv.org/abs/2310.08373v1
# Chrono: A Peer-to-Peer Network with Verifiable Causality ###### Abstract. Logical clocks are a fundamental tool to establish causal ordering of events in a distributed system. They have been used as the building block in weakly consistent storage systems, causally ordered broadcast, distributed snapshots, deadlock detection, and distributed system debugging. However, prior logical clock constructs fail to work in a permissionless setting with Byzantine participants. In this work, we introduce Chrono, a novel logical clock system that targets an open and decentralized network. Chrono introduces a new logical clock construct, the Decaying Onion Bloom Clock (DOBC), that scales independently to the size of the network. To tolerate Byzantine behaviors, Chrono leverages non-uniform incrementally verifiable computation (IVC) to efficiently prove and verify the construction of DOBC clocks. We have applied Chrono to build two decentralized applications, a weakly consistent key-value store and an anti-censorship social network, demonstrating the power of scalable, verifiable causality in a decentralized network. + Footnote †: Authors’ addresses: Michael Hu Yiqing, National University of Singapore, [email protected]; Guangdong Sun, National University of Singapore, [email protected]; Arun Fu, Advaita Labs, [email protected]; Akasha Zhu, Advaita Labs, [email protected]; Jialin Li, National University of Singapore, [email protected]. ## 1. Introduction The ordering of events is a fundamental concept in distributed systems. In state machine replication systems (Han et al., 2002; Dwork et al., 2003; Dwork et al., 2004; Dwork et al., 2005), the set of replicas needs to agree on the order of operations in the log; shards in a distributed database (Bauer et al., 2002; Dwork et al., 2004; Dwork et al., 2005) are tasked to execute distributed transactions in a consistent partial order; for mutual exclusion of shared resources, participants in a distributed system have to agree on the order of acquiring locks (Bauer et al., 2002; Dwork et al., 2004; Dwork et al., 2005); in a distributed storage system (Bauer et al., 2002; Dwork et al., 2005; Dwork et al., 2005; Dwork et al., 2005; Dwork et al., 2005), servers apply a consistent order of mutations to storage objects. It is well-known that perfectly synchronized clocks do not exist in realistic distributed systems, due to clock drift and relativity. Ordering events using physical clock timestamps is therefore not reliable and can lead to anomalies. Logical clocks (Han et al., 2002; Dwork et al., 2005), on the other hand, offer a solution to order events in a distributed system without relying on physical time. Logical clocks are consistent with _logical causality_, _i.e._, if event \(a\) can causally influence event \(b\), then the logical clock of \(a\) is prior to that of \(b\). Unlike physical time ordering, causality in a distributed system is only a partial order, as there exists events which do not causally influence each other. Many forms of logical clocks have been proposed in the literature (Han et al., 2002; Dwork et al., 2005; Dwork et al., 2005; Dwork et al., 2005), though not all of them can be used to deduce causality between events. For instance, even if an event has a smaller Lamport clock (Han et al., 2002) than another, the two events can still be logically concurrent. Existing logical clock constructs, however, fall short in an open, decentralized network (Bauer et al., 2002; Dwork et al., 2005). 
In these networks, any participant can join or leave the system at any time. Such dynamic environment presents deep scalability challenges for vector-based logical clocks (Dwork et al., 2005; Dwork et al., 2005; Dwork et al., 2005). More critically, prior systems assume all participants in the system faithfully follow the protocol to update and propagate their clocks. In a decentralized network, Byzantine (Dwork et al., 2005) behaviors, where a participant can deviate arbitrarily from the specified protocol, are common. Unfortunately, existing logical clock constructs are not Byzantine-fault tolerant. By not following the clock protocol, a single Byzantine participant can easily compromise the clock's causality guarantees, _i.e._, logical clocks may imply erroneous causality between events. Such adversaries can render the entire clock construct pointless. In this work, we address the above shortcomings by proposing a new logical clock system, Chrono. Chrono targets a permissionless network with possible Byzantine participants. Similar to prior logical clocks, Chrono can be used to infer causality in the network. Chrono, however, only concerns causal dependency between object states, _i.e._, an object state is the result of a series of mutations from another object state. In return, Chrono always infers _true causality_, unlike the possible causality implied by prior approaches. To handle dynamic membership, Chrono introduces Decaying Onion Bloom Clock (DOBC), a novel construct based on Bloom clocks (Dob, 2018). DOBC is agnostic to the identity and the number of the participants in the network. It achieves this property by applying Bloom filters to only record the state transition history. To maintain low false positive rate even for arbitrarily long causal histories, DOBC uses layers of Bloom filters, a construct inspired by log-structured merge-tree (Dob, 2018). Recent transitions are stored in the top layer filters; when a layer is filled up, its filters are merged and pushed to the next layer. DOBC therefore offers accurate causality inference for recent histories, while its accuracy gracefully degrades for the distant past. To tolerate Byzantine participants, Chrono builds upon recent advances in verifiable computation (VC). Specifically, Chrono applies non-uniform incrementally verifiable computation (IVC) (Kurz et al., 2017), a proof system that uses recursive Succinct Non-interactive Argument of Knowledge (SNARKs) (Bradner et al., 2017). When mutating an object in Chrono, the initiating node generates a succinct proof that demonstrates the validity of both the state transition and the DOBC clock update. The proof is attached to the object when disseminating the object in the network. A receiver verifies the attached proof before accepting the object. Using IVC, a node can incrementally mutate any verified object, and efficiently generate a succinct proof for the entire causal history of the object. Moreover, both prover time and verifier time are independent of the length of the causal history. As each node may apply arbitrary state mutation function to an object, Chrono uses a variant of IVC called non-uniform IVC (Kurz et al., 2017) to address the rigidity of the original IVC construct. We have built two decentralized systems atop Chrono to demonstrate the power of verifiable causality. The first system is a weakly consistent data store Kstore. Kstore provides eventual consistency (Kurz et al., 2017) even in the presence of strong adversaries. 
It leverages Chrono to track versioned histories of stored data and to effectively merge conflicting versions. It relies on provable causal history to avoid lost updates or inconsistencies cause by Byzantine behaviors. Chrono is more scalable, available, and provides faster query latency than existing BFT systems. The second system, Kosocial, is an anti-censorship decentralized social network. Beyond the benefits already provided in Kstore, Chrono enables Kosocial to effectively eliminate censorship attacks, a major challenge in other social applications. Using verifiable causality, clients in Kosocial can enforce propagation and visibility of posted content, and generate proof-of-censorship when censorship attacks are launched. ## 2. Background This section covers background information on two main topics: causality of events in a distributed system, and verifiable computation. ### Causality of Events in Distributed Systems The seminal work by Lamport (Lamport, 1971) introduces a _happens-before_ relationship that defines the possible causality between events in a distributed system. Specifically, let \(<\) be a binary relation between pairs of events in an execution of a distributed system. \(e_{1}\prec e_{2}\) if event \(e_{1}\) may influence event \(e_{2}\), or equivalently, \(e_{2}\) is causally dependent on \(e_{1}\). \(<\) is a strict partial order, _i.e._, it is _irreflexive, asymmetric_, and _transitive_. Being a partial order, not all pairs of events are causally dependent. If neither \(e_{1}\prec e_{2}\) nor \(e_{2}\prec e_{1}\), \(e_{1}\) and \(e_{2}\) are defined to be logically concurrent (represented as \(e_{1}\parallel e_{2}\)). Without perfectly synchronized physical clocks, it is impossible to determine which of \(e_{1}\) or \(e_{2}\) happens first if \(e_{1}\parallel e_{2}\). Events in an execution are categorized into three general types: * Local events on a node (\(e^{local}\)). They are any event happen on a node that does not involve messages. * Message send event (\(e^{send}\)). A source node sends an unicast message to a destination node. Broadcasts or multicasts are equivalent to a set of unicast messages. * Message receive event (\(e^{recc}\)). For each \(e^{send}\), there is a corresponding message receive event on the destination node if the message is successfully delivered. The happens-before relation \(\prec\) on events in an execution obeys the following rules: * \(e_{1}^{local}\prec e_{2}^{local}\) if both events happen on the same node and \(e_{1}^{local}\) happens before \(e_{2}^{local}\) in the local sequential event order. * \(e^{send}\prec e^{recv}\) if \(e^{send}\) and \(e^{recv}\) are the corresponding message send and receive pair. ### Logical Clocks Logical clocks can be used to determine the happens-before relation defined in SS2.1. One instance of logical clocks is the Lamport clock [16]. Using Lamport clock, each node \(n_{i}\) in the system maintains a local clock \(c_{i}\), represented as a natural number. The rules to update the clocks are: * Upon a local event \(e_{i}^{local}\), \(n_{i}\) increments its local clock. * When \(n_{i}\) sends a message, \(n_{i}\) increments its local clock, and attaches the local clock value in the message. * When \(n_{i}\) receives a message with a clock value \(c_{m}\), it sets its local clock to \(max(c_{i},c_{m})+1\). The logical time of an event \(e\), represented as \(c_{e}\), is the local clock value after the clock update. 
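A minimal Python sketch of these update rules (the node and message bookkeeping here is illustrative only):

```python
class LamportClock:
    def __init__(self):
        self.c = 0                      # local clock, a natural number

    def local_event(self):
        self.c += 1                     # rule 1: increment on a local event
        return self.c

    def send(self):
        self.c += 1                     # rule 2: increment, then attach the
        return self.c                   # clock value to the outgoing message

    def recv(self, c_m):
        self.c = max(self.c, c_m) + 1   # rule 3: max with the received clock,
        return self.c                   # then increment

a, b = LamportClock(), LamportClock()
c_m = a.send()        # a: c = 1, attached to the message
b.local_event()       # b: c = 1 (an event concurrent with a's send)
b.recv(c_m)           # b: c = max(1, 1) + 1 = 2
```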
Lamport clock guarantees the following property: If \(e_{1}\prec e_{2}\), then \(c_{e_{1}}<c_{e_{2}}\). However, the inverse is not true, _i.e._, \(c_{e_{1}}<c_{e_{2}}\) does not imply that \(e_{1}\prec e_{2}\). To be more precise, if \(c_{e_{1}}<c_{e_{2}}\), either \(e_{1}\prec e_{2}\) or \(e_{1}||e_{2}\), but not \(e_{2}\prec e_{1}\). For use cases that require accurate causality inference, Lamport clock falls short. Vector clock addresses this shortcoming of Lamport clock. As the name suggested, a vector clock, \(v\), consists of a vector of natural numbers. Cardinality of a vector clock equals the size of the system. Each node is assigned a unique index in the vector clock. We use \(v[i]\) to denote the \(i\)th number in a vector clock \(v\). The rules to update the vector clocks are: * Upon a local event \(e_{i}^{local}\), \(n_{i}\) increments \(v_{i}[i]\). * When \(n_{i}\) sends a message, \(n_{i}\) increments \(v_{i}[i]\), and attaches \(v_{i}\) in the message. * When \(n_{i}\) receives a message with a clock \(v_{m}\), it sets \(v_{i}\) to \(v_{i}^{\prime}\), where \(\forall p\in[0..S),v_{i}^{\prime}[p]=max(v_{i}[p],v_{m}[p])\), \(S\) is the size of the system. \(n_{i}\) then increments \(v_{i}[i]\). \(v_{i}<v_{j}\) if and only if \(\forall p\in[0..S),v_{i}[p]\leq v_{j}[p]\) and \(\exists p\in[0..S),v_{i}[p]<v_{j}[p]\). By definition, there exists \(v_{i}\) and \(v_{j}\) such that neither \(v_{i}<v_{j}\) nor \(v_{j}<v_{i}\), _i.e._, \(<\) is a partial order on the set of vector clocks. Vector clock guarantees the following stronger property: \(e_{1}<e_{2}\) if and only if \(v_{e_{1}}<v_{e_{2}}\). ### Verifiable Computation In systems where identities cannot be trusted (our target deployment model), publicly verifiable proofs are required to verify the claims made by the participants. More concretely, if a node claims that the output of applying a certain function \(\mu\) on input \(x\) is \(y\), the naive way to verify such a statement would be to re-execute the operation and compare the outputs. Such an approach might not be viable when the verifier does not have enough computational resources to execute the function. For instance, the current bitcoin blockchain is approximately 450 GB in size. If a user wants to verify the latest block, he can either: 1. Trust the person who provided the latest block to him (this is extremely unwise). 2. Verify the whole chain himself from the genesis block; This takes a lot of compute time, storage space, and network bandwidth. An argument system is a cryptographic construct to achieve verifiable computation without trusting the entity performing the computation. The goals of an argument system are quite simple. For a given statement \(\mu(x)\xrightarrow{?}y\), it produces an accompanying proof \(\pi\). This proof can be verified publicly to assert that the statement is true with all but a negligible probability. More concretely, a prover \(P\) wishes to convince a verifier \(V\) that it knows some witness statement \(w\) such that, for some public statement \(x\) and arithmetic circuit \(C\), \(C(x,w)\to y\). Properties of an argument system.There are two properties an argument system must satisfy: 1. **Completeness:** A valid proof will always be accepted by a valid verifier. 2. **(Knowledge) Soundness:** If a prover attempts to generate a proof without a valid witness, this proof will only be accepted by the verifier with a negligible probability. There are many types of argument system. 
One of the most commonly used argument systems is Succinct Non-interactive Arguments of Knowledge (SNARK). As the name suggested, using a SNARK, the verifier requires no further interaction with the prover, other than receiving the proof, when verifying; the proof itself is also short, while the time to verify is fast (at most logarithmic to the circuit size). If the witness \(w\) cannot be derived by the verifier with sufficient probability, then the SNARK is also considered _zero-knowledge_ (zk-SNARK). Chrono does not require the zero-knowledge property, so we omit the details of zk-SNARKs. Recursive proof systems.SNARKs are useful in many settings, e.g., cloud computing, as it allows a verifier to validate computationally expensive function executions in a fraction of the time to run it. However, in some distributed computing scenarios, simply verifying a single execution is not sufficient. Instead, we wish to verify a particular non-deterministic chain of executions. Naively applying any off-the-shelf general circuit SNARK for every step will result in proofs and verification times that grows linearly. In a highly evolving and volatile system, these metrics are unacceptable. There has been some recent development in verifiable computation that has the potential to address the above challenges. One particularly promising technique is recursive proof system. In a recursive proof system, the prover recursively proves the correct execution of incremental computations. Such technique can be applied to realize incrementally verifiable computation (IVC). In IVC, in each step of the computation, the prover takes the output and proof of the previous step, and produces an output and proof for the next step. A verifier only needs to verify the proof of a single step to ensure correct execution of the entire computation from genesis. Critically, both prover and verifier time are independent of the length of the computation. There exists quite a few constructions for recursive proofs in the wild, ranging from constructs like Halo[(4)] to PCD[(8)]. We envision more and more efficient constructs will be developed eventually. ## 3. Chrono We first define the high-level model and properties of Chrono. The system consists of a set of nodes \((n_{1},n_{2},\dots)\). Nodes can create, destroy, and mutate _objects_. They can also send objects to other nodes in the system. Besides send and recv, we define generic create and mutate functions: * \(create()\to o\): the create function generates an object. * \(mutate(o_{1},o_{2},\dots)\to o^{\prime}\): the mutate function takes a list of objects \(o_{1},o_{2},\dots\) and generates a new object \(o^{\prime}\). Unlike prior work (Kang et al., 2018), Chrono concerns the causality of objects, not causality of events. We similarly define a binary relation \(\prec\) on the set of objects in an execution of a distributed system. \(\prec\) denotes the causal relationship between any two objects, _i.e._, \(o_{1}\prec o_{2}\) if and only if \(o_{2}\) is causally dependent on \(o_{1}\). Object causality in Chrono is defined as follows: * If an object \(o\) is generated from create, \(o\) is not causally dependent on any other object in the system, _i.e._, \(\forall o^{\prime}\in O,o^{\prime}\not\prec o\), where \(O\) is all objects ever generated in the execution. 
* If an object \(o\) is generated from \(mutate(o_{1},o_{2},\dots)\), \(o\) is causally dependent on \(o_{1},o_{2},\dots\), _i.e._, \(o_{1}\prec o,o_{2}\prec o,\dots\)

We note that the causality definition in Chrono is stronger than those in prior work (Kang et al., 2018). Instead of "possible influence", \(\prec\) implies a definite causal relationship between two objects1. More formally, if \(o_{i}\prec o_{j}\), there exists a sequence of mutate invocations such that \(mutate(o_{i},\dots)\to o_{1}\), \(mutate(o_{1},\dots)\to o_{2}\), \(...\), \(mutate(o_{n},\dots)\to o_{j}\). Footnote 1: We assume mutate implies definite causality. That is, if \(mutate(o_{1},o_{2},\dots)\to o\), then \(o\) is causally dependent on \(o_{1},o_{2},\dots\). Chrono provides the following _causality inference_ guarantee:

Theorem 3.1 ().: _For any two objects \(o_{i}\) and \(o_{j}\) which are generated in an execution of a distributed system, Chrono can deduce the causality relationship between the two objects, i.e., Chrono can correctly output \(o_{i}\prec o_{j}\), \(o_{j}\prec o_{i}\), or \(o_{i}\parallel o_{j}\)._

## 4. Design and Implementation

This section details the concrete design of Chrono using a novel logical clock construct coupled with verifiable computation.

### 4.1. System Setting

Suppose each object is defined as a tuple \(o_{i}=(s_{i},C_{i})\), where \(s_{i}\in\mathcal{S}\) is a state in the set of all possible states \(\mathcal{S}\), and \(C_{i}\) is a logical clock construct. This indicates that two objects \(o_{1}=(s_{1},c_{x}),o_{2}=(s_{1},c_{y})\) with identical states are still considered distinct, \(o_{1}\neq o_{2}\), if their clock values differ. Conversely, Chrono treats two objects with the exact same state and clock values as identical, even though this is not necessarily true. This is inevitable and might lead to certain false positives in causality evaluations. There exists a dynamic set of objects \(O\), which initially only includes the genesis object \(o_{0}=(s_{0},C_{0},d)\), where \(s_{0}\) is the genesis state. There also exists a family of mutate functions \(M=\{\mu_{1},\cdots\}\). For a new object \(o_{i}\) to be created, a mutate function must be applied to an existing object: \[\exists o_{i-1}\in O,\exists\mu\in M:\mu(o_{i-1})\to o_{i},o_{i}\in O\] Therefore, \(d\) for an object denotes the depth, i.e., the number of mutate functions applied onto \(o_{0}\) to derive the object.

### 4.2. Logical Clock Constructs

In this section, we will explore a few existing clock constructs and evaluate their feasibility for use in Chrono.

#### 4.2.1. Vector clocks

Vector clocks [28] have long been the standard for determining causality in systems. However, as briefly mentioned in §3, vector clocks do not necessarily draw accurate object causalities. Instead, vector clocks simply provide a temporal order and use that to infer possible causality; this leads to a large number of false positives. We illustrate this with a simple example. Suppose we have two nodes \(A\) and \(B\):

1. \(A\) and \(B\) start with the vector clock \([0,0]\).
2. \(A\) creates a new state, tags it with the clock value \([1,0]\) and sends it to \(B\).
3. \(B\) receives this new state and updates its clock to \([1,1]\).
4. Suppose \(B\) now creates a new state by applying mutate to the genesis state instead of \(A\)'s newly produced state. \(B\)'s new clock value will be \([1,2]\).

The issue is that although \(B\)'s new state does not depend on \(A\)'s state, its clock implies such.
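The following minimal Python sketch replays this example; the comparison function implements the componentwise vector clock order.

```python
def happens_before(u, v):
    """Vector clock order: u < v iff u <= v componentwise and u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

A, B = [0, 0], [0, 0]

A[0] += 1                                   # A creates a state, tagged [1, 0]
tag_A = list(A)

B = [max(a, b) for a, b in zip(B, tag_A)]   # B receives A's state ...
B[1] += 1                                   # ... and updates its clock to [1, 1]

B[1] += 1                                   # B mutates the *genesis* state,
tag_B = list(B)                             # tagging it [1, 2]

print(happens_before(tag_A, tag_B))         # True: the clocks imply a dependency,
                                            # although B's state does not depend on A's
```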
This is because the vector clock only captures the temporal order of states/objects produced by the nodes, and therefore probable causality; Furthermore, we conjecture this probability to be impossible to be derived. Instead, we have to provide a way for \(B\) to generate distinct clocks based on its decision (to either mutate the genesis object or \(A\)'s object). This requires the decoupling of identities and clocks. Instead, each object is now tied to a logical clock that is used to infer its plausible relationship with another object. #### 4.2.2. Counting bloom clocks The Bloom clock (\(BC\)) [26] is a logical clock construct that can be used to probabilistically determine causality between objects. The \(BC\) is based on the counting Bloom filter [3], and can be defined as a vector of \(n\) integers \([c_{1},\cdots,c_{n}]\). In the context of Chrono, when operating on an existing object \(\mu(s_{i},C_{i},d)\rightarrow(s_{i+1},C_{i+1},d+1)\), the \(BC\) protocol uses a family of \(m\) cryptographically secure hash functions \(h_{1},\cdots,h_{m}\) that produces \(m\) indices \(h_{1}(s_{i+1}),\cdots,h_{m}(s_{i+1})\). Each index is then mapped and incremented on \(C_{i}\) to produce \(C_{i+1}\). When comparing two objects, \(o_{x}=(s_{x},C_{x},d_{x})\) and \(o_{y}=(s_{y},C_{y},d_{y})\), there are 3 possible scenarios: 1. \(\forall c_{xi}\in C_{x},c_{yi}\in C_{y},\exists c_{xj}\in C_{x},c_{yj}\in C_{ y}:c_{xi}\geq c_{yi}\wedge c_{xj}>c_{yj}\wedge d_{x}>d_{y}\Rightarrow(s_{x},C_{x})\succ(s_{ y},C_{y})\). 2. \(\forall c_{yi}\in C_{y},c_{xi}\in C_{x}:c_{yi}\geq c_{xi}\wedge d_{y}\geq d_{x} \Rightarrow(s_{y},C_{y})\succeq(s_{x},C_{x})\) 3. \((s_{y},C_{y},d_{y})\) and \((s_{x},C_{x},d_{x})\) are concurrent. The \(BC\) might postulate parenthood when in fact the objects are concurrent. This is due to possible hash collisions into the limited vector of size \(n\). The fundamental benefit of \(BC\)'s is its inherent agnosticism towards identities in the system; It can potentially be utilized by an unbounded number of nodes. This makes it suitable to be utilized in highly decentralized and permissionless settings. However, \(BC\) suffers from two main issues: _Issue 1: limited lifespan._ Eventually, a \(BC\) will increment to a point in which comparisons between two objects will always lead to a false positive. That is to say the \(BC\) can only hold a limited amount of objects before its utility is diminished. Therefore, to maintain its utility, we have to limit the number of objects a \(BC\) can hold. A naive reset might work, however it is crude. Comparisons between resets are impossible. Additionally, some synchrony might be required to derive consensus on the state of the clock before resets. A simple fix would be to have a \(BC\) be represented by \(k\) bloom filters (\(BF\)) of size \(n\). These \(k\) bloom filters is a sliding window on the history of objects, where objects outside these \(k\) bloom filters are forgotten. The intuition is that after \(k\) mutate s, there is little to be gained from comparing a descendant state that is so far removed from its ancestor. Following that philosophy, more priority should be put into recent states (relative to the current state), and less priority to distant states. Especially in the \(BC\) protocol, a similar comparison will likely return a false positive. _Issue 2: multi-parent problem._ In the \(BC\) protocol, all prior object clocks \(C_{i}\) are compressed into a single vector. 
However, this also means that similar objects (by means of indices after hashing) represented in the clock will potentially lead to false positives. The simple fix presented above resolves this to some extent. Since each \(BC\) is now represented by \(k\) bloom filters, we can analyze the state at each depth of mutation. This removes some false positives that previously existed in the original \(BC\). As illustrated with Figure 1, simply utilizing the bloom clock will lead to the false conclusion that \(o_{3}\) is causally dependent on \(o_{2}\); this is due to coincidental hash collisions. However, if a history of \(BF\)s is kept, it can be referenced and compared to eliminate \(o_{2}\) as a false parent of \(o_{3}\).

Figure 1. How adding some history can be used to eliminate some false positives: Without the history, \(o_{2}\) will be incorrectly determined as the ancestor to \(o_{3}\).

### Decaying Onion Bloom Clocks

The naive resolution described in §4.2.2 would require an extremely inefficient \(k*n\) bits. In Chrono, we introduce a novel logical clock construct: the Decaying Onion Bloom Clock (DOBC). DOBC is an improvement over \(BC\)s:

1. DOBC probabilistically determines causality between objects with a depth difference of at most \(k\).
2. DOBC addresses the issues described in §4.2.2, whilst utilizing less space. DOBC achieves this by keeping a finer-grained memory of recent state transitions, as opposed to distant state transitions, for which its view is compressed into a coarser-grained expression. To provide indefinite utility across any number of state transitions, DOBC eventually forgets states that are too distant.
3. A sub-function that allows DOBCs of different depths to be _merged_. The causality utility is maintained with regard to any of its ancestors.

In this section we will describe the base DOBC protocol. We generalize the counting Bloom filter construct to variable-sized Bloom filters (\(VBF_{i}\)), where each of its \(n\) indices is stored with exactly \(i\) bits. The DOBC also consists of \(|L|\) layers, where each layer \(l^{i}\) stores a pre-determined amount \(|l^{i}|\) of \(VBF_{j^{i}}\)s, where \(j^{i}\) is the size of an index for each \(VBF\) at layer \(l^{i}\) and \(j^{i+1}>j^{i}\). For the sake of simplicity, let's assume \(j^{i}\cong i\). The \(VBF\)s in a layer are ordered from \(l^{i}_{1},\cdots,l^{i}_{|l^{i}|}\). Suppose a certain execution path produces an ordered set of objects \(\{o_{0},o_{1},o_{2},\cdots\}\), where \(\exists\mu\in M:\mu(o_{i}=(s_{i},C_{i},d))\rightarrow(o_{i+1}=(s_{i+1},C_{i+1},d+1))\). Initially, the \(VBF_{i}\)s on all layers are set to 0. For illustration purposes, let's use the following settings: \[|L|=3,|l^{1}|=4,|l^{2}|=2,|l^{3}|=1\] This is illustrated with Figure 2. When \(o_{1}\) is generated, a Bloom filter \(BF_{s_{1}}\) is created by hashing \(s_{1}\) with the family of \(m\) distinct hash functions and setting the corresponding indices to 1. It is important to note that \(VBF_{1}\cong BF\) if both contain the same number of indices. \(BF_{s_{1}}\) is inserted into \(l^{1}_{1}\). When \(s_{2}\) is reached, \(BF_{s_{2}}\) is created and placed into \(l^{1}_{1}\), and \(BF_{s_{1}}\) is moved to the next available slot (which in this case is \(l^{1}_{2}\)). The DOBC for \(o_{1}\) and \(o_{2}\) is illustrated by Figure 3.
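A minimal Python sketch of this layer-1 behaviour, with a hypothetical hash family (\(m=3\) SHA-256-derived functions) and filter size (\(n=16\)):

```python
import hashlib

N, M = 16, 3                     # filter size n and number of hash functions m

def bloom_filter(state: str):
    """Build BF_s by hashing the state with m distinct hash functions."""
    bf = [0] * N
    for i in range(M):
        idx = int(hashlib.sha256(f"{i}:{state}".encode()).hexdigest(), 16) % N
        bf[idx] = 1
    return bf

layer1 = []                      # slots l^1_1 ... l^1_4, most recent first
for state in ["s1", "s2", "s3"]:
    layer1.insert(0, bloom_filter(state))   # new BF enters l^1_1, older BFs shift
    layer1 = layer1[:4]                     # layer 1 holds |l^1| = 4 filters

print(len(layer1), layer1[0])
```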
Eventually, as new objects (and states) are created, in the DOBC for a specific object (\(o_{4}\)), \(BF_{s_{1}}\) will be at \(l^{1}_{|l^{1}|}=l^{1}_{4}\). To make space for \(BF_{s_{4}}\), \(BF_{s_{1}}\) instead moves to \(l^{2}_{1}\). In theory a \(VBF_{2}\) can hold the compressed information of \(2*2-1=3\) \(VBF_{1}\)s. Therefore, \(BF_{s_{1}},BF_{s_{2}},BF_{s_{3}}\) are added together before the result moves to \(l^{2}_{2}\). This is illustrated by Figure 4. Intuitively, a \(VBF_{i+1}\) in layer \(l^{i+1}\) can store a multiple of \(VBF_{i}\) from the previous layer (\(l^{i}\)). When \(l^{|L|}_{|l^{i}|}\) has reached the maximum capacity, and a new state is reached, for the new object, \(l^{|L|}_{|l^{i}|}\) is **deleted** and \(l^{|L|}_{|l^{i}|-1}\) or \(l^{|L|-1}_{|l^{i}|}\) takes its place. In the context of our example, \(BF_{s_{1}},\cdots,BF_{s_{k}}\) are evicted from \(l^{3}_{1}\) and \(BF_{s_{3}},\cdots,BF_{s_{9}}\) take their space. This is illustrated with Figure 5. In our example, the DOBC keeps track of an average of \(k=13.5\) states, but it is clear to see that its ability to compare any causality drops every 4 objects.

Figure 2: An illustration of how a DOBC will look with the setting: \(j^{i}\cong i,|L|=3,|l^{1}|=4,|l^{2}|=2,|l^{3}|=1\)

Figure 3: (1) illustrates the DOBC for \(o_{1}\), (2) illustrates the DOBC for \(o_{2}\) where \(BF_{s_{1}}\) is moved to \(l^{1}_{2}\) to make space for \(BF_{s_{2}}\).

Figure 4. (1) illustrates the DOBC for \(o_{5}\), (2) DOBC for \(o_{6}\), (3) DOBC for \(o_{7}\), (4) DOBC for \(o_{8}\).

Figure 5. (1) illustrates the DOBC for \(o_{16}\). (2) illustrates the DOBC for \(o_{17}\), where \(BF_{s_{1}},\cdots,BF_{s_{k}}\) are evicted.

It is clear to see that there exists some wastage in utilization with the example DOBC, as a \(VBF_{3}\) can in theory hold \(2^{3}-1=7\) \(BF\)'s. However, because it can only hold a certain multiple of \(VBF\)'s from the previous layer, it only holds \((2^{2}-1)*2\) \(BF\)s. The maximum number of \(BF\)'s or unique states held by \(l_{\left|l^{i}\right|}^{\left|L\right|}\) is therefore: \[\gamma=\left(\prod_{i=1}^{i=\left|L\right|-1}\left\lfloor\frac{2^{j^{i+1}}-1}{2^{j^{i}}-1}\right\rfloor\right)*(2^{j^{1}}-1)\] The total number of states a DOBC can hold thus ranges from \([k,k-\gamma+1]\) states, with an average of \(k+\frac{\gamma+1}{2}\) states, where for \(\left|L\right|>1\): \[k=|l^{1}|*(2^{j^{1}}-1)+\left(\sum_{i=2}^{i=\left|L\right|}\left\lfloor\frac{2^{j^{i}}-1}{2^{j^{i-1}}-1}\right\rfloor*\left|l^{i}\right|\right)\]

#### 4.3.1. Comparing DOBCs

In this section, we will go through how two different DOBCs are compared to derive causality. In the naive approach mentioned in §4.2.2, it is quite obvious to see how clocks can be compared. However, in DOBC, we only keep a limited history of \(k\) states; therefore only histories of a certain range can be compared. The greater the overlap, the lower the possibility of false positives. We will utilize the same setting from §4.3 to illustrate an example. Suppose we have the DOBCs for \(o_{18}\) and \(o_{16}\); how we determine the causality of \(o_{18}\) on \(o_{16}\) is illustrated with Figure 6. Since the two states have differing depths, we compare different sections of their DOBCs to draw our causality conclusion. For example, \(l_{3}^{1}\in o_{18}\) should correspond to \(l_{1}^{1}\in o_{16}\). Similarly, the \(l_{1}^{2}\in o_{18}\) corresponds to the addition of \(l_{3}^{1}\cap l_{4}^{1}\in o_{16}\).
Intuitively, two sets of \(VBF\)s are comparable between two DOBCs if they correspond to the same depth. If all comparable \(VBF\)s in \(o_{18}\) are greater than or equal to the corresponding \(VBF\)s in \(o_{16}\), then we draw the conclusion that \(o_{18}\succ o_{16}\) with some acceptable probability.

Figure 6. (1) illustrates the DOBC for \(o_{16}\). (2) illustrates the DOBC for \(o_{18}\). To determine if \(o_{18}\) is causally dependent on \(o_{16}\), we simply compare the similarly highlighted parts in both clocks. If the similarly highlighted parts in (2) are greater than or equal to the ones in (1) for all similarly highlighted parts, then we conclude \(o_{18}\succ o_{16}\).

#### 4.3.2. Eliminating Wastage in DOBC

As briefly mentioned in §4.3, certain parameters will lead to bit wastage. This is not ideal if we wish to fully utilize every single bit in DOBC. We came up with two possible approaches to mitigate wastage. These approaches might require overhauls to the DOBC protocol:

1. **Incomplete \(VBF\) decay:** We dictate that a \(VBF_{j^{i}}\) does not necessarily have to be "full" before it is moved to the next layer. That is to say, we change the notion of the base DOBC protocol that "a \(VBF_{i+1}\) in layer \(l^{i+1}\) can store a multiple of \(VBF_{i}\) from the previous layer (\(l^{i}\))". We should be able to fully utilize all the space in all layers. This is illustrated by Figure 7. An important note is that \(VBF_{j^{1}}\) must be of sufficient granularity to fill all the "gaps" of \(VBF\)s of subsequent layers. Therefore, \(j^{1}=1\) should always work.
2. **Perfect \(VBF\) decay:** If the next-layer \(VBF_{j^{i}}\) can exactly hold a multiple of the \(VBF_{j^{i-1}}\)s from the previous layer, then naturally, space will be fully utilized. More concretely, the number of \(BF\)s a \(VBF_{j^{i}}\) can hold is congruent to \(0\) modulo the number of \(BF\)s a \(VBF_{j^{i-1}}\) can hold, given \(|L|>1\). An example would be to have the setting \(|L|=3,j^{1}=1,j^{2}=2,j^{3}=4\). No space will be wasted since \[(2^{2}-1)\bmod 1=0\quad\text{and}\quad(2^{4}-1)\bmod(2^{2}-1)=0.\]

Figure 7. (1) illustrates the DOBC for \(o_{11}\). (2) illustrates the DOBC for \(o_{12}\) with incomplete decay; Although \(l_{1}^{2}\) is not completely full, it is shifted along with \(l_{2}^{2}\) to completely fill \(l_{1}^{3}\).

#### 4.3.3. Merging DOBCs

When two DOBCs are merged, the merge must be done elegantly. This is to ensure that the causality of future descendants can still be adequately inferred. DOBC has two methods to merge clocks, each at opposite ends of the spectrum when it comes to utility and its trade-offs. Since DOBC only keeps track of at most \(k\) objects prior to the current one, when merging two objects \(o_{x}=(s_{x},C_{x},d_{x}),o_{y}=(s_{y},C_{y},d_{y}):|d_{x}-d_{y}|>k\), the resultant object technically exists at both depth \(d_{y}+1\) and \(d_{x}+1\). The unfortunate side effect of merging is that the new merged DOBC must keep track of both heights.

_Extending Clocks:_ The Extending Clocks approach is the most naive approach. As the name suggests, and as illustrated in (1) of Figure 8, the corresponding \(VBF\)s after a merge are stored in a linked-list configuration, where each \(l_{j}^{i}:i=1,j>1\) is now a pair of \(VBF\)s. Comparing DOBCs to derive causality will now be done twice; once with the red highlighted \(VBF\)s, another with the blue ones.

Figure 9: Since the Extending Clocks solution merges clocks at differing states of decay, decaying both clocks as is might lead to indefinite extra space utilization.
To make the extra space utilization transient, we use the “more-decayed” clock as an anchor to decay the other “less-decayed” clock. As seen in (1), since the clock highlighted red is of a greater state of decay, therefore after (2), at (3) it also pre-emptively decays the blue clock. The result is a single \(VBF\) at \(l_{1}^{2}\). This merging solution will linearly grow the size of the resultant clocks per merge. However, this increase in space is transient and will only persist for at most \(k\) mutations. This is achieved by tweaking the decay function of DOBC, by making the new merged \(BF\)'s decay at different rates. This is illustrated in Figure 9. _Combined Maxima Clocks:_ Instead of naively appending clocks together in the previous solution, another alternative is to take the maxima of each index of the \(VBF\)'s. As illustrated in Figure 8, for \(l_{1}^{2}\), the \(VBF_{2}\) will have its value at each index be the maximum of that of the corresponding two \(VBF\)'s from \(o_{x}\) and \(o_{y}\). Intuitively, such an approach will definitely increase the false positive rate. However, its inherent benefit is that the resultant merged DOBC will always remain the same size. Furthermore, the false positive rate can be reduced by increasing \(n\) (number of indices) of the \(VBF\)s. _Hybrid Approach:_ A hybrid approach will be to employ the extending clocks method for a maximum of \(p\) merges. After which, older merges will be combined in the same approach outlined by the combined maxima clocks. This will ensure a bound on the clock size, whilst having some good utility and lower false positive rates in some cases. ### Verifiable Logical Clock Construction To apply a proof system for Chrono we first need to articulate the requirements we need, more precisely: 1. The proof system must support a known family of functions \((M=\{\mu_{1},\cdots,\mu_{i},\cdots,\mu_{\omega}\},|M|=\omega)\). 2. For a particular proof corresponding to a linear execution of \(n\) functions. The proof must assert to the validity that a non-deterministic multiset of functions of size \(n\) have been applied onto the genesis state. That is to say \(\exists m=\{m_{1},\cdots,m_{n}\}:\forall m_{i}\in m,m_{i}\in M\) where there \(\exists\pi\) that validates \(s_{n}\gets m_{n}(m_{n-1}\cdots(m_{1}(s_{0})))\), where \(s_{0}\) is the genesis state. 3. Transparent setup: every one can join as is. 4. Proof size and verification time must remain constant. At the time of writing, we will utilize the constructs introduced in Nova[15] and subsequently SuperNova[14]. #### 4.4.1. R1CS (Rank 1 Constraint System) To understand Nova and by extension, SuperNova, we must first briefly explain R1CS. R1CS is a method to quickly verify that a particular binary or arithmetic execution has been carried out properly without actually running through the execution again. More precisely, R1CS allows the prover to generate a solution to some polynomial problem (represented by the arithmetic circuit), in which the verifier can verify in constant time. The arithmetic circuit2 can be viewed logically as a collection of gates, where each gate has a left and right input as well as a single output. How the gates are wired to produce the final outputs are defined by a set of 3 matrices \(A,B,C\in\mathbb{F}^{q\times q}\). The prover also generates a solution vector \(Z=W,x,1,Z\in\mathbb{F}^{q}\), where \(W\)(or witness) are the intermediary outputs, \(x\)(or instance) are the inputs and outputs of the circuit, and 1 is a constant. 
Footnote 2: We will omit details in regard to binary circuits as any binary circuit can be converted into an arithmetic circuit. The verifier can simply verify the execution of the whole circuit as: \[(A\cdot Z)\circ(B\cdot Z)=(C\cdot Z)\] Where \(\cdot\) denotes matrix multiplication and \(\circ\) denotes the Hadamard product. #### 4.4.2. What is Nova? Nova, is an Incrementally Verifiable Computation (IVC) algorithm that requires no trusted setup, generates constant sized proofs for any step, and guarantees constant verification time. Nova introduced a novel method to combine two R1CS instances into a single instance. Naively adding two R1CS instances would lead to an incorrect instance. Therefore, Nova introduces a variant of R1CS: _Relaxed R1CS_ which introduces an error vector \(E\in\mathbb{F}^{q}\), where an instance \((E,u,x)\) is satisfied if: \[(A\cdot Z)\circ(B\cdot Z)=u\cdot(C\cdot Z)+E\] Where \(Z=(W,x,u)\). In particular, suppose there are two separate instances \(Z_{1}=(W_{1},x_{1},u_{1})\) and \(Z_{2}=(W_{2},x_{2},u_{2})\). \(u\gets u_{1}+r\cdot u_{2}\) and \(E\) is a function of \((Z_{1},Z_{2},r)\). The resulting instance or _folded_ instance can be simply verified by the verifier. The intuition is that if any of the relaxed R1CS instances that were folded is invalid, then the final folded relaxed R1CS instance will be invalid as well. Therefore, verifying the final folded relaxed R1CS instance asserts that arithmetic circuit has been executed correctly a particular number of times. Verifier work is further reduced by the introduction of the _committed relaxed R1CS_ scheme. This scheme allows the prover to utilize additively-homomorphic commitment schemes (like Pedersen commitments) to save the verifier from generating \(E\) themselves. Instead, the prover will send commitments for \(E_{1},E_{2},W_{1},W_{2}\) as well as another matrix \(T\) which is a result of a function of \(Z_{1},Z_{2}\). The verifier can additively combine the commitments to generate the committed folded instance. The prover will then reveal the actual folded instance. If it matches, the verifier can simply take the folded instance as is to use for verification. The resultant proof structure for the \(i^{th}\) step is a folded committed relaxed R1CS instance and witness pair asserting to the validity of executions up till step \(i-1\), and a single relaxed R1CS asserting to the validity of step \(i\). The scheme is then made _non-interactive_ by utilizing a public coin3 hinging on the Fiat-Shamir heuristic. Footnote 3: This can be instantiated by using a cryptographic-secure hash function #### 4.4.3. Nova for a family of functions Traditional IVCs like Nova typically are designed for a single function. To allow Nova to satisfy requirement (2) above, we can utilize a universal circuit. Intuitively, a universal circuit can be visualized as a circuit made by combining \(n\) sub-circuits, each representing the execution to a particular function. The inputs of this universal circuit will then "select" a particular sub-circuit to execute. The major downside to utilizing general circuits is that for any step of the execution, the whole circuit is actually still processing some input and generating some output. This means the prover will be doing some unnecessary work. Additionally prove sizes might be larger. SuperNova[14] was developed to circumvent this issue, we will elaborate this in the following text. #### 4.4.4. 
SuperNova: Universal machines without universal circuits SuperNova generalizes Nova's IVC to _non-uniform_ IVC, i.e., there exists a family of functions \(F=\{f_{1},\cdots,f_{n}\}\), and a control function \(\varphi\) which determines the function to run at a particular step \(j\). That is to say at the \(j^{th}\) step, the prover proves that \(f_{j},j=\varphi(W_{i},x_{i})\) has been correctly applied with witness instance pair \((W_{i},x_{i})\) to produce \(x_{i+1}\). _Recursive Proofs in SuperNova:_ It is not possible to naively apply the folding scheme developed in Nova, because each function (and the corresponding circuit) is structurally heterogeneous. A SuperNova proof therefore maintains a list of running instances \(U_{i}\), where \(U_{i}[j]\) is the folded instance of all previous invocations of \(f_{j}\) before the \(i^{th}\) step. It also contains a corresponding list of Witnesses \(W_{i}\), as well as an instance witness pair \((u_{i},w_{i})\) that asserts to valid execution of step \(i\). Furthermore, instead of simply applying the functions within the function family as is, SuperNova instead runs the _augmented_ version of the function \(f^{\prime}_{\varphi(W_{i},x_{i})}\). In essence, the augmented function does not just simply run \(f_{\varphi(W_{i},x_{i})}(W_{i},x_{i})\) to output \(x_{i+1}\). It also checks that \(U_{i},\varphi(W_{i-1},x_{i-1})\) are indeed produced by the prior step if it is contained in \(u_{i}\). This asserts that checking \(U_{i+1}\) is the same as checking \((U_{i},u_{i})\). The augmented function then folds \(u_{i}\) into \(U_{i}[\varphi(W_{i-1},x_{i-1})]\), and produces \(\varphi(W_{i},x_{i})\). \[((i+1,x_{0},x_{i+1}),U_{i+1},\varphi(W_{i},x_{i}))\gets f^{\prime}_{ \varphi(W_{i},x_{i})}(U_{i},u_{i},\varphi(W_{i-1},x_{i-1}),(i,x_{0},x_{i},W_{i }))\] Resulting in the witness pair \((u_{i+1},w_{i+1})\). Intuitively, verifying \(U_{i+1}\) is equivalent to verifying the prior \(i\) steps. \(u_{i+1}\) asserts the \(i+1\) step. Therefore, the proof for step \(i\) can be expressed as: \[\Pi_{i}=((U_{i},w_{i}),(u_{i},w_{i}))\] Additional details on SuperNova proofs using committed relaxed R1CS instances can be found in the original paper. We aim to utilize the Non-Uniform Incrementally Verifiable Computation scheme described above for Chrono. As such, each function in \(F\) corresponds injectively to a specific mutate function in \(M\). The depth value \(d\) simply corresponds to \(i\) in each instance \(u_{i}\). The object state and clock value corresponds to \(x_{i}\). ## 5. Use Cases So far, we have discussed the high-level properties of Chrono, and a concrete design of Chrono using DOBC and verifiable computation. In this section, we describe a few use cases of Chrono. ### Weakly Consistent Data Store Prior systems (Kstore, 2017) have built data stores with eventual consistency using logical clocks. Similarly, we leverage Chrono to build a weakly consistent decentralized data storage, Kstore. Kstore implements a key-value storage interface. Each unique _key_ is mapped to an arbitrarily-sized value. Kstore is fully decentralized and permissionless. Any node can join and leave the system at any time. Compared to its strongly consistent counterparts (Kstore, 2018; Kstore, 2019), Kstore offers higher efficiency, scalability, and availability. Kstore provides _eventual consistency_: if no further writes are applied to a key, eventually all nodes observe the same value mapped to the key. Each Kstore node maintains a subset of the keys in the key-space. 
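A minimal Python sketch of this key-space partitioning (a hypothetical SHA-256 hash ring with a toy replica-group rule; the actual placement scheme is described next):

```python
import bisect
import hashlib

def h(x: str) -> int:
    return int(hashlib.sha256(x.encode()).hexdigest(), 16)

# A toy hash ring of 8 virtual nodes (identifiers are illustrative only)
ring = sorted((h(f"vnode-{i}"), f"vnode-{i}") for i in range(8))

def replica_group(key: str, R: int = 3):
    """The R virtual nodes closest (clockwise) to the key's hash."""
    i = bisect.bisect(ring, (h(key), ""))
    return [ring[(i + j) % len(ring)][1] for j in range(R)]

print(replica_group("some-key"))   # the R virtual nodes responsible for this key
```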
We use a distributed hash table (DHT) (Kstore, 2018; Kstore, 2019) with virtual nodes for key-space partitioning and request routing. For fault tolerance, each key is stored on \(R\) virtual nodes closest to the key hash on the hash ring, where \(R\) is a configurable parameter. This set of virtual nodes is called the _replica group_ for the key. Higher \(R\) offers stronger fault tolerance, but results in longer update latency and higher storage overhead. Kstore's storage API exposes three external operations, Get, Insert, and Update. Get takes a key and returns the value mapped to the key. Update maps a new value to an existing key. Insert creates a new key into Kstore with an initial mapped value. Insert also takes an optional user-defined Merge function. The Merge function takes a set of values as input and outputs a single value. For instance, a Merge function for numerical value types could be maximum, and union for set value types. When a client invokes Insert, the request is routed to one of the \(R\) responsible virtual nodes on the DHT. The client can choose any reachable nodes, \(L\), in the replica group. Upon receiving the Insert request, \(L\) invokes create to generate an object, with the genesis state set to the value in the request. The object state also includes the Merge function in the request. \(L\) then forwards the generated object to the remaining nodes in the replica group. Each node in the group verifies the validity of the object using the attached proof (SS4.4) and stores it locally. When a client invokes Update on a key, the request is similarly routed to one of the \(R\) responsible virtual nodes, \(L\). Note that \(L\) does not need to be the same node that creates the object. \(L\) then invokes mutate with the locally stored object \(o_{l}\) as input, _i.e._, \(mutate(o_{l})\to o^{\prime}_{l}\). The output object state is set to the value in the Update request. \(L\) then forwards \(o^{\prime}_{l}\) to the other nodes in the replica group. When a replica node receives \(o^{\prime}_{l}\), it uses Chrono to determine the causality relation between \(o^{\prime}_{l}\) and its locally stored object \(o\). If \(o^{\prime}_{l}<o\), the node ignores the object. If \(o<o^{\prime}_{l}\), the node replaces the local object with \(o^{\prime}_{l}\). Otherwise, \(o\parallel o^{\prime}_{l}\) and the node invokes \(mutate(o^{\prime}_{l},o)\to o^{\prime}\), and stores the new object \(o^{\prime}\). When invoking mutate, the node applies the Merge function stored in the object. When a client invokes Get, it simply routes the request to any of the \(R\) nodes in the replica group. The node returns the object if it is stored locally. The client iterates through the replica group until the object is found. ### Anti-Censorship Decentralized Social Network The second use case is a decentralized social network, Kosocial, which we built atop Chrono. In Kosocial, users (represented by a private/public key pair) publish posts (_e.g._, short text, blogs, and photos) to the network, and subscribe to other users to receive their posted content. Users can also react and comment on posted content, both of which are fetched alongside the content. Kosocial stores the status and all published content of a user in a Chrono object with the type uobj. 4 All posts are signed by the publishing user. Kosocial defines a Update and a Merge function. Update takes a uobj and produces a new uobj with the newly published posts added to it. 
Merge takes multiple uobjs for the same user and merges their content to produce a new uobj. To read the posts of a user, a subscribed client simply fetches the corresponding uobj. We omit the exact format of uobj and the detailed implementation of Update and Merge. Footnote 4: For simplicity, we store both the metadata and the content in the Chrono object. An optimized implementation can store content separately and save only content hashes, which can be used as pointers, in the Chrono object. Kosocial uses a DHT. The client stores a uobj for each subscribed user. When it receives a uobj' from a replica, it applies mutate(uobj, uobj') with the Merge function to update the object. ## 6. Conclusion In this work, we design a new logical clock system, Chrono. Chrono addresses key limitations of prior logical clock constructs. It scales perfectly in a decentralized network with dynamic membership, and tolerates Byzantine behaviors regardless of the proportion of adversaries. Chrono achieves the above strong properties by introducing a novel logical clock structure, the Decaying Onion Bloom Clock (DOBC). It additionally applies non-uniform IVC to ensure independently verifiable construction of DOBC even in the presence of Byzantine behaviors. To showcase the capability of verifiable causality enabled by Chrono, we have built a weakly consistent key-value store and an anti-censorship social network using Chrono.
2307.03831
Integrability from categorification and the 2-Kac-Moody Algebra
The theory of Poisson-Lie groups and Lie bialgebras plays a major role in the study of one-dimensional integrable systems; many families of integrable systems can be recovered from a Lax pair which is constructed from a Lie bialgebra associated to a Poisson-Lie group. A higher homotopy notion of Poisson-Lie groups and Lie bialgebras has been studied using Lie algebra crossed-modules and $L_2$-algebras, which gave rise to the notion of (strict) Lie 2-bialgebras and Poisson-Lie 2-groups. In this paper, we use these structures to generalize the construction of Lax pairs and introduce an appropriate notion of higher homotopy integrability. Within this framework, we introduce a higher homotopy version of the Kac-Moody algebra, with which the 2-Lax equation can be rewritten as a zero 2-curvature condition in 2+1d. An explicit characterization of our higher Kac-Moody algebra will be given, and we also demonstrate how it governs the 2-Lax pairs and the symmetries of a 3d topological-holomorphic field theory studied recently. This 3d theory thus serves as an example of a physical system that exhibits the sort of 2-graded integrability that we have defined here.
Hank Chen, Florian Girelli
2023-07-07T20:58:46Z
http://arxiv.org/abs/2307.03831v2
# Integrability from categorification ###### Abstract The theory of Poisson-Lie groups and Lie bialgebras plays a major role in the study of one dimensional integrable systems; many families of integrable systems can be recovered from a Lax pair which is constructed from a Lie bialgebra associated to a Poisson-Lie group [1, 2]. A categorified notion of Poisson-Lie groups and Lie bialgebras has been studied using Lie algebra crossed-modules and \(L_{2}\)-algebras [3, 4, 5], which gave rise to the notion of (strict) Lie 2-bialgebras and Poisson-Lie 2-groups. In this paper, we use these structures to generalize the construction of a Lax pairs and introduce an appropriate notion of categorified integrability. Within this framework, we explicitly construct and analyze the 2-dimensional version of the XXX model, whose dynamics is governed by an underlying Lie 2-bialgebra. We emphasize that the 2-graded form of our categorified notion of integrability directly implies the 2-dimensional nature of the degrees-of-freedom in our theory. ###### Contents * 1 Introduction * 2 Integrable systems from Lie bialgebras: a review * 2.1 Lax pair * 2.2 The XXX spin chain * 3 Strict Lie 2-bialgebras and strict Poisson-Lie 2-groups * 3.1 Lie 2-algebras and Lie 2-groups * 3.2 2-graded Poisson structure * 3.3 The 2-graded classical \(r\)-matrix * 4 2-graded integrability * 4.1 2-Lax pair * 4.2 Conserved quantities * 4.3 2-Kirillov-Kostant Poisson structure on \(C^{\infty}(\mathfrak{g}^{\otimes}[1])\) * 4.4 2-Lax pair on \(\mathfrak{g}^{*}[1]\) * 5 The XXX spin rectangle * 5.1 Restoring the 2-dimensional continuum * 5.1.1 The classical 2D Hamiltonian * 5.1.2 Bulk-boundary coupling dynamics * 5.2 2D Heisenberg model and quantum inverse scattering * 6 Conclusion Introduction It is well-known since the 19th century that physical classical systems can be described by a _Poisson structure_ on the space \(M\) of configurations, which is a bilinear skew-symmetric map \(\{\cdot,\cdot\}\) called the _Poisson bracket_ on the space \(C^{\infty}(M)\) of smooth functions on \(M\) satisfying the Leibniz rule and the Jacobi identity. The classical dynamics, under a chosen _Hamiltonian_ function \(H\in C^{\infty}(M)\), of the observables \(f\in C^{\infty}(M)\) in the system is governed by the Poisson bracket on \(M\), through the following differential equation \[\dot{f}=\{H,f\},\] with some appropriate initial conditions. The Poisson structure also serves as the precursor to canonical Dirac quantization, in the sense that \(\{\cdot,\cdot\}\) contributes to the quantum commutator \([\cdot,\cdot]\) to first order in the quantum deformation parameter (usually denoted by \(\hbar\)), hence Poisson geometry in general plays a central role in the understanding of various physical systems, both classical and quantum. A manifold \(M\) equipped with a Poisson structure \(\{\cdot,\cdot\}\) together with the choice of a Hamiltonian \((M,\{\cdot,\cdot\},H)\) is called a _Hamiltonian system_. A particularly special class of Hamiltonian systems has "enough" conserved quantities -- namely functions \(f\) with \(\dot{f}=0\) -- such that the dynamics of \(H\) can be separately described completely on the level sets of these conserved quantities. More precisely, we call \((M,\{\cdot,\cdot\},H)\)_(completely) integrable_ if there exist \(n\)-number of constants of motion \(f_{1},f_{2},\ldots,f_{n}\) in involution, namely \(\{f_{i},f_{j}\}=0\) for all \(1\leqslant i,j\leqslant n\), where \(n=\frac{1}{2}\dim M\). 
This is because such a system \(\{f_{i}\}_{i}\) of conserved quantities partitions the Poisson manifold into leaves that are invariant under the dynamics generated by \(H\). Finding constants of motion is not an easy task, and integrable systems in general are difficult to characterize. To get a handle on them, the theory of _Lax pairs_ had been developed [6, 7, 1]. A Lax pair \((L,P)\) is a tuple of maps \(M\to\mathfrak{g}\) into a Lie algebra \(\mathfrak{g}\) such that the **Lax equation** \[\dot{L}=\{H,L\}=[L,P] \tag{1.1}\] holds. If the Hamiltonian system \((M,\{\cdot,\cdot\},H)\) admits a Lax pair, this Lax equation implies that the following _trace polynomials_ on any representation space \(V\) of \(\mathfrak{g}\) \[f_{k}=\operatorname{tr}_{V}\rho(L)^{k},\qquad\rho:\mathfrak{g}\to\mathfrak{ gl}(V)\] are constants of motion in involution [2] for any \(k\). In particular, the eigenvalues of \(L\) are conserved quantities, hence for \(\dim\mathfrak{g}\) sufficiently large, \((M,\{\cdot,\cdot\},H)\) is integrable [1]. Finding a Lax pair on a general Hamiltonian system, if one even exists, is itself a difficult problem. However, if we are given a Poisson-Lie group \((G,\Pi)\), its corresponding bialgebra \(\mathfrak{g}\)[8] and a Poisson map \(\mathcal{J}:M\to\mathfrak{g}^{*}\), then we can pull back the Poisson map \(\mathcal{J}\), namely perform a particular change of canonical coordinates, to achieve a Lax pair for the induced Hamiltonian system on \(M\). Indeed, from the bialgebra \(\mathfrak{g}\), we can infer a Poisson structure \(\{\cdot,\cdot\}\)* on the function algebra \(C^{\infty}(\mathfrak{g}^{*})\) of the dual \(\mathfrak{g}^{*}\) Lie algebra for which a Lax pair \((L,P)\) can be _canonically_ constructed for any invariant Hamiltonian \(H\in C^{\infty}(\mathfrak{g}^{*})\), see for example a nice summary of the construction in [2]. Many physical integrable systems, such as the classical Toda, Korteweg-de Vries and the Kadomtsev-Petviashvili hierarchies [1], as well as the XXX/XXZ/XYZ family of quantum spin chains [9], can be transformed in this way to the canonical integrable system on \(\mathfrak{g}^{*}\) for certain Lie bialgebras \(\mathfrak{g}\). However, these are all one dimensional, and it is generally a difficult task to identify the notion of integrability for higher dimensional systems; see for example for some proposals [10, 11]. It is expected that increasing the dimensionality of a system corresponds to a "categorification" of the relevant structure; this is the "categorical ladder = dimensional ladder" proposal [12, 13, 14, 15]. Kapranov and Voevodsky used this proposal to categorify the notion of vector space in order to find higher dimensional version of the Yang-Baxter equation [16]. While not providing obvious concrete integrable models, the Kapranov-Voevodsky approach has been very influential in the study of higher categorical structures (see for instance [17, 18, 19, 20, 21, 22, 23, 24] for a short list of developments). Apart from the Kapranov-Voevodsky approach, other routes toward categorification include _Soergel bimodules_ in Khovanov's knot categorification [25, 26, 27], as well as the Baez-Crans \(L_{\infty}\)-algebras that appear in higher-gauge theory [28, 29, 30, 31]. The latter Baez-Crans approach relies on a more algebraic formulation, which makes it straightforward to generalise the standard construction of Lie algebras and Poisson-Lie groups. 
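As a quick numerical illustration of the statement above, that the Lax equation forces the trace polynomials \(f_{k}=\operatorname{tr}_{V}\rho(L)^{k}\) to be constants of motion, the following sketch integrates \(\dot{L}=[L,P]\) for an arbitrary matrix pair; the initial matrix and the time dependence of \(P\) are assumptions made purely for illustration and are not tied to any model discussed here.

```python
# Minimal numerical sketch: the Lax flow dL/dt = [L, P(t)] preserves tr(L^k).
# The initial L and the curve P(t) are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((4, 4))

def P_of_t(t):
    # any matrix-valued curve may play the role of P in this demonstration
    return np.array([[0.0,  1.0,        0.0,       0.0],
                     [-1.0, 0.0,        np.sin(t), 0.0],
                     [0.0,  -np.sin(t), 0.0,       1.0],
                     [0.0,  0.0,        -1.0,      0.0]])

def comm(a, b):
    return a @ b - b @ a

def traces(M):
    return np.array([np.trace(np.linalg.matrix_power(M, k)) for k in (1, 2, 3, 4)])

before = traces(L)
dt, steps = 1e-3, 5000                    # integrate to t = 5 with classical RK4
for s in range(steps):
    t = s * dt
    k1 = comm(L, P_of_t(t))
    k2 = comm(L + 0.5 * dt * k1, P_of_t(t + 0.5 * dt))
    k3 = comm(L + 0.5 * dt * k2, P_of_t(t + 0.5 * dt))
    k4 = comm(L + dt * k3, P_of_t(t + dt))
    L = L + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.max(np.abs(traces(L) - before)))  # tiny: conserved up to integrator error
```

Since the flow is isospectral, the eigenvalues of \(L\), and hence all of the \(f_{k}\), stay fixed up to the integrator error.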
Using this notion of Baez-Crans categorification, the goal of this paper is thus to lift the usual notion of integrability and define a "2-Lax pair" suitable for 2-dimensional systems. We shall leverage the existing notions of Poisson-Lie 2-groups [4] and Lie 2-bialgebras [3, 5] in order to achieve our goal. In Section 2, we first recall how to explicitly construct the integrable system on \(\mathfrak{g}^{*}\) given a Lie bialgebra \(\mathfrak{g}\) following [2], then we demonstrate how the XXX quantum spin chain can be discussed in this framework. In Section 3, we review the key properties of the Poisson 2-group [32] and the associated Lie 2-bialgebra [3, 5] that shall be useful for our purposes. Following this, we take the Lie 1-algebra treatment in [2] as guide and define a notion of _2-graded integrability_ in Section 4. We describe the categorified analogue of the Lax pair construction in the Lie 1-algebra case, and explicitly prove that it provides an example of 2-graded integrability. This section contains our first main result. Lastly, Section 5 is about our second main result. We focus on a specific Lie 2-algebra example and we show how this amounts to a categorification of the construction in Section 2.2. It yields an inherently 2-dimensional integrable lattice model. This property can be identified as an consequence of the categorified, 2-graded nature intrinsic to the notion of integrability that we have developed. ## 2 Integrable systems from Lie bialgebras: a review We will review first how to construct a Lax pair by picking a Lie algebra \(\mathfrak{g}\) and a \(r\)-matrix. We then recall the construction of the well-known example, the \(XXX\) spin chain. ### Lax pair In this section, we first recall how \(r\)-matrix associated to a Lie algebra \(\mathfrak{g}\) can be used to define a Lie algebra 1-cocycle, which upon integration will give rise to a Poisson bivector on the group \(G\). We will then use this Poisson structure to construct the Lax pair. Let \(\mathfrak{g}\) denote a Lie algebra over the field \(k\). Recall that a quasitriangular classical \(r\)-matrix \(r\in\mathfrak{g}^{2\otimes}\) on \(\mathfrak{g}\) satisfies the _modified classical Yang-Baxter equation_ \[[\![r^{\wedge},r^{\wedge}]\!]=\omega=-[\![r^{\odot},r^{\odot}]\!],\] where \([\![\cdot,\cdot]\!]\) is the Schouten bracket and \(r=r^{\wedge}+r^{\odot}\) decomposes into skew-symmetric \(r^{\wedge}\in\mathfrak{g}^{2\wedge}\) and symmetric \(r^{\odot}\in\mathfrak{g}^{2\odot}\) components. The classical Yang-Baxter equation imposes the ad-invariance of the symmetric part \(r^{\odot}\), which makes it into a quadratic Casimir. Non-degenerate elements therein determines on \(\mathfrak{g}\) a non-degenerate invariant symmetric bilinear form \(\langle\cdot,\cdot\rangle=\langle\cdot,\cdot\rangle_{r^{\odot}}\), which is unique (the Killing form) if \(\mathfrak{g}\) is semisimple. Define the space \(\mathfrak{g}^{*}\) dual to \(\mathfrak{g}\) with respect to this non-degenerate bilinear form by \(g=\langle\cdot,X\rangle\in\mathfrak{g}^{*}\) for each \(X\in\mathfrak{g}\). We thus understand \(\mathfrak{g}^{*}\) as the space of linear functionals \(g\) on \(\mathfrak{g}\) given by the evaluation pairing \(g(X)=r^{\odot}(g,x)\). 
The skew-symmetric part \(r^{\wedge}\) defines a 1-cocycle \(\psi\in Z^{1}(\mathfrak{g},\mathfrak{g}^{2\wedge})\) given by \[\psi X=(\operatorname{ad}_{X}\otimes 1+1\otimes\operatorname{ad}_{X})r^{ \wedge},\qquad X\in\mathfrak{g},\] and induces the Lie bracket \([\cdot,\cdot]_{\mathfrak{*}}\) on \(\mathfrak{g}^{*}\) defined with respect to the bilinear form \(r^{\odot}\) as \[\langle[\![g,g^{\prime}]\!]_{*},X\rangle=[g,g^{\prime}]\!]_{*}(X)=(g\wedge g^ {\prime})(\psi X)=\langle g\wedge g^{\prime},\psi X\rangle,\quad g,g^{\prime} \in\mathfrak{g}^{*}.\] Hence, the classical \(r\)-matrix \(r=r^{\odot}+r^{\wedge}\) can be used to define a map \(\varphi:\mathfrak{g}\to\mathfrak{g}\) that we can use to define the dual Lie bracket \[\varphi(X) =(r^{\wedge})^{ij}\langle X,T_{i}\rangle T_{j}\] \[=\langle\cdot,[\varphi(X),X^{\prime}]+[X,\varphi(X^{\prime})] \rangle,\qquad X,X^{\prime}\in\mathfrak{g} \tag{2.1}\] where \(g=\langle\cdot,X\rangle\) and \(g^{\prime}=\langle\cdot,X^{\prime}\rangle\) and \(\{T_{i}\}_{i}\) is a basis of \(\mathfrak{g}\). The map \(\varphi\) allows to identify yet another Lie bracket \([\cdot,\cdot]_{r}=[\cdot,\cdot]\circ(\varphi\otimes 1+1\otimes\varphi)\) on \(\mathfrak{g}\) that shall be useful in (2.2) to define the Poisson bracket on functions on \(\mathfrak{g}^{*}\). We now integrate \(\psi\) to a group 1-cocycle \(\hat{\psi}:G\to\mathfrak{g}^{2\wedge}\), and define the corresponding **Poisson bivector**, \(\Pi\), \[\psi(X)=-\frac{d}{ds}|_{s=0}\hat{\psi}_{\exp sX},\qquad\Pi_{x}=(L_{x})_{*}\hat {\psi}_{x},\] via the tangent pushforward \(L_{*}\) of the left-multiplication on \(G\) for each \(X\in\mathfrak{g},x\in G\). Identifying \(\mathfrak{g}\) with left-invariant tangent vector fields on \(G\), we induce the natural Poisson bracket \[\{F,F^{\prime}\}=\Pi(F\otimes F^{\prime}),\qquad F,F^{\prime}\in C^{\infty}(G)\] on \(G\). The Jacobi identity follows from the _cocycle condition_\((\operatorname{ad}_{x}\otimes 1+1\otimes\operatorname{ad}_{x})\psi=0\), \(x\in G\). If we use the non-degenerate pairing \(r^{\odot}\) to identify the cotangent fibres with \(\mathfrak{g}^{*}\), we induce a Poisson bracket on \(\mathfrak{g}^{*}\)[2] \[\{\phi,\phi^{\prime}\}_{r}^{*}(g)=\langle g,[\![d_{g}\phi,d_{g}\phi^{\prime}] \!]_{r}\rangle,\quad\phi,\phi^{\prime}\in C^{\infty}(\mathfrak{g}^{*}),\quad g \in\mathfrak{g}^{*}, \tag{2.2}\] where the bracket \([\cdot,\cdot]^{\prime}\) is induced by the map \(\varphi\) determined by (2.1). The quadratic Casimir \(r^{\heartsuit}\) is invariant under the coadjoint action of the Lie group \(G\) on \(\mathfrak{g}^{*}\), \[\langle\operatorname{Ad}_{x}^{*}g,X\rangle=\langle g,\operatorname{Ad}_{x^{-1} }X\rangle,\] where \(x\in G\) and \(\operatorname{Ad}\) is the adjoint representation of \(G\) on \(\mathfrak{g}\). This then allows us to construct invariant Hamiltonians \(H\) on \(\mathfrak{g}^{*}\) from \(r^{\heartsuit}\). This is significant, as the Hamiltonian system \((\mathfrak{g}^{*},\{\cdot,\cdot\}^{*},H)\) is automatically integrable [2]. 
Indeed, it admits a Lax pair \((L,P):\mathfrak{g}^{*}\to\mathfrak{g}\), satisfying \[\dot{L}=\{H,L\}_{r}^{*}=[L,P] \tag{2.3}\] with (recall repeated indices are summed) \[L:g\mapsto(r^{\heartsuit})^{ij}\langle g,T_{i}\rangle T_{j},\qquad P:g\mapsto \varphi(d_{g}H)=(r^{\wedge})^{ij}\langle T_{i},d_{g}H\rangle T_{j} \tag{2.4}\] in terms of the basis \(\{T_{i}\}_{i}\) of \(\mathfrak{g}\), in which \(L\) is given by [2] \[\{L,L\}_{r}^{*}=[L\otimes 1+1\otimes L,r^{\wedge}].\] This is the key result that shall be important in the following example. ### The XXX spin chain Let us describe a family of integrable lattice systems called _quantum spin chains_, which plays a central role in condensed matter physics. Such systems are motivated by the tight-binding hypothesis -- which postulates that under certain physical conditions, the electronic degrees of freedom of a crystal are localized to the lattice sites -- as well as the discovery in the late 19th century that magnetism is in fact a result of the _quantum nature_ of these tightly-bound electrons. The collective dynamics of the half-integer spins of these electrons plays a crucial role in magnetization [33]. This has led to the development of quantum lattice models for magnetization based on the spin group \(\operatorname{Spin}(3)\cong SU(2)\), in which \(\mathfrak{su}(2)\) degrees of freedom are assigned to the lattice sites. Of particular interest is the XXX/XXZ/XYZ family of quantum spin chains, which in fact hosts extremely interesting symmetry structures [34, 35]. For a 1-dimensional \(N\)-site spin chain, the Hamiltonian takes the nearest-neighbor form \[H=\sum_{n=1}^{N}H_{n,n+1},\qquad H_{n,n+1}=\frac{1}{2}\sum_{i=1}^{3}J_{i} \sigma_{i}(n)\otimes\sigma_{i}(n+1),\] where \(\{\sigma_{i}\}_{i=1}^{3}\) are the Pauli matrices; ie. generators of \(\mathfrak{su}(2)\) in the fundamental representation. The XXX model is defined by \(J_{1}=J_{2}=J_{3}\), the XXZ model by \(J_{1}=J_{2}\neq J_{3}\) and XYZ model by \(J_{1}\neq J_{2}\neq J_{3}\). Due to the translational symmetry of the Hamiltonian, understanding the dynamics of each summand \(H_{n,n+1}\) is sufficient to understand the whole system. Indeed, products of the 2-site transfer matrix \(T_{12}\) can be used to diagonalize the \(N\)-site Hamiltonian; this is the _algebraic Bethe ansatz_[35]. Consider the \(n=1\) summand, \(H_{12}\), which corresponds to the \(N=2\)-site model. The _quantum inverse scattering method_[34, 35, 9] is a technique that generates the Hamiltonian \(H_{12}\) from the \(R\)-matrix of a certain underlying Hopf algebra. In the XXX case, this Hopf algebra is \(U\mathfrak{sl}_{2}\) and the \(R\)-matrix is given by \[R(\mu)=\mu+\mathcal{P}=(\mu+1)+r,\qquad r=\sigma_{3}\otimes\sigma_{3}+2\sigma_ {+}\otimes\sigma_{-}+2\sigma_{-}\otimes\sigma_{+}, \tag{2.5}\] where \(\sigma_{3},\sigma_{\pm}\) denote the basis of \(\mathfrak{sl}_{2}\), and \(\mu\in\mathbb{C}\) is the spectral parameter. On the other hand, the Hopf algebra underlying the XXZ lattice is the \(q\)-deformation \(U_{q}\mathfrak{sl}_{2}\), where \(q\) is related to the coupling constants by \(J_{3}/J_{1}=\frac{1}{2}(q+q^{-1})\). Here, the \(R\)-matrix is the standard \(\mathfrak{sl}_{2}\) family [34, 35]. The Poisson structure in the classical limit.We shall focus on analyzing the XXX model in the following, and examine how the Poisson-Lie symmetry arises in the classical limit. Following [36], the classical Hamiltonian in the _Lax representation_ can be obtained from the continuum limit (ie. 
vanishing lattice spacing) of the expectation value, \[\langle H\rangle\xrightarrow{\text{continuum}}h\,\propto\,\int dx\sum_{i=1}^{3 }\left(\frac{d\sigma_{i}}{dx}\right)^{2}, \tag{2.6}\] where \(x\) is the coordinate of the lattice sites in the continuum. Here, \(\sigma_{1,2,3}(x)\) are the generators of the \(\mathfrak{su}(2)\)-current algebra (note \(\sigma_{\pm}=\sigma_{1}\mp i\sigma_{2}\)) \[\{\sigma_{+}(x),\sigma_{-}(x^{\prime})\}=2\sigma_{3}(x)\delta(x-x^{\prime}), \qquad\{\sigma_{3}(x),\sigma_{\pm}(x)\}=\pm\sigma_{\pm}(x)\delta(x-x^{ \prime}), \tag{2.7}\] and can be parameterized by the canonical spinor angles \((\theta,\phi)\) in the following way \[\sigma_{3}=\cos 2\theta,\qquad\sigma_{\pm}=\frac{1}{2}\sin 2\theta e^{\mp i\phi}, \tag{2.8}\] provided \(\{\cos 2\theta(x),\phi(y)\}=i\delta(x-y)\)[36]. We shall seek to extract the Poisson-Lie symmetry as explained in Section 2 from this current algebra. To do so, we first consider the _smeared_ version of (2.7). This means we introduce functions \(\alpha:\mathbb{R}\to\mathbb{R}^{3}\) and integrate over the currents, such that (2.7) reads as \[\{\Sigma[\alpha],\Sigma[\alpha^{\prime}]\}=\int dxdy\,\alpha^{i}(x)\alpha^{ \prime j}(y)\{\sigma_{i}(x),\sigma_{j}(y)\}=\int dx\,(\alpha\times\alpha^{ \prime})^{i}(x)\sigma_{i}(x)=\Sigma[\alpha\times\alpha^{\prime}]\] in terms of the smeared currents \[\Sigma[\alpha]=\int dx\,\alpha^{i}\sigma_{i}(x). \tag{2.9}\] By treating the image of \(\alpha\in C^{\infty}(\mathbb{R}^{3})\) as coordinate functions on \(\mathbb{R}^{3}\), such that \(\alpha^{i}(g)=g_{i}\) for \(g\in\mathbb{R}^{3}\), we then identify \(\Sigma\) as a _Poisson map_ between the smeared current algebra and the Poisson structure \[\{\alpha^{i},\alpha^{j}\}^{*}=\epsilon_{ijk}\alpha^{k}.\] By identifying \(\mathbb{R}^{3}\) as the dual Lie algebra \(\mathfrak{su}_{2}^{*}\), this Poisson structure is in fact the one obtained from (2.2), \[\{\alpha^{i},\alpha^{j}\}^{*}(g)=\epsilon_{ijk}\alpha_{k}(g)=\langle g,[d_{g} \alpha^{i},d_{g}\alpha^{j}]\rangle, \tag{2.10}\] in which the underlying classical \(r\)-matrix is \[r=\sigma_{i}\otimes\sigma_{i}\in\mathfrak{su}_{2}^{2\otimes}.\] This \(r=r(1)\) is precisely the classical limit \(r(\mu)=\frac{1}{\mu}\mathcal{P}\) of the \(R\)-matrix (2.5) underlying the XXX model [34, 35] at \(\mu=1\). Further, since the quadratic Casimir \((r^{\odot})^{ij}=\delta_{ij}\) is diagonal in the basis \(\{\alpha^{i}\},\{\sigma_{i}\}\) we have chosen, the resulting classical Lax function \(L:\mathbb{R}^{3}\to\mathfrak{su}_{2}\) from (2.4) \[l(\alpha^{i})=(r^{\odot})^{jk}\langle\alpha^{i},\sigma_{j}\rangle\sigma_{k}= \sigma_{i}\] coincides with the classical Lax potential of the XXX spin chain in the continuum, \[l(x)=\sigma_{i}\otimes\sigma_{i}(x),\] under the Poisson-Lie duality \(\alpha^{i}=\langle\cdot,\sigma_{i}\rangle\) between \(\mathbb{R}^{3}\) and \(\mathfrak{su}_{2}\). Notice that only the second tensor factor (ie. the "quantum space" [35, 9]) acquires coordinate dependence through the smearing map \(\Sigma\). Conversely, given a Poisson-Lie symmetry and the setup of Section 2, the functional \(\Sigma\) allows us to associate coordinate functions \(s_{i}\in C^{\infty}(\mathfrak{g}^{*})\) on \(\mathfrak{g}^{*}\) to distributional generators \(\sigma_{i}\) of the \(\mathfrak{g}\)-current algebra, which essentially appends appropriate delta functions to (2.10). We call this procedure a _restoration of the continuum_. 
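As an aside, the \(R\)-matrix (2.5) is the rational solution \(R(\mu)=\mu+\mathcal{P}\) of the quantum Yang-Baxter equation, with \(\mathcal{P}\) the permutation operator on \(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\); this standard property is easy to confirm numerically. The sketch below (an illustration with arbitrarily chosen spectral parameters, not code from any of the cited references) checks \(R_{12}(\mu-\nu)R_{13}(\mu)R_{23}(\nu)=R_{23}(\nu)R_{13}(\mu)R_{12}(\mu-\nu)\).

```python
# Numerical check (illustration only) that R(u) = u*Id + P, with P the
# permutation operator on C^2 ⊗ C^2, satisfies the quantum Yang-Baxter
# equation R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v).
import numpy as np

I2, I4, I8 = np.eye(2), np.eye(4), np.eye(8)

Pswap = np.zeros((4, 4))                  # P|i,j> = |j,i>
for i in range(2):
    for j in range(2):
        Pswap[2 * j + i, 2 * i + j] = 1.0

def R(u):
    return u * I4 + Pswap

def R12(u):                               # acts on factors 1,2 of C^2 ⊗ C^2 ⊗ C^2
    return np.kron(R(u), I2)

def R23(u):                               # acts on factors 2,3
    return np.kron(I2, R(u))

P13 = np.zeros((8, 8))                    # |i,j,k> -> |k,j,i>
for i in range(2):
    for j in range(2):
        for k in range(2):
            P13[4 * k + 2 * j + i, 4 * i + 2 * j + k] = 1.0

def R13(u):                               # acts on factors 1,3
    return u * I8 + P13

u, v = 0.37, -1.42                        # arbitrary spectral parameters
lhs = R12(u - v) @ R13(u) @ R23(v)
rhs = R23(v) @ R13(u) @ R12(u - v)
print(np.allclose(lhs, rhs))              # True
```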
Reconstructing the classical Hamiltonian.We have seen above how we can recover the classical \(\mathfrak{su}_{2}\)-current algebra from an underlying Lie bialgebra structure on \(\mathfrak{su}(2)\), exposing the Poisson-Lie symmetry of the classical XXX Hamiltonian. To further drive the point home, we demonstrate here that restoring the continuum can in fact reconstruct the classical Lax Hamiltonian (2.6). To begin, we consider the classical continuum Lax operator \(\hat{l}(x)=d/dx+l(x)\). The associated classical monodromy matrix \(\mathcal{T}\) by definition satisfies the following flatness condition \(\hat{l}(x)\mathcal{T}(x;x^{\prime})=0\), subject to the boundary condition \(\mathcal{T}(x;x)=1\)[37]. This condition can be solved \[\mathcal{T}(x;x^{\prime})=P\exp\left(-\int_{x^{\prime}}^{x}dt\,l(t)\right), \qquad x^{\prime}<x\in\mathbb{R}\] through the path-ordered exponential. We can then form the _transfer matrix_ \[Z=\operatorname{tr}_{(1)}\mathcal{T},\] by tracing out the auxiliary space \(V_{(1)}\) (the first tensor factor in \(l\)[34, 35]). The classical Hamiltonian is given by the free energy \[h=-\ln Z=-\ln\operatorname{tr}_{(1)}\mathcal{T}=-\det_{(1)}\ln\mathcal{T}\] of the partition function defined by the transfer matrix \(\mathcal{T}\)[38]. Now suppose \(x-x^{\prime}=\ell\) is one lattice constant \(\ell\ll 1\) apart, such that \(\int_{x}^{x^{\prime}}dt\,l(t)\approx\ell\,l(x)\), then in the _long-wavelength limit_[38] we approximate \[\frac{1}{\ell}\int_{x}^{x^{\prime}}dx\,l(x)\sim l=\int dx\,l^{\prime}(x).\] Computing the determinant \(\det_{(1)}\ln\mathcal{T}\) then indeed reproduces the classical Hamiltonian (2.6) in the Lax representation. To summarize, in the above we have given a brief review of the XXX quantum spin chain, and demonstrated how one can uncover the underlying Poisson-Lie symmetry in the classical limit. In the following, we shall generalize the theory of integrable systems outlined in Section 2 to the Lie 2-algebra case. This gives a categorification of the underlying Poisson structure, from which we shall reconstruct the lattice system associated to the above quantum lattice. ## 3 Strict Lie 2-bialgebras and strict Poisson-Lie 2-groups ### Lie 2-algebras and Lie 2-groups **Definition 3.1**.: A Lie algebra crossed-module \((\mathfrak{g},\rhd,[\cdot,\cdot]_{0})\) is the data of a pair of Lie algebras \((\mathfrak{g}_{-1},[\cdot,]^{(-1)})\), \((\mathfrak{g}_{0},[\cdot,]_{0})\), a Lie algebra action \(\rhd:\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1}\to\mathfrak{g}_{-1}\) and a Lie algebra homomorphism \(t:\mathfrak{g}_{-1}\to\mathfrak{g}_{0}\) (called the _\(t\)-map_), satisfying the equivariance and Peiffer identities \[t(X\rhd Y)=[X,tY]_{0},\qquad[Y,Y^{\prime}]^{(-1)}=(tY)\rhd Y^{\prime}, \tag{3.1}\] as well as the 2-Jacobi identities \[[X,[X^{\prime},X^{\prime\prime}]_{0}]_{0}+[X^{\prime},[X^{\prime \prime},X]_{0}]_{0}+[X^{\prime\prime},[X,X^{\prime}]_{0}]_{0}=0,\] \[X\rhd(X^{\prime}\rhd Y)-X^{\prime}\rhd(X\rhd Y)-[X,X^{\prime}]_ {0}\rhd Y=0, \tag{3.2}\] \(\forall X,X^{\prime},X^{\prime\prime}\in\mathfrak{g}_{0}\), and \(\forall Y,Y^{\prime}\in\mathfrak{g}_{-1}\). We shall denote a Lie algebra crossed-module by \(\mathfrak{g}=(\mathfrak{g}_{-1}\xrightarrow{t}\mathfrak{g}_{0},\rhd,[\cdot ]_{0})\)[5]. It is well-known that Lie algebra crossed-modules are equivalent to _strict_\(L_{2}\)-algebras [3, 39]. **Definition 3.2**.: A _strict_\(L_{2}\)-algebra is a strict 2-term \(L_{\infty}\)-algebra. 
Explicitly, one has a graded space \(\mathfrak{g}\cong V_{-1}\oplus V_{0}\) equipped with \(n\)-ary operations \(\mu_{n}\in\operatorname{Hom}^{2-n}(\mathfrak{g}^{n\wedge},\mathfrak{g})\) given by \[n=1:\quad\mu_{1}:V_{-1}\to V_{0},\qquad n=2:\quad\mu_{2}=[\cdot,\cdot]:(V_{0} \oplus V_{-1})\otimes(V_{0}\oplus V_{-1})\to(V_{0}\oplus V_{-1})\] such that the following _Koszul conditions_ are satisfied, \[[X,X^{\prime}]=-[X^{\prime},X],\quad[X,Y]=-[Y,X],\quad\mu_{1}[X,Y ]=[X,\mu_{1}Y],\quad[\mu_{1}Y,Y^{\prime}]=[Y,\mu_{1}Y^{\prime}],\] \[[[X,X^{\prime\prime}],X^{\prime\prime}]+[[X^{\prime\prime},X],X^{ \prime}]+[[X^{\prime},X^{\prime\prime}],X]=0,\qquad[[X,X^{\prime}],Y]+[[X,Y],X ^{\prime}]+[X,[X^{\prime},Y]]=0,\] where \(X,X^{\prime},X^{\prime\prime}\in V_{0}\), \(Y,Y^{\prime}\in V_{-1}\). It is convenient to write the graded bracket \(\mu_{2}=[,]:V_{i}\otimes V_{j}\to V_{i+j}\), \(-2\leqslant i+j\leqslant 0\), in terms of the degree \(i,j\mod 2\) of \(\mathfrak{g}\cong V_{-1}\oplus V_{0}\), such that \[\mu_{2}(Y+X,Y^{\prime}+X^{\prime})=[X,X^{\prime}]+\big{(}[X,Y^{ \prime}]+[Y,X^{\prime}]\big{)},\qquad X,X^{\prime}\in V_{0},\ Y,Y^{\prime} \in V_{-1}. \tag{3.3}\] We shall also extend \(\mu_{1}\) to the full space \(V_{0}\oplus V_{-1}\) by \(\mu_{1}(Y+X)=\mu_{1}Y\). Given a Lie algebra crossed-module \(\mathfrak{g}=(\mathfrak{g}_{-1}\xrightarrow{t}\mathfrak{g}_{0},\rhd,[\cdot,\cdot]_{0})\), we simply identify \(\mathfrak{g}_{-1}=V_{-1},\mathfrak{g}_{0}=V_{0}\) and \(t=\mu_{1}\). Then, one reassembles the graded bracket \(\mu_{2}\) from the bracket \([\cdot,\cdot]_{0}\) on \(\mathfrak{g}_{0}\) as well as the Lie algebra action \(\rhd\) such that \[\mu_{2}(Y+X,Y^{\prime}+X^{\prime})=[X,X^{\prime}]_{0}+\big{(}X\rhd Y^{\prime}- X^{\prime}\rhd Y\big{)},\qquad X,X^{\prime}\in V_{0},\ Y,Y^{\prime}\in V_{-1}.\] It is then simple to check that the Lie algebra crossed-module conditions imply precisely the Koszul conditions; in particular, the Peiffer identity implies the Koszul identity \[[\mu_{1}Y,Y^{\prime}]=[tY,Y^{\prime}]=[Y,Y^{\prime}]^{(-1)}=-[Y^{\prime},Y]^{( -1)}=-[tY^{\prime},Y]=-[\mu_{1}Y^{\prime},Y]=[Y,\mu_{1}Y^{\prime}]\] as required. Conversely, given a strict \(L_{2}\)-algebra, one may recover a Lie algebra crossed-module with the above procedure, provided one _defines_ the bracket \([\cdot,\cdot]^{(-1)}\) on \(\mathfrak{g}_{-1}\) by \[[Y,Y^{\prime}]^{(-1)}\equiv[\mu_{1}Y,Y^{\prime}], \tag{3.4}\] whence the Koszul conditions guarantee that this bracket is skew-symmetric and satisfies the Jacobi identity. Due to this result, we will use "**Lie 2-algebras**" in the following to refer to both a Lie algebra crossed-module and a strict \(L_{2}\)-algebra. All Lie 2-algebras will be understood as strict \(L_{2}\)-algebras in this paper, unless otherwise specified. It is known that there is a one-to-one correspondence between (strict) Lie 2-algebras and connected, simply connected (strict) 2-groups [29, 40, 4], where the latter of which also admits a group crossed-module description. **Definition 3.3**.: A **Lie 2-group**\(G=G_{-1}\xrightarrow{\mathbf{t}}G_{0}\) is the data of a pair of Lie groups \(G_{-1},G_{0}\), a smooth Lie group automorphism \(\rhd:G_{0}\times G_{-1}\to G_{-1}\) and a smooth group homomorphism \(\mathbf{t}:G_{-1}\to G_{0}\) such that the following conditions \[\mathbf{t}(x\rhd y)=x\mathbf{t}(y)x^{-1},\qquad(\mathbf{t}y)\rhd y^{\prime}= yy^{\prime}y^{-1} \tag{3.5}\] are satisfied for each \(x\in G_{0}\) and \(y,y^{\prime}\in G_{-1}\). 
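To make Definition 3.1 concrete before returning to the group-level picture, the following minimal sketch checks the crossed-module axioms numerically for the simplest "adjoint" example, \(\mathfrak{g}_{-1}=\mathfrak{g}_{0}=(\mathbb{R}^{3},\times)\cong\mathfrak{su}(2)\) with \(t=\mathrm{id}\) and \(X\rhd Y=X\times Y\); the example and the random test vectors are illustrative assumptions, not taken from the references.

```python
# Numerical check (illustration only) of the crossed-module axioms of
# Definition 3.1 for the adjoint example g_{-1} = g_0 = (R^3, x), t = id,
# X |> Y = X x Y, using random test vectors.
import numpy as np

rng = np.random.default_rng(2)
bracket = np.cross            # Lie bracket on R^3, isomorphic to su(2)
act = np.cross                # crossed-module action |>
t = lambda Y: Y               # t = identity

X, Xp, Xpp = rng.standard_normal((3, 3))
Y, Yp = rng.standard_normal((2, 3))

# equivariance and Peiffer identities (3.1)
assert np.allclose(t(act(X, Y)), bracket(X, t(Y)))
assert np.allclose(bracket(Y, Yp), act(t(Y), Yp))

# 2-Jacobi identities (3.2)
jac0 = (bracket(X, bracket(Xp, Xpp)) + bracket(Xp, bracket(Xpp, X))
        + bracket(Xpp, bracket(X, Xp)))
jac1 = act(X, act(Xp, Y)) - act(Xp, act(X, Y)) - act(bracket(X, Xp), Y)
assert np.allclose(jac0, 0) and np.allclose(jac1, 0)
print("crossed-module axioms hold for the adjoint example")
```

With \(t=\mathrm{id}\), the Peiffer identity forces the bracket on \(\mathfrak{g}_{-1}\) to agree with the action, which is what the second assertion verifies.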
It is easy to see that the \(t\)-map for the Lie algebra crossed-module is the tangent pushforward (ie. the derivative) of the smooth map \(\mathbf{t}\) in the corresponding Lie 2-group \(G\). **Lie bialgebra crossed-modules and 2-group 2-cocycles.** **Definition 3.4**.: The following linear maps \[\delta_{-1}:\mathfrak{g}_{-1}\to\mathfrak{g}_{-1}^{2\otimes},\qquad\delta_{0}:\mathfrak{g}_{0}\to(\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1})\oplus(\mathfrak{g}_{-1}\otimes\mathfrak{g}_{0})\] on the Lie 2-algebra \(\mathfrak{g}\) are called a **Lie 2-algebra 2-cocycle** iff the conditions listed in [3, 40] are satisfied, the first of which (**ID1** in **Theorem 2.15** of [3]) reads \[\delta_{0}t=(t\otimes 1+1\otimes t)\delta_{-1}.\] The inner product \(\iota\) in (3.8) is given by the evaluation pairing \(\langle\cdot,\cdot\rangle:\mathfrak{g}^{*}\otimes\mathfrak{g}\to k\) such that \(\langle g+f,Y+X\rangle=f(Y)+g(X)\) for each \(g\in\mathfrak{g}_{0}^{*},f\in\mathfrak{g}_{-1}^{*},X\in\mathfrak{g}_{0},Y\in\mathfrak{g}_{-1}\). The dual \(t^{T}\) of the \(t\)-map is taken with respect to this pairing, \[\langle t^{T}g,Y\rangle=\langle g,tY\rangle,\qquad\forall g\in\mathfrak{g}_{0}^{*},\ Y\in\mathfrak{g}_{-1}.\] A quadratic 2-Casimir [5] can also be used to induce such an invariant bilinear pairing; we shall explain this in more detail later. We note that \(G\) acts on \(\mathfrak{g}\) by the adjoint representation \(\mathrm{Ad}\), defined in more detail in (3.18). ### 2-graded Poisson structure We start by providing the definition of a Poisson 2-algebra directly inspired by the notion of Lie 2-algebra [4, 32]. **Definition 3.5**.: Consider \(M=(M_{-1}\xrightarrow{\mathfrak{t}}M_{0})\) given in terms of a pair of manifolds \(M_{-1},M_{0}\) and a smooth map \(\mathfrak{t}:M_{-1}\to M_{0}\). The (graded) algebra \(C^{\infty}(M)=C^{\infty}(M_{0})\xrightarrow{\mathfrak{t}^{*}}C^{\infty}(M_{-1})\) of smooth functions on \(M\) is given in terms of the graded sum \(C^{\infty}(M_{-1})\oplus C^{\infty}(M_{0})\), and the pullback \(\mathfrak{t}^{*}:C^{\infty}(M_{0})\to C^{\infty}(M_{-1})\). Note the reversal of the degrees due to the pullback -- \(F_{0}\in C^{\infty}(M_{0})\) has degree-(-1) while \(F_{-1}\in C^{\infty}(M_{-1})\) has degree-0. It will also be convenient to extend \(\mathfrak{t}^{*}\) to all of \(C^{\infty}(M)\cong C^{\infty}(M_{0})\oplus C^{\infty}(M_{-1})\) by \[\mathfrak{t}^{*}F=\mathfrak{t}^{*}(F_{0}\oplus F_{-1})=\mathfrak{t}^{*}F_{0},\qquad\forall\ F\in C^{\infty}(M).\] The space of sections \(\Gamma(M,TM)\) of vector fields inherits the graded structure from \(TM\cong TM_{-1}\times TM_{0}\). Hence, to build bivectors on \(M\), we begin by forming the following 3-term chain complex \[\Gamma(M,TM\otimes TM)=\Gamma(M,TM_{-1}\otimes TM_{-1})\xrightarrow{D_{t}^{+}}\big{(}\Gamma(M,TM_{-1}\otimes TM_{0})\oplus\Gamma(M,TM_{0}\otimes TM_{-1})\big{)}\xrightarrow{D_{t}^{-}}\Gamma(M,TM_{0}\otimes TM_{0}),\] where \(D_{t}^{\pm}=t\otimes\mathrm{id}\pm\mathrm{id}\otimes t\) and \(t=\mathfrak{t}_{*}:TM_{-1}\to TM_{0}\) is the tangent pushforward of the anchor map \(\mathfrak{t}:M_{-1}\to M_{0}\). In accordance with the grading, we assign the degrees -2, -1, 0 to the terms of the complex \(\Gamma(M,TM^{2\otimes})\) from left to right. We shall define the space of bivector fields \(\mathfrak{X}^{2}(M)\) as a subspace of the complex \(\Gamma(M,TM^{2\otimes})\).
**Definition 3.6**.: The **graded bivector fields**\(\mathfrak{X}^{2}(M)\) on \(M\) consist of sections \(\Pi\in\Gamma(M,TM^{2\otimes})\) such that the following conditions \[\mathfrak{t}^{*}\Pi^{0}=D_{t}^{+}\Pi^{-1},\qquad D_{t}^{-}\Pi^{0}=0 \tag{3.9}\] are satisfied, where \(\Pi^{-1}\) has degree-(-2) and \(\Pi^{0}\) has degree-(-1) in \(\Gamma(M,TM^{2\otimes})\). Due to the second condition, we can introduce a component \(\bar{\Pi}^{0}\) in degree-0 by \[\bar{\Pi}^{0}=(1\otimes t)\Pi^{0}=(t\otimes 1)\Pi^{0}.\] One can compute that, for any smooth map \(\phi:X\to Y\) and any vector \(\xi\in\Gamma(X,TX)\), we have \[\xi(\phi^{*}F)=(\phi_{*}\xi)(F),\qquad F\in C^{\infty}(Y),\] and therefore \[D_{t}^{+}\Pi^{-1}=\Pi^{-1}\circ(\mathfrak{t}^{*}\otimes 1+1\otimes\mathfrak{t}^{*}). \tag{3.10}\] This will be important in the following. We use the subspace of _skew_ bivector fields \(\mathfrak{X}^{2}_{\mathrm{sk}}(M)\subset\Gamma(M,TM\wedge TM)\) to define the following structure on \(C^{\infty}(M)\). Let \(\Pi=\Pi^{-1}+\Pi^{0}\in\mathfrak{X}^{2}_{\mathrm{sk}}(M)\), we define \[\{F,F^{\prime}\}=\Pi(F\otimes F^{\prime}),\qquad F,F^{\prime}\in C^{\infty} (M), \tag{3.11}\] which can be more explicitly written in the decomposed form \[\{F,F^{\prime}\}_{0} = \{F_{0},F_{0}^{\prime}\}_{0}=\bar{\Pi}^{0}(F_{0}\otimes F_{0}^{ \prime}),\] \[\{F,F^{\prime}\}_{-1} = \{F_{-1},F_{0}^{\prime}\}_{-1}+\{F_{0},F_{-1}^{\prime}\}_{-1}= \Pi^{0}(F_{-1}\otimes F_{0}^{\prime}+F_{0}\otimes F_{-1}^{\prime}),\] \[\{F,F^{\prime}\}_{-2} = \{F_{-1},F_{-1}^{\prime}\}_{-2}=\Pi^{-1}(F_{-1}\otimes F_{-1}^{ \prime}),\] by leveraging the decomposition \(F=F_{-1}\oplus F_{0}\) of functions on \(M\). We now prove that \((C^{\infty}(M),\{\cdot,\cdot\})\) is in fact a Lie 2-algebra. **Lemma 3.1**.: _Let \(\Pi=\Pi^{-1}+\Pi^{0}\in\mathfrak{X}^{2}_{\text{\rm{sk}}}(M)\) denote a_ **Poisson bivector** _on \(M\), namely a bivector field satisfying_ \[\sum_{\text{cycl.}}\Pi(\Pi\otimes 1)=0. \tag{3.12}\] _Then the graded space \(C^{\infty}(M)=C^{\infty}(M_{0})\xrightarrow{\mathfrak{t}^{*}}C^{\infty}(M_{-1})\) equipped with the bracket (3.11) a strict Lie 2-algebra. We call \((C^{\infty}(M),\{\cdot,\cdot\})\) the_ **Poisson 2-algebra** _of the graded Poisson manifold \((M,\Pi)\)._ Proof.: The proof consists in showing that the different properties given in Definition 3.2 are satisfied. The skew-symmetry property is automatic. By a direct computation, the first condition in (3.9) implies \[\mathfrak{t}^{*}\{F,F^{\prime}\}_{-1} = (\mathfrak{t}^{*}\Pi^{0})(F_{0}\otimes F^{\prime}_{-1}+F_{-1} \otimes F^{\prime}_{0})\] \[= (D^{+}_{t}\Pi^{-1})(F_{0}\otimes F^{\prime}_{-1}+F_{-1}\otimes F ^{\prime}_{0})\] \[= \Pi^{-1}(\mathfrak{t}^{*}F_{0}\otimes F^{\prime}_{-1}+F_{-1} \otimes\mathfrak{t}^{*}F^{\prime}_{0})\] \[= \{F,\mathfrak{t}^{*}F^{\prime}\}_{-2}+\{\mathfrak{t}^{*}F,F^{ \prime}\}_{-2},\] where we have also used (3.10). On the other hand, \(\{\cdot,\cdot\}_{0}\) is determined by \(\{\cdot,\cdot\}_{-1}\), as \(\bar{\Pi}^{0}\) is induced by \(\Pi^{0}\) through \(D^{+}_{t}\) from (3.9). 
We thus have \[\{F,F^{\prime}\}_{0} = \bar{\Pi}^{0}(F_{0}\otimes F^{\prime}_{0})=\frac{1}{2}(D^{+}_{t} \Pi^{0})(F_{0}\otimes F^{\prime}_{0})\] \[= \frac{1}{2}\Pi^{0}(\mathfrak{t}^{*}F_{0}\otimes F^{\prime}_{0}+F_ {0}\otimes\mathfrak{t}^{*}F^{\prime}_{0})\] \[= \frac{1}{2}(\{\mathfrak{t}^{*}F,F^{\prime}\}_{-1}+\{F,\mathfrak{ t}^{*}F^{\prime}\}_{-1})=\{\mathfrak{t}^{*}F,F^{\prime}\}_{-1}.\] From the Lie 2-algebraic perspective [3], the right-hand side of this computation should be taken as the _definition_ of \(\{\cdot,\cdot\}_{0}\). Now it suffices to check the 2-Jacobi identities, \[\{\{F,F^{\prime}\}_{-2},F^{\prime\prime}\}_{-1}+\{\{F^{\prime},F^ {\prime\prime}\}_{-1},F\}_{-1}+\{\{F^{\prime\prime},F\}_{-1},F^{\prime}\}_{-1} = 0\] \[\{\{F,F^{\prime}\}_{-2},F^{\prime\prime}\}_{-2}+\{\{F^{\prime},F^ {\prime\prime}\}_{-2},F\}_{-2}+\{\{F^{\prime\prime},F\}_{-2},F\}_{-2} = 0.\] These are nothing but (3.12). The central example of a graded Poisson manifold \((M,\Pi)\) is a **(strict) Poisson-Lie group \((G,\Pi)\)**, where the graded Poisson bivector field \(\Pi=\Pi^{-1}+\Pi^{0}\in\mathfrak{X}^{2}_{\text{\rm{sk}}}(M)\) is given by \[\Pi^{-1}_{y}=(L_{y})_{*}(\hat{\delta}_{-1})_{y},\qquad\Pi^{0}_{x}=(L_{x})_{*}( \hat{\delta}_{0})_{x},\qquad\bar{\Pi}^{0}_{x}=\frac{1}{2}(L_{x})_{*}(D^{+}_{t} \hat{\delta}_{0})_{x},\] where \(\hat{\delta}\) integrates the Lie 2-algebra 2-cocycle \(\delta=\delta_{-1}+\delta_{0}\) on \(\mathfrak{g}\), and \(L_{*}\) is the pushforward of the left-multiplication on \(G_{-1}\rtimes G_{0}\). The conditions (3.9) are nothing but (3.8), and (3.12) follow from the 2-cobracket conditions (3.7). The rest of the 2-cocycle conditions, namely the third and fourth equations in (3.6), in fact implies the _multiplicativity_ of the bivector \(\Pi\) with respect to the group and groupoid multiplications in \(G\)[4]. 2-graded Poisson maps.Let \(M,M^{\prime}\) denote two 2-graded spaces, with \(t\)-maps \(\mathbf{t},\mathbf{t}^{\prime}\), respectively. A smooth 2-graded map \(\mathcal{J}=(\mathcal{J}_{-1},\mathcal{J}_{0}):M\to M^{\prime}\) consists of smooth maps \(\mathcal{J}_{-1,0}:M_{-1,0}\to M^{\prime}_{-1,0}\) as its components, such that we have \(\mathbf{t}^{\prime}\mathcal{J}_{-1}=\mathcal{J}_{0}\mathbf{t}\). These maps pullback onto maps \(\mathcal{J}_{-1,0}^{*}:C^{\infty}(M^{\prime}_{-1,0})\to C^{\infty}(M_{-1,0})\) on functions satisfying \(\mathfrak{t}^{*}\mathcal{J}_{0}^{*}=\mathcal{J}_{-1}^{*}\mathbf{t}^{\prime*}\), such that \(\mathcal{J}^{*}=(\mathcal{J}_{0}^{*},\mathcal{J}_{-1}^{*}):C^{\infty}(M^{\prime} )\to C^{\infty}(M)\) defines the 2-graded map on the function algebra of \(M^{\prime}\). When \(M=G,M^{\prime}=G^{\prime}\) are 2-groups, then \(\mathcal{J}\) must be a **2-group homomorphism**[4]: the components \(\mathcal{J}_{0},\mathcal{J}_{-1}\) are group homomorphisms such that \(\mathcal{J}_{-1}(x\triangleright y)=(\mathcal{J}_{0}x)\triangleright^{\prime}( \mathcal{J}_{-1}y)\) for each \(x\in G_{0},y\in G_{-1}\), in addition to the condition \(\mathcal{J}_{0}\mathbf{t}=\mathbf{t}^{\prime}\mathcal{J}_{-1}\). These imply that \(\mathcal{J}=(\mathcal{J}_{-1},\mathcal{J}_{0})\) preserves the Peiffer identities (3.5) on \(G,G^{\prime}\). 
If we let \(j=(j_{-1},j_{0})\) denote the derivative of \(\mathcal{J}\), then \(j\) preserves the Peiffer identities (3.1) on the Lie 2-algebras \(\mathfrak{g},\mathfrak{g}^{\prime}\): \[t^{\prime}{}_{J-1}=j_{0}t,\qquad j_{0}[X,X^{\prime}]=[j_{0}X,j_{0}X^{\prime}]^{ \prime},\qquad j_{-1}(X\triangleright Y)=(j_{0}X)\triangleright^{\prime}(j_{-1}Y)\] for each \(X,X^{\prime}\in\mathfrak{g}_{0},Y\in\mathfrak{g}_{-1}\), where \(t,t^{\prime}\) denote respectively the crossed-module maps on \(\mathfrak{g},\mathfrak{g}^{\prime}\). We call such maps **Lie 2-algebra homomorphisms**[41]. Suppose \((G,\Pi)\) and \((G^{\prime},\Pi^{\prime})\) are two Poisson-Lie 2-groups. The condition for \(\mathcal{J}\) to be a Poisson map is that its pullback \(\mathcal{J}^{*}\) commutes with the bivectors \[(\mathcal{J}^{*}_{0}\Pi^{\prime 0})(F_{0}\otimes\mathcal{F}^{ \prime}_{-1}+F_{-1}\otimes F^{\prime}_{0}) = \Pi^{0}(\mathcal{J}^{*}_{0}F_{0}\otimes\mathcal{J}^{*}_{-1}F^{ \prime}_{-1}+\mathcal{J}^{*}_{-1}F_{-1}\otimes\mathcal{J}^{*}_{0}F^{\prime}_{ 0}),\] \[(\mathcal{J}^{*}_{-1}\Pi^{\prime-1})(F_{-1}\otimes F^{\prime}_{-1}) = \Pi^{-1}(\mathcal{J}^{*}_{-1}F_{-1}\otimes\mathcal{J}^{*}_{-1}F^{ \prime}_{-1})\] for each \(F,F^{\prime}\in C^{\infty}(G^{\prime})\). If we let \(\{\cdot,\cdot\},\{\cdot,\cdot\}^{\prime}\) denote respectively the \(L_{2}\)-Poisson brackets induced on \(C^{\infty}(G),C^{\infty}(G^{\prime})\) via (3.11), then \(\mathcal{J}^{*}\) is required to preserve them: \[\mathcal{J}^{*}\{\cdot,\cdot\}^{\prime}=\{\mathcal{J}^{*},\mathcal{J}^{*}.\}.\] This must hold for each graded component, hence they are nothing but the conditions for \(\mathcal{J}^{*}\) to be a \(L_{2}\)-algebra homomorphism between Poisson 2-algebras. In other words, we have **Definition 3.7**.: Let \((G,\Pi),(G^{\prime},\Pi^{\prime})\) denote two Poisson-Lie 2-groups. A 2-graded map \(\mathcal{J}:G\to G^{\prime}\) is a **2-graded Poisson map** iff \(\mathcal{J}\) is a 2-group homomorphism such that its pullback \(\mathcal{J}^{*}=(\mathcal{J}^{*}_{0},\mathcal{J}^{*}_{-1})\) is a Poisson 2-algebra homomorphism. In particular, a Poisson-Lie 2-group \((G,\Pi)\) is precisely such that the group and groupoid multiplications are 2-graded Poisson maps [4]. ### The 2-graded classical \(r\)-matrix The theory of 2-bialgebras and the associated 2-graded classical \(r\)-matrix \(R\in\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1}\) have been studied previously in [3, 4, 5]. Write \(R=R_{1}\oplus R_{2}\), where \(R_{1}\in\mathfrak{g}_{-1}\otimes\mathfrak{g}_{0}\) and \(R_{2}\in\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1}\), then we form the skew-symmetric and symmetric pieces \[R^{\wedge}=(R_{1}-R_{2}^{T})\oplus(R_{2}-R_{1}^{T}),\qquad R^{\odot}=(R_{1}+R _{2}^{T})\oplus(R_{2}+R_{1}^{T})\] of \(R\). It then follows clearly that we have \(R=R^{\wedge}+R^{\oplus}\), upon which we impose the **modified 2-graded classical Yang-Baxter equations** (2-CYBE) \[[\![R^{\wedge},R^{\wedge}]\!]=\Omega=-[\![R^{\odot},R^{\odot}]\!],\qquad D_{t }^{-}R^{\wedge}=0. \tag{3.13}\] Here, \([\![\cdot,\cdot]\!]\) is the graded Schouten bracket [3, 4]. The second condition \(D_{t}^{-}R=0\) has no 1-graded analogue, and is necessary for our theory. 
Now it was argued in [3] that, under certain mild technical conditions, there is a decomposition \(R=r-D_{t}^{+}\mathfrak{r}\) with \(D_{t}^{+}=t\otimes 1+1\otimes t\), where \[r\in(\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1})\oplus(\mathfrak{g}_{-1} \otimes\mathfrak{g}_{0}),\qquad\mathfrak{r}\in\mathfrak{g}_{-1}^{2\otimes}.\] For later applications, it would be convenient to suppose that the skew-symmetric and symmetric components of \(R\) also decompose accordingly [5], \[R^{\wedge}=r^{\wedge}-D_{t}^{+}\mathfrak{r}^{\wedge},\qquad R^{\odot}=r^{ \odot}-D_{t}^{+}\mathfrak{r}^{\odot}, \tag{3.14}\] where \(r^{\wedge},r^{\odot}\) are defined as in (3.3), and similarly for \(\mathfrak{r}^{\wedge},\mathfrak{r}^{\odot}\). Notice under (3.14), the condition \(D_{t}^{-}R=0\) only concerns the factor \(r\), as \(D_{t}^{-}D_{t}^{+}=0\)[3]. Solutions of (3.13) are then called **2-graded classical \(r\)-matrices**, from which we may construct a Poisson bracket \(\{\cdot,\cdot\}\) (3.11) on the 2-group \(G\). These elements \(R^{\odot},R^{\wedge}\) shall play a central role in the following. In particular, it was shown that the skew-symmetric piece \(R^{\wedge}\) yields a 2-coboundary \(\delta=(\delta_{-1},\delta_{0})=dR^{\wedge}\) on \(\mathfrak{g}\) -- satisfying (3.6), (3.7) -- given by1[3] Footnote 1: Note here that \([\cdot,\cdot]\) is the graded Lie bracket on \(\mathfrak{g}\), which includes \([\cdot,\cdot]\!]_{60}\) as well as the crossed-module action \(\triangleright\). \[\delta_{0}(X)=[X\otimes 1+1\otimes X,R^{\wedge}],\qquad\delta_{-1}(Y)=[Y \otimes 1+1\otimes Y,R^{\wedge}], \tag{3.15}\] which determines a Lie 2-algebra structure on the _dual_\(\mathfrak{g}^{*}\)[1]. Hence the data \((\mathfrak{g},\delta=dR^{\wedge})\) defines a Lie 2-bialgebra. Skew-symmetric component \(R^{\wedge}\).As was shown in Section 3 and [4], a 2-cocycle \(\delta\) on \(\mathfrak{g}\) integrates to a Poisson bivector field \(\Pi\) on \(G\), which is multiplicative with respect to the underlying 2-group structure (namely the structure of a group groupoid). In other words, given the classical 2-\(r\)-matrix \(R\) on \(\mathfrak{g}\) satisfying (3.13), \(G\) is a Poisson-Lie 2-group. Since the underlying 2-cocycle \(\delta=dR\) is a Lie 2-algebra 2-coboundary, integrating \(\delta\) to the Poisson bivector \(\Pi\) involves integrating the 2-adjoint representation of \(\mathfrak{g}\) on itself. We shall describe this more explicitly. The 2-adjoint representation of \(\mathfrak{g}\) on itself consist of the following, \[{}_{2}\mathrm{ad}=(\mathrm{ad}_{0},\mathrm{ad}_{-1}):\mathfrak{g}\to\mathrm{ End}\,\mathfrak{g},\qquad\begin{cases}\mathrm{ad}_{0}:\mathfrak{g}_{0}\to \mathrm{End}(\mathfrak{g}_{0}\oplus\mathfrak{g}_{-1})\\ \mathrm{ad}_{-1}:\mathfrak{g}_{-1}\to\mathrm{Hom}(\mathfrak{g}_{0},\mathfrak{ g}_{-1})\end{cases}\quad, \tag{3.16}\] where \[\mathrm{ad}_{0}(X)=(\mathrm{ad}_{X}=[X,\cdot],\chi_{X}=X\rhd\cdot),\qquad \mathrm{ad}_{-1}(Y)=\cdot\rhd Y\] for each \(X\in\mathfrak{g}_{0},Y\in\mathfrak{g}_{-1}\). They satisfy the following key identities \[\mathrm{ad}_{X}\,t=t\chi_{X},\qquad\mathrm{ad}_{-1}(Y)t=-\,\mathrm{ad}_{Y}, \qquad t\,\mathrm{ad}_{-1}(Y)=-\,\mathrm{ad}_{Y} \tag{3.17}\] for each \(X\in\mathfrak{g}_{0},Y\in\mathfrak{g}_{-1}\), which come from the equivariance and the Peiffer identity conditions (3.1). 
We then integrate each of these graded components individually (as operators) \[\mathrm{ad}_{X}=-\frac{d}{ds}|_{s=0}\,\mathrm{Ad}_{\exp sX},\qquad\chi_{X}=- \frac{d}{ds}|_{s=0}\mathcal{X}_{\exp sX},\qquad\mathrm{ad}_{-1}(Y)=-\frac{d}{ ds}|_{s=0}\,\mathrm{Ad}_{-1}(\exp sY)\] to yield an action \({}_{2}\,\mathrm{Ad}=(\mathrm{Ad}_{0},\mathrm{Ad}_{-1})\) of the Lie _2-group_\(G\) on \(\mathfrak{g}\), where \(\mathrm{Ad}_{0}=(\mathrm{Ad},\mathcal{X})\) and \(\mathrm{Ad}_{-1}=(\cdot\rhd y)\)\(y\in G_{-1}\). The corresponding identities \[\mathrm{Ad}_{x}\,t=t\mathcal{X}_{x},\qquad\mathrm{Ad}_{-1}(y)t=\mathrm{Ad}_{y ^{-1}},\qquad t\,\mathrm{Ad}_{-1}(y)=\mathrm{Ad}_{\mathfrak{g}^{-1}} \tag{3.18}\] are automatic for each \(x\in G_{0},y\in G_{-1}\). The necessary 2-group 2-coboundary \(\hat{\delta}\) is then obtained through this adjoint representation, \[\hat{\delta}_{(y,x)}=_{2}\mathrm{Ad}_{(y,x)}\,R^{\wedge}-R^{\wedge},\] and hence induces a Poisson-Lie bivector field \(\Pi\). This is in direct analogy with the usual Lie 1-algebra case. Symmetric component \(R^{\odot}\).Recall from the Lie 1-algebra case that the classical Yang-Baxter equation constrains the symmetric piece of the classical \(r\)-matrix to be ad-invariant. Similarly, (3.13) constrains \(R^{\odot}\) to be invariant under the 2-adjoint representation \({}_{2}\,\mathrm{ad}\) as described above. Moreover, \(R^{\odot}\) must also satisfy an equivariance condition \(D^{-}_{t}R^{\odot}=0\). Provided \(R^{\odot}\in\mathfrak{g}_{0}\odot\mathfrak{g}_{-1}\) is non-degenerate (ie. invertible as a matrix on \(\mathfrak{g}_{0}\oplus\mathfrak{g}_{-1}\)), it defines a _quadratic 2-Casimir_\(2\mathrm{Cas}_{\mathfrak{g}}\) of \(\mathfrak{g}\). Such a quadratic 2-Casimir element determines a non-degenerate bilinear form \(\langle\cdot,\cdot\rangle=\langle\cdot,\cdot\rangle_{\odot}\) on \(\mathfrak{g}\), which have been characterized in [5]. With \(\langle\cdot,\cdot\rangle\), we identify \(\mathfrak{g}\) with its own dual Lie 2-algebra \(\mathfrak{g}^{*}[1]\cong\mathfrak{g}_{0}^{*}\oplus\mathfrak{g}_{-1}^{*}\), consisting of elements \(g+f=\langle X+Y,\cdot\rangle\) such that the evaluation pairing reads \[f(Y^{\prime})+g(X^{\prime})=\langle Y+X,Y^{\prime}+X^{\prime}\rangle.\] The \(t\)-map \(t^{T}:\mathfrak{g}_{0}^{*}\to\mathfrak{g}_{-1}^{*}\) for the dual space \(\mathfrak{g}^{*}[1]\) is defined by \[(t^{T}g)(Y)=g(tY)\iff\langle t^{T}\cdot,\cdot\rangle=\langle\cdot,t\cdot\rangle.\] Note \(\deg(\mathfrak{g}_{0}^{*})=-1\) while \(\deg(\mathfrak{g}_{-1}^{*})=0\). Leveraging this pairing, the classical 2-Yang-Baxter equation (3.13) for \(R=R^{\wedge}+R^{\odot}\) then gives rise to a 2-coboundary \((\delta_{-1},\delta_{0})=dR^{\wedge}\) on \(\mathfrak{g}^{*}[1]\cong\mathfrak{g}\), for which \[[f,f^{\prime}]_{*}(Y)=(f\wedge f^{\prime})(\delta_{-1}Y),\qquad[f,g]_{*}(X)=(f \rhd^{*}g)(X)=(f\wedge g)(\delta_{0}X)\] defines a Lie 2-algebra structure on the dual \(\mathfrak{g}^{*}[1]\), where \(f\in\mathfrak{g}_{-1}^{*},g\in\mathfrak{g}_{0}^{*}\). The 2-cocycle conditions (3.6) imply precisely the Koszul identities of \([\cdot,\cdot]_{*}\)[3]. This interplay of the 2-Casimir \(R^{\odot}\) and the 2-coboundary \(R^{\wedge}\) is what we shall use in the following to develop our theory of higher integrability. 2-graded integrability The goal in this section is to lay out the general theory of 2-graded integrable systems, and derive an appropriate notion of a "2-graded Lax equation" as a categorification of the usual Lax equation. 
Then, once this is achieved, we specialize to the dual crossed-module \(\mathfrak{g}^{*}[1]\) and construct a 2-graded Lax pair on it, in analogy with the 1-algebra case introduced in Section 2. We then work to prove that it does in fact satisfy the 2-graded Lax equations. We begin with a 2-graded space \(M=M_{-1}\xrightarrow{\mathfrak{t}}M_{0}\) equipped with a Poisson bivector \(\Pi=\Pi^{-1}+\Pi^{0}\) satisfying (3.9) and (3.12). We let \((C^{\infty}(M),\mathfrak{t}^{*},\{\cdot,\cdot\})\) denote the Poisson 2-algebra arising from \(\Pi\) via **Lemma** 3.1. ### 2-Lax pair Consider smooth functions from the 2-graded space \(M=M_{-1}\oplus M_{0}\) into a Lie 2-algebra \(\mathfrak{g}\). We treat such functions as elements in the tensor product \(C^{\infty}(M)\otimes\mathfrak{g}\), which is a 3-term complex (cf. [3]) \[\underbrace{C^{\infty}(M_{0})\otimes\mathfrak{g}_{-1}}_{\deg(-2)}\xrightarrow {D^{+}}\underbrace{(C^{\infty}(M_{-1})\otimes\mathfrak{g}_{-1})\oplus(C^{ \infty}(M_{0})\otimes\mathfrak{g}_{0})}_{\deg(-1)}\xrightarrow{D^{-}} \underbrace{C^{\infty}(M_{-1})\otimes\mathfrak{g}_{0}}_{\deg(-0)} \tag{4.1}\] with the chain maps \(D^{\pm}=1\otimes t\pm\mathfrak{t}^{*}\otimes 1\). The graded Lie bracket \([\cdot,\cdot]\) on \(\mathfrak{g}\), together with the graded Poisson bracket \(\{\cdot,\cdot\}\) on \(C^{\infty}(M)\), as in **Proposition** 3.1, endow this complex with two Lie 2-algebra structures. Let \(H\in C^{\infty}(M)\) a Hamiltonian function on \(M=M_{-1}\xrightarrow{\mathfrak{t}}M_{0}\), which admits a graded decomposition \(H=H_{-1}+H_{0}\in C^{\infty}(M_{-1})\oplus C^{\infty}(M_{0})\). **Definition 4.1**.: A tuple of elements \((L,P)\in C^{\infty}(M)\otimes\mathfrak{g}\) is a **2-Lax pair**, of the Hamiltonian system \((M,\{\cdot,\},H)\) iff it satisfies the **2-Lax equation** \[\dot{L}=\{H,L\}=[P,L], \tag{4.2}\] where \(\{\cdot,\cdot\},[\cdot,\cdot]\) are the graded Poisson/Lie brackets on the complex (4.1). There is a subtlety associated to the meaning of "\(\dot{L}^{*}\) in (4.2), as the Hamiltonian \(H=H_{-1}+H_{0}\) here is itself graded. As such, the dynamics it generates is also graded, in the sense that there are essentially two Hamiltonians evolving under a single "time" parameter. We note the functions \(L,P:M\to\mathfrak{g}\) themselves need not be a 2-vector space homomorphisms. Indeed, such maps must only have components concentrated in degree-0 and degree-(-2) in (4.1) [40, 28]. ### Conserved quantities Recall that in the 1-algebra case, the trace polynomials \(f_{k}\) of the Lax function \(L\) are constants of motion. We wish now to investigate the analogous notion of "2-graded integrability" afforded by the 2-Lax equations (4.2). Toward this, we must first explain how to construct trace polynomials in the 2-graded context and hence the relevant concept of 2-representation in our context. 2-representations.Let \(V=V_{-1}\xrightarrow{\hat{\sigma}}V_{0}\) denote a 2-term complex of vector spaces. 
**Definition 4.2**.: The space of endomorphisms \(\mathfrak{gl}(V):\operatorname{End}^{-1}(V)\xrightarrow{\delta}\operatorname {End}^{0}(V)\) of \(V\) is a 2-graded space \[\operatorname{End}^{-1}(V)=\operatorname{Hom}(V_{0},V_{-1}),\qquad\operatorname {End}^{0}(V)=\{M+N\in\operatorname{End}(V_{-1})\oplus\operatorname{End}(V_{0}) \mid N\hat{\sigma}=\hat{\sigma}M\}, \tag{4.3}\] equipped with the following (strict) Lie 2-algebra structure [3, 40] \[\delta:\operatorname{End}^{-1}(V)\to\operatorname{End}^{0}(V), \delta(A)=A\hat{\sigma}+\hat{\sigma}A,\] \[\left[M+N,M^{\prime}+N^{\prime}\right]_{C}=\left[M,M^{\prime} \right]+\left[N,N^{\prime}\right], (M+N)\rhd_{C}A=MA-AN,\] \[\left[A,A^{\prime}\right]_{C}=A\hat{\sigma}A^{\prime}-A^{\prime} \hat{\sigma}A,\] for each \(M+N\in\operatorname{End}^{0}(V),\ A\in\operatorname{End}^{-1}(V)\). **Definition 4.3**.: A (strict) **2-representation**\(\rho:\mathfrak{g}\to\mathfrak{gl}(V)\) is a Lie 2-algebra homomorphism such that the following square \[\begin{CD}\mathfrak{g}_{-1}@>{t}>{}>\mathfrak{g}_{0}\\ @V{\rho_{1}}V{}V@V{\rho_{0}}V{}V\\ \operatorname{End}^{-1}(V)@>{\delta}>{}>\operatorname{End}^{0}(V)\end{CD} \tag{4.4}\] commutes. More explicitly, we have \(\rho=(\rho_{0},\rho_{1})\) with \(\rho_{0}(X)=(\rho_{0}^{0}(X),\rho_{0}^{1}(X))\in\operatorname{End}^{0}(V)\) and \(\rho_{1}(Y)\in\operatorname{End}^{-1}(V)\) for each \(X\in\mathfrak{g}_{0},Y\in\mathfrak{g}_{-1}\), such that the following conditions \[\rho_{0}^{0}(tY)=\partial\rho_{1}(Y),\qquad\rho_{0}^{1}(tY)=\rho_ {1}(Y)\partial,\] \[\rho_{1}(X\rhd Y)=(\rho_{0}X)\rhd_{C}\rho_{1}Y-\rho_{0}^{1}(X)\rho _{1}(Y)-\rho_{1}(Y)\rho_{0}^{0}(X) \tag{4.5}\] are satisfied. Furthermore, \(\rho_{0}=\rho_{0}^{1}+\rho_{0}^{0}\) represents \(\mathfrak{g}_{0}\) on respectively \(V_{-1}\) and \(V_{0}\), with \(\partial\) as the intertwiner. Elementary examples of 2-representations include the adjoint representation of \(\mathfrak{g}\) on itself, or the coadjoint representation of \(\mathfrak{g}\) on the dual Lie 2-algebra \(\mathfrak{g}^{*}[1]\); see [3, 5] for details of these examples. Any 2-representation \(\rho\) as defined above gives rise to a _genuine_ representation \(\rho^{gen}\) on the direct sum \(V_{-1}\oplus V_{0}\), which takes the form of a block matrix \[\rho^{gen}(L)=\begin{pmatrix}\rho_{0}^{1}(L_{0}+tL_{-1})&\rho_{1}(L_{-1})\\ 0&\rho_{0}^{0}(L_{0})\end{pmatrix}\in\mathfrak{gl}(V_{-1}\oplus V_{0}),\quad L _{0}\in\mathfrak{g}_{0},\quad L_{-1}\in\mathfrak{g}_{-1}, \tag{4.6}\] where \(L_{-1},L_{0}\) denotes the graded components of \(L\) that take values in \(\mathfrak{g}_{-1},\mathfrak{g}_{0}\subset\mathfrak{g}\), respectively. This representation was shown to satisfy \(\rho^{gen}([L,P])=[\rho^{gen}(L),\rho^{gen}(P)]_{C}\) in [40], where \([\cdot,\cdot]_{C}\) is the matrix commutator on \(\mathfrak{gl}(V_{-1}\oplus V_{0})\). **Example 4.1**.: _The most relevant 2-representation for our current paper is the 2-coadjoint representation \({}_{2}\operatorname{Ad}^{*}\) of the 2-group \(G\) on its dual Lie 2-algebra \(V=\mathfrak{g}^{*}[1]\). We shall now prove that \({}_{2}\operatorname{Ad}^{*}:G\to\operatorname{End}(\mathfrak{g}^{*}[1])\) is indeed a 2-representation._ _We define \({}_{2}\operatorname{Ad}^{*}\) by dualizing the adjoint representation \({}_{2}\operatorname{Ad}=(\operatorname{Ad}_{0},\Upsilon)\) of \(G\) on \(\mathfrak{g}\) defined in (3.16). 
Hence, \({}_{2}\operatorname{Ad}^{*}\) has the graded components_ \[\operatorname{Ad}_{0}^{*}=(\mathcal{X}^{*},\operatorname{Ad}^{*}):\mathfrak{ g}_{0}\to\operatorname{End}(\mathfrak{g}_{0}^{*}\oplus\mathfrak{g}_{-1}^{*}), \qquad\Upsilon^{*}:\mathfrak{g}_{-1}\to\operatorname{Hom}(\mathfrak{g}_{-1}^{* },\mathfrak{g}_{0}^{*})\] _satisfying for each \(x\in G_{0},y\in G_{-1}\) and \(X\in\mathfrak{g}_{0},f\in\mathfrak{g}_{-1},g\in\mathfrak{g}_{0}^{*},f\in \mathfrak{g}_{-1}^{*}\) the invariance conditions_ \[\langle\operatorname{Ad}_{x}^{*}g+X_{x}^{*}f,Y+X^{\prime}\rangle = \langle g+f,\mathcal{X}_{x^{-1}}Y+\operatorname{Ad}_{x^{-1}}X^{ \prime}\rangle,\] \[\langle\Upsilon_{y}^{*}(f),X\rangle = \langle f,\Upsilon_{y^{-1}}(X)\rangle\] _with respect to the natural pairing form \(\langle\cdot,\cdot\rangle\) between \(\mathfrak{g}^{*}[1]\) and \(\mathfrak{g}\)._ _Now by dualizing (3.18) against this pairing form, we see that \({}_{2}\operatorname{Ad}^{*}\) satisfies the following key identities_ \[t^{T}\operatorname{Ad}_{x}^{*}=\mathcal{X}_{x}^{*}t^{T},\qquad t^{T}\Upsilon_{ y}^{*}=\operatorname{Ad}_{y^{-1}}^{*},\qquad\Upsilon_{y}^{*}t^{T}=\operatorname{Ad}_{y^{-1}}^ {*},\] _where \(t^{T}:\mathfrak{g}_{0}^{*}\to\mathfrak{g}_{-1}^{*}\) is the dual \(t\)-map on \(\mathfrak{g}^{*}[1]\). The first identity implies \((\mathcal{X}^{*},\operatorname{Ad}^{*})\in\operatorname{End}(\mathfrak{g}^{*}[ 1])_{0}\), while the rest imply precisely the commutativity condition (4.4). Indeed, one explicitly computes for each \(Y\in\mathfrak{g}_{-1},X\in\mathfrak{g}_{0}\) that_ \[\langle Y,t^{T}(\Upsilon_{y}^{*}(f))\rangle = \langle Y,\operatorname{Ad}_{y^{-1}}^{*}f\rangle=\langle \operatorname{Ad}_{y}Y,f\rangle=\langle\mathcal{X}_{\mathbf{t}y^{-1}}Y,f \rangle=\langle Y,\mathcal{X}_{\mathbf{t}y^{-1}}^{*}f\rangle,\] \[\langle X,\Upsilon_{y}^{*}(t^{T}g)\rangle = \langle\Upsilon_{y^{-1}}X,t^{T}g\rangle=\langle\iota(\Upsilon_{y^{ -1}}X),g\rangle=\langle\operatorname{Ad}_{\mathbf{t}y^{-1}}X,g\rangle= \langle X,\operatorname{Ad}_{\mathbf{t}y}^{*}g\rangle,\] _where we have used (3.18)._ **Definition 4.4**.: A function \(H\in C^{\infty}(\mathfrak{g}^{*}[1])\) is \({}_{2}\operatorname{Ad}^{*}\)-invariant if \[H_{0}\circ\operatorname{Ad}_{x}^{*}=H_{0},\qquad H_{-1}\circ\mathcal{X}_{x}^{ *}=H_{-1},\qquad H_{0}\circ\Upsilon_{y}^{*}=H_{-1}, \tag{4.7}\] for each \(x\in G_{0},y\in G_{-1}\), and where \(H=H_{-1}+H_{0}\in C^{\infty}(\mathfrak{g}^{*}[1])\cong C^{\infty}(\mathfrak{g} _{0}^{*})\oplus C^{\infty}(\mathfrak{g}_{-1}^{*})\). This notion of invariance will be useful later. Constants of (graded) motion.We are now ready to characterize the notion of conserved quantities inherited from the construction of 2-representation built out on 2-vector spaces of the Baez-Crans type. **Theorem 4.1**.: _Let \(\chi_{V}:\mathfrak{gl}(V)\to\mathbb{R}\) denote a class function; namely any linear map that is invariant under the \(L_{2}\)-bracket \([-,-]_{C}\) on \(\mathfrak{gl}(V)\). The 2-Lax equation (4.2) implies that the polynomials_ \[\mathcal{F}_{k}=\chi_{V}\rho(L)^{k}\] _are constants of motion for any \(k\) and 2-representation \(\rho\)._ Proof.: The proof runs in exact analogy with the 1-algebra case [2]. 
From the 2-Lax equation (4.2) and the cyclicity of \(\chi_{V}\), we have \[\dot{\mathcal{F}}_{k}=\sum_{i=0}^{k-1}\chi_{V}(\rho(L)^{i}\rho(\dot{L})\rho(L)^{k-i-1})=k\,\chi_{V}(\rho(L)^{k-1}\rho([L,P])).\] By the fact that \(\rho\) is a homomorphism of Lie 2-algebras, we have \(\rho([L,P])=[\rho(L),\rho(P)]_{C}\) and hence \[\chi_{V}(\rho(L)^{k-1}\rho([L,P]))=\chi_{V}(\rho(L)^{k-1}[\rho(L),\rho(P)]_{C})=\chi_{V}([\rho(L)^{k},\rho(P)]_{C})=0,\] again from the invariance of \(\chi_{V}\).

Note that the conservation of these trace polynomials is independent of the choice of the 2-representation \(\rho\). However, what exactly is being conserved does depend on the representation -- it is the eigenvalues of the matrix representation \(\rho(L)\). The conservation of these eigenvalues can be understood as the notion of "2-graded integrability" that the 2-Lax pair in **Definition 4.1** affords. By making use of the genuine representation \(\rho^{gen}\) given in (4.6), a straightforward example of a class function \(\chi_{V}\) is given by merely the trace form on \(\mathfrak{gl}(V_{-1}\oplus V_{0})\). As such, the above result states that the trace polynomials \[\mathcal{F}_{k}=\operatorname{tr}_{V}(\rho^{gen}(L)^{k})\] are conserved for any \(k\in\mathbb{Z}_{\geqslant 0}\). By a fundamental result in linear algebra, the eigenvalues of a block-triangular matrix consist of the combined eigenvalues of its diagonal blocks: \[\operatorname{Eigen}\rho^{gen}(L)=\operatorname{Eigen}\rho_{0}^{1}(L_{0}+tL_{-1})\coprod\operatorname{Eigen}\rho_{0}^{0}(L_{0}).\] These are examples of the conserved quantities associated to the 2-Lax equation (4.2) that one can always compute, using the genuine representation (4.6).

_Remark 4.1_.: Let us consider some special cases. When \(t=0\), \(\rho_{0}\) determines two distinct representations on \(V_{-1},V_{0}\). Thus in general we obtain \(n+m\) distinct eigenvalues of \(\rho^{gen}(L)=\rho_{0}L_{0}\), where \(n+m=\dim(V_{-1}\oplus V_{0})=\dim V_{-1}+\dim V_{0}\). On the other hand, when \(t=1\) and \(\partial=1\), the components \(\rho_{0}^{1}=\rho_{0}^{0}\) of \(\rho_{0}\) must coincide. If \(L_{0}=L_{-1}\) (see **Corollary 4.4**), we have two copies of the \(n\) eigenvalues of \(\rho(L)\). In this \(t=1\) case, we can say that our notion of 2-graded integrability consists of "two copies" of the usual integrability.

### 2-Kirillov-Kostant Poisson structure on \(C^{\infty}(\mathfrak{g}^{*}[1])\)

We first generalize the standard Kirillov-Kostant Poisson structure to the Lie 2-algebra context. This shall serve as the appropriate setting for constructing a canonical 2-Lax pair on the dual space \(\mathfrak{g}^{*}[1]\) of a given Lie 2-bialgebra \((\mathfrak{g};dR^{\wedge})\).

**Proposition 4.1**.: _Let \(\mathfrak{g}\) denote a Lie 2-bialgebra with the graded \(L_{2}\)-bracket \([\cdot,\cdot]\). The graded algebra of functions \(C^{\infty}(\mathfrak{g}^{*}[1])\), equipped with the Poisson bracket \(\{,\}^{*}\)_ \[\{\phi,\phi^{\prime}\}^{*}(g+f)=\langle g+f,[d_{g+f}\phi,d_{g+f}\phi^{\prime}]\rangle,\qquad\phi,\phi^{\prime}\in C^{\infty}(\mathfrak{g}^{*}[1]), \tag{4.8}\] _where \(g+f\in\mathfrak{g}^{*}[1]\), is a Poisson 2-algebra. We call this a_ **2-Kirillov-Kostant (2KK) Poisson structure** _on \(C^{\infty}(\mathfrak{g}^{*}[1])\)._

Proof.: It will be convenient to provide the explicit correspondence between the graded components of \(\{,\}\) and \([,]\).
For this, it is useful to recall that \(\mathfrak{g}\) is dual to \(\mathfrak{g}^{*}[1]\), so 1-forms on \(\mathfrak{g}^{*}[1]\) are elements in \(\mathfrak{g}\). In particular, \(d\phi\) is valued in \(\mathfrak{g}\) for \(C^{\infty}(\mathfrak{g}^{*}[1])\ni\phi=(\phi_{-1}+\phi_{0})\in C^{\infty}( \mathfrak{g}^{*}_{0})\oplus C^{\infty}(\mathfrak{g}^{*}_{-1})\) and \[d_{g}\phi_{-1}\in\mathfrak{g}_{0},\qquad d_{f}\phi_{0}\in\mathfrak{g}_{-1}.\] With this in mind, we identify the components of the graded bracket \(\{,\}\). \[\{\phi,\phi^{\prime}\}^{*}_{-1}(g+f)=\langle g,[d_{f}\phi_{0},d_{g} \phi^{\prime}_{-1}]_{-1}+[d_{g}\phi_{-1},d_{f}\phi^{\prime}_{0}]_{-1}\rangle,\] \[\{\phi_{0},\phi^{\prime}\}^{*}_{0}(f)=\langle f,[d_{f}\phi_{0},d_{f }\phi^{\prime}_{0}]^{(-1)}\rangle\] \[\{\phi,\phi^{\prime}\}^{*}_{-2}(g)=\langle g,[d_{g}\phi_{-1},d_{g }\phi^{\prime}_{-1}]_{0}\rangle. \tag{4.9}\] Now we must show that this graded Poisson bracket \(\{\cdot,\cdot\}^{*}\) is a \(L_{2}\)-bracket, satisfying (3.9) and (3.12). To do so, first we note that we can decompose \(\mathfrak{g}^{*}_{-1}\) as \(\mathfrak{g}^{*}_{-1}\cong\mathfrak{im}\,t^{T}\oplus\operatorname{coker}t^{T}\), hence every \(f\in\mathfrak{g}^{*}_{-1}\) can be written as \[f=t^{T}g^{\prime}+f^{\prime}\in\operatorname{im}t^{T}\oplus\operatorname{coker} t^{T}. \tag{4.10}\] Next, using the rank-nullity theorem, we have that \(\mathrm{coker}\,t^{T}\cong\ker t\) by duality, and hence \[t(d_{f}\phi_{0})=t(d_{t^{T}g^{\prime}}\phi_{0}+d_{f^{\prime}}\phi_{0})=t(d_{t^{T }g^{\prime}}\phi_{0})=((t^{T})^{*}d\phi_{0})_{g^{\prime}} \tag{4.11}\] for any \(\phi_{0}\in C^{\infty}(\mathfrak{g}_{-1}^{*})\); note the last equality is the _definition_ of the pullback \((t^{T})^{*}d\phi_{0}\). We can now directly compute \[((t^{T})^{*}\{\phi_{-1},\phi_{0}^{\prime}\}_{-1}^{*})(g) = \langle t^{T}g,[(d_{g}\phi_{-1}),d_{t^{T}g}\phi_{0}^{\prime}] \rangle=\langle g,t[d_{g}\phi_{-1},d_{t^{T}g}\phi_{0}^{\prime}]\rangle \tag{4.12}\] \[= \langle g,[d_{g}\phi_{-1},t(d_{t^{T}g}\phi_{0}^{\prime})]\rangle= \langle g,[d_{g}\phi_{-1},((t^{T})^{*}d\phi^{\prime})_{g}]\rangle\] \[= \{\phi_{-1},(t^{T})^{*}\phi_{0}^{\prime}\}_{-2}^{*}(g),\] \[\{\phi_{0},\phi_{0}^{\prime}\}_{0}^{*}(f) = \langle f,[d_{f}\phi_{0},d_{f}\phi_{0}^{\prime}]\rangle=\langle f,[t(d_{t^{T}g^{\prime}}\phi_{0})),d_{f}\phi_{0}^{\prime}]\rangle\] \[= \langle f,[((t^{T})^{*}d\phi_{0})_{g^{\prime}},d_{f}\phi_{0}^{ \prime}]\rangle\] \[= \{(t^{T})^{*}\phi_{0},\phi_{0}^{\prime}\}_{-1}^{*}(f),\] where we have used the equivariance and the Peiffer identity (3.1) in \(\mathfrak{g}\). Similarly, the 2-Jacobi identities (3.12) follow from that (3.2) of \([,]\). We now construct an alternative 2-KK Poisson structure on \(\mathfrak{g}^{*}[1]\) by explicitly making use of the classical 2-r-matrix. In analogy with (2.1), we first define a map \(\varphi=(\varphi_{-1},\varphi_{0}):\mathfrak{g}\to\mathfrak{g}\) of 2-graded vector spaces, then use it to define an alternative \(L_{2}\)-bracket \([\cdot,\cdot]_{R}\) on \(\mathfrak{g}\). Let us fix the bases \(\{T_{i}\}_{i},\{S_{a}\}_{a}\) of \(\mathfrak{g}_{0},\mathfrak{g}_{-1}\) respectively. 
**Proposition 4.2**.: _The map \(\varphi=(\varphi_{-1},\varphi_{0}):\mathfrak{g}\to\mathfrak{g}\) defined by_ \[\varphi_{-1}:\mathfrak{g}_{-1}\to\mathfrak{g}_{-1}, Y\mapsto(R^{\times})^{ia}\langle Y,T_{i}\rangle S_{a},\] \[\varphi_{0}:\mathfrak{g}_{0}\to\mathfrak{g}_{0}, X\mapsto(R^{\times})^{ai}\langle X,S_{a}\rangle T_{i},\] _is a 2-vector space homomorphism if and only if \(D_{t}^{-}R^{\times}=0\)._ Proof.: Clearly, \(\varphi\) is linear, hence it remains to show that \(t\varphi_{-1}=\varphi_{0}t\). By definition, this requires \[(R^{\times})^{ia}T_{i}\wedge t(S_{a})=(R^{\times})^{ai}t(S_{a})\wedge T_{i}\] for each basis elements \(T_{i}\in\mathfrak{g}_{0},S_{a}\in\mathfrak{g}_{-1}\). In other words, the combination \((R^{\times})^{ia}t_{a}^{j}\) is skew-symmetric; this is precisely the condition \(D_{t}^{-}R^{\times}=0\) in (3.13) [3]. **Proposition 4.3**.: _Let \(R\in\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1}\oplus\mathfrak{g}_{-1}\otimes \mathfrak{g}_{0}\) denote a solution to the modified 2-CYBE (3.13). The bracket defined by_ \[[Y+X,Y^{\prime}+X^{\prime}]_{R}=[\varphi(Y+X),Y^{\prime}+X^{\prime}]+[Y+X, \varphi(Y^{\prime}+X^{\prime})],\] _is a Lie 2-algebra bracket which satisfies_ \[[g+f,g^{\prime}+f^{\prime}]=\langle\cdot,[Y+X,Y^{\prime}+X^{\prime}]_{R}\rangle \tag{4.13}\] _where \(f^{(\prime)}=\langle\cdot,X^{(\prime)}\rangle\in\mathfrak{g}_{0}^{*}\) and \(g^{(\prime)}=\langle\cdot,Y^{(\prime)}\rangle\in\mathfrak{g}_{-1}^{*}\)._ Proof.: Recall [3] that the skew-symmetric piece \(R^{\times}\) of a solution \(R\) to the modified 2-CYBE (3.13) defines the cobracket \(dR^{\times}(Y+X)=\delta(Y+X)=\delta_{-1}(Y)+\delta_{0}(X)\) given by \[\delta_{-1}(Y)=[Y\otimes 1+1\otimes Y,R^{\times}],\qquad\delta_{0}(X)=[X \otimes 1+1\otimes X,R^{\times}],\] and the symmetric piece \(R^{\Im}=\langle\cdot,\cdot\rangle\) defines a \({}_{2}\) ad-invariant pairing. These facts allow us to compute directly (cf. [2]) that, for each basis element \(Z_{i}=S_{i}+T_{i}\in\mathfrak{g}\), \[[h,h^{\prime}](Z_{i}) = \langle h\otimes h^{\prime},\delta(Z_{i})\rangle=\langle h\otimes h ^{\prime},[Z_{i}\otimes 1+1\otimes Z_{i},R^{\times}]\rangle\] \[= (R^{\times})^{jk}\langle h\otimes h^{\prime},[Z_{i},Z_{j}]\otimes Z _{k}+Z_{j}\otimes[Z_{i},Z_{k}]\rangle\] \[= (R^{\times})^{jk}\left(R^{\Im}(Z,[Z_{i},Z_{j}])R^{\Im}(Z^{\prime},Z_{k})+R^{\Im}(Z,Z_{j})R^{\Im}(Z^{\prime},[Z_{i},Z_{k}])\right),\] \[= -(R^{\times})^{jk}\left(R^{\Im}([Z,Z_{j}],Z_{i})R^{\Im}(Z^{\prime},Z_{k})+R^{\Im}(Z,Z_{j})R^{\Im}([Z^{\prime},Z_{k}],Z_{i})\right)\] \[= R^{\Im}([Z,\varphi(Z^{\prime})],Z_{i})+R^{\Im}([\varphi(Z),Z^{ \prime}],Z_{i})\] \[= R^{\Im}([Z,Z^{\prime}]_{R},Z_{i})=\langle Z_{i},[Z,Z^{\prime}]_{R}\rangle,\] where we abbreviated the graded elements \(h=g+f,h^{\prime}=g^{\prime}+f^{\prime}\in\mathfrak{g}^{*}\) and used that \(Z^{(\prime)}\equiv\langle h^{(\prime)},\cdot\rangle\in\mathfrak{g}\). This proves (4.13). Now let us establish that \([\cdot,\cdot]_{R}\) is a genuine \(L_{2}\)-bracket on \(\mathfrak{g}\). Since \([\cdot,\cdot]\) by hypothesis is equivariant and satisfies the Peiffer identity with respect to \(t\), the fact that \(\varphi\) is a 2-vector space homomorphism implies the same for \([\cdot,\cdot]_{R}\). It thus suffices to check the 2-Jacobi identities for \([\cdot,\cdot]_{R}\), but this directly follows from (4.13) (cf. 
[42]), \[\langle Z_{0},\mathbb{O}\,[[Z,Z^{\prime}]_{R},Z^{\prime\prime}]_{R}\rangle= \ \langle\mathbb{O}\,[[h,h^{\prime}],h^{\prime\prime}])(Z_{0})=0\qquad\forall Z_{ 0}\in\mathfrak{g}.\] **Lemma 4.1**.: _The Poisson bracket \(\{,\}_{R}^{*}\), defined by the following formula_ \[\{\phi,\phi^{\prime}\}_{R}^{*}(g+f)=\langle g+f,[d_{g+f}\phi,d_{g+f}\phi^{ \prime}]_{R}\rangle, \tag{4.14}\] _where \(\phi,\phi^{\prime}\in C^{\infty}(\mathfrak{g}^{*}[1])\), \(g+f\in\mathfrak{g}^{*}[1]\), is a 2KK Poisson structure._ Proof.: This follows from the fact that \([,]_{R}\) is a \(L_{2}\)-bracket, hence the proof of **Proposition 4.1** applies. ### 2-Lax pair on \(\mathfrak{g}^{*}[1]\) Fix a \({}_{2}\,\mathrm{Ad}^{*}\)-invariant Hamiltonian \(H\in C^{\infty}(\mathfrak{g}^{*}[1])\) (as defined in Definition 4.4). We are now ready to finally canonically construct a 2-Lax pair \((L,P)\) on \((\mathfrak{g}^{*}[1],\{\cdot,\cdot\}_{R}^{*},H)\) in this section according to (4.2), based on the 2-KK Poisson structure \(\{\cdot,\cdot\}_{R}^{*}\) (4.14) as well as the underlying classical 2-\(r\)-matrix. We will take \[L_{0}\in C^{\infty}(\mathfrak{g}^{*}_{-1})\otimes\mathfrak{g}_{ 0}, L_{-1}\in C^{\infty}(\mathfrak{g}^{*}_{0})\otimes\mathfrak{g}_{-1},\] \[P_{-1}\in C^{\infty}(\mathfrak{g}^{*}_{-1})\otimes\mathfrak{g}_{ -1}, P_{0}\in C^{\infty}(\mathfrak{g}^{*}_{0})\otimes\mathfrak{g}_{0},\] hence \(L\) has degree-(-1) and \(P\) has degree-0 and -2 in the complex (4.1). Fix bases \(\{T_{i}\}_{i},\{S_{a}\}_{a}\) of \(\mathfrak{g}_{0},\mathfrak{g}_{-1}\), and suppose the classical 2-\(r\)-matrix \(R\) on \(\mathfrak{g}\) is invertible. We make use of a basic linear algebra fact [43] that the inverse of an off-diagonal block matrix, such as \(R\) where the off-diagonal pieces are given by \(R_{1},R_{2}\), is the off diagonal matrix with blocks \(R_{2}^{-1}\) and \(R_{1}^{-1}\), and hence the inverse of the symmetric piece \((R_{1}^{\odot})_{ai}\), for instance, has matrix elements \(((R_{2}^{\odot})^{-1})^{ai}\). Put \[L_{0}:f \mapsto(R_{2}^{\odot})^{ai}f(S_{a})T_{i}, L_{-1}:g \mapsto(R_{1}^{\odot})^{ia}g(T_{i})S_{a},\] \[P_{-1}:f \mapsto\varphi_{-1}(d_{f}H_{0}), P_{0}:g \mapsto\varphi_{0}(d_{g}H_{-1}), \tag{4.15}\] and we wish to show that \((L,P):\mathfrak{g}^{*}[1]\to\mathfrak{g}\) is indeed a 2-Lax pair as in **Definition 4.2**. **Theorem 4.2**.: _Let \(H\in C^{\infty}(\mathfrak{g}^{*}[1])\) denote a \({}_{2}\,\mathrm{Ad}^{*}\)-invariant Hamiltonian. Then \((L,P)\) given in (4.15) is a 2-Lax pair of the 2-graded Hamiltonian system \((\mathfrak{g}^{*}[1],\{\cdot,\cdot\}_{R}^{*},H)\) for which the Lax potential \(L\) satisfies_ \[\mathbf{t}^{*}L=tL,\qquad\{L,L\}_{R}^{*}=[L\otimes 1+1\otimes L,R^{\wedge}], \tag{4.16}\] _where \(\mathbf{t}^{*}=(t^{T})^{*}\) is the pullback of \(t^{T}:\mathfrak{g}^{*}_{0}\to\mathfrak{g}^{*}_{-1}\) and we have extended the \(L_{2}\)-bracket \([\cdot,\cdot]\) to \(\mathfrak{g}^{2\otimes}\)._ Proof.: First we compute the coefficients \[(d_{f}L_{-1})^{i}=R_{2}^{\odot bi}S_{b},\qquad(d_{g}L_{0})^{a}=R_{1}^{\odot j \alpha}T_{j}.\] We note also that the \({}_{2}\,\mathrm{Ad}^{*}\)-invariance of \(H\) (4.7) implies, in particular, that \[[Y+X,d_{g+f}H]=0,\forall Y\in\mathfrak{g}_{-1},\,\forall X\in\mathfrak{g}_{ 0},\] (we emphasize we use the bracket \([,]\) and not \([,]_{R}\)). 
Then from the 2-KK Poisson structure (4.14) we have \[\{H,L\}_{R}^{*}(g+f) = \langle g+f,[d_{g+f}H,d_{g+f}L^{i,a}]_{R}\rangle(T_{i}\oplus S_{a})\] \[= \langle g+f,[d_{g+f}H,d_{f}L^{i}_{-1}]_{R}\rangle T_{i}+\langle g+f,[d_{g+f}H,d_{g}L^{a}_{0}]_{R}\rangle S_{a}\] \[= \langle g+f,[\varphi(d_{g+f}H),d_{f}L^{i}_{-1}]+[d_{g+f}H,\varphi(d_{f}L^{i}_{-1})]\rangle T_{i}+\cdots,\] where the omitted terms are the analogous \(S_{a}\)-components. The terms in which \(d_{g+f}H\) enters the bracket directly vanish by the \({}_{2}\operatorname{Ad}^{*}\)-invariance of \(H\) noted above, while the surviving terms involve \(\varphi(d_{g+f}H)=P(g+f)\) as defined in (4.15); reassembling them gives precisely \([P,L](g+f)\), which establishes the 2-Lax equation (4.2).

To prove the second statement, we first note that we have the following expressions \[\varphi_{-1}(S_{a})=(R^{\odot}_{1})_{ai}(R^{\odot}_{2})^{ic}S_{c},\qquad\varphi_{0}(T_{i})=(R^{\odot}_{2})_{ib}(R^{\wedge}_{1})^{bj}T_{j}\] for \(\varphi\). Hence by a direct computation, \[\{L,L\}_{R}^{\ast}(g+f) = \{L^{a,i},L^{b,j}\}^{\ast}(g+f)(S_{a}+T_{i})\otimes(S_{b}+T_{j}) \tag{4.17}\] \[= \langle g+f,[d_{g+f}L^{a,i},d_{g+f}L^{b,j}]_{R}\rangle(S_{a}+T_{i})\otimes(S_{b}+T_{j})\] \[= (R^{\odot ai^{\prime}}_{2}+R^{\odot ai^{\prime}}_{2})(R^{\odot ij^{\prime}}_{2}+R^{\odot j^{\prime}}_{1})\langle g+f,[\varphi_{0}T_{i^{\prime}}+\varphi_{-1}S_{a^{\prime}},T_{j^{\prime}}+S_{b^{\prime}}]+[T_{i^{\prime}}+S_{a^{\prime}},\varphi_{0}T_{j^{\prime}}+\varphi_{-1}S_{b^{\prime}}]\rangle(S_{a}+T_{i})\otimes(S_{b}+T_{j});\] expanding the brackets and contracting the coefficients then yields the quadratic relation \(\{L,L\}_{R}^{*}=[L\otimes 1+1\otimes L,R^{\wedge}]\) stated in (4.16), which completes the proof.

**Proposition 4.4**.: _Let \(\mathfrak{g}\) be a Lie bialgebra. If \((\hat{L},\hat{P}):\mathfrak{g}^{*}\to\mathfrak{g}\) is a Lax pair on \(\mathfrak{g}^{*}\), then the graded functions \(L=\hat{L}\oplus\hat{L},P=\hat{P}\oplus\hat{P}\), consisting of two copies of the original Lax pair, form a 2-Lax pair on \(\operatorname{id}_{\mathfrak{g}^{*}}=\operatorname{id}_{\mathfrak{g}}^{*}[1]:\mathfrak{g}^{*}\stackrel{{\operatorname{id}}}{{\longrightarrow}}\mathfrak{g}^{*}\)._

These two results show that our definition of the 2-Lax pair (4.2) is indeed a generalization (a categorification) of the usual Lax pair.

## 5 The XXX spin rectangle

We now consider an example of the theory of 2-Lax pairs we have developed above, following the quantum XXX spin chain described in Section 2.2. The idea is to extend the underlying Lie bialgebra \(\mathfrak{g}=\mathfrak{su}_{2}\) to a Lie 2-bialgebra \(\operatorname{id}_{\mathfrak{g}}=\mathfrak{g}\stackrel{{\operatorname{id}}}{{\longrightarrow}}\mathfrak{g}\) [3].
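Before developing this example, the following small numerical sketch (ours, not part of the original argument; the matrix sizes, random data, and function names are illustrative choices) checks the two general facts used repeatedly above: a conjugation flow \(L(t)=e^{tP}L_{0}e^{-tP}\), which solves \(\dot{L}=[P,L]\), preserves the trace polynomials \(\mathcal{F}_{k}=\operatorname{tr}L^{k}\), and the spectrum of a block upper-triangular matrix such as \(\rho^{gen}(L)\) in (4.6) is the union of the spectra of its diagonal blocks.

```python
# Numerical sketch (illustrative only): isospectrality of the Lax flow and the
# block-triangular eigenvalue fact behind the genuine representation (4.6).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
L0 = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))

def lax_flow(t):
    # L(t) = exp(tP) L0 exp(-tP) solves dL/dt = [P, L(t)]
    g = expm(t * P)
    return g @ L0 @ np.linalg.inv(g)

for k in range(1, 5):
    f_initial = np.trace(np.linalg.matrix_power(lax_flow(0.0), k))
    f_later = np.trace(np.linalg.matrix_power(lax_flow(0.7), k))
    assert np.isclose(f_initial, f_later)   # trace polynomials are conserved

# eigenvalues of a block upper-triangular matrix = union of the block spectra
A, B, D = (rng.standard_normal((n, n)) for _ in range(3))
M = np.block([[A, B], [np.zeros((n, n)), D]])
eigs_M = np.sort_complex(np.linalg.eigvals(M))
eigs_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                              np.linalg.eigvals(D)]))
assert np.allclose(eigs_M, eigs_blocks)
print("trace polynomials conserved; block-triangular spectrum verified")
```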
Further, by **Proposition 4.4**, the Lax pair on \(\mathfrak{su}_{2}\) also extends to a 2-Lax pair on \(\operatorname{id}_{\mathfrak{su}_{2}}\). In this section, we will first give an explicit description of the underlying Lie 2-bialgebra structure on \(\operatorname{id}_{\mathfrak{g}}\), then we shall proceed to reconstruct the physical spin model by running the procedure outlined in Section 2.2 in reverse. We will emphasize the resulting higher-dimensional nature of the lattice model inherited from the graded structure of the underlying Lie 2-bialgebra. Recall from Sections 2 and 2.2 that the Lie bialgebra \(\mathfrak{g}=\mathfrak{su}_{2}\) is equipped with the canonical classical r-matrix \[r=\sum_{i=1}^{3}\sigma_{i}\otimes\sigma_{i},\qquad\sigma_{i}\in\mathfrak{su}_{2},\] which gives rise to the Poisson bracket (2.10) on \(C^{\infty}(\mathfrak{su}_{2}^{*})\cong C^{\infty}(\mathbb{R}^{3})\). In accordance with **Proposition 4.4**, we lift \(\mathfrak{g}=\mathfrak{su}_{2}\) to a Lie 2-bialgebra \(\operatorname{id}_{\mathfrak{g}}=\mathfrak{g}\stackrel{{\operatorname{id}}}{{\longrightarrow}}\mathfrak{g}\). Let \(\{\sigma_{a}\}_{a}\) denote the basis on the degree-0 copy \(\mathfrak{g}_{0}=\mathfrak{su}_{2}\), and \(\{\kappa_{a}\}_{a}\) denote that on the degree-(-1) copy \(\mathfrak{g}_{-1}=\mathfrak{su}_{2}\), such that \(t:\kappa_{a}\mapsto\sigma_{a}\). We take the classical 2-r-matrix \[R=\sum_{a}\sigma_{a}\otimes\kappa_{a}\in\mathfrak{g}_{0}\otimes\mathfrak{g}_{-1}=\mathfrak{su}_{2}^{2\otimes}, \tag{5.1}\] such that the trace form \(R^{\odot}(\sigma_{a},\kappa_{b})=\operatorname{tr}\sigma_{a}\kappa_{b}=\delta_{ab}\) gives an identification \(\operatorname{id}_{\mathfrak{g}}^{*}[1]\cong\operatorname{id}_{\mathbb{R}^{3}}=\mathbb{R}^{3}\stackrel{{\operatorname{id}}}{{\longrightarrow}}\mathbb{R}^{3}\) with \(\mathfrak{g}_{0}^{*}=\mathfrak{g}_{-1}^{*}\cong\mathbb{R}^{3}\). Now suppose \(\{\beta_{a}\}_{a}\subset C^{\infty}(\mathfrak{g}_{0}^{*})\) denotes the coordinate functions for the dual basis \(\{\sigma^{a}\}_{a}\), and similarly \(\{\alpha_{a}\}_{a}\subset C^{\infty}(\mathfrak{g}_{-1}^{*})\) denote those for the dual basis \(\{\kappa^{a}\}_{a}\). Then the 2-KK Poisson structure (4.8) reads \[\begin{array}{rl}\text{In coordinates:}&\{\beta_{a}+\alpha_{a^{\prime}},\beta_{b}+\alpha_{b^{\prime}}\}^{*}=(\epsilon_{a^{\prime}bc}\beta_{c}+\epsilon_{ab^{\prime}c}\beta_{c})+\epsilon_{a^{\prime}b^{\prime}c}\alpha_{c},\\ \text{In vectors:}&\{\beta+\alpha,\beta^{\prime}+\alpha^{\prime}\}^{*}=(\beta\times\alpha^{\prime}+\alpha\times\beta^{\prime})+\alpha\times\alpha^{\prime},\end{array} \tag{5.2}\] where \(\epsilon_{abc}\) are the Levi-Civita symbols. Given \((t^{T})^{*}(\alpha_{a})=\beta_{a}\), we may use the Peiffer identity to compute \(\{\beta_{a},\beta_{b}\}_{0}^{*}=\epsilon_{abc}\beta_{c}\), as expected.

### Restoring the 2-dimensional continuum

As mentioned in **Proposition 4.4**, the Lax operator on \(C^{\infty}(\mathfrak{su}_{2}^{*})\) also lifts to a 2-Lax operator \(L\) on \(C^{\infty}(\operatorname{id}_{\mathbb{R}^{3}})\).
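As a quick sanity check of the bracket (5.2) (an illustration of ours, not part of the original construction; the realization of \(\epsilon_{abc}\) and the use of numpy are choices made here), the structure constants are exactly those of the cross product on \(\mathbb{R}^{3}\), and the Jacobi-type identity underlying the graded \(L_{2}\)-structure can be verified numerically:

```python
# Sanity check (illustrative): the structure constants eps_{abc} in (5.2) are
# realized on R^3 by the cross product, which also satisfies the Jacobi
# identity underlying the graded bracket.
import itertools
import numpy as np

eps = np.zeros((3, 3, 3))
for a, b, c in itertools.permutations(range(3)):
    eps[a, b, c] = np.sign(np.linalg.det(np.eye(3)[[a, b, c]]))

basis = np.eye(3)
for a in range(3):
    for b in range(3):
        # e.g. {alpha_a, alpha_b}^* = eps_{abc} alpha_c  <->  e_a x e_b = eps_{abc} e_c
        assert np.allclose(np.cross(basis[a], basis[b]), eps[a, b] @ basis)

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 3))
# (x cross y) cross z = x cross (y cross z) - y cross (x cross z)
lhs = np.cross(np.cross(x, y), z)
rhs = np.cross(x, np.cross(y, z)) - np.cross(y, np.cross(x, z))
assert np.allclose(lhs, rhs)
print("cross-product structure constants and Jacobi identity verified")
```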
In case where the classical 2-r-matrix is given by (5.1), we have simply \[L(\beta_{a}+\alpha_{a})=\sigma_{a}+\kappa_{a},\qquad l=\sum_{a=1}^{3}\kappa_{a }\mathop{\otimes}\sigma_{a}+\sigma_{a}\mathop{\otimes}\kappa_{a}, \tag{5.3}\] where we have rewritten \(L:\operatorname{id}_{\mathbb{R}^{3}}\to\operatorname{id}_{\mathfrak{su}_{2}}\) as an element \(l\in\operatorname{id}_{\mathfrak{su}_{2}}^{2\otimes}\) through the duality \[\mathfrak{su}_{2}\cong\mathfrak{su}_{2}^{*}\cong\mathbb{R}^{3},\qquad\{\beta_{ a}+\alpha_{a}\}_{a}\mapsto\{\sigma_{a}+\kappa_{a}\}_{a}\] induced by the quadratic Casimir \(R^{\mathbb{O}}=\operatorname{tr}\). We are now ready to reconstruct the classical limit of the 2-dimensional XXX spin rectangle. We shall assume repeated indices are summed in the following. To restore the continuum, we shall promote the coordinate functions \(\alpha,\beta\in C^{\infty}(\mathbb{R}^{3})\) to smooth maps \(\alpha_{a}:\mathbb{R}\to\mathbb{R},\beta_{a}:\mathbb{R}^{2}\to\mathbb{R}\) and define the following smearing operators \[\Sigma[\beta\mathop{\oplus}\alpha]=\int\int dudv\beta^{a}(u,v)\sigma_{a}(u,v)+ \int dx\alpha^{a}(x)\kappa_{a}(x), \tag{5.4}\] Notice that \(\sigma_{a}(u,v)\) is a _manifestly_ 2-dimensional current, which comes equipped with maps that integrate out respectively the first \(u\) or the second \(v\) coordinate. We now study the current algebra that arises from demanding that \(\Sigma\) define a 2-graded Poisson map from \(C^{\infty}(\mathrm{id}_{\mathbb{R}^{3}})\) as given in **Definition** 3.7; namely, we impose \[\{\Sigma[\beta\otimes\alpha],\Sigma[\beta^{\prime}\oplus\alpha^{\prime}]\}^{*}= \Sigma[\big{(}\alpha\times\beta^{\prime}-\alpha^{\prime}\times\beta\big{)} \oplus\alpha\times\alpha^{\prime}]\] for all \(\beta,\beta^{\prime}:\mathbb{R}^{2}\to\mathbb{R}^{3}\) and \(\alpha,\alpha^{\prime}:\mathbb{R}\to\mathbb{R}^{3}\). We shall proceed by examining each graded component: \[\iint dxdx^{\prime}\,\alpha^{a}(x)\alpha^{\prime b}(x^{\prime})\{ \kappa_{a}(x),\kappa_{b}(x^{\prime})\}^{*} = \int dx\,(\alpha\times\alpha^{\prime})^{c}(x)\kappa_{c}(x),\] \[\int dx\iint dudv\,\alpha^{a}(x)\beta^{b}(u,v)\{\kappa_{a}(x), \sigma_{b}(u,v)\}^{*} = \iint dudv\,(\alpha(u)\times\beta(u,v))^{c}\sigma_{c}(u,v),\] which gives rise to the algebra \[\{\kappa_{a}(x),\kappa_{b}(x^{\prime})\}^{*} = \delta(x-x^{\prime})\,\epsilon_{abc}\kappa_{c}(x^{\prime}),\] \[\{\kappa_{a}(x),\sigma_{b}(u,v)\}^{*} = \delta(x-u)\,\epsilon_{abc}\sigma_{c}(u,v), \tag{5.5}\] in accordance with the procedure outlined in Section 2.2. We now wish to endow a \(L_{2}\)-algebra structure on the brackets (5.5), under the ansatz that we have the following bracket \[\{t(\sigma_{a}(u,v)),\sigma_{b}(u^{\prime},v^{\prime})\}^{*}=\delta(u-u^{ \prime})\delta(v-v^{\prime})\,\epsilon_{abc}\sigma_{c}(u^{\prime},v^{\prime}). \tag{5.6}\] The natural candidate for the \(t\)-map is \[\bar{t}:\sigma_{a}(u,v)\mapsto\int dv\,t(\sigma_{a})(u,v)=\kappa_{a}(u),\] which consist of the \(t\)-map \(t\) on the underlying Lie 2-algebra \(\mathrm{id}_{\mathfrak{su}_{2}}\) and an integration over the \(v\)-axis.3 Footnote 3: Denote by \(\overline{\sigma}_{a}(u)=\bar{t}(\sigma)_{a}(u)\). 
We have, by a Fourier transform, \[\sum_{p_{u}}e^{-ip_{u}u}\overline{\sigma}_{a}(p_{u}) = \overline{\sigma}_{a}(u)=\int dv\,t(\sigma_{a}(u,v))\] \[= \int dv\sum_{p_{u},p_{v}}e^{-ip_{v}v}e^{-ip_{u}u}\,\overline{t( \sigma_{a})}(p_{u},p_{v})=\sum_{p_{u}}e^{-ip_{u}u}\,\overline{t(\sigma_{a})}(p _{u},0),\] hence the \(t\)-map can be equivalently seen as a projection onto the momentum \(p_{v}=0\).. From (5.6), we check that \[\{(\bar{t}\sigma_{a})(u),\sigma_{b}(u^{\prime},v^{\prime})\}^{*}=\int dv\delta (u-u^{\prime})\delta(v-v^{\prime})\,\epsilon_{abc}\sigma_{c}(u^{\prime},v^{ \prime})=\delta(u-u^{\prime})\,\epsilon_{abc}\sigma_{c}(u^{\prime},v^{\prime}),\] which is indeed consistent with (5.5) given \(\kappa_{a}(u)=(\bar{t}\sigma_{a})(u)\). **Proposition 5.1**.: _The map \(\bar{t}\) is equivariant and satisfies a Koszul identity of the form_ \[\{\bar{t}(\sigma)_{a}(u),\sigma_{b}(u^{\prime},v)\}^{*}=\{\sigma_{a}(u,v),\bar {t}(\sigma)_{b}(u^{\prime})\}^{*}.\] _We call (5.5) the_ **2-dimensional \(\mathfrak{su}(2)\)-current algebra**_._ Proof.: A direct computation yields equivariance \[\{\kappa_{a}(x),\bar{t}(\sigma_{b})(u))\}^{*} = \delta(x-u)\epsilon_{abc}\kappa_{c}(u)\] \[= \delta(x-u)\epsilon_{abc}\bar{t}(\sigma_{c})(u)=\bar{t}(\{\kappa_ {a}(x),\sigma_{b}(u,v)\}^{*}).\] Now notice that the definition (5.6) satisfies \[\{t(\sigma_{a}(u,v)),\sigma_{b}(u^{\prime},v^{\prime})\}^{*} = \delta(u-u^{\prime})\delta(v-v^{\prime})\,\epsilon_{abc}\sigma_{c} (u^{\prime},v^{\prime})\] \[= \delta(u-u^{\prime})\delta(v-v^{\prime})\,\epsilon_{abc}\sigma_{c} (u,v)\] \[= \{\sigma_{a}(u,v),t(\sigma_{b}(u^{\prime},v^{\prime}))\},\] which is a non-trivial constraint only when \(v=v^{\prime}\). An integration over the locus \(v=v^{\prime}\) then yields the Koszul identity as stated. Note that there is another version of this 2d current algebra in which we integrate out the \(u\)-coordinate, instead of the \(v\)-coordinate. On \(\mathbb{R}^{2}\), these 2d current algebras are isomorphic. It is worth emphasizing that these currents are inherently 2-dimensional, as a consequence of the graded nature of the underlying Lie 2-bialgebra. Further, the \(t\)-map above in essence identifies the \(x\)-direction as the "boundary" \(u\)-direction of the 2-dimensional \((u,v)\)-plane. _Remark 5.1_.: In accordance with (3.4), we take (5.6) as the _definition_ of the bracket \[\{\sigma_{a}(u,v),\sigma_{b}(u^{\prime},v^{\prime})\}^{*}=\delta(u-u^{\prime}) \delta(v-v^{\prime})\,\epsilon_{abc}\sigma_{c}(u^{\prime},v^{\prime}), \tag{5.7}\] which one may notice is nothing but the \(2\)-dimensional version of the \(\mathfrak{su}_{2}\)-current algebra (2.7). This allows us to parameterize \(\sigma\) with canonical spinor coordinates \((\theta,\phi)\) in the same way as in (2.8), where the \(2\)-dimensional fields \(\theta,\phi\) now satisfy the canonical bracket \[\{\cos 2\theta(u,v),\phi(u^{\prime},v^{\prime})\}=i\delta(u-u^{\prime}) \delta(v-v^{\prime}).\] We shall make use of this parameterization in the following. #### 5.1.1 The classical 2D Hamiltonian We now follow the above procedure to restore the \(2\)-dimensional continuum for the Lax operator \(l\) defined in (5.3). This is accomplished by replacing the _second_ tensor factor of \(l\) with the \(2\)-dimensional currents. Moreover, we shall promote \(l\) to a classical Lax operator of the form \[\hat{l}(u,v;x)=\left[\frac{d^{2}}{dudv}+\kappa_{a}\otimes\sigma_{a}(u,v) \right]+\left[\frac{d}{dx}+\sigma_{a}\otimes\kappa_{a}(x)\right]=\hat{l}_{-1} (u,v)+\hat{l}_{0}(x). 
\tag{5.8}\] We emphasize that only the second component of the tensor products depends on the coordinates \(x\) or \((u,v)\). For the purpose of the following computation, we assume the coordinates \(x\in S^{1}\) and \((u,v)\in\mathbb{T}^{2}\) are compactified. Consider the monodromy matrix \(\mathcal{T}\) satisfying the flatness condition \[\hat{l}(u,v;x)\mathcal{T}(u,v;x|u^{\prime},v^{\prime};x^{\prime})=0,\qquad \mathcal{T}(u,v;x|u,v;x)=1.\] Note \(\mathcal{T}\) has the graded structure \(\mathcal{T}(u,v;x|u^{\prime},v^{\prime};x^{\prime})=\mathcal{T}_{-1}(u,v|u^{ \prime},v^{\prime})+\mathcal{T}_{0}(x|x^{\prime})\), against which the flatness condition above decomposes accordingly, \[\hat{l}_{-1}(u,v)\mathcal{T}_{-1}(u,v|u^{\prime},v^{\prime})=0, \qquad\qquad\mathcal{T}_{-1}(u,v|u,v)=1,\] \[\hat{l}_{0}(x)\mathcal{T}_{0}(x|x^{\prime})=0, \qquad\qquad\mathcal{T}_{0}(x|x^{\prime})=1.\] The latter \(1\)-dimensional piece has already occurred in Section 2.2, hence we shall mainly focus on the former piece. To solve this \(2\)-dimensional flatness condition, we use the _surface-ordered_ exponential \(S\exp\)[44] (using \(l_{-1}(u,v)\equiv\kappa_{a}\otimes\sigma_{a}(u,v)\) and \(l_{0}(x)\equiv\sigma_{a}\otimes\kappa_{a}(x)\)): \[\mathcal{T}_{-1}(u,v|u^{\prime},v^{\prime})=S\exp\left(-\iint\limits_{D}dpdq\, l_{-1}(p,q)\right),\] where \(D=D(u,v|u^{\prime},v^{\prime})\) denotes the rectangle spanned by the four vertices \((u,v),(u^{\prime},v),(u,v^{\prime}),(u^{\prime},v^{\prime})\) with \(u<u^{\prime},v<v^{\prime}\). This allow us to form the transfer matrix/partition function \(Z_{-1}=\operatorname{tr}_{(1)}\mathcal{T}_{-1}\) such that the classical Hamiltonian can be written as \[h_{-1}=-\ln Z_{-1}=-\ln\operatorname{tr}_{(1)}\mathcal{T}_{-1}=-\det_{(1)} \ln\mathcal{T}_{-1}.\] To compute this quantity, we use the long-wavelength limit in each \(u\)-\(/v\)-direction: suppose \(u^{\prime}-u=v-v^{\prime}=\ell\) are one lattice constant \(\ell\ll 1\) apart, then we compute \[h_{-1}\varpropto\iint dudv\,\sum_{a=1}^{3}|\nabla\sigma_{a}|^{2}=\iint dudv \,\sum_{a=1}^{3}\left[\left(\frac{\partial\sigma_{a}}{\partial u}\right)^{2} +\left(\frac{\partial\sigma_{a}}{\partial v}\right)^{2}\right].\] The \(2\)-dimensional classical XXX Hamiltonian is therefore given by \[h(\sigma\oplus\kappa)=h_{-1}(\sigma)\oplus h_{0}(\kappa)=\iint dudv\,\sum_{a =1}^{3}|\nabla\sigma_{a}|^{2}\oplus\int dx\,\sum_{a=1}^{3}\left(\frac{d\kappa _{a}}{dx}\right)^{2}, \tag{5.9}\] where the current generators \(\kappa,\sigma\) here have been parameterized in terms of spinor fields by (2.8) (and the 2D version explained in _Remark 5.1_). #### 5.1.2 Bulk-boundary coupling dynamics We now utilize the graded Poisson structure (5.5) and the classical XXX Hamiltonian (5.9) to compute the dynamics of the 2-dimensional currents. Following **Definition**4.2, we have \[\dot{\sigma}_{a}(u,v) = \{h,\sigma_{a}(u,v)\}^{*}=\int dx\sum_{b=1}^{3}\{\left(\frac{d \kappa_{b}}{dx}\right)^{2},\sigma_{a}(u,v)\}\] \[= 2\int dx\frac{d\kappa_{b}}{dx}\{\frac{d\kappa_{b}}{dx},\sigma_{a }(u,v)\}^{*}=-2\int dx\frac{d^{2}\kappa_{b}}{dx^{2}}\{\kappa_{b}(x),\sigma_{a }(u,v)\}^{*}\] \[= -2\epsilon_{abc}\frac{d^{2}\kappa_{b}}{du^{2}}\sigma_{c}(u,v)\] where we have used an integration by parts, and assumed that the \(x\)-coordinate has been compactified on \(S^{1}\) -- in other words, we impose periodic boundary conditions \(\kappa_{a}(2\pi)=\kappa_{a}(0)\). 
Similarly, we have \[\dot{\kappa}_{a}(x)=\{h,\kappa_{a}(x)\}^{*}=\{h_{-1},\kappa_{a}(x)\}^{*}\oplus\{h_{0},\kappa_{a}(x)\}^{*},\] whence the latter term can be computed straightforwardly \[\{h_{0},\kappa_{a}(x)\}^{*}=-2\epsilon_{abc}\frac{d^{2}\kappa_{b}}{dx^{2}}\kappa_{c}.\] The former term requires more care: \[\{h_{-1},\kappa_{a}(x)\}^{*} = \iint dudv\sum_{b=1}^{3}\left[\{\left(\frac{\partial\sigma_{b}}{\partial u}\right)^{2},\kappa_{a}(x)\}+\{\left(\frac{\partial\sigma_{b}}{\partial v}\right)^{2},\kappa_{a}(x)\}\right]\] \[= -2\iint dudv\sum_{b=1}^{3}\nabla^{2}\sigma_{b}\{\sigma_{b}(u,v),\kappa_{a}(x)\}^{*}\] \[= -2\int dv\,\epsilon_{abc}[\nabla^{2}\sigma_{b}(x,v)]\sigma_{c}(x,v),\] where we have assumed that \((u,v)\in\mathbb{T}^{2}\) are compactified: we impose the Born--von Kármán boundary conditions \[\sigma_{a}(2\pi,v)=\sigma_{a}(0,v),\qquad\sigma_{a}(u,2\pi)=\sigma_{a}(u,0),\qquad\forall\ u,v,\] and so on also for their derivatives. In summary, we have the following \[\dot{\mathbf{\sigma}}=-2\frac{d^{2}\mathbf{\kappa}}{du^{2}}\times\mathbf{\sigma},\qquad\dot{\mathbf{\kappa}}=-2\int dv\nabla^{2}\mathbf{\sigma}\times\mathbf{\sigma}-2\frac{d^{2}\mathbf{\kappa}}{dx^{2}}\times\mathbf{\kappa}, \tag{5.10}\] where \(\mathbf{\kappa}=(\kappa_{1},\kappa_{2},\kappa_{3})\) and similarly for \(\mathbf{\sigma}\). Notice that the dynamics of \(\kappa\) involves a highly non-local bulk interaction term!

Bulk dynamics from the Peiffer identity. Throughout our above analysis, we are made keenly aware that the 2-dimensional nature of the \(\mathfrak{su}_{2}\)-currents arises from the 2-graded structure of the underlying Lie 2-bialgebra. In degree-(-2) in particular, one can use the Peiffer identity (5.7) to compute the dynamics in just the \(\sigma\)-sector. By performing the same computations as we have done above, we arrive at \[\dot{\mathbf{\sigma}}=\{h_{-1},\mathbf{\sigma}\}^{*}=-2\nabla^{2}\mathbf{\sigma}\times\mathbf{\sigma}, \tag{5.11}\] which can be interpreted as the dynamical system "deep in the bulk" of the graded system (5.10). Note that, up to a rescaling \(2\sigma\to\sigma\) of the currents, (5.11) is nothing more than the isotropic 2+1d Landau-Lifshitz equation (LLE). It has been found that the 2+1d LLE (5.11) is _not_ a nonlinear PDE of Painlevé type [45]. This fact may suggest that it is not integrable, if it were not for the fact that a Lax pair is known for its solitonic traveling wave solutions [46, 47]. The dynamical system (5.11) is therefore still quite mysterious, and its quantum counterpart -- the 2D Heisenberg XXX model -- even more so; see Section 5.2.

Boundary dynamics from the \(t\)-map. By **Corollary 4.1**, the \(t\)-map relates the 2d dynamics to the 1d dynamics -- it acts as a sort of "boundary map". We thus expect to be able to recover the usual classical description of the 1d XXX spin chain. This can be achieved directly given that we have \(\bar{t}\mathbf{\kappa}=0\) and \(\mathbf{\kappa}=\bar{t}\mathbf{\sigma}\) understood for the \(\mathfrak{su}_{2}\)-currents. Indeed, by applying the pullback \(\bar{t}^{*}\) to (5.9) (namely precomposing \(\bar{t}\)), we integrate out the \(v\)-dependence whence \[h_{-1}(\bar{t}(\sigma))=\iint dudv\sum_{a=1}^{3}|\nabla(\bar{t}\sigma_{a})|^{2}=L_{v}\int du\sum_{a=1}^{3}\left(\frac{d\kappa_{a}}{du}\right)^{2}=L_{v}h_{0}(\kappa),\] where \(L_{v}\) denotes the system size along the \(v\)-direction (i.e. the size of the meridian of the 2-torus \((u,v)\in\mathbb{T}^{2}\)).
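Returning to the coupled system (5.10) for a moment, two of its structural features can be checked directly on a grid (a small illustration of ours; the grid size and finite-difference stencils are arbitrary choices): the local spin length \(|\mathbf{\sigma}(u,v)|\) is preserved, since \(\dot{\mathbf{\sigma}}\) is a cross product with \(\mathbf{\sigma}\), and the bulk and boundary contributions to the time derivative of the energy (5.9) cancel, as one expects for a Hamiltonian flow.

```python
# Illustration (ours): on a periodic grid, the right-hand sides of (5.10)
# preserve the local spin length, and the bulk and boundary contributions to
# d/dt of the energy (5.9) cancel.
import numpy as np

rng = np.random.default_rng(3)
N = 24
du = 2 * np.pi / N

def lap_u(f):   # periodic second derivative along the first (u) axis
    return (np.roll(f, -1, axis=0) - 2 * f + np.roll(f, 1, axis=0)) / du**2

def lap_v(f):   # periodic second derivative along the second (v) axis
    return (np.roll(f, -1, axis=1) - 2 * f + np.roll(f, 1, axis=1)) / du**2

kappa = rng.standard_normal((N, 3))        # kappa_a(x), with x identified with u
sigma = rng.standard_normal((N, N, 3))     # sigma_a(u, v)

kappa_uu = lap_u(kappa)
lap_sigma = lap_u(sigma) + lap_v(sigma)

# right-hand sides of (5.10)
sigma_dot = -2 * np.cross(kappa_uu[:, None, :], sigma)
kappa_dot = (-2 * np.sum(np.cross(lap_sigma, sigma), axis=1) * du
             - 2 * np.cross(kappa_uu, kappa))

# (i) pointwise conservation of the spin length |sigma(u, v)|
assert np.allclose(np.sum(sigma * sigma_dot, axis=-1), 0.0)

# (ii) dh/dt = sum (-2 lap sigma).sigma_dot du dv + sum (-2 kappa'').kappa_dot du = 0
dh_bulk = np.sum(-2 * lap_sigma * sigma_dot) * du * du
dh_bdry = np.sum(-2 * kappa_uu * kappa_dot) * du
assert abs(dh_bulk + dh_bdry) < 1e-8 * (abs(dh_bulk) + abs(dh_bdry) + 1.0)
print("spin length and total energy conserved by the coupled flow (5.10)")
```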
Similarly, by applying \(\bar{t}\) to (5.10), we have \[\dot{\mathbf{\kappa}}=-2\frac{d^{2}\mathbf{\kappa}}{du^{2}}\times\mathbf{\kappa}, \tag{5.12}\] which is indeed the isotropic 1+1d LLE describing the classical dynamics of the XXX spin chain [36], again up to a rescaling \(2\kappa\mapsto\kappa\) of the currents. The 1+1d LLE (5.12) then in some sense lies at the "boundary" of the 2-dimensional model (5.10). _Remark 5.2_.: It is important for us to emphasize here that (5.10), (5.11) and (5.12) are intrinsically distinct dynamical systems. Given that \(\bar{t}\) by definition relates the "bulk" degree-of-freedom \(\sigma\) with those \(\kappa\) of the "boundary", the degrees associated to the currents and their dynamics should be understood as that describing "bulk-boundary coupling". In other words, one has the interpretation that the 1+1d LLE (5.12) lies at the boundary of the bulk 2+1d LLE (5.11), together with the coupling dynamics given by (5.10). ### 2D Heisenberg model and quantum inverse scattering Let us now attempt to restore the quantum lattice system associated to the classical Hamiltonian \(h\) (5.9). The quantization of the 1-dimensional piece \(h_{0}\) is well-understood -- it is simply the usual XXX spin chain/1D isotropic Heisenberg model. It is then also obvious that, given the form of the classical 2D Hamiltonian \(h_{-1}\) and the associated 2+1d LLE (5.11), we should also be able to describe the 2D isotropic Heisenberg model. We shall demonstrate this more explicitly. First, we discretize the 2-torus \(\mathbb{R}^{2}\) by introducing a lattice spacing \(\ell\), such that \(u_{n}=n\ell,v_{m}=m\ell\) for \((n,m)\in\mathbb{Z}^{2}\). Define the \(\mathfrak{su}(2)\) degrees of freedom \[\varsigma_{\text{a}}(n,m)=\sigma_{a}(u_{n},v_{m})\] on each lattice site \((n,m)\), we then promote the Poisson bracket (5.7) to the 2d quantum rotor algebra \[[\varsigma_{\text{a}}(n,m),\varsigma_{\text{b}}(n^{\prime},m^{\prime})]= \delta_{nn^{\prime}}\delta_{mm^{\prime}}\epsilon_{abc}\varsigma_{\text{c}}(n^{ \prime},m^{\prime}) \tag{5.13}\] By discretizing the derivatives in \(h_{-1}\) (5.9) and discarding diagonal terms \(\varsigma_{\text{a}}(n,m)\otimes\varsigma_{\text{a}}(n,m)\) -- which contributes only constants -- we obtain the 2D XXX Heisenberg model Hamiltonian \[H_{2d}=\frac{1}{2}\sum_{n,m}\sum_{a=1}^{3}\varsigma_{\text{a}}(n,m)\otimes \varsigma_{\text{a}}(n+1,m)+\varsigma_{\text{a}}(n,m)\otimes\varsigma_{\text{ a}}(n,m+1). \tag{5.14}\] The associated \(t\)-map in the quantum setting, \[\bar{t}:\varsigma_{\text{a}}(n,m)\mapsto\sum_{m}t(\varsigma_{\text{a}})(n,m) \equiv\varkappa_{\text{a}}(n),\] allows us to recover the 1-dimensional degrees-of-freedom, as illustrated in Fig. 1. Indeed, by precomposing (5.14) with the \(t\)-map, the second term is diagonal due to the lattice translational symmetry (and can hence be discarded), while the first term yields \[\frac{L_{v}}{2}\sum_{n}\sum_{a=1}^{3}\varkappa_{\text{a}}(n)\otimes\varkappa_ {\text{a}}(n+1)\varpropto H_{1d},\] which is indeed proportional to the 1D XXX Heisenberg model Hamiltonian. Let us also discretize the "boundary" coordinate \(x=p\ell\), where \(p\in\mathbb{Z}\) and \(\ell\) is as above the lattice separation. 
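Before doing so, here is a minimal computational sketch of the lattice Hamiltonian (5.14) (ours, for illustration only): we use the Hermitian spin-1/2 matrices \(S_{a}=\sigma_{a}/2\) in place of the abstract rotor generators, a choice made here for concreteness, so the resulting operator agrees with (5.14) up to an overall constant; we then verify the global \(\mathfrak{su}(2)\) symmetry one expects of the isotropic model.

```python
# Minimal sketch (illustrative): the 2D XXX Hamiltonian (5.14) on a tiny
# periodic lattice, built from spin-1/2 operators via Kronecker products, and
# a check that it commutes with the total spin operators.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
spin = [sx, sy, sz]

Lu, Lv = 2, 3                    # lattice size (our choice)
sites = [(n, m) for n in range(Lu) for m in range(Lv)]
dim = 2 ** len(sites)

def op_at(local, site):
    """Embed a single-site operator into the full tensor-product space."""
    out = np.array([[1.0 + 0j]])
    for s in sites:
        out = np.kron(out, local if s == site else np.eye(2))
    return out

H = np.zeros((dim, dim), dtype=complex)
for (n, m) in sites:
    for nbr in [((n + 1) % Lu, m), (n, (m + 1) % Lv)]:   # right and up neighbours
        for a in range(3):
            H += 0.5 * op_at(spin[a], (n, m)) @ op_at(spin[a], nbr)

# the isotropic Hamiltonian commutes with each total spin component
for a in range(3):
    S_tot = sum(op_at(spin[a], s) for s in sites)
    assert np.allclose(H @ S_tot - S_tot @ H, 0.0)
print("built H_2d on a", Lu, "x", Lv, "torus; su(2) symmetry verified")
```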
We promote the 2-dimensional \(\mathfrak{su}_{2}\)-current algebra to the quantum algebra \[[\varkappa_{\text{a}}(p),\varkappa_{\text{b}}(p^{\prime})]=\delta_{pp^{ \prime}}\,\epsilon_{abc}\varkappa_{\text{c}}(p^{\prime}),\qquad[\varkappa_{ \text{a}}(p),\varsigma_{\text{b}}(n,m)]=\delta_{pn}\,\epsilon_{abc}\varsigma_{ \text{c}}(n,m).\] We note that these quantum algebra relations describe (i) the 1D quantum rotor algebra in \(\varkappa\) and (ii) the coupling between the bulk 2D and boundary 1D spin degree-of-freedom, and by construction satisfy the following \(L_{2}\)-algebra identities \[[\varkappa_{\text{a}}(p),\bar{t}(\varsigma_{\text{b}})(n)]=\bar{t}([\varkappa_ {\text{a}}(p),\sigma_{b}(n,m)]),\qquad[\bar{t}(\varsigma)_{\text{a}}(n), \varsigma_{\text{b}}(n^{\prime},m)]=[\varsigma_{\text{a}}(n,m),\bar{t}( \varsigma)_{\text{b}}(n^{\prime})]\] as per **Proposition 5.1**.: We interpret these identities as consistency conditions of the bulk-boundary coupling with the quantum "projection to the boundary" \(t\)-map \(\bar{t}\). he 2-dimensional quantum inverse scattering method.The sense in which the 2d quantum XXX Heisenberg model (5.14) is integrable, if at all, is not known. The usual Bethe ansatz, which allowed one to prove that the usual 1D XXX spin chain is integrable, fails in the 2D case. However, we note that (5.14) has the form of a _quantum spin rectangle_, meaning that \(H_{2d}\) is a stack of horizontal and vertical quantum spin chains, each of which are given by nearest-neighbor spin-spin couplings. Such 2-dimensional spin systems had also been studied in [48], where they proposed a higher-dimensional notion of integrability for \(H_{2d}\) based on trialgebras. The advantage of our perspective based on Lie 2-bialgebras here is that we have access to the "bulk-boundary" relation through the underlying \(t\)-map, which allows us to leverage the known notion of integrabiltiy for the boundary XXX spin chain. Indeed, since we know that 1. the (1-dimensional) quantum inverse scattering method (see [34, 35, 9] and Section 2.2) allows us to generate the XXX spin chain from the \(R\)-matrix \[R_{1d}=1+\sigma_{a}\otimes\sigma_{a}\in U\mathfrak{su}_{2}^{2\otimes}\] of the underlying Lie bialgebra \(\mathfrak{su}_{2}\) (ie. (2.5) at spectral parameter \(\mu=0\)), and 2. the \(t\)-map relates the bulk and boundary degrees-of-freedom, the corresponding 2-graded scattering matrix which generates the 2d Heisenberg XXX model (5.14) must have an associated 2-graded \(R\)-matrix \(R_{2d}\) satisfying \[(1\otimes t)R_{2d}=R_{1d}.\] This 2-\(R\)-matrix is in fact nothing but the "quantization" (together with a quantum 2-Yang-Baxter equation, in the sense of [49]) of the classical 2-\(r\)-matrix (5.1) \[R_{2d}=1+\kappa_{a}\otimes\sigma_{a}\in(U\operatorname{id}_{\mathfrak{su}_{2}} )^{2\otimes},\] where we have extended the \(t\)-map on \(\operatorname{id}_{\mathfrak{su}_{2}}\) to the universal envelop \(U\operatorname{id}_{\mathfrak{su}_{2}}\) by \(t:\kappa_{a}\mapsto\sigma_{a},1\mapsto 1\). The fact that the quantum spin rectangle (5.14) can be generated by stacking quantum spin chains can be understood as the fact that the \(t\)-map of the underlying Lie 2-bialgebra \(\operatorname{id}_{\mathfrak{su}_{2}}\) we have begun with is the identity. The universal envelop \(U\operatorname{id}_{\mathfrak{su}_{2}}\) forms a strict 2-Hopf algebra \(U\mathfrak{su}_{2}\stackrel{{\operatorname{id}}}{{\longrightarrow }}U\mathfrak{su}_{2}\) as defined in [49]. 
We had also shown there that, in the strict case, this structure coincides with the notion of "cat\({}^{1}\)-Hopf algebras" of [39], which is a category internal to the category of Hopf algebras. Now the fact that our model (5.14) fits within the framework of [48], which was based on trialgebras, means that the 2-Hopf algebra \(U\operatorname{id}_{\mathfrak{su}_{2}}\) should admit a description as a trialgebra. This is indeed the case [15]: cocommutative cat\({}^{1}\)-Hopf algebras are equivalent to cocommutative trialgebras. Though the above example is "classical", in the sense that there is no non-trivial \(q\)-deformation, it nevertheless demonstrates that our general theory can generate, for more interesting non-identity \(t\)-maps, 2D integrable quantum lattice systems that does not necessarily take the form of a spin rectangle. And hence, our formalism generalizes the procedure based on trialgebras outlined in [48]. Moreover, our observations lead us to posit that the above construction should be captured, in the general case of the XXZ/XYZ family, by a "2-dimensional quantum inverse scattering" associated to the 2-\(R\)-matrix of a 2-Hopf algebra [49, 50]. We shall leave a detailed study of this method for a future work. Figure 1: The \(t\)-map \(\bar{t}\) performs an averaging of the 2-dimensional spin degrees-of-freedom (black dots) over the \(v\)-axis, which projects the model (5.14) onto the 1-dimensional XXX spin chain along the \(u\)-axis (red dots). Conclusion To summarize, we have defined a notion of 2-graded integrability inherited from the structure of a Poisson-Lie 2-group, which can be interpreted as a categorification of the usual notion of integrability for 1D systems to 2-dimensions. We have explicitly generalized various integrability theorems from the Lie 1-algebra case as proven in [2], and wrote down an explicit formula for the canonical 2-Lax pair that one can construct from a solution of the 2-CYBE in a Lie 2-bialgebra [3, 4, 5]. We have also found that, through the \(t\)-map of the underlying Lie 2-bialgebra, the usual 1-Lax equation is in fact hidden in our definition of a 2-Lax pair. This result can be understood as an integrable "bulk-boundary" relation. Starting from the Lie 2-bialgebra \(\mathrm{id}_{\mathfrak{su}_{2}}=\mathfrak{su}_{2}\xrightarrow[t\to 1]{t \longrightarrow}\mathfrak{su}_{2}\) and its canonical 2-graded classical \(r\)-matrix (5.1), we worked out an explicit example of the 2D XXX spin rectangle using our general mathematical framework. In this example, the 2-graded nature of the underlying Lie 2-bialgebra allowed us to capture the dynamics of _both_ the 2D "bulk" and 1D "boundary" variables (the \(\mathfrak{su}_{2}\)-currents), which obey a certain graded \(L_{2}\)-algebra structure. This is consistent with the understanding that categorification leads to higher-dimensional physics [28, 31, 49, 50, 51, 52, 53, 54, 55]. Further, as explained in _Remark 5.2_, the 2-graded nature of the 2-Lax equation allowed us to describe not only the 1+1d and 2+1d isotropic LLEs (5.11), (5.12), but also the bulk-boundary coupling dynamics (5.10) between them, which is consistent with the accompanying \(t\)-map \(\bar{t}\) that relates the bulk and boundary degrees-of-freedom. The purely boundary system is well-known to be integrable via the Bethe ansatz [9, 34, 35], and the integrability of the purely bulk system had also undergone investigation in the literature [56, 57, 47, 46]. 
The bulk-boundary hybrid system, on the other hand, is by construction "2-integrable" in the sense that we have defined in Section 4. To understand the "higher-dimensional integrability" of the 2D Heisenberg XXX model, it then seems that we would have to describe the 2-dimensional quantum inverse scattering method. It is therefore important for us to also study how 2-\(R\)-matrices of [49] achieve dependence on the spectral parameters. We expect such a higher notion of integrability to be useful also in the context of string theory [58, 59] and higher BF theory [60, 61, 5, 62]. We recognize that that 2-dimensional spin lattice models had also been constructed in another way from the data of a fusion 2-category in [63]. We would like to point out that the construction of the fusion surface model studied there follows very closely the algebraic Bethe ansatz: a transfer matrix \(T\) was first defined from the data of a separable algebra in the fusion 2-category, then an "anisotropy limit" of sorts was taken \[T\sim P_{0}+\mathrm{tr}_{P_{0}}\mathfrak{n}\,H\] in order to obtain the Hamiltonian \(H\) of the fusion surface model, where \(P_{0}\) is a certain projector. This runs in parallel with the quantum inverse scattering method [9] -- indeed, this is precisely how we recover the classical Hamiltonian \(h\) from the monodromy \(\mathcal{T}\) in Sections 2.2 and 5.1.1. Recent work [49] by the authors gave a definition of a "categorical quantum group", whose 2-representation 2-category forms a fusion 2-category, and hence can serve as the input for the fusion surface model of [63]. Further, the classical limit of such 2-quantum groups were shown to be Lie 2-bialgebras, hence we posit that the construction of the fusion surface model, with the _possibly non-semisimple_ 2-representation 2-category of categorical quantum groups as input, goes through an appropriate notion of 2-dimensional quantum inverse scattering. Moreover, this would also help describe the proper way in which the fusion surface models of [63] are integrable. We would like to highlight some directions we find interesting to develop in a near future. Generalization to weak Poisson 2-groups.It is well-known [39, 59, 58, 5, 62] that Lie 2-algebras are classified up to equivalence by the following data: 1. the cokernel \(\mathfrak{n}=\mathrm{coker}\,t\subset\mathfrak{g}_{0}\), 2. the kernel \(V=\ker t\subset\mathfrak{g}_{-1}\), which is an Abelian subalgebra of \(\mathfrak{g}_{-1}\), and 3. the **Postnikov class**\(\kappa\in H^{3}(\mathfrak{n},V)\) in the Lie algebra cohomology. In the theory of \(L_{\infty}\)-algebras, the Postnikov class manifests as the cohomology class of the _homotopy map_\(\mu\), whose role is to relax the 2-Jacobi identities (3.2) [32, 59, 58, 62]. The most popular example of such _weak_ Lie 2-algebras is the string 2-algebra, which arises of course in the context of string theory [59, 58, 64]. The Postnikov class also plays a central role in the theory of higher-dimensional gapped topological phases [51, 52] in condensed matter theory, as well as the Green-Schwarz anomaly cancellation mechanism in higher-energy physics [65, 66, 62]. There is thus much motivation to generalize our above theory of 2-graded integrability to the weak case. This can be achieved by leveraging the theory of weak Lie 2-bialgebras [32], as well as the characterization of the associated weak/quasi Poisson-Lie 2-groups [4]. 
Furthermore, the fusion surface model of [63] mentioned above, or the associated Douglas-Reutter topological field theory [67], has a key ingredient the "10-\(j\) symbol", which comes via skeletonization from the pentagonator 2-morphisms of the underlying fusion 2-category. For 2-representation 2-categories, this pentagonator data was identified in [49, 50] to arise from a homotopy weakening of the underlying 2-representation theory, which _strict_ 2-bialgebras do not see. It would be important to also formalize the general 2-dimensional quantum inverse scattering method in the weak case. Zero 2-curvature condition and surface monodromy.Recall the 2-dimensional current \(\sigma_{a}(u,v)\) from Section 5. The associated Lax operator \(\hat{l}_{2}\) also depends on both coordinates \(u,v\), and hence can be interpreted as a 2-form connection [29], whose _surface holonomy_[68] is given by the surface-ordered exponential [44] \[\mathcal{T}_{2}=S\exp{\left(-\iint\limits_{S}dvdu\,l_{2}(v,u)\right)}.\] As such, the full transfer matrix/partition function \(\mathcal{T}\), which includes both the 1- and 2-holonomies of the connection forms \(l_{1},l_{2}\), should be interpreted as an element in the 2-group \(\mathrm{id}_{SU(2)}\), which satisfies the 2-graded flatness condition \(\hat{l}\mathcal{T}=0\). This is consistent with the expectation one would have upon categorifying the usual 1-dimensional treatment. Following this line of thinking, it could then be possible in general to rewrite the 2-Lax equation (4.15) as a zero 2-curvature condition [29, 58, 62] \[F=dA+\frac{1}{2}[A\wedge A]=t\Sigma,\qquad d_{A}\Sigma=d\Sigma+A\wedge^{\triangleright} \Sigma=0\] for a 2-connection \((A,\Sigma)\) constructed out of the 2-Lax pair \((L,P)\). In the usual Lie 1-algebra case, the construction of the Lax connection requires the structures of the affine Lie algebra \(\widehat{\Omega}_{\mathfrak{D}}\) centrally extending the loop algebra \(\Omega_{\mathfrak{D}}\)[1]. Correspondingly, the Lax 2-connection construction would require one to understand _affine Lie 2-algebras_, which are Lie 2-algebra centrally extensions [40] of an appropriate categorification of the loop algebra \(\Omega_{\mathfrak{D}}\). Work towards this subtle problem has been initiated by the authors.
2303.00617
Causalvis: Visualizations for Causal Inference
Causal inference is a statistical paradigm for quantifying causal effects using observational data. It is a complex process, requiring multiple steps, iterations, and collaborations with domain experts. Analysts often rely on visualizations to evaluate the accuracy of each step. However, existing visualization toolkits are not designed to support the entire causal inference process within computational environments familiar to analysts. In this paper, we address this gap with Causalvis, a Python visualization package for causal inference. Working closely with causal inference experts, we adopted an iterative design process to develop four interactive visualization modules to support causal inference analysis tasks. The modules are then presented back to the experts for feedback and evaluation. We found that Causalvis effectively supported the iterative causal inference process. We discuss the implications of our findings for designing visualizations for causal inference, particularly for tasks of communication and collaboration.
Grace Guo, Ehud Karavani, Alex Endert, Bum Chul Kwon
2023-03-01T16:14:24Z
http://arxiv.org/abs/2303.00617v1
# Causalvis: Visualizations for Causal Inference

###### Abstract.

Causal inference is a statistical paradigm for quantifying causal effects using observational data. It is a complex process, requiring multiple steps, iterations, and collaborations with domain experts. Analysts often rely on visualizations to evaluate the accuracy of each step. However, existing visualization toolkits are not designed to support the entire causal inference process within computational environments familiar to analysts. In this paper, we address this gap with Causalvis, a Python visualization package for causal inference. Working closely with causal inference experts, we adopted an iterative design process to develop four interactive visualization modules to support causal inference analysis tasks. The modules are then presented back to the experts for feedback and evaluation. We found that Causalvis effectively supported the iterative causal inference process. We discuss the implications of our findings for designing visualizations for causal inference, particularly for tasks of communication and collaboration.
## 1. Introduction

Causal inference replaces randomized controlled trials (RCTs) with observational data: researchers select two groups from a data set - treatment and control - and compare outcomes between them. Estimating the size of the effect of treatment exposure on the outcome is often the goal of causal inference analysis. By replacing RCTs with observational data, additional steps must be taken to ensure that the estimated treatment effect is reliable and unbiased. This is a complex process, requiring multiple steps, iterations, and collaborations with domain experts. Researchers need to accurately understand the causal relationships in the data set, identify and control for all confounding variables, make the necessary statistical adjustments, and ensure that the selected treatment and control groups satisfy required assumptions. Consider, for example, the scenario where researchers are interested in estimating the effect of cigarettes on lung cancer. Researchers can begin by selecting two groups - smokers and non-smokers - from a large data set of medical records, then compare rates of lung cancer (outcome) between those that smoke (treatment) versus those that do not (control). However, for this comparison to be valid, several assumptions must hold true. One such assumption is that the groups must share some minimal amount of similarity - for example, if we have no women in the smoking group, we should not include women in the control group either.
Moreover, there should be no unmeasured or uncontrolled confounding variables - if smokers tend to be older than the non-smokers and older people are at higher risk of cancer to begin with, then age is a confounder that, if left unadjusted, will lead to an inaccurate estimate of the treatment effect. The above scenario demonstrates the complexity of causal inference. Analysts often rely on visualizations to inspect and resolve errors in each step of the process. However, while there are existing visualization libraries used by causal inference experts, they are often not designed specifically for causal inference (Zhou et al., 2017; Wang et al., 2018), or support only limited analysis tasks (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). This is further complicated by the existence of different causal inference approaches that adopt incompatible assumptions and processes (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), resulting in visualization tools that provide only fragmented support for the analysis workflow, and are not designed to work with one another. Additionally, causal inference often requires multiple repetitions to refine the analysis, but many visualization libraries are not designed for such rapid iteration and do not integrate with the computational environments and statistical packages analysts are familiar with. Taken together, there are, to the best of our knowledge, no existing visualization toolkits that can be used together to support the various analysis tasks of causal inference practitioners over the entire causal inference workflow.

In this paper, we address these challenges through a design study with causal inference experts. Working closely with the experts over the course of three months, we first sought to understand the causal inference workflow and analytic tasks through two rounds of formative interviews. We then adopted an iterative design process to develop visualizations to support the tasks, before finally presenting the visualizations back to the experts for feedback and evaluation. The result of this design study is _Causalvis_ (Fig. 1), a Python visualization package for causal inference. Causalvis consists of four visualization modules that support data analysts with tasks such as understanding and communicating causal structure, identifying and statistically controlling for confounding variables, refining cohorts, exploring heterogeneous treatment effects, and tracking analytic provenance.

The contributions of this paper include (i) a characterization of user tasks and design requirements during the causal inference process; (ii) Causalvis1, a Python library that consists of four visualization modules to support experts during causal inference; (iii) feedback from experts about the design, functionality, and areas of improvement for each module; and (iv) methodological lessons learned from developing and evaluating Python packages with multiple sub-modules in established computational environments.

Footnote 1: [https://github.com/causalvis/causalvis](https://github.com/causalvis/causalvis)

## 2. Related Work

### Causal Inference

Causality is the overarching paradigm focused on the science of cause and effect. The methods of uncovering causal structure from data can be further categorized into _causal inference_ and _causal discovery_.
Causal discovery (also known as reverse causality (Krause et al., 2017)) generally aims to infer the causal relations among variables from observational data without specifying treatment or outcome in advance, while causal inference (also known as forward causality) aims to quantify the strength of the causal relationship of a pre-specified treatment on a pre-specified outcome. For example, researchers would use causal discovery to understand the causal links between variables - such as smoking behavior, age, biological sex, and lung cancer - in an observational data set, and use causal inference to estimate the size of the effect of smoking (treatment) on the risk of lung cancer (outcome) considering influences from covariates. Causal inference, the focus of our work, relies on non-experimental observational data to estimate causal effects. Unlike RCTs where participants are randomly assigned to a particular treatment, observational studies do not manipulate an individual's settings or experiences. Instead, data would be recorded about that individual, such as their demographic information, whether they were exposed to a treatment, and other related attributes. In a healthcare setting, for instance, researchers can access such observational data in the form of medical databases of patient health and demographic data. From these data sets, researchers can define the treatment and outcome of interest, and relevant covariates. The potential outcomes (PO) framework (Han et al., 2017), also called the Neyman-Rubin framework (Neyman and Rubin, 1972) or the Rubin Causal Model (Rubin, 1972), describes a theoretical framework for causal inference. Consider again the scenario where we attempt to find the causal effect of smoking on cancer. To measure the true causal effect of smoking in a population, we need to generate two potential outcomes for every individual - the outcome had they smoked and the outcome had they not. We could then quantify the difference between the two outcomes for each individual, and average that difference across the population to obtain the **average treatment effect** (ATE). However, in reality, we cannot obtain both potential outcomes for each individual, as each individual either did smoke or did not. Only one potential outcome is already observed, and the other outcome can never be observed (i.e., counterfactual). Therefore, causal inference provides the theory and tools for when and how we can estimate causal effects. Informally, to estimate the ATE, we select a cohort of individuals who share baseline biological/demographic characteristics, we divide them into treatment and control groups based on whether they did or did not smoke, perform statistical adjustments to cancel out spurious correlations so the groups are more comparable, and then calculate the difference in average outcomes (i.e., rate of cancer diagnosis) between the two groups. The PO framework provides the theoretical assumptions required to identify whether this estimated difference from the data at hand is indeed the causal treatment effect. In addition to the PO framework, there exists a complementary approach to causal inference called the Structural Causal Model (SCM). SCM relies on the _do_-calculus system pioneered by Judea Pearl (Judea, 1976; Pearl, 1977), which champions the use of Directed Acyclic Graphs (DAG) for causation. 
DAGs consist of nodes and edges, where nodes correspond to variables in the data and a directed edge between two nodes indicates a possible causal relationship between the variables (Krause et al., 2017). Generally, DAGs can accomplish two tasks: 1) identifying whether a causal question can be answered from the data, and 2) performing causal effect estimation when used as a computational graph. Under the SCM framework, practitioners would use DAGs to accomplish both tasks by estimating the effect sizes of all causal relationships (edges) in a DAG. In contrast, PO practitioners focus on quantifying the effect of the one edge connecting the treatment to the outcome, and would use the DAG only for identification (the first task). This difference in approaches has led the two frameworks to adopt different assumptions and use different sets of estimation tools. While both frameworks have their strengths and weaknesses, many causal estimation methods that are popular today, such as propensity score methods and matching, were developed under the PO framework (Krishnan, 2017). The PO framework has thus seen greater adoption by causal inference practitioners in fields ranging from economics and policy to healthcare and epidemiology. **Our tool is designed to support the PO framework,** allowing users to estimate the causal effect of a pre-specified treatment on a pre-specified outcome using common statistical tools. In all subsequent sections, unless otherwise indicated, the term _causal inference_ will refer to the PO approach.

### Causal Visualization

Many visualization techniques have been developed to communicate causal structure and causal relationships, ranging from static directed acyclic graphs (Krishnan, 2017; Krishnan, 2017) to animated visualizations (Krause et al., 2017; Krause et al., 2017; Judea, 1976). More recently, studies have looked at combining textual narratives with causal graphs to help users understand temporal events (Krishnan, 2017), as well as how visualizations might be leveraged for causal support (Krishnan, 2017), interpreting counterfactuals (Krishnan, 2017), identifying mediator variables (Krishnan, 2017), and conveying causality in biological pathways (Krishnan, 2017). Studies have also explored how visualizations might erroneously convey an illusion of causality in data (Krishnan, 2017). An important prerequisite for causal inference is a directed acyclic graph (DAG) depicting the possible causal relationships between different variables in the data. Traditionally, these causal relationships would be manually specified by a domain expert, such as a doctor or physician, aided by tools like Dagitty (Dagitty, 2017) that enable interactive specification of DAGs. More recently, automatic methods for _causal discovery_ were developed to recover causal structures and learn causal relationships between variables from observational data (Krishnan, 2017). These tools can be fully automated, like Causalnex (Bowenstein et al., 2017), or interactive, like SeqCausal (Krishnan, 2017) and DOMINO (Krishnan, 2017), enabling humans to take part in discovering the underlying causal structure from sequential data. However, in both cases, causal discovery tools do not quantify the effect of a treatment on an outcome variable, and require subsequent use of _causal inference_ methods.
There exists a broad range of visualization tools that have been developed for causal inference, such as visualizing and refining causal structures (Krause et al., 2017), performing analyses (Krause et al., 2017), explaining the AI models used (Krishnan, 2017), and debiasing AI algorithms (Krishnan, 2017). While these visualization tools support a variety of user tasks, they are mostly grounded in the SCM framework and are not compatible with the more popular PO framework. For example, while both frameworks might use DAGs to visualize causal relationships, the analytic goals of the visualization would differ. Previous approaches, such as the Visual Causality Analyst (Krause et al., 2017) and the Causal Structure Investigator (Krause et al., 2017), use multiple regression models to estimate the effect size of all causal relationships in the DAG (see 2.1). It thus makes sense for these tools to encode effect sizes using the edges between nodes. In contrast, the PO framework focuses on estimating the effect size of a single predefined treatment on a predefined outcome (i.e., a single edge). This smaller scope allows analysts to use more diverse estimation methods that are commonly available in statistical tools, but also requires analysts to adjust for variables that may bias the estimation. A DAG built for the PO framework would thus need to highlight how variables in the graph relate to the treatment-outcome relationship (e.g., by introducing confounders), a task that is not supported by the previous SCM-based visualization tools. This example illustrates only one of many distinctions between the two approaches. These distinctions mean that visualizations designed for the PO framework must support entirely different user tasks and analysis processes. To this end, we identified only three visualization packages developed for the PO framework - VAINE (Krause et al., 2017), causallib (Krishnan, 2017), and Cobalt (Krause et al., 2017). VAINE and causallib are both designed for use with the Jupyter notebook environment. VAINE is an interactive visual analytics tool that helps users identify clusters in the data set and estimate the average treatment effects across clusters, while causallib uses static visualizations to help users evaluate their causal inference models. In contrast, Cobalt is a visualization package designed for R, and helps analysts validate that their selected samples are suitable for causal inference. Of these, causallib and Cobalt only provide static visualizations, which makes rapid iteration and interactive analysis time-consuming. Furthermore, each package only addresses limited tasks in the causal inference process, and some tasks, such as identifying variable types, are not supported in any tool. To address this gap, we conducted a design study with experts to understand users' workflows in causal inference and to build and evaluate an interactive visualization system to support their analysis.

### Visualizations in Computational Environments

Computational notebooks (e.g., JupyterLab, Google Colab, and Kaggle Notebooks) are programming environments in which users can interweave segments of code and output within the same interface. These notebooks have been widely adopted for their ability to support exploratory data analysis (Srivastava et al., 2017; Wang et al., 2018), collaboration (Srivastava et al., 2017), rapid iteration and workflow documentation (Srivastava et al., 2017).
Recent works have advocated for the development of interactive visualizations in such computational environments in order to support reproducibility, streamline analysis and increase adoption of visualization systems (Srivastava et al., 2017; Wang et al., 2018). To support these goals, tools have been developed that help users embed interactive visualizations (Wang et al., 2018), create dashboards (Wang et al., 2018; Wang et al., 2018), and reuse workflows (Wang et al., 2018) in JupyterLab. Studies have also developed tools to condense notebooks for better collaboration (Wang et al., 2018) and communication through interactive data comics (Wang et al., 2018) and presentation slides (Wang et al., 2018). In addition to these general purpose tools, there has also been a trend towards fluid, interactive widgets embedded within notebook environments (Srivastava et al., 2017). Within data visualization, a range of interactive notebook widgets have been developed for use in domains such as biology (Srivastava et al., 2017), machine learning (Srivastava et al., 2017; Wang et al., 2018), data comparison (Srivastava et al., 2017), and programming education (Grover et al., 2018). Specific to causal inference, both VAINE (Srivastava et al., 2017) and causallib (Srivastava et al., 2017) are designed to be used in Jupyter Notebook. Inspired by these tools, we implemented Causalvis as a Python package for computational notebooks so that users can easily collaborate with experts and iterate through their causal inference workflow rapidly.

## 3. Design Study

To understand the process of causal inference, users' tasks, and their analytic goals, we conducted a design study (Wang et al., 2018) with eight causal inference experts. Through two rounds of interviews, we first performed a formative user task analysis to derive the typical causal inference workflow and relevant analytic tasks. We then validated our findings in a second round of interviews, where the same experts helped refine and elaborate on the workflow and ideate through low-fidelity wireframes of visualization designs that can support their work. In the following sections, we provide a brief background on the causal inference framework, and describe the participant recruitment and design study process. Then, we present the causal inference workflow and user analytic goals that were derived from the interviews.

### Participants

The target users of our system are causal inference experts who are interested in using causal inference to estimate the effect of a treatment on an outcome within a particular usage domain. They are familiar with the causal inference process and are experienced using it in prior or current projects. We recruited experts through a snowballing method. We first posted a message on a Slack channel, and reached out to project managers using our enterprise network. Through these connections, we branched out and recruited other participants with relevant expertise in causal inference. A total of eight experts (E1-8) in diverse domains were recruited. Participants were asked to self-report on a scale of 1 (no experience) to 5 (I consider myself an expert) their proficiency in Python (\(\mu\)=3.86, \(\sigma\)=0.690), JupyterLab (\(\mu\)=3.29, \(\sigma\)=1.38), causal inference (\(\mu\)=4.14, \(\sigma\)=1.07), creating/using DAGs (\(\mu\)=3.71, \(\sigma\)=0.76), and data visualization (\(\mu\)=3.86, \(\sigma\)=0.690). One participant declined to provide the above information.
Three of the participants are consultants who serve as contacts for clients interested in causal inference. Their clients have domain expertise and relevant data sets, while their role as consultants is to perform causal analysis and communicate results back to the clients. Two of the participants are data scientists and researchers involved in the development of libraries for causal inference. They are also domain experts who have used causal inference in healthcare research, and are experienced in conducting and reporting research results. The other three participants are graduate students who work on causal inference, including theoretical (non-domain specific) simulations as well as healthcare (domain specific) related applications.

### Study Procedure

To understand the process of causal inference and tools used by experts, we first conducted formative interviews with causal inference experts. The interviews were semi-structured, and where applicable, we asked specific follow-up questions based on user responses and their area of expertise. Each session lasted for no more than an hour, with 1-3 causal inference experts on each call. From these interviews, we gained an initial understanding of the sequence of tasks typically performed during the causal inference process and the challenges faced. We also obtained an overview of the data domains the experts worked in, as well as the current ecosystem of tools and libraries that support their work. Next, we created an initial three-step workflow summarizing the causal inference process. We also made low-fidelity wireframes of possible visualization tools that might be used to support each step of the workflow. These wireframes were simple prototypes, sketches, or screenshots of visualizations that have been published in causal inference research papers. For each visualization design, we provided multiple alternatives and annotated the images to indicate how interactions would work. We also combined them with screenshots of pseudo-code written in JupyterLab to better reflect how the designs would integrate into the computational environment as a visualization module. Finally, in a second round of interviews, we presented the workflow and wireframes back to our experts for validation and feedback. During these interviews, specific comments were surfaced to improve our understanding of the workflow. Participants also pointed out particular features and changes that can be implemented in the visualization wireframes. Building on the feedback, we then refined the three-step workflow to better reflect the causal inference process (see Section 3.3). We also came up with a set of design goals for Causalvis that capture the needs and requirements of our users (see Section 3.4). The workflow and design goals are presented in the following sections.

### The Causal Inference Workflow

The causal inference workflow presented here summarizes the process shared by all the causal inference experts we interviewed (Fig. 3). The causal inference workflow begins with an observational data set and can be summarized into 3 main steps: 1) Causal Structure Modeling, 2) Cohort Construction/Refinement, and 3) Treatment Effect Exploration. The three steps are described in detail below. We present experts' remarks in quotes and italics where appropriate.

#### 3.3.1. Causal Structure Modeling

The first step of causal inference is typically causal structure modeling.
The goal of this step is for causal inference analysts to accurately model the causal relationships between variables in the data set and identify the variables that must be adjusted for during the analysis process. In this step, causal inference analysts often _"start with a causal graph"_ (E3). Causal graphs are **directed acyclic graphs** (DAGs) where nodes are variables, and a directed edge from node M to N indicates that M is a likely cause of N. From this graph, analysts would identify the variables to adjust in order to satisfy the assumption that there are no unmeasured confounding factors. Confounders are variables that affect both the treatment and the outcome, and if left unadjusted, they can introduce bias to the treatment effect estimation. An estimated treatment effect is valid only if all confounders are identified and adjusted for. In addition to confounders, there are a few other variable types that must be identified: **mediators, colliders**, and **prognostic factors**, to name the most common ones.

Additionally, we also derived eight analytic tasks that should be supported by our visualization package. For each task, we indicate whether it is relevant to only one of the steps in the workflow described above. These tasks were then used to guide the design of the Causalvis visualization modules.

* **T1 Collaboratively creating and communicating causal structure.** [Causal Structure Modeling] Many experts we interviewed mentioned that causal inference is often a collaborative effort that requires the input of domain experts who are familiar with the data, but may lack expertise with causality. Causalvis should thus make it easy to communicate the causal structure of a data set. It should also be easy to create and modify the causal structure without technical expertise or in-depth knowledge about causality.
* **T2 Maintaining the independence of causal structure from specific data sets.** [Causal Structure Modeling] Discussion between data scientists and domain experts over causal structure can sometimes begin before data sets are available as it might guide what variables are even needed in the first place. They may also want to include additional variables that are known to be relevant. Causalvis should thus _"not be constrained to the data set itself"_ (E1) and support the modeling of causal structures independently of any data set.
* **T3 Identifying different types of variables.** [Causal Structure Modeling] Given the importance of selecting the right control variables during causal inference, variable types, such as confounders, mediators, colliders and prognostic factors, should be emphasized as _"something to be mindful of"_ (E3). Causalvis should help users quickly identify the different variable types and keep track of them.
* **T4 Checking covariate balance and positivity violations.** [Cohort Construction/Refinement] Ensuring that treatment and control groups are comparable during cohort refinement is a crucial task for obtaining unbiased estimates of the treatment effect. Causalvis should help users identify when covariate balance and the positivity assumption are violated. Furthermore, Causalvis should allow users to select samples that should be excluded in order to satisfy the positivity assumption.
* **T5 Estimating treatment effects conditioned on a variable.** [Treatment Effect Exploration] It is often insufficient to estimate a treatment effect for an entire population.
Data analysts want to condition on certain variables and gain insight into how the treatment effect differs across subgroups. Causalvis should support the exploration and identification of heterogeneous treatment effects in a cohort.
* **T6 Supporting a flexible and iterative workflow.** Common across all our experts is the highly flexible and iterative nature of their causal inference process. Experts often described the analysis steps as being "iterative". Different experts working in different domains may also use different causal inference methods, skip steps in the workflow, or prioritize one stage while using heuristics in the others. The three steps identified in 3.3 are thus neither prescriptive nor unidirectional. Causalvis should allow users to iterate through each step and return to an earlier step to refine their process when needed.
* **T7 Tracking analytic provenance.** Since there is often no established ground truth when conducting causal inference on observational data, analysts often iterate through multiple hypotheses, cohorts, and estimates of the treatment effect value. They were thus _"in favor of version control"_ (E5). Causalvis should provide a method of tracking and comparing outcomes across different analytic decisions.
* **T8 Integrating with existing causal analysis packages.** From our formative user interviews, we found that there was no one unified method of performing causal inference, and experts expect Causalvis to integrate seamlessly with existing data formats (such as networkx2 graphs) and libraries (such as CausalNex (Causausaus, 2017) and causallib (Causallib, 2018)) that they already use for causal inference.

Figure 3. The causal inference workflow begins with some observational data and consists of 3 main steps: 1) Causal Structure Modeling, 2) Cohort Construction/Refinement, and 3) Treatment Effect Exploration. Double-ended arrows indicate iterative steps where analysts return to an earlier process to refine their analysis. The inputs and outputs are indicated above the corresponding steps. For researchers, the goal of each step is to obtain the outputs, which may then be passed on as inputs in subsequent analyses. We also include an optional causal discovery step that may sometimes be used to automatically generate initial DAGs from the data set that are later refined by domain experts during Causal Structure Modeling.

## 4. Causalvis

Based on the user workflow (see 3.3) and analytic tasks (see 3.4) identified from our design study, we developed the Causalvis visualization package to support analysts through the three steps of the causal inference process. The package is developed for use in the JupyterLab computational environment since all experts in our formative study mentioned that they typically worked with Python packages and JupyterLab notebooks. We took an iterative design approach to our development process. In addition to the two rounds of formative interviews described, we also presented videos of the modules to the experts during implementation, informally collecting intermediate feedback to refine our designs. The Causalvis package consists of four visualization modules, where the first three modules (DAG, CohortEvaluator, and TreatmentEffectExplorer) correspond to the three steps identified in the causal inference workflow and the fourth module (VersionHistory) is designed to track analytic provenance. They are designed to work independently of one another **(T6)**.
A key choice made in our design process was to focus on the visualizations needed for each step (see Fig. 3), while leaving the data processing and statistical adjustment techniques at the discretion of the user. We thus emphasized the ability for each module to integrate seamlessly with existing libraries and data structures instead. The Causalvis front-end is implemented in JavaScript, using the React framework3. All visualizations are implemented using D3js4. Each module is integrated into the JupyterLab5 computational environment using Python and the IPywidget framework6. In the following sections, we describe the modules in detail. Where relevant, we indicate the functions available in each module, the arguments accepted, and the variables that can be accessed from the module class. For each module, we also discuss visualization packages currently available to analysts. We describe how Causalvis innovates and extends these existing solutions, with emphasis on the additional tasks supported by our modules, informed by formative interviews with causal inference experts.

Footnote 3: [https://reactjs.org/](https://reactjs.org/)
Footnote 4: [https://d3js.org/](https://d3js.org/)
Footnote 5: [https://jupyter.org/](https://jupyter.org/)
Footnote 6: [https://ipywidgets.readthedocs.io/en/stable/](https://ipywidgets.readthedocs.io/en/stable/)
Footnote 7: We used the data set for Mathematics.

### Usage Scenario

When describing the visualization modules in the following sections, we will simultaneously walk through a usage scenario (Zhou et al., 2018) to demonstrate and contextualize how Causalvis might be used in a realistic causal inference task. We use the UCI Student Performance data set (Brandt et al., 2018) in this scenario7. This data set records student math grades and related data at two Portuguese schools throughout the year. There are 30 attributes that track student demographic, social and academic information (such as _age_, _address_, and _absences_), and 3 attributes that track student grade throughout the year _(G1, G2, G3)_. We drop 7 sensitive or open-ended attributes from the data set _(school_, _sex_, _age_, _Mjob_, _Fjob_, _reason_, _guardian_), leaving a total of 26 attributes and 395 samples (rows). This data set has previously been used to build machine learning models predicting student academic performance (Brandt et al., 2018) and demonstrate the use of causal discovery packages such as Causalnex (Causausaus, 2017). We select it for our Causalvis usage scenario because the data domain is intuitive, and the existence of prior use cases serves as a baseline for causal inference analysis. The treatment attribute of interest is _absences_, found to be a predictive factor in machine learning models of student performance (Brandt et al., 2018). However, the size of its causal effect has not been estimated in prior studies. In this usage scenario, we convert _absences_ to a binary variable based on the data set median, such that students with \(\geq median\) absences are encoded as having frequent absences (_absences_ = 1), while students with \(<median\) absences have few or no absences (_absences_ = 0). The outcome attribute of interest is _G1_, the grade obtained in the first exam of the year. The scenario will follow the three-step workflow introduced in 3.3.
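To make the setup concrete, the snippet below is a minimal sketch of this preprocessing in pandas. The file name and the semicolon separator reflect the standard UCI distribution of the data set and are assumptions rather than part of Causalvis.

```python
import pandas as pd

# Load the UCI Student Performance (Mathematics) data; file name assumed.
df = pd.read_csv("student-mat.csv", sep=";")

# Drop the sensitive or open-ended attributes listed above.
df = df.drop(columns=["school", "sex", "age", "Mjob", "Fjob", "reason", "guardian"])

# Binarize the treatment: 1 = frequent absences (>= median), 0 = few or none.
df["absences"] = (df["absences"] >= df["absences"].median()).astype(int)

# Outcome of interest: G1, the grade on the first exam of the year.
treatment, outcome = "absences", "G1"
```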
Throughout the example, we may use other packages, such as causallib, to perform the statistical calculations necessary for causal inference. These external packages are not requirements or dependencies of Causalvis. Analysts may use other statistical packages they are familiar with so long as the data is passed to Causalvis in the expected data format.

### DAG

The DAG module (Fig. 4) is designed to help users quickly and effectively model different causal structures using directed acyclic graph (DAG) visualizations. In the Causal Structure Modeling step (see Section 3.3.1), users want to understand the causal relationships and identify the variables that need to be adjusted for. They typically collaborate with domain experts to iteratively refine the DAG and construct different hypotheses. While some existing packages, such as Causalnex (see A.2), allow users to model causal structures by specifying nodes and links between nodes, this can only be done manually by writing code, and the resulting DAG visualizations cannot be interactively edited through direct manipulation. This makes the process of creating and refining DAGs too time-consuming, particularly when collaborating through infrequent meetings with domain experts who may not have the time or technical expertise to refine DAGs programmatically. The DAGitty (Brandt et al., 2018) (see A.1) application is a tool that supports the interactive editing of DAGs. However, the application expects users to know causal structure terminology (e.g. confounders, conditional independence), which may be unfamiliar to domain experts who are often not causal inference experts. Our Causalvis DAG module extends these existing tools by supporting interactive DAG modeling within the JupyterLab computational environment itself. By allowing users to create and refine DAGs directly on a visual interface, we facilitate the interactive and collaborative causal structure modeling process needed by causal inference analysts (see 3.3.1, **T1**). Additionally, we implement automatic variable type identification for subsequent analysis, and image download features that support easy sharing and communication with subject matter experts **(T1)**.

**Initializing the DAG Module.** The DAG module can be initialized in a variety of ways, the simplest of which is using DAG() to create an empty canvas without first obtaining an existing data set **(T2)**. Additionally, the module also accepts the following arguments: attributes, graph, data, and nx_graph, allowing users to flexibly create and edit DAGs based on the data input available. These different input formats are included to support collaboration between users, allowing analysts to quickly load causal graphs that have been created beforehand **(T1)**. Some input formats also help integrate the DAG module with existing Python packages **(T8)**. For instance, the Causalnex package outputs causal structures in the networkx data format, which can be directly passed into the module using the nx_graph argument.

**Creating and Editing DAGs.** When creating or editing a DAG, users can add nodes by clicking on the variable name from the list on the left. If the module is initialized with no variables, or if users wish to capture the relationship of additional factors, new nodes can be added to the canvas using the _Add Node_ button (Fig. 4 3). This allows users to quickly and flexibly capture domain knowledge about causal relationships without being restricted by any existing data set **(T2)**.
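As a rough illustration of the initialization options described above, the sketch below assumes the module can be imported directly from the package; only the argument names (attributes, data, nx_graph) are taken from the text, while `df` and `g` stand for a hypothetical pandas DataFrame and networkx graph.

```python
from causalvis import DAG  # import path assumed

# Start from an empty canvas and model domain knowledge before any data exists (T2).
dag = DAG()

# Or initialize from a list of attribute names or a pandas DataFrame.
dag = DAG(attributes=list(df.columns))
dag = DAG(data=df)

# Or load a structure learned by a causal discovery package via networkx (T8).
dag = DAG(nx_graph=g)
```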
Users can also toggle between the _move_ and _edit links_ buttons to either reposition the layout of the nodes or edit the links in the DAG (Fig. 4 2). By supporting direct interaction with the DAG, users can iterate rapidly through different hypotheses **(T6)**, and subject matter experts can collaborate on modeling the causal structure even without technical programming know-how **(T1)**.

**Identifying Variable Types.** Each variable in the list has a context menu that has the following options: _Set as Treatment, Set as Outcome, Edit Tags_, and _Delete from Graph_. The _Set as Treatment_ and _Set as Outcome_ options will designate a particular variable as either the treatment or outcome of interest and will change the color of the corresponding node in the DAG (Fig. 4 4). There can only be one treatment and one outcome in each DAG, and a single variable cannot be both. Furthermore, when both treatment and outcome have been selected, all other nodes will be automatically colored to highlight the different variable types: _confounders_, _colliders_, _mediators_, and _prognostics_. Variable types are identified by recursively traversing the target-source relationships in the node-link structure (Fig. 2), and are dynamically updated whenever the user edits the DAG or the treatment and outcome variables. These highlights can help users identify the variables that must be adjusted for in subsequent analyses **(T3)**.

**Saving and Sharing DAGs.** Once a DAG has been created, users can share the causal model by downloading the DAG as a _.png_ image using the _download image_ button **(T1)** (Fig. 4 5). Alternatively, the node-link structure can also be shared as a _.json_ file using the _download json_ button. This file can be customized to include information about the different variable types identified.

**Accessing DAGs and Variable Types in Python.** Data analysts who wish to use the outputs of the DAG module in subsequent analyses can also quickly access the relevant data variables in the Jupyter notebook without downloading the _.json_ file **(T8)**. The causal structure created can be obtained using DAG, and the different variable types can be accessed similarly with .confounds, .colliders, .mediators, and .prognostics. These variable types can then be used in subsequent analysis to determine the statistical adjustments that need to be made **(T3)**.

Figure 4. The DAG module initialized with a networkx graph. 1) The graph is visualized on load. 2) Toggle buttons can be used to switch between layout editing and link editing. 3) The _Add Node_ button can be used to add custom nodes to the DAG. 4) The context menu of each variable can be used to set treatments and outcomes, edit tags, and delete variables from the visualization. 5) The DAG can be downloaded and saved as an image or .json file.

#### 4.2.1. Usage Scenario

We start by constructing the DAG model of the Student Performance data set to visualize the expected causal relationships between different attributes. To do so, we can load the data set into a DataFrame using the Python pandas package, and pass this to the DAG module to manually create a DAG from scratch. More effectively, however, we can also leverage the Causalnex package to automatically create an initial 'discovery' of what the causal structure should be. We then load the Causalnex graph into the DAG module (Fig. 4) to delete spurious nodes and links, add connections based on domain knowledge, and identify variable relationships.
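For intuition, one way the traversal described above could be carried out outside of Causalvis is sketched below with networkx. The rules are deliberately simplified (e.g., every common ancestor is labeled a confounder) and are not the package's actual implementation.

```python
import networkx as nx

def classify_variables(g, treatment, outcome):
    """Label nodes relative to the treatment -> outcome pair (simplified rules)."""
    anc_t, anc_y = nx.ancestors(g, treatment), nx.ancestors(g, outcome)
    des_t, des_y = nx.descendants(g, treatment), nx.descendants(g, outcome)
    confounders = anc_t & anc_y                         # affect both treatment and outcome
    mediators = des_t & anc_y                           # lie on a directed path treatment -> outcome
    colliders = des_t & des_y                           # common effects of treatment and outcome
    prognostics = anc_y - anc_t - des_t - {treatment}   # affect the outcome only
    return confounders, mediators, colliders, prognostics

# For the usage scenario, with g the revised student-performance DAG:
# classify_variables(g, "absences", "G1")
```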
After iteratively editing the graph, we obtain a revised version of the DAG that represents our hypothesis of the causal structures that affect student absence and exam grades. We then set _absences_ as the treatment variable, and _G1_ as the outcome variable. The DAG automatically updates to highlight the other relevant variable types (Fig. 5). From this, we can identify that there are six confounding variables _(Pstatus, famsup, health, Medu, internet, and failures)_ and four prognostic variables _(paid, studytime, schoolsup, higher)_ that should be adjusted for in subsequent steps of the causal inference analysis.

### Cohort Evaluator

The CohortEvaluator module (Fig. 6) is designed to help users validate that their selected cohorts satisfy positivity assumptions, and to refine the cohorts when necessary (see 3.3.2). When conducting causal inference analysis, the covariate distributions of the treatment and control groups should be as similar as possible to reduce the effect of biasing covariates. The propensity score plot and absolute Standardized Mean Difference plot (aSMD plot, also called Love plot) included in this module have been widely used in causal inference (Krishan et al., 2017), and are also part of causal inference toolkits like causallib. A large difference between the treatment and control means would indicate to users that the groups are unbalanced for this particular covariate (**T4**).

**Supporting Different Causal Inference Methods.** From our interviews with causal inference experts, we found that data analysts would approach cohort construction using different methods, such as matching or IPW (see 3.3.2). When using methods such as matching, data analysts would typically take the observational data (unadjusted cohort) and select samples to form balanced treatment and control groups (adjusted cohort). When using IPW, however, only the unadjusted cohort is used, and each sample in the unadjusted cohort is weighted by the inverse of its propensity score to create a pseudo-population where the treatment and control groups are balanced. To account for different methods, the CohortEvaluator module can be initialized with the unadjustedCohort argument, and the optional adjustedCohort argument (**T6**). If only the unadjustedCohort is provided, the adjusted aSMD values in the aSMD plot will be calculated using the inverse propensity scores to weight the unadjusted aSMD values.

#### 4.3.1. Usage Scenario

In the prior step, we used the DAG module to model the causal structure of the data and identify the relevant covariates (confounders and prognostics). We now adjust for these covariates through matching to obtain a cohort to use for treatment effect estimation. In this example, covariate matching results in an initial cohort of 381 students - 192 in the treatment group (frequent absences), and 189 in the control group (few or no absences). We then pass this selected cohort to the CohortEvaluator module to ensure that the cohort has comparable treatment and control groups with respect to the identified covariates. Since we are using the matching approach, we pass the original data set to the CohortEvaluator module using the unadjustedCohort argument, while the matched cohort is provided as the adjustedCohort argument. From the CohortEvaluator module, we see that the cohorts are not fully comparable for a causal analysis (Fig. 6). The propensity scores for the treatment and control groups have different distributions on the left tail, which suggests that there is a positivity violation.
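As an aside, the sketch below shows one way the quantities behind these two plots (propensity scores and aSMDs) might be computed before being handed to the module. It assumes scikit-learn for the propensity model and that the categorical covariates have already been encoded numerically; it is not part of the Causalvis API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

covariates = ["Pstatus", "famsup", "health", "Medu", "internet", "failures",
              "paid", "studytime", "schoolsup", "higher"]
X, t = df[covariates], df["absences"]  # assumes covariates are already numeric

# Propensity score: estimated probability of treatment given the covariates.
propensity = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

def asmd(x, t):
    """Absolute standardized mean difference of a single covariate."""
    x1, x0 = x[t == 1], x[t == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return abs(x1.mean() - x0.mean()) / pooled_sd

unadjusted_asmd = {c: asmd(df[c], t) for c in covariates}
```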
The standardized mean difference plot also indicates that the adjusted aSMD of the _internet_=yes, _Medu_ and _health_ variables are greater than 0.1. In fact, the adjusted aSMD for _internet_=yes is greater than its unadjusted aSMD, which suggests that the matched cohort increased the difference between the treatment and control groups for the _internet_=yes variable. Taken together, the visualizations in the CohortEvaluator module suggest that the treatment and control groups for this cohort are not sufficiently similar (i.e. do not overlap), which may lead to biases in treatment effect estimation. We return to the matching process to use more stringent matching parameters. After this refinement, we obtain a smaller cohort with 200 samples - 95 in the treatment group (frequent absences), and 105 in the control group (few or no absences). This cohort is passed to the CohortEvaluator module (Fig. 7). We can see that the propensity score distributions for the treatment and control groups are now more similar. This is further supported by the standardized mean difference plot, which indicates that the adjusted aSMDs for all variables are below 0.1. This is a well-balanced cohort with no positivity violations, which we can now use in subsequent treatment effect estimation.

Figure 6. The CohortEvaluator module initialized with both adjusted and unadjusted cohort arguments. 1) The propensity score distribution of the treatment and control groups. 2) The absolute Standardized Mean Difference (aSMD) plot visualizing the aSMD of each covariate. 3) Toggle buttons can be used to view detailed distributions of each covariate. 4) The detailed distributions view automatically shows the distributions of covariates with an adjusted aSMD above 0.1. 5) The _Show/Hide covariates_ button can be used to customize which covariate distributions are shown, including already well-balanced covariates.

Figure 7. The cohort of selected students after refinement. Compared to Figure 6, we can see from the aSMD plot that each covariate is much better balanced, and the adjusted aSMDs of all covariates are now below 0.1 (black points).

### Treatment Effect Explorer

The goal of causal inference is often to estimate the average treatment effect (ATE) of a particular treatment on the outcome of interest. While the ATE is calculated for the entire cohort, it can be useful to compare the effect between different subgroups as well. If the average effect differs between subgroups (e.g., for males and females), we say that there is a heterogeneous treatment effect. Identifying heterogeneity can result in more precise conclusions **(T5)**. However, to the best of our knowledge, there are no visualization packages that have been developed specifically for treatment effect exploration. From our formative interviews, we found that causal inference analysts currently make use of general purpose visualization authoring tools such as matplotlib9 or seaborn10. These visualizations are often static and created _ad hoc_ for each study, making the process too time-consuming for in-depth subgroup exploration. The TreatmentEffectExplorer module (Fig. 8) is thus designed to visualize individual treatment effects conditioned on different variables, with the goal of helping users identify when trends in treatment effects change across different subgroups (see 3.3.3). Note that this module can only be used with certain causal inference methods, such as matching, where it is possible to calculate individual treatment effects **(T6)**.
It is unsuitable for methods such as IPW where only the ATE for the entire cohort can be obtained.

Footnote 10: [https://seaborn.pydata.org/](https://seaborn.pydata.org/)

**Creating and Exploring Subgroups.** On load, the TreatmentEffectExplorer module defaults to a single visualization of the distribution of individual treatment effects for the entire cohort (Fig. 8). The visualization uses a raincloud plot (Bauer et al., 2017), which has been found to convey statistical summaries about the data distribution with minimal distortion or misinformation. To the left of this visualization is a list of variables available in the data set (Fig. 8). Users can click on a particular variable name to facet the visualization by this variable. Up to three variables can be selected this way. The first selected variable will be visualized along the \(x\) axis, while subsequent variables will be used to create small multiples in a matrix layout. If the first variable is binary, the cohort is automatically divided into two groups based on that variable, and raincloud plots will be used to visualize the distribution for each group separately (Fig. 8). If the first variable is a continuous variable, all individual treatment effect values will be visualized using a scatterplot. For the subsequent variables used to create small multiples of the visualization, if the selected variable is binary, the cohort will simply be divided based on that variable. However, if a faceting variable is continuous, the TreatmentEffectExplorer module first calculates the variable mean, which is then used to divide the cohort into two sub-populations. Note that the variable mean is only a heuristic used to perform a default grouping of the cohort, and users can customize this threshold value using a corresponding slider bar (Fig. 8). Additionally, we also visualize the distribution of the continuous variable in a beeswarm plot directly next to the slider to help users identify natural sub-populations in the data where a more appropriate division should be made.

**Identifying Heterogeneous Treatment Effects.** All faceted plots in the TreatmentEffectExplorer module share the same \(x\) and \(y\) axes ranges to support easy comparison across the different visualizations. This would also help users identify when the treatment effect for one sub-population is significantly different from others **(T5)**. We also add a dashed line to the visualizations when the domain of the \(y\) axis includes 0 in order to highlight when sub-populations have opposite treatment effects, which can be an indication of Simpson's paradox.

#### 4.4.1. Usage Scenario

In the previous step, we selected a cohort from the data set consisting of a treatment and a control group. Since we used the matching method, we can calculate individual treatment effects for each treatment-control pair in the cohort as the difference in _G1_ grade if a student had frequent absences in a year (_absences_ = 1) compared to if they had few or no absences (_absences_ = 0). This then allows us to explore trends in individual treatment effects across the selected cohort. The individual treatment effect values are passed to the TreatmentEffectExplorer module to explore subgroups and identify heterogeneous treatment effects. Using the TreatmentEffectExplorer module, we choose to group the cohort based on internet access (_internet_=yes) and student health (_health_) (see Fig. 8).
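In the notebook, this step might look roughly as follows. The format of the matching output (`pairs`) and the module's argument name are assumptions made for illustration, not documented parts of the package.

```python
import pandas as pd
from causalvis import TreatmentEffectExplorer  # import path assumed

# `pairs` is assumed to be a list of (treated_index, control_index) tuples
# produced by the matching step.
records = []
for i, j in pairs:
    records.append({
        "ITE": df.loc[i, "G1"] - df.loc[j, "G1"],  # effect of frequent absences on G1
        "internet": df.loc[i, "internet"],
        "health": df.loc[i, "health"],
    })

ite_df = pd.DataFrame(records)

# Pass the individual treatment effects to the module for subgroup exploration.
TreatmentEffectExplorer(data=ite_df)
```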
The visualizations are now faceted by student health - the students with poorer health (_health_ \(<\) 3.5) are visualized in the facet on the left, and students with better health (_health_ \(>\) 3.5) are visualized in the facet on the right. Across the four subgroups, we can see immediately that students with poor health who had no internet access at home had a clear decrease in grades caused by frequent absences. Comparatively, this effect of absences was less pronounced for the other subgroups, which all have distributions around 0. This finding may be interesting for parents and educators, and prompt follow-up studies into how frequent absences affect the performance of students with health conditions. ### Version History Causal inference is a highly iterative process where data analysts often have to test different causal structures and construct cohorts that may result in different estimations of ATE. Keeping track of DAGs and cohorts is thus a crucial task to help users recall their analytic provenance **(T7)**. However, to the best of our knowledge, there are no provenance tracking tools that have been developed for causal inference. For this purpose, the VersionHistory module is designed to store and visualize different DAGs and cohorts such that users can view their causal inference analytic history, as well as restore previous versions when necessary (Fig. 9). **Tracking Provenance with the VersionHistory module.** The VersionHistory module is initialized simply using VersionHistory() without any arguments. This creates an empty icicle plot with no DAGs or cohorts. As data analysts work through the causal inference process, they can track their analytic provenance by calling the .addVersion((DAG, cohort, ATE)) function **(T7)**. This function accepts a tuple that includes information about the DAG, the cohort selected, and the estimated ATE for this particular iteration of the causal inference workflow **(T6)**. The VersionHistory module visualizes DAG and cohort versions using an icicle plot (Fig. 9). Below the icicle plot is a dot plot that visualizes the ATE corresponding to each cohort. Hovering over a dot will reveal a tool-tip with the DAG and cohort version (Fig. 9). The visualizations in the VersionHistory module automatically update each time a new version is added, and the user need not run the component again. **Saving and Sharing Versions.** At the end of the analysis, all versions can be downloaded using the .saveVersions() function, which saves all DAGs and cohorts as a _json_ file. This allows users to easily restore an earlier version of their analysis, as well as share their analytic provenance with collaborators when needed **(T1)**. ## 5. Expert Evaluation We evaluate the Causalvis package in a qualitative study with eleven experts to obtain feedback for the visualization modules, and validate that the visualizations support the causal inference analysis tasks identified (Section 3.4). In our future work, we plan to also evaluate Causalvis through a more in-depth deployment study. Working with causal inference experts, we will look at how Causalvis would be used in real-world analyses within specific application domains. This long-term deployment will help us better understand how Causalvis fits into current user practices. For the scope of this paper, we rely on participant expertise to evaluate the usability of the Causalvis modules and surface potential issues before real-world deployment.
In the following sections, we describe the study design and results. ### Participants We recruited eleven causal inference experts (P1-11, 7 males/4 females) to participate in the evaluation study, five of whom were also participants in the formative interviews (see Section 3). Before each study session, participants were asked to self-report on a scale of 1 (no experience) to 5 (I consider myself an expert) their proficiency in Python (\(\mu\)=4, \(\sigma\)=0.775), JupyterLab (\(\mu\)=3.45, \(\sigma\)=1.21), causal inference (\(\mu\)=3.64, \(\sigma\)=1.12), creating/using DAGs (\(\mu\)=3.09, \(\sigma\)=1.04), and data visualization (\(\mu\)=3.36, \(\sigma\)=0.674). All participants are data scientists or performed data analysis as part of their job. Their experience working with causal inference projects ranged from 3 months to 4 years, with the exception of one participant who has not formally worked on causal inference projects, but who has studied it in his own time and is planning to apply it to subsequent projects. Of the eleven participants, seven are researchers who have built tools to support causal inference analysis or worked on causal inference research projects in healthcare fields. Two participants are consultants who work with clients in the government to apply causal inference to policy-related decision making. The other two participants are doctoral students who use causal inference as part of their doctoral studies. Figure 8. The TreatmentEffectExplorer module. 1) On load, the module shows the distribution of the individual treatment effects of the entire cohort. 2) Users can subgroup the cohort by up to three variables. 3) The visualization will be faceted if two or more variables are selected. 4) If a non-binary variable is selected, the threshold for sub-grouping can be adjusted with a slider. ### Tasks We prepared 4 notebooks in JupyterLab demonstrating each of the 4 modules in the Causalvis package. In the notebooks, we presented a condensed version of the usage scenario included in this paper (see 4.1) using the same UCI Student Performance data set (Bordes and McAllester, 2017). Each notebook included short explanations and examples of how the module would be used in analysis. Where necessary, we also showed how the module would be used in conjunction with existing Python libraries **(T8)**. During the study, we asked participants to work through the notebooks and complete guided tasks such as initializing the DAG with a custom list of variables or brushing over the propensity plot to select samples in the CohortEvaluator module. These tasks were designed to familiarize participants with the module features. Once participants completed a notebook, we asked for feedback about the module and improvements they would make. The sessions were conducted remotely over a video conferencing service, Webex. Each session took one to two hours. ### Results Overall, participants gave positive feedback for the Causalvis package. In the following sections, we highlight when participants found Causalvis to effectively address their analysis tasks (T1-8), and discuss additional feedback and suggestions made for each module. #### 5.3.1. Dag **The DAG module supported communication and collaboration (T1)** Participants found that the option to save and share the DAGs as images was helpful for communicating the causal relationships in the data. 
As P4 said, when working on publications of her causal inference work, _"having a good visualization was always pretty important."_ Similarly, P7 also commented that the visual representation of causal relationships in the DAG was an important means of communicating with audiences who were not data scientists. Though data scientists would need to know the details about the causal structure and the causal inference process, an external collaborator or domain expert _"only wants to know the relationship between the covariates and see the outcome."_ In addition to the effectiveness of the visualization itself, participants also felt that the DAG module was an improvement on existing tools. P9, for instance, described working on publications where he had to manually create DAGs in Powerpoint and Photoshop, concluding that _"this would have been much nicer."_ **Automatically identifying different variable types in the DAG module was helpful for participants (T3).** During the study, many participants found it useful that the DAG module would dynamically highlight the important variable types once the treatment and outcomes were selected. As P1 described, _"it's nice that it will immediately color everything so that it's also visibly clear which nodes have which context."_ Overall, participants felt that this feature was helpful for users who were less familiar with causal inference and _"didn't know the confounders, colliders, and mediators"_ (P9), as well as for data analysts who might discover _"something we hadn't realized is a confounder or a mediator somewhere along the graph"_ (P4). Furthermore, participants also liked the use of color encodings in the DAG to highlight each variable type. By providing visual feedback that made the variable types explicit and quickly identifiable, the DAG module was able to better support the task of selecting the variables that need to be adjusted for. As P10 described, it _"takes some time to visually parse a graph"_, so _"this automatic annotation is very useful"_. However, while many participants appreciated this feature, some also cautioned against overreliance on automatic variable identification. P9 referred to this as _"a double-edged sword"_, and expressed concerns that variables could be wrongly highlighted because someone _"could create a DAG where it labels them incorrectly."_ This can then introduce errors into the subsequent causal inference analysis if the wrong variables are adjusted for. Figure 9. The VersionHistory module. 1) An icicle plot visualizing three versions of the DAG and the different cohorts created to test the causal model in each DAG version. 2) The estimated ATE for each cohort. We see that the estimated values are mostly clustered around the range \(-0.25\) to \(-0.10\). However, there are two outlier cohorts with very large and very small ATEs. **Participants liked being able to quickly and interactively iterate on the DAG module.** Compared to the current techniques or tools that are used to create DAGs, many participants commented that they preferred the interactive features provided by the DAG module. P1, for instance, said that _"it's nice that it gives you the option to edit it manually, and you don't have to write it all in some yaml or something like that."_ Similarly, P2 had prior experience learning about DAGs from R, but commented that the visualizations had not been interactive, which _"would've been very useful."_ During the discussion, P9 elaborated further on his preference for interactivity.
Previously, he had created DAGs using tools such as Powerpoint and Photoshop, but felt that _"iterating on something is what makes it really annoying."_ In comparison, when using the Causalvis DAG module, he appreciated _"being able to quickly iterate and add in and remove nodes and relationships,"_ which was made possible through the interactive interface. **Participants wanted more annotation features in the DAG module.** A frequent comment from participants was the suggestion to add annotations to edges between nodes. For data analysts, this feature would help them gain a better understanding of the data set. As P1 said, it would be helpful to add the correlation coefficient to the edges because _"it's not just the structure of the causal relationships, but also some numerical estimation of how things relate between one another."_ This information would help her identify whether a confounder is highly correlated with both treatment and outcome, and would thus need to be prioritized during adjustment. In contrast, other participants wanted to add annotations to explain relationships or communicate with collaborators. For example, P9 wanted to _"add information to describe these pathways beyond just an arrow."_ Similarly, P11 would typically meet with collaborators only every one or two weeks, and wanted to highlight questionable links and indicate parts of the DAG that need to be _"reviewed by the subject matter expert."_ Ultimately, for data scientists who have to work with domain experts frequently over the course of the causal inference process, annotations can be a helpful means of supporting collaboration. This is best summarized by P10, who said that _"[annotations] are not about causal inference. These are just about communication. But in a client setting, communication is important, so that, I think, is useful."_ #### 5.3.2. Cohort Evaluator **CohortEvaluator module provided additional insight into the data distribution in cases of covariate balance and positivity violations (T4).** Many participants in our study appreciated that they could see the distributions of each covariate visualized separately in the details view (Fig. 6), and felt that it was a useful addition to the propensity score and the aSMD plots that are more commonly used in causal inference. Participants described the benefit of the additional plot in the following ways. As P1 explained, _"when you have, for example, the aSMD plot, you only see the average. I think it's better to see the entire distribution, it gives you much more information."_ P2 further elaborated on his prior experiences, commenting that _"Whenever I've done propensity score matching, if I want to look at exploratory plots, I would just look at [the covariates] one by one."_ In comparison, when using the CohortEvaluator module, P2 said that _"not having to write individual code to obtain each of these, I think that's nice."_ **Participants wanted feedback from the CohortEvaluator module after selecting instances from the propensity score plot.** During the expert evaluation study, participants found it useful that they could brush over the propensity score plot to obtain the samples that were not well balanced. P5, for example, said that _"I really like scrolling over the [propensity score plot] and having a look at which samples they were. It is pretty amazing."_ However, many participants wanted more visual feedback from the module after making the selection, such as visualizing the covariate values of the selected samples separately.
P5 said that _"if we could have some visualization about the selection that is unbalanced, I think it would be really interesting. A straight, fast visualization about why your data isn't being sampled in those cases."_ Similarly, P3 wanted to compare the covariates of the selected samples with the entire cohort because _"it will be quite interesting to see the contrast in how the distribution looks."_ In addition to visualization feedback, some experts in the study also wanted the Jupyter notebook to provide a more detailed explanation about the visualizations and how to interact with them. For example, P10 said that even after brushing over the propensity score plot, _"it's still not clear what these people are"_ and what covariates differentiate them from the rest of the cohort. In such cases, he wanted the notebook to provide examples about what analysis steps to take next - _"It doesn't have to change the design, it doesn't change your package. Just add some use cases in here, some action you can take."_ P11 made a similar suggestion, saying that the notebook examples should include _"more guidance on what packages you recommend"_ to characterize and exclude samples after they have been selected. #### 5.3.3. Treatment Effect Explorer **The interactive visualizations in the TreatmentEffectExplorer module reduced the effort needed for participants to iteratively explore and compare subgroups in the data (T5).** Many participants in our study found the visualizations to be an improvement from their current approaches. P1, for instance, said that _"it's nice to visualize everything and not just look at numbers."_ Similarly, P8 _"really liked the three-variable visualization [because] it's a really hard thing to do."_ Participants also described the interactive selection of different variables as being _"very intuitive"_ (P4) and _"great fun"_ (P6). The interactivity was helpful to explore different subgroups, and _"get these different comparisons between the average treatment effect"_ (P4). It was also particularly useful for participants who often worked with domain experts to identify heterogeneous subgroups in the data. During exploration, domain experts may ask to compare different variables or stratify subgroups by different thresholds. As P6 described, with current approaches there is often _"this endless repetition of change"_. In comparison, the TreatmentEffectExplorer was _"easy to work with"_. **The TreatmentEffectExplorer module supported communication and storytelling about causal inference.** In addition to analysis tasks, a number of our participants also mentioned that _"it's the job of the data scientist to come up with the story"_ (P11). In particular, consultants need to communicate and make sense of results for clients, customers, and collaborators. However, this can be challenging with causal inference. As P10 described, _"most data scientists know that they have to do some storytelling, but in the causal inference setting, I think because the idea is so new, most data scientists don't know how to tell a story with causality because the story is harder to tell."_ To this end, participants commented that the visualizations in the TreatmentEffectExplorer module were well suited for this purpose. Compared to downloading the data and creating visualizations manually, P11 liked that _"this tool can have some graphs ready made to support our story... 
it saves us a lot of time."_ **More visualization customization and guidance should be provided in the TreatmentEffectExplorer.** Although the visualizations and interactions implemented in this module were well received by our participants, not all participants were familiar with the raincloud plot. P5 found that _"it took me a while to get into the visualization"_ and P3 commented that _"it took less than a minute, but it wasn't immediate."_ To better interpret the visualizations, many participants wanted the option to customize the plots based on their familiarity and expertise. P8, for example, suggested putting _"some of the data that you visualize, such as the box plot, on a toggle that you can turn off and on"_, while P4 wanted to _"enable or disable the different [violin plots] because usually I don't plot these."_ Ultimately, participants wanted more instruction on how to interpret the visualizations in the TreatmentEffectExplorer module. P4 suggested implementing _"an instructional manual beforehand so you know what you're looking at"_ in the visualization, while P3 wanted _"something to guide you to what you're looking for"_. #### 5.3.4. Version History. **The VersionHistory module was helpful for tracking provenance but should include additional features (T7).** Many participants mentioned that keeping track of their analysis process was something that they want or need to incorporate into their workflow. Referring to their causal inference studies, P2 said that _"I absolutely needed [such a tool]. I absolutely need to record how many patients are in my cohort"_, while P8 commented that _"[for] what I'm doing right now, [version control] would be really helpful because I am running a lot of data against a lot of DAGs."_ Visualizing the different cohorts and associated ATEs also has the added benefit of helping analysts evaluate the robustness of their estimated ATE when using a cross-validation approach. As P1 explained, _"you would like to see that the average treatment effect is mostly stable, that it resides in some range that is not very large. Then you can say that we're really robust, and there is real generalization in the model instead of overfitting to whatever data we get."_ However, some participants also suggested that the VersionHistory module needed to keep track of additional information to be completely useful. In cases where machine learning models are used in causal inference, the module should also record _"machine learning parameters"_ (P11) and _"the specifications of the model"_ (P6). **Participants wanted the VersionHistory module to help them compare the DAGs.** In addition to keeping track of the analysis process, multiple participants mentioned that they wanted the VersionHistory module to help them compare between different versions of the DAG. Participants suggested that a visualization would reduce the effort required to make such a comparison, providing an _"easy way to understand what is DAG 1 and what is DAG 2, and how do they differ from one another"_ (P1). Similarly, P2 said that _"having these comparisons, making these comparisons easier to do would be very helpful."_ More specifically, participants were interested in using visual comparison to identify unique structures in the causal graphs.
P6, for instance, said that he would like to see _"which edges appear in one and not the other"_, while P8 wanted the module to _"emphasize to me the local structure, what the neighbors of the variables actually are, so I can be more sure that it is not a collider in the path form."_ P10 went a step further, and suggested keeping track of the version history directly within the DAG module itself because _"while I do the editing, if I have some way to look at the history, that will help."_ He explained that being able to see previous versions would be a visual reminder of earlier discussions with collaborators, which can in turn guide decisions about the edits needed. Ultimately, there was strong consensus among study participants that being able to compare and contrast different versions of the DAG would enhance the VersionHistory module. This was best expressed by P3, who said that _"presenting would be a first benefit, but comparing would be even better."_ ## 6. Discussion In this section, we reflect on how Causalvis contributes to the existing causal inference workflow, and discuss implications for future work. **Supporting rapid iterative hypothesis testing through interactive visualizations.** The process of causal inference requires data analysts to iterate through each step of the workflow multiple times to explore different causal structures, refine cohorts, and explore heterogeneous treatment effects in different subgroups. Participants in our evaluation study appreciated that the interactivity of the DAG and TreatmentEffectExplorer modules allowed for more rapid iterations compared to current tools where static visualizations must be edited programmatically in a manual and time-consuming process. Causalvis thus better supported tasks such as collaboratively exploring different hypotheses of causal structure **(T1)** or comparing heterogeneous treatment effects across cohort subgroups **(T5)**. However, participants also felt that more could be done for the CohortEvaluator module. Although users could brush over the propensity score plot to select samples that are imbalanced between the treatment and control groups **(T4)**, the visualizations in the module did not update in response to this interaction. Users had to manually exclude the selected samples from the data set themselves and run the module again in order to see the visualization updates. Many participants in the study found that this process was unintuitive, and expected the visualizations to dynamically update with indications of how the selected samples differed from the entire cohort, which can then provide the guidance needed to adjust their selection and refine the cohort. Future work developing visualizations for causal inference should thus better consider the need for rapid iteration throughout the workflow, with particular emphasis on cohort construction and refinement tasks. **Explaining and communicating causal inference to domain experts and collaborators.** In causal inference, data scientists do not merely complete analysis tasks, they frequently also have to communicate with domain experts, publish results, and make sense of the outcomes for clients and collaborators. To this end, many participants in our evaluation study liked that the visualizations in Causalvis were effective for both analysis and communication. Participants found the saving and sharing features in the DAG module to be an improvement over existing tools for visualizing causal structure models.
They also liked that the TreatmentEffectExplorer module helped them rapidly identify heterogeneous treatment effects as part of the analysis, while also explaining results to collaborators and _"tell[ing] a story with causality"_ (P10). Taken together, this highlighted the need for causal inference visualizations to support both analysis and communication. Our evaluation study also revealed the potential use of annotations to keep track of discussions with domain experts. From our formative and evaluation studies, we found that collaborations during the causal inference workflow can be inconsistent. Data scientists may only meet domain experts every one or two weeks, during which time they must quickly validate changes and make the necessary refinements. This is particularly important when modeling causal structures, and many participants suggested adding annotation features to the DAG to better support this two-way collaboration. In future work, researchers can further explore how annotations might be incorporated into the various visualization modules to better support communication and collaboration needs during causal inference. Existing studies such as (Song et al., 2019) and (Beng et al., 2020) can also inform the design of these annotation features. **Evaluating estimation robustness through sensitivity analysis.** A characteristic of causal inference is the lack of ground-truth data that can be used to evaluate the accuracy of estimated treatment effects. Instead, causal inference analysts would often perform sensitivity analysis (Song et al., 2019) to check for robustness and generalizability of various analytical choices. Analysts may thus iterate through the causal inference process multiple times, using different combinations of covariates or testing different cohorts in each iteration (Song et al., 2019). As mentioned by P1 during the evaluation study, the VersionHistory module potentially supports a sensitivity analysis-like evaluation (see 5.3.4). For example, if the ATE remains stable across different covariates and cohorts (see Fig. 9), analysts can then be more confident that their estimate is likely to be robust and generalizable. This combinatorial testing of reasonable analysis decisions shares many similarities with multiverse analysis (Song et al., 2019; Wang et al., 2020), where an automated, exhaustive search of all combinations of analytic decisions is made to ensure the robustness of results. Although Causalvis is designed to be highly interactive and does not naturally lend itself to exhaustive combinatorial searches, analysts can still specify and iterate through the most plausible analytic scenarios. As such, many visualization strategies that have been developed to evaluate and review the results of multiverse analysis may be similarly effective for visualizing the subset of causal inference alternatives explored. Most immediately, for example, the VersionHistory module can act as an equivalent of a forest plot for contrasting ATEs under different design choices. In future work, we hope to turn to existing studies such as (Song et al., 2019) and (Wang et al., 2020) to inform how the module might be extended to help causal inference analysts perform sensitivity analysis and evaluate for estimation robustness. ## 7. Lessons Learned From Design Study Reflecting on the process of conducting this design study with causal inference experts, we identified some challenges we encountered and the methodological lessons learned.
**Designing visualization packages to complement existing workflows and tasks (T6).** One key decision we made during the design study was to identify how Causalvis modules should fit into current workflows and complement existing analytic tools instead of interrupting or replacing them. Doing so required us to understand current causal inference processes and packages. Working closely with causal inference experts during formative interviews, we refined our understanding of causal inference into the three-step workflow presented in this paper (see Fig. 3). We also identified where certain analytic tasks are not well supported by current tools, and made note of the data formats and analysis outputs produced at each step. This formative study helped guide the development of the Causalvis modules. For each module, we focused on developing visualizations for tasks that are not well supported by existing tools, while also integrating with the external packages or algorithms analysts have developed (see Fig. 10). For example, with the CohortEvaluator module, we learned that analysts computed propensity scores using a variety of different algorithms. We thus decided not to re-implement propensity score calculation in our module, and instead accepted calculated values as input in order to complement the diversity of existing user approaches. Our paper demonstrates how a visualization package for causal inference can be designed to _complement existing analytic workflows and tools instead of interrupting or replacing them_. In domains beyond causal inference, there may exist data analysis tasks that require multiple steps and similarly complex workflows. The workflow discovery process we adopted in this paper may thus be applicable to these other domains as well. Future research can also investigate comprehensive criteria for making design decisions such that visualization tools optimally complement existing analytic approaches and user tasks. Figure 10. A summary of the causal inference process. Each step in the three-step workflow (1-3) is divided into more granular analysis activities. The analysis activities supported by Causalvis are highlighted in green. Gray boxes indicate other activities supported by existing packages or algorithms. Here, we provide a subset of examples, not an exhaustive list of existing tools. For clarity, we omit arrows indicating iteration between steps. **Integrating into computational environments (T8).** Ideating through early designs of Causalvis, we initially considered implementing the visualization modules in an independent visual analytics system outside the interactive computing environment. However, we ultimately decided against this approach because causal inference experts in our formative study mentioned that they typically worked in the JupyterLab computational environment. Switching between systems was likely to interrupt users' workflows (Becker et al., 2016; Chen et al., 2017), and introducing a standalone application outside the computing environment can be disruptive during highly iterative analyses. We thus decided to develop Causalvis as visualization modules for use within the JupyterLab computational environment. To do so, we had to address challenges in both the way the modules were designed, as well as how they were evaluated.
During module design, we incorporated JupyterLab into the earliest prototypes and wireframes of each module, using screenshots of pseudo-code to demonstrate how the visualization modules would fit into the computational environment. This helped surface more detailed requirements about the inputs and outputs of each module. For example, after viewing the initial wireframes, an expert in the formative study requested that the DAG module also support networkx graphs as input so that he could easily pass the outputs of the Causalnex package directly into Causalvis. Similarly, in later evaluation, we created notebooks to demonstrate how the visualizations would be used in JupyterLab, and asked participants to complete tasks that included directly modifying the notebook content (see 5.2). Through this study design, we identified useful feedback for how each module can be better integrated into JupyterLab. For example, participants asked questions about data types (_"Why do we convert to a dictionary?"_, _P11_) and made suggestions about expected outputs (_"I would want the output here to be indices in the data set rather than the data itself"_, _P3_). By conducting the study within the intended usage environment, we were thus able to evaluate how well each module integrated with the computational environment itself, and how the API might be improved. **Working with experts in diverse domains and roles.** In this study, we were fortunate to receive feedback from experts through online meetings on three occasions (as detailed in Section 3.2). The first session aimed to identify users' workflows for conducting causal inference analysis, the second session was for brainstorming and confirming functionalities based on prototypes and sketches, and the final evaluation aimed to assess the usability of Causalvis. One key lesson we learned was the importance of recruiting a diverse group of participants and understanding user workflows from different perspectives. All of our participants were experts in causal inference, but they had different domains and roles for day-to-day data analysis activities. Learning from a diverse group of participants allowed us to design for a broad range of use cases. For example, consultants valued using visualizations to explain the process and "tell a story" about causal inference to collaborators, while healthcare data analysts were most interested in comparing treatment effects across different subgroups. We conjecture that we would have had only partial solutions if we did not have access to a diverse group of participants, and we learned that it is important to discover applications from diverse users when designing a visualization package for a specific task. ## 8. Conclusion In this paper, we present a design study for Causalvis, a Python package of visualizations to support causal inference. Working closely with the experts over the course of three months, we first characterized the causal inference workflow and identified related analytic tasks. We then adopted an iterative design process to develop four visualization modules to support the tasks of causal structure modeling, cohort construction and refinement, and treatment effect exploration. The results of our evaluation study indicate the importance of designing visualizations to support rapid iteration through each step of the workflow, as well as how causal inference tools should be designed for both analysis and communication.
Finally, we also shared methodological lessons learned from developing componentized visualizations to support a flexible workflow, as well as how visualizations can be designed and evaluated for integration into specific computational environments. In future work, we aim to further explore how visualizations for causal inference can be designed to support the visual comparison of DAGs and how annotation features can be implemented to facilitate communication between data scientists and collaborators. ###### Acknowledgements. We would like to express our gratitude to all participants who generously provided their valuable time and shared insights throughout the study. We also wish to thank the anonymous reviewers for their thoughtful and detailed feedback on this paper, and members of the Georgia Tech Visualization Lab and researchers from IBM Research for their helpful suggestions, thoughtful comments, and constructive feedback throughout the study. This project is supported in part by NSF IIS-1750474 and NSF 2247790.
2308.09227
Three-Dimensional Time Resolved Lagrangian Flow Field Reconstruction Based on Constrained Least Squares and Stable Radial Basis Function
The three-dimensional Time-Resolved Lagrangian Particle Tracking (3D TR-LPT) technique has recently advanced flow diagnostics by providing high spatiotemporal resolution measurements under the Lagrangian framework. To fully exploit its potential, accurate and robust data processing algorithms are needed. These algorithms are responsible for reconstructing particle trajectories, velocities, and differential quantities (e.g., pressure gradients, strain- and rotation-rate tensors, and coherent structures) from raw LPT data. In this paper, we propose a three-dimensional (3D) divergence-free Lagrangian reconstruction method, where three foundation algorithms -- Constrained Least Squares (CLS), stable Radial Basis Function (RBF-QR), and Partition-of-Unity Method (PUM) -- are integrated into one comprehensive reconstruction strategy. Our method, named CLS-RBF PUM, is able to (i) directly reconstruct flow fields at scattered data points, avoiding Lagrangian-to-Eulerian data conversions; (ii) assimilate the flow diagnostics in Lagrangian and Eulerian descriptions to achieve high-accuracy flow reconstruction; (iii) process large-scale LPT data sets with more than hundreds of thousand particles in two dimensions (2D) or 3D; (iv) enable spatiotemporal super-resolution while imposing physical constraints (e.g., divergence-free for incompressible flows) at arbitrary time and location. Validation based on synthetic and experimental LPT data confirmed that our method can consistently achieve the above advantages with accuracy and robustness.
Lanyu Li, Zhao Pan
2023-08-18T01:13:26Z
http://arxiv.org/abs/2308.09227v1
Three-Dimensional Time Resolved Lagrangian Flow Field Reconstruction Based on Constrained Least Squares and Stable Radial Basis Function ###### Abstract The three-dimensional Time-Resolved Lagrangian Particle Tracking (3D TR-LPT) technique has recently advanced flow diagnostics by providing high spatiotemporal resolution measurements under the Lagrangian framework. To fully exploit its potential, accurate and robust data processing algorithms are needed. These algorithms are responsible for reconstructing particle trajectories, velocities, and differential quantities (e.g., pressure gradients, strain- and rotation-rate tensors, and coherent structures) from raw LPT data. In this paper, we propose a three-dimensional (3D) divergence-free Lagrangian reconstruction method, where three foundation algorithms--Constrained Least Squares (CLS), stable Radial Basis Function (RBF-QR), and Partition-of-Unity Method (PUM)--are integrated into one comprehensive reconstruction strategy. Our method, named CLS-RBF PUM, is able to (i) directly reconstruct flow fields at scattered data points, avoiding Lagrangian-to-Eulerian data conversions; (ii) assimilate the flow diagnostics in Lagrangian and Eulerian descriptions to achieve high-accuracy flow reconstruction; (iii) process large-scale LPT data sets with more than hundreds of thousand particles in two dimensions (2D) or 3D; (iv) enable spatiotemporal super-resolution while imposing physical constraints (e.g., divergence-free for incompressible flows) at arbitrary time and location. Validation based on synthetic and experimental LPT data confirmed that our method can consistently achieve the above advantages with accuracy and robustness. ## 1 Introduction The high seeding density three-dimensional Time-Resolved Lagrangian Particle Tracking (3D TR-LPT) is a powerful tool for high spatiotemporal resolution flow diagnostics (Schanz et al., 2016; Tan et al., 2020). This technique offers three major advantages in fluid experiments. First, the 3D TR-LPT works in the Lagrangian perspective, which is intuitive and allows for a heuristic interpretation of fluid flows. For example, one of the most famous early Lagrangian flow 'diagnostics' is perhaps tracking the movement of tea leaves in a stirred teacup, as elucidated by Einstein's study on the tea leaf paradox (Einstein, 1926; Bowker, 1988). Second, the LPT technique, including 3D TR-LPT, facilitates trajectory-based measurements, providing insights into the history of the flows. By following the pathlines of tracer particles, one can recover the past and predict the future of the flows. For example, the LPT has been applied in particle residence time studies (Jeronimo et al., 2019; Zhang and Rival, 2020), indoor airflow measurements to improve air quality (Biwole et al., 2009; Fu et al., 2015) and fluid mixing studies (Alberini et al., 2017; Romano et al., 2021). Third, the 3D TR-LPT technique excels at capturing transient and time-correlated flow structures. This feature is crucial for studying complex flows characterized by intricate evolution of the flow structures (e.g., Lagrangian coherent structures (Peng and Dabiri, 2009; Wilson et al., 2009; Rosi et al., 2015; Onu et al., 2015) that are the'skeleton' of fluid flows (Peacock and Haller, 2013)). However, processing the raw LPT data is not necessarily trivial for a few reasons. 
First, key flow quantities, including particle trajectories, velocities, and differential quantities (e.g., pressure gradients, strain- and rotation-rate tensors) are not directly available in raw data. Technically, the raw LPT data only consist of particle spatial coordinates as a time series and anything beyond those requires additional reconstruction. Second, the particle spatial coordinates in the raw data are often subject to measurement errors (Ouellette et al., 2006; Cierpka et al., 2013; Gesemann et al., 2016; Machicoane et al., 2017). Deviations between the true and measured particle locations are inevitable and can challenge the data processing and undermine reconstruction quality. Furthermore, (numerical) differentiation amplifies the noise present in the data when computing derivatives, leading to even noisier reconstructions without careful regularization. This effect is evident when reconstructing velocities and differential quantities. Third, the scattered and nonuniform distribution of LPT data makes it difficult to employ simple numerical schemes, such as finite difference methods, for calculating spatial derivatives. To address these challenges and process the raw LPT data, several methods have been proposed and we provide a brief summary below. In the context of trajectory reconstruction, polynomials and basis splines (B-splines) are commonly used, with an order being second (Cierpka et al., 2013), third (Luthi, 2002; Cierpka et al., 2013; Gesemann et al., 2016; Schanz et al., 2016), or fourth (Ferrari and Rossi, 2008). Typically, least squares regression is applied to mitigate the impact of noise in the particle coordinates (Luthi, 2002; Cierpka et al., 2013; Gesemann et al., 2016; Schanz et al., 2016). However, polynomials and B-splines may encounter the following difficulties. First, low-order functions cannot capture high-curvature trajectories, while high-order ones may be prone to numerical oscillations known as Runge's phenomenon (Gautschi, 2011). Second, developing self-adaptive schemes that can accommodate trajectories with varied curvatures is not a trivial task. In the existing methods, the same trajectory function with a fixed order is commonly used throughout the entire reconstruction, offering limited flexibility to approximate varied curvature trajectories. Third, achieving a high degree of smoothness with high-order polynomials becomes challenging when the number of frames is limited. For example, with four frames, the maximum attainable order of a polynomial is third. This implies that the particle acceleration varies linearly, and its jerk is a constant, which may not always be true. Velocity reconstructions are commonly based on either finite difference methods (Malik and Dracos, 1993) or directly taking temporal derivatives of trajectory functions (Luthi, 2002; Gesemann et al., 2016; Schanz et al., 2016). For instance, the first-order finite difference method (Malik and Dracos, 1993) evaluates a particle's velocity via dividing the particle displacement between two consecutive frames by the time interval. The finite difference methods rely on the assumption that a particle travels a near-straight pathline within a short time interval. However, when this assumption fails, it may lead to inaccurate reconstruction. 
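As a concrete illustration of this first-order estimate, a minimal numpy sketch on a synthetic trajectory is given below; all values are illustrative and the sketch is not tied to any particular LPT implementation.

```python
# First-order finite-difference velocity estimate: displacement between
# consecutive frames divided by the time interval. Synthetic 3D helical
# trajectory; illustrative only.
import numpy as np

dt = 1e-3                                    # time between frames [s]
t = np.arange(0.0, 0.02, dt)                 # 20 frames
x = np.stack([np.cos(40 * t),                # measured particle positions
              np.sin(40 * t),
              0.5 * t], axis=1)              # shape (20, 3)

# v_k ~ (x_{k+1} - x_k) / dt; accuracy degrades whenever the pathline
# curves appreciably within a single time step.
v_fd = np.diff(x, axis=0) / dt               # shape (19, 3)
print(v_fd[0])                               # velocity estimate at the first frame
```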
On the other hand, a trajectory-based velocity reconstruction first approximates particle pathlines by a continuous function, and next, velocities are derived from the definition of the velocity: that is the rate of change of a particle's location with respect to time. However, similar to polynomial-based trajectory reconstruction, trajectory-based velocity reconstruction suffers from a lack of global smoothness, and the dilemma of selecting an appropriate order. Methods for reconstructing differential quantities can be classified into meshless and mesh-based approaches. In mesh-based methods (e.g., Flowfit by Gesemann et al. (2016), Vortex-in-Cell+ by Schneiders and Scarano (2016), and Vortex-in-Cell# by Jeon et al. (2022)), Lagrangian data are first converted to an Eulerian mesh and differential quantities are then computed on the mesh using finite difference methods or some data conversion functions. However, relying on data conversions not only deviates from the original purpose of LPT but could introduce additional computational errors. LPT data typically exhibit irregular particle distributions (e.g., the distances between the particles may significantly vary over the domain, in space and time), but the Eulerian meshes are often structured. This mismatch between the Lagrangian data and Eulerian mesh is a classic challenge in the approximation theory (Fasshauer, 2007). In addition to the data conversion errors, mesh-based methods cannot directly evaluate differential quantities at the location of Lagrangian data. A typical solution is computing differential quantities at Eulerian meshes first and then interpolating Eulerian data back to the Lagrangian data points. On the other hand, meshless methods (Luthi, 2002; Takehara and Etoh, 2017; Sperotto et al., 2022; Li and Pan, 2023) avoid projecting Lagrangian data onto Eulerian mesh and can directly reconstruct (differential) flow quantities at scattered LPT data points. These methods approximate velocity fields in each frame using continuous model functions, such as polynomials (Luthi, 2002; Takehara and Etoh, 2017) and Radial Basis Functions (RBFs) (Sperotto et al., 2022; Li and Pan, 2023). Subsequently, the differential quantities are calculated based on the spatial derivatives of the velocity field. This approach is analogous to velocity reconstruction by taking the time derivative of the pathline. The current mesh-based and meshless methods may face some difficulties. First, physical constraints may be absent in the reconstruction. The constraints can arise from some _a priori_ knowledge about the fluid flow (e.g., velocity solenoidal conditions for incompressible flows and boundary conditions (Gesemann et al., 2016; Jeon et al., 2022; Sperotto et al., 2022; Li and Pan, 2023)). Neglecting these physical constraints could produce unrealistic or inaccurate flow structures in the reconstruction. Second, processing large-scale LPT data can be computationally demanding and unstable. Consequently, the computational complexity grows rapidly as the number of particles and frames increases, potentially straining computational resources and prolonging processing time. Some meshless methods, such as RBFs, may suffer from numerical instability when addressing a large number of data points without care being taken (Fornberg and Wright, 2004; Fornberg et al., 2011). Third, the velocity field in each frame is sometimes described by low-order polynomials or B-splines. 
While these functions provide the simplicity of computation, they may not be suitable for accurately capturing highly nonlinear flows. Last but not least, velocities calculated by temporal derivatives of trajectories (in the Lagrangian perspective) are often decoupled from the velocity field reconstructed for each frame (in the Eulerian perspective). The former is derived from the definition of velocity, while the latter often respects some physical constraints (e.g., velocity divergence-free conditions). However, the velocity field of a given flow is supposed to be unique, no matter looking at it from the Eulerian or Lagrangian perspective. Assimilating these velocities could improve reconstruction quality. Several comprehensive flow field reconstruction strategies have been proposed. Typically, these strategies begin by initializing particle trajectories, velocities, and acceleration using some simple schemes, then these initialized quantities are used for downstream analysis or data assimilation. One such strategy, introduced by Gesemann et al. (2016), comprises two algorithms: Trackfit and Flowfit. The Trackfit initializes particle trajectories, velocities, and accelerations along particle pathlines based on the raw LPT data. Particle trajectories are approximated using cubic B-splines with time being its independent variable. The velocity and acceleration are calculated by taking temporal derivatives of the trajectory functions. On the other hand, the Flowfit algorithm is a data assimilation program. It leverages the particle velocities and accelerations obtained from the Trackfit as inputs. In each frame, Flowfit converts the Lagrangian data onto an Eulerian mesh using 3D weighted cubic B-splines. Flowfit assimilates data by minimizing cost functions containing some constraints, such as divergence-free conditions for incompressible flows and curl-free conditions for pressure gradients. These constraints regularize the reconstructed flow quantities. However, in Trackfit, the trajectory functions are embedded in a cost function that relies on the assumption of minimum change in acceleration. While this practice is helpful to smooth the trajectories, it does not strictly reinforce physical conditions. Additionally, to comply with this assumption, the time interval between adjacent frames must be small, which may not be suitable for data without time-resolved properties. Last, converting Lagrangian data onto Eulerian meshes is needed in Flowfit. Luthi (2002) proposed a meshless comprehensive reconstruction method. This method first uses a localized cubic polynomial function to approximate particle trajectories along pathlines. Second, velocities and accelerations are calculated by taking temporal derivatives of the trajectory functions. Last, linear piece-wise polynomial functions are employed to approximate the velocity field in each frame, using velocities obtained from the previous step as inputs. This method can be considered a more sophisticated version of Trackfit. However, without further data assimilation, it cannot strictly enforce the velocity divergence-free constraint as intended. Instead, it attempts to incorporate the divergence-free property by using a nonzero divergence as a normalized weight for weighted least squares to improve the reconstruction quality. Additionally, the use of linear piece-wise polynomial functions only provides piece-wise linear smoothness throughout the domain, which may be inadequate to approximate complex velocity fields. 
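To make this baseline concrete, the polynomial least-squares trajectory fit with velocity obtained by differentiating the fit, as reviewed above, can be sketched in a few lines of numpy. The data are synthetic and the sketch is illustrative only; it is not the Trackfit, Flowfit, or any other cited implementation.

```python
# Least-squares cubic fit to noisy particle positions along one pathline
# coordinate, followed by velocity from the temporal derivative of the
# fitted trajectory function. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0.0, 0.03, dt)                            # 30 frames
x_true = np.cos(40 * t)                                 # true positions
x_meas = x_true + 1e-3 * rng.standard_normal(t.size)    # measured (noisy) positions

coeffs = np.polyfit(t, x_meas, deg=3)                   # cubic trajectory x(t)
v_fit = np.polyval(np.polyder(coeffs), t)               # velocity = dx/dt of the fit
v_true = -40 * np.sin(40 * t)

print(np.max(np.abs(v_fit - v_true)))                   # maximum velocity error
```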
In the current research, we propose a meshless comprehensive reconstruction strategy for processing raw data measured by 3D TR-LPT systems. This strategy incorporates (i) a stable RBF method (Fornberg and Piret, 2008; Fornberg et al., 2011; Larsson et al., 2013) to approximate particle trajectories and velocities along pathlines, as well as velocity fields and differential quantities in each frame; (ii) the Constrained Least Squares (CLS) algorithm to enforce physical constraints and suppress noise; and (iii) the Partition-of-Unity Method (PUM) (Melenk and Babuska, 1996; Babuska and Melenk, 1997) to reduce computational costs and improve numerical stability. We refer to our strategy as the CLS-RBF PUM method. This paper is organized as follows: in Sect. 2, the three foundation algorithms (i.e., the stable RBF, CLS, and PUM) are introduced. Sect. 3 elaborates on how these three foundation algorithms are integrated into one comprehensive reconstruction method. Sect. 4 shows validations of our method based on synthetic and experimental data. Sect. 5 concludes the paper. ## 2 Foundation Algorithms ### Stable radial basis function (RBF) The classic RBF, also known as RBF-Direct (Fornberg et al., 2011), is a kernel-based meshless algorithm for data approximation. It uses the Euclidean norm between two data points as an independent variable. With a specific kernel, such as a Gaussian or multiquadric kernel, the RBF-Direct enjoys infinite smoothness and can be easily extended to high dimensions. Various versions of RBFs have been widely applied in computer graphics (Carr et al., 2001; Macedo et al., 2011; Drake et al., 2022), machine learning (Keerthi and Lin, 2003; Huang et al., 2006), as well as flow field reconstruction in some recent works (Sperotto et al., 2022; Li and Pan, 2023). However, as pointed out by Fornberg and Wright (2004), the RBF-Direct faces a critical dilemma regarding its shape factor. The shape factor controls the profile of RBF kernels: a small shape factor, corresponding to the near-flat kernel, offers an accurate approximation but leads to an ill-conditioned problem. On the other hand, a large shape factor, corresponding to a spiky kernel, provides a well-conditioned but inaccurate approximation. It was conventionally believed that a trade-off must be made regarding the shape factor to strike a balance between accuracy and stability until stable RBFs emerged. The stable RBFs can achieve numerical stability without compromising accuracy. The ill-conditioning problem due to a small shape factor can be overcome by a handful of stable RBFs (e.g., RBF-CP by Fornberg and Wright (2004), RBF-GA by Fornberg et al. (2013), RBF-RA by Wright and Fornberg (2017), and the polynomial-embedded RBF by Sperotto et al. (2022)). The RBF-QR (Fornberg and Piret, 2008; Fornberg et al., 2011) is one such stable RBF. The RBF-QR kernel \(\psi\) is converted from an RBF-Direct kernel \(\phi\), via a process of factorization, Taylor expansion, coordinate conversion, Chebyshev polynomial substitution, QR factorization, etc. The RBF-QR kernels enjoy well-conditioning and stability for any small shape factor, i.e., \(\varepsilon\to 0^{+}\). More details about the RBF-QR can be found in Fornberg et al. (2011); Larsson et al. (2013), and here we briefly summarize its application for interpolation as an example.
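Before that summary, the shape-factor dilemma of RBF-Direct can be seen numerically. The sketch below builds the Gaussian-kernel interpolation matrix on synthetic 1D nodes and reports its condition number; it illustrates the classic (unstable) formulation only, not the RBF-QR algorithm.

```python
# Illustration of the RBF-Direct shape-factor dilemma: near-flat kernels
# (small eps) approximate accurately but make the system matrix severely
# ill-conditioned. Gaussian kernel, synthetic 1D nodes; not RBF-QR.
import numpy as np

rng = np.random.default_rng(1)
xc = np.sort(rng.uniform(0.0, 1.0, 20))          # scattered centers
r = np.abs(xc[:, None] - xc[None, :])            # pairwise distances

for eps in (10.0, 1.0, 0.1):                     # spiky -> near-flat kernel
    A = np.exp(-(eps * r) ** 2)                  # RBF-Direct system matrix
    print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")

# RBF-QR sidesteps this blow-up by re-expressing the near-flat kernels in
# a well-conditioned basis, so eps -> 0+ can be used without instability.
```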
For an RBF-QR interpolation problem, given scalar data \(\hat{f}^{\mathrm{c}}_{i}\in\mathbb{R}\) located at a center \(\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\in\mathbb{R}^{d}\), i.e., \((\hat{\mathbf{\xi}}^{\mathrm{c}}_{i},\hat{f}^{\mathrm{c}}_{i})\), an interpolant \(\tilde{s}(\varepsilon,\mathbf{\xi})\) can be written as a linear combination of \(N\) RBF-QR kernels \(\psi\): \[\tilde{s}(\varepsilon,\mathbf{\xi})=\sum_{i=1}^{N}\lambda_{i}\psi(\varepsilon,\|\mathbf{\xi}-\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\|)=\mathbf{\Psi}(\varepsilon,\mathbf{\xi})\mathbf{\lambda},\] where \(N\) is the number of centers, \(\mathbf{\lambda}=(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})^{\mathrm{T}}\) is the vector of expansion coefficients in which \(\lambda_{i}\) controls the weights of the kernels, \(\varepsilon\) is the shape factor, and \(\|\mathbf{\xi}-\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\|\) denotes the Euclidean norm between the evaluation points \(\mathbf{\xi}\) and the center \(\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\). The evaluation points \(\mathbf{\xi}\) are the locations where the data are interpolated. The expansion coefficients \(\lambda_{i}\) can be calculated by forcing the evaluation points to coincide with the centers and then equating the interpolant to the given data: \(\tilde{s}(\varepsilon,\mathbf{\xi})|_{\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}}=\tilde{s}(\varepsilon,\hat{\mathbf{\xi}}^{\mathrm{c}}_{i})=\hat{\mathbf{f}}^{\mathrm{c}}\), where \(\hat{\mathbf{f}}^{\mathrm{c}}=(\hat{f}^{\mathrm{c}}_{1},\hat{f}^{\mathrm{c}}_{2},\ldots,\hat{f}^{\mathrm{c}}_{N})^{\mathrm{T}}\) is the vector of the given data. The derivative approximation can be calculated using the same expansion coefficients but with an RBF-QR derivative kernel \(\psi_{\mathcal{D}}\): \[\tilde{s}_{\mathcal{D}}(\varepsilon,\mathbf{\xi})=\sum_{i=1}^{N}\lambda_{i}\psi_{\mathcal{D}}(\varepsilon,\|\mathbf{\xi}-\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\|)=\mathbf{\Psi}_{\mathcal{D}}(\varepsilon,\mathbf{\xi})\mathbf{\lambda},\] where \(\mathcal{D}\) denotes a linear derivative operation. With its scattered data approximation ability and easy derivative calculation, the RBF-QR is suitable for LPT data processing. In the CLS-RBF PUM method, we use the RBF-QR as a model function to approximate particle trajectories, velocities, and differential quantities. One can consider RBF-QR as a one-layer neural network that is fully 'transparent' and interpretable, and the need for hyper-parameter tuning is minimized. ### Constrained least squares (CLS) The CLS is based on the least squares regression with constraints enforced by the Lagrangian multiplier method.
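In generic terms, this amounts to minimizing a residual \(\|\mathbf{B}\mathbf{\lambda}-\hat{\mathbf{f}}^{\mathrm{c}}\|^{2}\) subject to linear equality constraints \(\mathbf{C}\mathbf{\lambda}=\mathbf{d}\), which leads to the bordered ("KKT") linear system derived below. The numpy sketch that follows previews that structure with random placeholder matrices; it is not the RBF-QR system of the next paragraphs.

```python
# Generic equality-constrained least squares solved with Lagrange
# multipliers: minimize ||B @ lam - f||^2 subject to C @ lam = d.
# The bordered (KKT) system mirrors the structure of Eq. (2) below;
# all matrices here are random placeholders, not the RBF-QR matrices.
import numpy as np

rng = np.random.default_rng(2)
N, M, J = 60, 20, 3                      # samples, basis functions, constraints
B = rng.standard_normal((N, M))          # system (regression) matrix
f = rng.standard_normal(N)               # measurements
C = rng.standard_normal((J, M))          # constraint matrix
d = np.zeros(J)                          # zero rhs, e.g. a divergence-free condition

G = B.T @ B                              # normal-equations block
F = B.T @ f
KKT = np.block([[G, C.T],
                [C, np.zeros((J, J))]])
sol = np.linalg.solve(KKT, np.concatenate([F, d]))
lam, eta = sol[:M], sol[M:]              # expansion coefficients, multipliers

print(np.linalg.norm(C @ lam - d))       # constraint residual (~ 0 up to round-off)
```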
The Lagrangian objective function \(\mathcal{L}\) is created by appending an equality constraint \(\mathbf{C}\mathbf{\lambda}=\mathbf{d}\) to a residual \(\mathcal{R}\): \[\mathcal{L}(\mathbf{\lambda},\mathbf{\eta})=\mathcal{R}+\mathbf{\eta}(\mathbf{C}\mathbf{ \lambda}-\mathbf{d}), \tag{1}\] where \(\mathcal{R}=\sum_{i}^{N}\|\tilde{s}(\varepsilon,\hat{\mathbf{\xi}}^{\mathrm{c}}_{i })-\hat{f}^{\mathrm{c}}_{i}\|^{2}\) is the residual between the measurements \(\hat{f}^{\mathrm{c}}_{i}\) and RBF-QR model function \(\tilde{s}(\varepsilon,\hat{\mathbf{\xi}}^{\mathrm{c}}_{i})=\mathbf{B}(\varepsilon, \hat{\mathbf{\xi}}^{\mathrm{c}}_{i})\mathbf{\lambda}\), where \(\mathbf{B}(\varepsilon,\hat{\mathbf{\xi}}^{\mathrm{c}}_{i})=B_{ij}=\psi( \varepsilon,\|\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}-\mathbf{\xi}^{\mathrm{ref}}_{j}\|)\) is the RBF-QR system matrix constructed by \(N\) centers \(\hat{\mathbf{\xi}}^{\mathrm{c}}_{i}\) and \(M\) reference points \(\mathbf{\xi}^{\mathrm{ref}}_{j}\), \(i=1,2,\ldots,N\) and \(j=1,2,\ldots,M\). \(\mathbf{\eta}=(\eta_{1},\eta_{2},\ldots,\eta_{J})^{\mathrm{T}}\) is the vector of Lagrangian multipliers. \(\mathbf{C}\) is a generalized constraint matrix; it can be a constraint matrix \(\mathbf{C}_{\mathcal{O}}\) for function values and/or \(\mathbf{C}_{\mathcal{D}}\) for function derivatives. \(\mathbf{C}\) is established by \(J\) constraint points \(\mathbf{\xi}_{l}^{\mathrm{est}}\) and \(M\) reference points \(\mathbf{\xi}_{j}^{\mathrm{ref}}\) with entries \[\mathbf{C}_{\mathcal{O}}(\varepsilon,\mathbf{\xi}_{l}^{\mathrm{est}}) =C_{\mathcal{O},lj}=\psi(\varepsilon,\|\mathbf{\xi}_{l}^{\mathrm{est} }-\mathbf{\xi}_{j}^{\mathrm{ref}}\|)\] \[\mathbf{C}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{l}^{\mathrm{est}}) =C_{\mathcal{D},lj}=\psi_{\mathcal{D}}(\varepsilon,\|\mathbf{\xi}_{l }^{\mathrm{est}}-\mathbf{\xi}_{j}^{\mathrm{ref}}\|)^{\mathrm{T}},\] \(l=1,2,\ldots,J\). \(\mathbf{\lambda}=(\lambda_{1},\lambda_{2},\ldots,\lambda_{M})^{\mathrm{T}}\) is the vector of expansion coefficients. \(\mathbf{d}\) is the vector of constraint values; in this work, \(\mathbf{d}\) is a null vector to comply with the divergence-free constraint. An oversampling ratio is defined as \(\beta=N/M\). \(\beta>1\) is required for regression and a sufficiently large \(\beta\) provides smooth reconstruction. Next, we minimize the Lagrangian objective function Eq. (1) for expansion coefficients that will be used for approximation. By setting the gradient of \(\mathcal{L}\) with respect to the vectors \(\mathbf{\lambda}\) and \(\mathbf{\eta}\) to zero (i.e., \(\partial\mathcal{L}/\partial\mathbf{\lambda}=0\) and \(\partial\mathcal{L}/\partial\mathbf{\eta}=0\)), a linear system is established: \[\begin{bmatrix}\mathbf{G}&\mathbf{C}^{\mathrm{T}}\\ \mathbf{C}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{\lambda}\\ \mathbf{\eta}\end{bmatrix}=\begin{bmatrix}\mathbf{F}\\ \mathbf{d}\end{bmatrix}, \tag{2}\] where \(\mathbf{G}=\mathbf{B}^{\mathrm{T}}(\varepsilon,\hat{\mathbf{\xi}}_{i}^{\mathrm{c}} )\mathbf{B}(\varepsilon,\hat{\mathbf{\xi}}_{i}^{\mathrm{c}})\) and \(\mathbf{F}=\mathbf{B}^{\mathrm{T}}(\varepsilon,\hat{\mathbf{\xi}}_{i}^{\mathrm{c} })\hat{\mathbf{f}}^{\mathrm{c}}\). After solving \(\mathbf{\lambda}\) from Eq. 
(2), the RBF-QR approximation and its derivative functions are calculated by: \[\begin{split}\tilde{s}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}})& =\mathbf{E}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}})\mathbf{\lambda}\\ \tilde{s}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}})& =\mathbf{E}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}}) \mathbf{\lambda}^{\mathrm{}},\end{split} \tag{3}\] where the RBF-QR evaluation matrix \(\mathbf{E}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}})\) and its derivative matrix \(\mathbf{E}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}})\) are constructed by \(P\) evaluation points \(\mathbf{\xi}_{k}^{\mathrm{eva}}\) and \(M\) reference points \(\mathbf{\xi}_{j}^{\mathrm{ref}}\) with entries: \[\begin{split}\mathbf{E}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}}) &=E_{kj}=\psi(\varepsilon,\|\mathbf{\xi}_{k}^{\mathrm{eva}}-\mathbf{\xi}_{j}^ {\mathrm{ref}}\|)\\ \mathbf{E}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\mathrm{eva}}) &=E_{\mathcal{D},kj}=\psi_{\mathcal{D}}(\varepsilon,\|\mathbf{\xi}_{k}^ {\mathrm{eva}}-\mathbf{\xi}_{j}^{\mathrm{ref}}\|)^{\mathrm{}},\end{split} \tag{4}\] where \(k=1,2,\ldots,P\). Eqs. (2) and (3) cannot be directly applied in 3D while subject to divergence-free constraints without proper extensions. The matrix elements in these two equations must be extended to each direction of the coordinates since the reconstruction is in 3D and divergence-free constraints consist of derivatives in three directions. The extended linear system is written as \[\begin{bmatrix}\bar{\mathbf{G}}&\bar{\mathbf{C}}^{\mathrm{T}}\\ \bar{\mathbf{C}}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\bar{\mathbf{\lambda}} \\ \bar{\mathbf{\eta}}\end{bmatrix}=\begin{bmatrix}\bar{\mathbf{F}}\\ \bar{\mathbf{d}}\end{bmatrix}. \tag{5}\] In Eq. (5), \(\bar{\mathbf{C}}=\begin{bmatrix}\mathbf{C}_{x}&\mathbf{C}_{y}&\mathbf{C}_{z} \end{bmatrix}\) is the extended constraint matrix, where \(\mathbf{C}_{x}\), \(\mathbf{C}_{y}\), and \(\mathbf{C}_{z}\) are first-order spatial derivative constraint matrices based on \(\mathbf{C}_{\mathcal{D}}\) in the \(x\), \(y\), and \(z\) directions, respectively. The extended matrices \(\bar{\mathbf{G}}\) and \(\bar{\mathbf{F}}\) are block diagonal matrices with entries \[\bar{\mathbf{G}}=\begin{bmatrix}\mathbf{B}^{\mathrm{T}}\mathbf{B}&&\\ &\mathbf{B}^{\mathrm{T}}\mathbf{B}&\\ &&\mathbf{B}^{\mathrm{T}}\mathbf{B}\end{bmatrix},\;\bar{\mathbf{F}}=\begin{bmatrix} \mathbf{B}^{\mathrm{T}}&&\\ &\mathbf{B}^{\mathrm{T}}&\\ &&\mathbf{B}^{\mathrm{T}}\end{bmatrix}\hat{\mathbf{f}}^{\mathrm{c}},\] where \(\hat{\mathbf{f}}^{\mathrm{c}}=\begin{pmatrix}\mathbf{u}&\mathbf{v}&\mathbf{w}\end{pmatrix}^{ \mathrm{T}}\), and \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) are the velocity vectors in the \(x\), \(y\), and \(z\) directions, respectively; \(\bar{\mathbf{d}}\) is a null column vector corresponding to the divergence-free constraints. After solving \(\mathbf{\lambda}\) in Eq. 
(5), the CLS RBF-QR approximation function \(\tilde{s}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}})\) and its differentiation function \(\tilde{s}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}})\) are calculated by: \[\begin{split}\tilde{s}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}})& =\bar{\mathbf{E}}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}})\bar{ \mathbf{\lambda}}\\ \tilde{s}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}})& =\bar{\mathbf{E}}_{\mathcal{D}}(\varepsilon,\mathbf{\xi}_{k}^{\text{eva}}) \bar{\mathbf{\lambda}}\end{split}, \tag{6}\] where \(\bar{\mathbf{E}}\) and \(\bar{\mathbf{E}}_{\mathcal{D}}\) are extended diagonal block matrices based on \(\mathbf{E}\) and \(\mathbf{E}_{\mathcal{D}}\) in Eq. (4), respectively: \[\bar{\mathbf{E}}=\begin{bmatrix}\mathbf{E}&&\\ &\mathbf{E}&\\ &&\mathbf{E}\end{bmatrix},\;\bar{\mathbf{E}}_{\mathcal{D}}=\begin{bmatrix} \mathbf{E}_{\mathcal{D}}&&\\ &\mathbf{E}_{\mathcal{D}}&\\ &&\mathbf{E}_{\mathcal{D}}\end{bmatrix}.\] Up to this point, a CLS RBF-QR framework is established for a 3D Lagrangian flow field reconstruction with divergence-free constraints as an example. This method can enforce other constraints in a similar fashion, if needed. The CLS-RBF PUM method relies on four types of points (listed below), each playing a distinct role in the flow reconstruction. We use one-dimensional (1D) unconstrained and constrained RBF-QR regression for demonstration as shown in Fig. 1. 1. Centers \(\hat{\mathbf{\xi}}^{\text{c}}\) (\(x\) coordinates of the orange crosses): they are locations of the given data. The centers are determined by experiments and are typically 'randomly' scattered throughout the flow domain. 2. Reference points \(\mathbf{\xi}^{\text{ref}}\) (\(x\) coordinates of scarlet crosses): a linear combination of kernels (see dashed curves) centered at the reference points can approximate the given data with a continuous function (i.e., the approximation function \(\tilde{s}(\mathbf{\xi})\), indicated by the blue solid curves). Note, the number of reference points should be smaller than that of the centers to ensure effective regression. Besides, the placement of the reference points prefers a quasi-uniform layout such as the Halton points for 2D and 3D, and a uniform point layout for 1D (Fasshauer, 2007; Larsson, 2023). 3. Constraint points \(\mathbf{\xi}^{\text{cst}}\) (\(x\) coordinates of scarlet squares): they are where physical constraints are enforced. In the current work, we impose divergence-free constraints at the centers to guarantee that velocity divergence at measured LPT data points is zero. Generally, there is no limitation to the placement of the constrained points (e.g., locations or numbers of the constraints), as long as the linear system in Eqs. (2) and (5) is well-posed. 4. Evaluation points \(\mathbf{\xi}^{\text{eva}}\) (\(x\) coordinates of blue dots): the locations where \(\tilde{s}(\mathbf{\xi})\) is reconstructed are the evaluation points. The number or the locations of evaluation points have no limitations. This means that the evaluation points can be densely placed in the domain to achieve super-resolution or placed at the centers \(\hat{\mathbf{\xi}}^{\text{c}}\) to directly evaluate at locations of LPT data. ### Partition-of-unity method (PUM) The PUM (Melenk and Babuska, 1996; Babuska and Melenk, 1997) is applied in reconstruction to further enhance numerical stability and efficiency. 
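Before detailing the PUM, the constrained least-squares machinery of Eqs. (1)–(3) can be illustrated with a small self-contained sketch. This is a hypothetical 1D example in which Gaussian kernels stand in for RBF-QR and a single prescribed-slope constraint stands in for the divergence-free constraints of Eq. (5); it is not the authors' implementation, and all point sets and parameter values are illustrative.

```python
# A minimal sketch of constrained least squares via a Lagrange multiplier:
# noisy data at N centers are regressed onto M reference points, and one
# derivative-value constraint is enforced exactly through the block KKT
# system of Eq. (2).
import numpy as np

def phi(r, eps):                 # Gaussian kernel
    return np.exp(-(eps * r) ** 2)

def phi_dx(x, x_ref, eps):       # d/dx of phi(eps, |x - x_ref|)
    d = x - x_ref
    return -2.0 * eps**2 * d * np.exp(-(eps * d) ** 2)

eps = 1.0
rng = np.random.default_rng(1)
x_c = np.sort(rng.uniform(0.0, 2.0 * np.pi, 40))                 # centers (data sites)
f_c = np.sin(x_c) + 0.5 + 0.05 * rng.standard_normal(x_c.size)   # noisy data
x_ref = np.linspace(0.0, 2.0 * np.pi, 10)                        # M reference points
x_cst = np.array([np.pi])                                        # one constraint point
d = np.array([np.cos(np.pi)])                                    # prescribed slope there

B = phi(np.abs(x_c[:, None] - x_ref[None, :]), eps)              # N x M system matrix
C = phi_dx(x_cst[:, None], x_ref[None, :], eps)                  # J x M derivative constraint
G, F = B.T @ B, B.T @ f_c

# Block KKT system of Eq. (2): [[G, C^T], [C, 0]] [lam; eta] = [F; d]
K = np.block([[G, C.T], [C, np.zeros((C.shape[0], C.shape[0]))]])
lam = np.linalg.solve(K, np.concatenate([F, d]))[: x_ref.size]

x_eva = np.linspace(0.0, 2.0 * np.pi, 200)                       # evaluation points
s = phi(np.abs(x_eva[:, None] - x_ref[None, :]), eps) @ lam
s_dx_at_cst = phi_dx(x_cst[:, None], x_ref[None, :], eps) @ lam
print("constrained derivative at pi:", s_dx_at_cst, "target:", d)
```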
The ill-conditioning issue in the RBF-Direct is not only caused by a small shape factor but also by a large-scale data set (Fornberg and Wright, 2004; Fornberg et al., 2011). The same ill-conditioning problem due to large-scale data persists in stable RBFs. In addition, processing a large number of data points at once can be prohibitively expensive. The PUM can partition the domain and localize the flow field reconstruction. Here, we briefly summarize the implementation of RBF-QR with PUM, which was introduced by Larsson et al. (2017). First, identical spherical PUM patches \(\Omega_{m}\), \(m=1,2,\ldots,N_{P}\), are created to cover the entire 3D flow domain \(\Omega\). \(N_{P}\) is the number of PUM patches. These patches overlap, and the overlap ratio is defined as \(\gamma\). The radius of the PUM patches is calculated as \(\rho=(1+\gamma)\rho_{0}\), where \(\rho_{0}\) is the radius of patches that have no overlaps in the diagonal direction. For example, in Fig. 2b, \(\rho_{0}=\overline{AO}\), and the diagonal directions refer to the lines \(\overline{AC}\) and \(\overline{BD}\). Next, every patch is assigned a weight function \(W_{m}(\mathbf{\xi})\). The weight function becomes zero outside a patch, i.e., \(W_{m}(\mathbf{\xi})=0\); and the sum of all weight functions from all patches at an arbitrary point in the domain is unity, i.e., \(\sum_{m=1}^{N_{P}}W_{m}(\mathbf{\xi})=1\). The weight functions are based on Shepard's method (Shepard, 1968) following Larsson's work: \[W_{m}(\mathbf{\xi})=\frac{\varphi_{m}(\mathbf{\xi})}{\sum_{m=1}^{N_{P}}\varphi_{m}(\mathbf{\xi})},\] where \(\varphi_{m}(\mathbf{\xi})\) is a compactly supported generating function, and the Wendland \(C^{2}\) function \(\varphi(r)=(1-r)_{+}^{4}(4r+1)\) (Wendland, 1995) is chosen here. Last, the global evaluation function on the fluid domain \(\Omega\) can then be assembled by a weighted summation of local approximation functions: \[\begin{split}\tilde{s}(\varepsilon,\mathbf{\xi})&=\sum_{m=1}^{N_{P}}W_{m}(\mathbf{\xi})\tilde{s}_{m}(\varepsilon,\mathbf{\xi})\\ \tilde{s}_{\mathcal{D}}(\varepsilon,\mathbf{\xi})&=\sum_{m=1}^{N_{P}}\left[W_{\mathcal{D},m}(\mathbf{\xi})\tilde{s}_{m}(\varepsilon,\mathbf{\xi})+W_{m}(\mathbf{\xi})\tilde{s}_{\mathcal{D},m}(\varepsilon,\mathbf{\xi})\right]\end{split}, \tag{7}\] where \(W_{\mathcal{D},m}(\mathbf{\xi})\) is the linear derivative of the weight function in patch \(m\) and \(\tilde{s}_{\mathcal{D},m}(\varepsilon,\mathbf{\xi})\) is the corresponding linear derivative approximation in the same patch; the second line of Eq. (7) follows from the product rule. Fig. 2 presents a 3D example of PUM patches in a unit cubic domain with an overlap ratio \(\gamma=0.2\).

Figure 1: A demonstration of RBF-QR approximation and of centers, reference points, constraint points, and evaluation points in 1D. The RBF-QR (a) unconstrained and (b) constrained regressions use six kernels. Gray solid curves: the 'ground truth' based on an exact function \(f(\xi)=\sin{(\xi)}+0.5\), \(\xi\in[0,2\pi]\); blue solid curves: RBF-QR reconstruction; dashed curves: RBF-QR kernels 'centered' at different reference points; orange crosses: the given data, which are sampled from the ground truth with random perturbation to simulate corrupted experimental data; scarlet crosses: the reference points; scarlet squares: constraints of function values or function derivatives; the scarlet square with an arrowhead indicates the derivative constraint; blue dots: reconstructed results at the evaluation points.
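A minimal sketch of the Shepard partition-of-unity weights described above is given below; the patch layout, overlap ratio, and 2D test points are illustrative assumptions rather than the paper's configuration.

```python
# A minimal sketch of Shepard PUM weights built from the Wendland C2
# generating function: each patch gets a weight that vanishes outside the
# patch, and the weights of all patches sum to one at every covered point.
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 function, (1 - r)^4_+ (4r + 1)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def pum_weights(points, patch_centers, patch_radius):
    """Shepard weights W_m(xi) = phi_m(xi) / sum_k phi_k(xi), one column per patch."""
    # Distance of every point to every patch center, normalized by the radius.
    d = np.linalg.norm(points[:, None, :] - patch_centers[None, :, :], axis=-1)
    gen = wendland_c2(d / patch_radius)            # generating functions phi_m
    return gen / gen.sum(axis=1, keepdims=True)    # rows sum to one

# Four overlapping circular patches covering the unit square (overlap ~ 0.2).
centers = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
radius = 1.2 * np.sqrt(2.0) * 0.25                 # rho = (1 + gamma) * rho_0
pts = np.random.default_rng(2).uniform(0.0, 1.0, (1000, 2))

W = pum_weights(pts, centers, radius)
print("partition of unity holds:", np.allclose(W.sum(axis=1), 1.0))
```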
Figure 2: Example PUM patch layout in 3D. The domain is partitioned by eight spherical PUM patches. (b) is the top view of (a). The gray lines indicate the edges of the domain \(\Omega\). Spheres in different colors are PUM patches, whose centers are marked by green dots; red dots indicate the reference points of a Halton layout; randomly distributed blue dots are the given data points.

## 3 CLS-RBF PUM Method

We integrate the aforementioned three foundation algorithms (i.e., RBF-QR, CLS, and PUM) as one comprehensive reconstruction method. This method consists of four steps, which will be elaborated on in this section. An important byproduct of this method is the capability of super-resolution in time and space, which will be discussed last in this section. Hereafter, the generalized independent variable \(\mathbf{\xi}\) used in Sect. 2 is substituted by particle spatial coordinates \(\mathbf{x}\) or time \(t\). Fig. 3 sketches the CLS-RBF PUM method processing TR-LPT data in 2D as a demonstration.

### Step 1: initialize particle trajectory and velocity

_Step 1_ initializes smooth particle trajectories (and velocities) by fitting the particle spatial coordinates provided by an LPT system. This fitting is based on Eq. (2) with time being the independent variable, without any constraints. Trajectory fitting is performed for all coordinates of each particle. Here, we use the \(x\) coordinate of a particle as an example. The vector of spatial coordinates \(\hat{\mathbf{x}}=(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{N_{\mathrm{trj}}})^{\mathrm{T}}\) is measured at \(N_{\mathrm{trj}}\) time instants \(\mathbf{t}^{\mathrm{c}}=(t_{1}^{\mathrm{c}},t_{2}^{\mathrm{c}},\ldots,t_{N_{\mathrm{trj}}}^{\mathrm{c}})^{\mathrm{T}}\) by an LPT system. We refer to \(\mathbf{t}^{\mathrm{c}}\) as the vector of temporal centers. Based on Eq. (3), the trajectory function is given by: \[\tilde{\mathbf{x}}(\varepsilon,\mathbf{t}^{\mathrm{c}})=\mathbf{E}(\varepsilon,\mathbf{t}^{\mathrm{c}})\mathbf{\lambda}^{\mathrm{trj}}=\mathbf{E}(\varepsilon,\mathbf{t}^{\mathrm{c}})\mathbf{B}^{+}(\varepsilon,\mathbf{t}^{\mathrm{c}})\hat{\mathbf{x}}, \tag{8}\] where \(\mathbf{B}^{+}(\varepsilon,\mathbf{t}^{\mathrm{c}})=(\mathbf{B}^{\mathrm{T}}(\varepsilon,\mathbf{t}^{\mathrm{c}})\mathbf{B}(\varepsilon,\mathbf{t}^{\mathrm{c}}))^{-1}\mathbf{B}^{\mathrm{T}}(\varepsilon,\mathbf{t}^{\mathrm{c}})\) is a generalized inverse of the RBF-QR system matrix \(\mathbf{B}(\varepsilon,\mathbf{t}^{\mathrm{c}})\), which has entries: \[\mathbf{B}(\varepsilon,\mathbf{t}^{\mathrm{c}})=B_{ij}=\psi(\varepsilon,\|\mathbf{t}^{\mathrm{c}}_{i}-\mathbf{t}^{\mathrm{ref}}_{j}\|),\] where \(i=1,2,\ldots,N_{\text{trj}}\) and \(j=1,2,\ldots,M_{\text{trj}}\); \(\mathbf{t}^{\text{ref}}=(t_{1}^{\text{ref}},t_{2}^{\text{ref}},\ldots,t_{M_{\text{trj}}}^{\text{ref}})^{\text{T}}\) is the vector of temporal reference points. The trajectory evaluation matrix \(\mathbf{E}(\varepsilon,\mathbf{t}^{\text{c}})\) has the entries: \[\mathbf{E}(\varepsilon,\mathbf{t}^{\text{c}})=E_{ij}=\psi(\varepsilon,\|\mathbf{t}^{\text{c}}_{i}-\mathbf{t}^{\text{ref}}_{j}\|).\] A temporal oversampling ratio is defined as \(\beta_{0}=N_{\text{trj}}/M_{\text{trj}}\), and \(\beta_{0}>1\) is essential for regression.
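A minimal sketch of this Step 1 regression for a single particle coordinate is shown below. Gaussian kernels are assumed in place of RBF-QR, a plain least-squares solver replaces the explicit generalized inverse, and the time grids and noise level are illustrative; the velocity then follows from the kernel time derivative, as formalized in Eq. (9) below.

```python
# A minimal sketch of Step 1: regress noisy particle positions x_hat(t) onto
# M_trj temporal reference points, then reuse the same expansion coefficients
# with the time-derivative kernel to obtain the initial velocity.
import numpy as np

def psi(t, t_ref, eps):
    """Gaussian stand-in kernel evaluated pairwise between times and reference times."""
    return np.exp(-(eps * (t[:, None] - t_ref[None, :])) ** 2)

def psi_t(t, t_ref, eps):
    """First temporal derivative of the same kernel."""
    d = t[:, None] - t_ref[None, :]
    return -2.0 * eps**2 * d * np.exp(-(eps * d) ** 2)

eps = 2.0
t_c = np.linspace(0.0, 1.0, 11)                    # N_trj temporal centers (frames)
t_ref = np.linspace(0.0, 1.0, 5)                   # M_trj reference points, beta_0 > 1
rng = np.random.default_rng(3)
x_hat = np.sin(2.0 * np.pi * t_c) + 0.01 * rng.standard_normal(t_c.size)  # noisy coords

B = psi(t_c, t_ref, eps)                           # N_trj x M_trj system matrix
lam_trj, *_ = np.linalg.lstsq(B, x_hat, rcond=None)  # least-squares coefficients

x_fit = psi(t_c, t_ref, eps) @ lam_trj             # modified particle locations
u_init = psi_t(t_c, t_ref, eps) @ lam_trj          # initial velocity along the pathline
print("max |x_fit - x_hat|:", np.max(np.abs(x_fit - x_hat)))
```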
The initial velocity is calculated based on the temporal derivatives of the trajectory functions: \[\tilde{\mathbf{u}}(\varepsilon,\mathbf{t}^{\text{c}})=\mathbf{E}_{t}(\varepsilon,\mathbf{t}^{\text{c}})\mathbf{\lambda}^{\text{trj}}=\mathbf{E}_{t}(\varepsilon,\mathbf{t}^{\text{c}})\mathbf{B}^{+}(\varepsilon,\mathbf{t}^{\text{c}})\hat{\mathbf{x}}, \tag{9}\] where \(\mathbf{E}_{t}(\varepsilon,\mathbf{t}^{\text{c}})\) is the velocity evaluation matrix based on Eq. (4), with \([\cdot]_{t}\) denoting the first-order temporal derivative: \[\mathbf{E}_{t}(\varepsilon,\mathbf{t}^{\text{c}})=E_{t,ij}=\psi_{t}(\varepsilon,\|\mathbf{t}^{\text{c}}_{i}-\mathbf{t}^{\text{ref}}_{j}\|).\] Acceleration can be computed accordingly if needed. Reconstruction of trajectories and velocities for the \(y\) and \(z\) coordinates is similar. After computing the velocities in all three directions, the velocity field at each measured particle location in each frame is known in turn. Hereafter, the particle location output from _Step 1_ is referred to as the modified particle location, as the raw particle locations are slightly modified by the regression process. The computed velocity fields from this step are called initial velocities, as the velocity of each particle or the velocity field in each frame is known for the first time. Note that the initial velocities are computed from the Lagrangian perspective based on the definition of the velocity; however, they are not subject to any physical constraints.

Figure 3: A 2D demonstration of the CLS-RBF PUM method. Only three out of \(N\) frames and reconstruction at three particles are highlighted here.

### Step 2: calculate intermediate divergence-free velocity field

_Step 2_ calculates an intermediate divergence-free velocity field in each frame by constrained least squares. This step uses the modified particle locations and initial velocities as the inputs. To calculate the intermediate velocity fields, the matrices \(\mathbf{B}\), \(\mathbf{C}\), \(\mathbf{E}\), and \(\mathbf{E}_{\mathcal{D}}\) (described in Sect. 2.2) are constructed first. For example, in the \(\kappa\)-th frame, an RBF-QR spatial system matrix \(\mathbf{B}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})\) is formulated based on \(N\) modified spatial points \(\tilde{\mathbf{x}}^{\mathrm{c}}_{i}\) and \(M_{1}\) spatial reference points \(\mathbf{x}^{\mathrm{ref}}_{j}\) with entries: \[\mathbf{B}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})=B_{ij}=\psi(\varepsilon,\|\tilde{\mathbf{x}}^{\mathrm{c}}_{i}-\mathbf{x}^{\mathrm{ref}}_{j}\|),\] where \(\tilde{\mathbf{x}}^{\mathrm{c}}=[\tilde{\mathbf{x}}^{\mathrm{c}}_{1},\tilde{\mathbf{x}}^{\mathrm{c}}_{2},\ldots,\tilde{\mathbf{x}}^{\mathrm{c}}_{N}]^{\mathrm{T}}\) and \(\tilde{\mathbf{x}}^{\mathrm{c}}_{i}=(\tilde{x},\tilde{y},\tilde{z})^{\mathrm{c}}_{i}\); \(\mathbf{x}^{\mathrm{ref}}=[\mathbf{x}^{\mathrm{ref}}_{1},\mathbf{x}^{\mathrm{ref}}_{2},\ldots,\mathbf{x}^{\mathrm{ref}}_{M_{1}}]^{\mathrm{T}}\), and \(\mathbf{x}^{\mathrm{ref}}_{j}=(x,y,z)^{\mathrm{ref}}_{j}\), \(i=1,2,\ldots,N\), and \(j=1,2,\ldots,M_{1}\).
An RBF-QR spatial derivative constraint matrix \(\mathbf{C}_{\mathcal{D}}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})\) is established between \(N\) modified spatial centers \(\tilde{\mathbf{x}}^{\mathrm{c}}_{i}\) and \(M_{1}\) spatial reference points \(\mathbf{x}^{\mathrm{ref}}_{j}\): \[\mathbf{C}_{\mathcal{D}}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})=C_{\mathcal{ D},ij}=\psi_{\mathcal{D}}(\varepsilon,\|\tilde{\mathbf{x}}^{\mathrm{c}}_{i}-\mathbf{x}^{ \mathrm{ref}}_{j}\|).\] Here, the constraint points \(\mathbf{x}^{\mathrm{cst}}\) coincide with modified particle locations \(\tilde{\mathbf{x}}^{\mathrm{c}}\) since we enforce the velocity divergence at the measured particle location to be zero. The RBF-QR spatial evaluation matrix \(\mathbf{E}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})\) is established by \(N\) modified particle locations \(\tilde{\mathbf{x}}^{\mathrm{c}}_{i}\) and \(M_{1}\) spatial reference points \(\mathbf{x}^{\mathrm{ref}}_{j}\): \[\mathbf{E}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})=E_{ij}=\psi(\varepsilon, \|\tilde{\mathbf{x}}^{\mathrm{c}}_{i}-\mathbf{x}^{\mathrm{ref}}_{j}\|).\] In _Step 2_, the evaluation points are placed at the same locations as the modified spatial centers. A spatial oversampling ratio in _Step 2_ is defined as \(\beta_{1}=N/M_{1}\) and chosen to be slightly larger than unity. After constructing the extended matrices, the expansion coefficients \(\tilde{\mathbf{\lambda}}\) is solved from Eq. (5), and the intermediate velocity field \(\tilde{\mathbf{U}}^{\mathrm{div}}_{\kappa}=(\tilde{\mathbf{u}}^{\mathrm{div}}_{ \kappa},\tilde{\mathbf{v}}^{\mathrm{div}}_{\kappa},\tilde{\mathbf{w}}^{\mathrm{div}}_{ \kappa})^{\mathrm{T}}\) is computed based on Eq. (6): \[\tilde{\mathbf{U}}^{\mathrm{div}}_{\kappa}(\varepsilon,\tilde{\mathbf{x}}^{ \mathrm{c}})=\bar{\mathbf{E}}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})\tilde{ \mathbf{\lambda}}. \tag{10}\] The velocity fields \(\tilde{\mathbf{u}}^{\mathrm{div}}_{\kappa}\), \(\tilde{\mathbf{v}}^{\mathrm{div}}_{\kappa}\), and \(\tilde{\mathbf{w}}^{\mathrm{div}}_{\kappa}\) can be extracted from \(\tilde{\mathbf{U}}^{\mathrm{div}}_{\kappa}(\varepsilon,\tilde{\mathbf{x}}^{ \mathrm{c}})\). The velocities reconstructed in _Step 2_ are divergence-free from the Eulerian perspective. However, they are not necessarily the same as the velocity fields obtained from _Step 1_, which are based on the definition of the velocity but are not divergence-free. This discrepancy is due to errors in measured particle spatial coordinates. To resolve this conflict, we assimilate the results from _Steps 1_ and \(2\) in _Step 3_. ### Step 3: update particle location by data assimilation _Step 3_ incorporates the Lagrangian and Eulerian reconstructions by updating the particle trajectories using least squares regression for all particles in all frames. The underlying motivation is that the velocities calculated by temporal derivatives of trajectories for each particle (i.e., velocities output from _Step 1_, which is a Lagrangian reconstruction) should be identical to the velocities reconstructed by the constrained regression in each frame (i.e., velocities output from _Step 2_, which is an Eulerian reconstruction). However, due to the errors in the measured particle spatial coordinates, these two velocity reconstructions are not necessarily equal to each other for the same flow field. 
Nevertheless, the velocities calculated in _Step 2_ are assumed to be more accurate than those in _Step 1_ since they respect physical constraints (incompressibility condition in this case). Therefore, solenoidal velocities output from _Step 2_ can be used to update particle locations. The expansion coefficients of the trajectory function are re-calculated to update particle locations. The update in particles' \(x\) coordinates is presented as an example. First, a linear system is constructed based on a modified trajectory matrix \(\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\mathrm{c}})\) and a divergence-free velocity matrix \(\tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}})\): \[\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\mathrm{c}}) =\mathbf{\Lambda}\mathbf{E}^{\mathrm{T}}(\varepsilon,\mathbf{t}^{\mathrm{ c}}), \tag{11a}\] \[\tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\mathrm{c}}) =\mathbf{\Lambda}\mathbf{E}^{\mathrm{T}}_{t}(\varepsilon,\mathbf{t}^{ \mathrm{c}}), \tag{11b}\] where \(\mathbf{\Lambda}\) is an expansion coefficient matrix to be determined. The elements in \(\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\text{c}})\) is from Eq. (8) in _Step 1_, and \(\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\text{c}})\) has entries: \[\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\text{c}})=\begin{bmatrix}\tilde{x}_{1} (t_{1}^{\text{c}})&\tilde{x}_{1}(t_{2}^{\text{c}})&\ldots&\tilde{x}_{1}(t_{N_{ \text{trj}}}^{\text{c}})\\ \tilde{x}_{2}(t_{1}^{\text{c}})&\tilde{x}_{2}(t_{2}^{\text{c}})&\ldots&\tilde{ x}_{2}(t_{N_{\text{trj}}}^{\text{c}})\\ \vdots&\vdots&\ddots&\vdots\\ \tilde{x}_{N}(t_{1}^{\text{c}})&\tilde{x}_{N}(t_{2}^{\text{c}})&\ldots&\tilde{ x}_{N}(t_{N_{\text{trj}}}^{\text{c}})\end{bmatrix}.\] The elements in \(\tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\text{c}})\) is calculated by Eq. (10), and \(\tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\text{c}})\) has entries: \[\tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\text{c}})=\begin{bmatrix} \tilde{u}_{1}^{\text{div}}(\tilde{\mathbf{x}}_{1}^{\text{c}})&\tilde{u}_{2}^{ \text{div}}(\tilde{\mathbf{x}}_{1}^{\text{c}})&\ldots&\tilde{u}_{N_{\text{trj}}}^ {\text{div}}(\tilde{\mathbf{x}}_{1}^{\text{c}})\\ \tilde{u}_{1}^{\text{div}}(\tilde{\mathbf{x}}_{2}^{\text{c}})&\tilde{u}_{2}^{ \text{div}}(\tilde{\mathbf{x}}_{2}^{\text{c}})&\ldots&\tilde{u}_{N_{\text{trj}}}^ {\text{div}}(\tilde{\mathbf{x}}_{2}^{\text{c}})\\ \vdots&\vdots&\ddots&\vdots\\ \tilde{u}_{1}^{\text{div}}(\tilde{\mathbf{x}}_{N}^{\text{c}})&\tilde{u}_{2}^{ \text{div}}(\tilde{\mathbf{x}}_{N}^{\text{c}})&\ldots&\tilde{u}_{N_{\text{trj}}}^ {\text{div}}(\tilde{\mathbf{x}}_{N}^{\text{c}})\end{bmatrix},\] the matrices \(\mathbf{E}(\varepsilon,\mathbf{t}^{\text{c}})\) and \(\mathbf{E}_{t}(\varepsilon,\mathbf{t}^{\text{c}})\) are the same as those in Eqs. (8) and (9). The Eqs. (11a) and (11b) are established based on explicit physical intuition. From the Lagrangian perspective, the particle trajectory reconstructed by the RBF-QR regression (i.e., the Right-Hand Side (RHS) of Eq. (11a)) should be as close as possible to the modified particle locations (i.e., the Left-Hand Side (LHS) of Eq. (11a)), as the modified particle locations from _Step 1_ are the best estimates available based on the raw LPT measurement. From the Eulerian perspective, the particle velocities along pathlines (i.e., the RHS of Eq. (11b)) should be equal to the divergence-free velocity field reconstructed by the constrained regression in each frame (i.e., the LHS of Eq. (11b)). 
Enforcing Eqs. (11a) and (11b) simultaneously achieves data assimilation from both Lagrangian and Eulerian perspectives. Next, we solve for the expansion coefficient \(\mathbf{\Lambda}\) to update particle locations. Combining Eqs. (11a) and (11b) so that they share the same expansion coefficient \(\mathbf{\Lambda}\), an over-determined system is established: \[\mathbf{H}=\mathbf{\Lambda}\mathbf{K}, \tag{12}\] where \(\mathbf{K}=[\mathbf{E}^{\text{T}}(\varepsilon,\mathbf{t}^{\text{c}})\ \mathbf{E}_{t}^{\text{T}}(\varepsilon,\mathbf{t}^{\text{c}})]\) and \(\mathbf{H}=[\tilde{\mathbf{X}}(\varepsilon,\mathbf{t}^{\text{c}})\ \tilde{\mathbf{V}}(\varepsilon,\tilde{\mathbf{x}}^{\text{c}})]\). The updated expansion coefficient matrix \(\mathbf{\Lambda}\) is solved by \(\mathbf{\Lambda}=\mathbf{H}\mathbf{K}^{+}\), where \(\mathbf{K}^{+}=\mathbf{K}^{\text{T}}(\mathbf{K}\mathbf{K}^{\text{T}})^{-1}\) is the pseudo-inverse of the wide, full-row-rank matrix \(\mathbf{K}\). The matrix \(\mathbf{\Lambda}\) has dimensions of \(N\times M_{\text{trj}}\) with entries: \[\mathbf{\Lambda}=\begin{bmatrix}\lambda_{1,1}&\lambda_{1,2}&\ldots&\lambda_{1,M_{\text{trj}}}\\ \lambda_{2,1}&\lambda_{2,2}&\ldots&\lambda_{2,M_{\text{trj}}}\\ \vdots&\vdots&\ddots&\vdots\\ \lambda_{N,1}&\lambda_{N,2}&\ldots&\lambda_{N,M_{\text{trj}}}\end{bmatrix}. \tag{13}\] In each row of \(\mathbf{\Lambda}\), the expansion coefficients are used to approximate a trajectory for a certain particle, while in each column of \(\mathbf{\Lambda}\), the expansion coefficients are used to approximate a velocity field for all particles in a certain frame. Each row of \(\mathbf{\Lambda}\) is used to update the trajectories modelled by Eq. (8) in _Step 1_. For example, for a particle \(\tilde{\mathbf{x}}_{i}^{\text{c}}\), its updated trajectory is calculated by \(\tilde{\mathbf{x}}_{i}^{\text{up}}(\varepsilon,\mathbf{t}^{\text{c}})=\mathbf{E}(\varepsilon,\mathbf{t}^{\text{c}})\mathbf{\lambda}_{i}^{\text{trj}}\), where \(\mathbf{\lambda}_{i}^{\text{trj}}=(\lambda_{i,1},\lambda_{i,2},\ldots,\lambda_{i,M_{\text{trj}}})^{\text{T}}\) is extracted from Eq. (13). The update of particle trajectories in the \(y\) and \(z\) directions follows the same procedure. The expansion coefficient matrix \(\mathbf{\Lambda}\) connects the physical knowledge in both the spatial and temporal dimensions. This is justified by the intuition that the Eulerian (measuring over each flow field at a certain time instant) and Lagrangian (tracking each particle over time) observations of the same flow should provide the same information. The shared expansion coefficient \(\mathbf{\Lambda}\) in Eq. (12) implies that no 'discrimination' is projected onto the temporal and spatial dimensions, nor onto the Lagrangian and Eulerian descriptions of the flow.

### Step 4: calculate final velocity and differential quantity

_Step 4_ calculates the final divergence-free velocity field in each frame using the same algorithms as those in _Step 2_. However, the updated particle locations from _Step 3_ and the intermediate divergence-free velocities from _Step 2_ are used as the inputs for this step. A spatial oversampling ratio in _Step 4_ is defined as \(\beta_{2}=N/M_{2}\). Similar to \(\beta_{1}\), \(\beta_{2}\) is chosen to be larger than one. _Step 4_ also computes velocity gradients.
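Before turning to the gradient formulas of Step 4, the Step 3 solve of Eqs. (12)–(13) can be sketched as follows. The matrices here are random stand-ins for the Step 1 evaluation/derivative matrices and for the Step 1/Step 2 outputs of one coordinate, and a least-squares solver is used in place of forming the pseudo-inverse explicitly.

```python
# A minimal sketch of the Step 3 data assimilation: solve H = Lambda K in the
# least-squares sense for the shared expansion coefficient matrix Lambda, then
# rebuild updated trajectories row by row as in Eq. (11a).
import numpy as np

def solve_assimilation(X_tilde, V_tilde, E, E_t):
    """X_tilde: Step 1 trajectories (N x N_trj); V_tilde: Step 2 velocities (N x N_trj)."""
    K = np.hstack([E.T, E_t.T])                 # M_trj x (2 N_trj)
    H = np.hstack([X_tilde, V_tilde])           # N     x (2 N_trj)
    # Lambda K = H  <=>  K^T Lambda^T = H^T, solved column-by-column by lstsq.
    Lambda_T, *_ = np.linalg.lstsq(K.T, H.T, rcond=None)
    return Lambda_T.T                           # N x M_trj expansion coefficients

# Illustrative dimensions: N particles, N_trj frames, M_trj reference points.
N, N_trj, M_trj = 50, 11, 5
rng = np.random.default_rng(4)
E, E_t = rng.standard_normal((N_trj, M_trj)), rng.standard_normal((N_trj, M_trj))
X_tilde, V_tilde = rng.standard_normal((N, N_trj)), rng.standard_normal((N, N_trj))

Lam = solve_assimilation(X_tilde, V_tilde, E, E_t)
x_updated = Lam @ E.T                           # updated trajectories, one row per particle
print(Lam.shape, x_updated.shape)               # (50, 5) (50, 11)
```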
For example, in the \(x\) direction the velocity gradient at \(\kappa\)-th frame is given by: \[\frac{\partial\tilde{u}(\varepsilon,\tilde{\mathbf{x}}^{\text{up}})}{\partial x} \bigg{|}_{\kappa}=\mathbf{E}_{x}(\varepsilon,\tilde{\mathbf{x}}^{\text{up}})\mathbf{ \lambda}_{\kappa}, \tag{14}\] where \(\mathbf{\lambda}_{\kappa}\) is the vector of the expansion coefficient in the \(x\) direction. \(\mathbf{\lambda}_{\kappa}\) is extracted from \(\bar{\mathbf{\lambda}}\) that is solved by Eq. (5). \(\mathbf{E}_{x}(\varepsilon,\tilde{\mathbf{x}}^{\text{up}})\) is the RBF-QR derivative matrix based on \(\mathbf{E}_{\mathcal{D}}\): \[\mathbf{E}_{\mathcal{D}}(\varepsilon,\tilde{\mathbf{x}}^{\text{up}})=E_{\mathcal{ D},ij}=\psi_{\mathcal{D}}(\varepsilon,\|\tilde{\mathbf{x}}^{\text{up}}_{i}-\mathbf{x}^{ \text{ref}}_{j}\|),\] where \(\mathbf{x}^{\text{ref}}=[\mathbf{x}^{\text{ref}}_{1},\mathbf{x}^{\text{ref}}_{2},\ldots, \mathbf{x}^{\text{ref}}_{M_{2}}]^{\text{T}}\) is the spatial reference points used in _Step 4_, \(\mathbf{x}^{\text{ref}}_{j}=(x,y,z)^{\text{ref}}_{j}\), \(j=1,2,\ldots,M_{2}\), and \(M_{2}\) is the number of spatial reference points in _Step 4_. Velocity gradients in other directions can be calculated similarly. When the data sets are large (e.g., more than ten thousand particles), applying the PUM is preferred. The above velocity reconstruction is first performed in each PUM patch and then the velocity field in the entire domain is assembled using Eq. (7). The PUM settings are the same for _Step 2_ and _Step 4_ since the computational domain remains unchanged in this work. In _Step 4_, the same PUM assembly practice is applied for differential quantity fields. As recommended by Larsson et al. (2017), the overlap ratio of PUM patches is set to \(\gamma=0.2\) to strike a balance between accuracy and computational cost. The number of reference points in a PUM patch is chosen between about 200 and 1,000.1 Footnote 1: Less than 200 reference points per patch in 3D may be inadequate to resolve fine structures of complex flows according to our tests (Li et al., 2022) and more than 1,000 particles may lead to ill-conditioning (Fornberg et al., 2011; Larsson et al., 2013). ### Spatial and temporal super-resolution The CLS-RBF PUM method can readily achieve spatial and temporal super-resolution. To apply super-resolution in our method, a pseudo-particle can be placed at an arbitrary location in the domain (e.g., \((x_{\text{s}},y_{\text{s}},z_{\text{s}})\in\Omega\)) at any time instant \(t\) between the first and last frames. In _Step 1_, all modified particle locations and initial velocities are calculated at time \(t\). Next, an intermediate divergence-free velocity field is reconstructed in _Step 2_, and then the given particle locations are updated in _Step 3_. Last, the final velocities and differential quantities are calculated based on the updated particle locations from _Step 3_ and the intermediate velocity fields from _Step 2_. Because the final velocity and differential quantity fields are recovered by continuous functions (i.e., the stable RBF), the velocity and velocity gradient at the location of the pseudo-particle \((x_{\text{s}},y_{\text{s}},z_{\text{s}})\) at time \(t\) can be evaluated. With this procedure, any number of pseudo-particles can be placed densely in both space and time dimensions to achieve spatiotemporal super-resolution. 
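The sketch below illustrates, under the same Gaussian-kernel assumption used in the earlier sketches, how known spatial expansion coefficients of one velocity component can be evaluated at arbitrary pseudo-particle locations together with an Eq. (14)-style \(x\)-derivative; this is the mechanism behind the spatial super-resolution described above. The coefficients here are random stand-ins for the Step 4 output.

```python
# A minimal sketch of dense re-evaluation (super-resolution) and gradient
# evaluation: once the expansion coefficients lam are known, the velocity and
# its x-derivative can be computed at any pseudo-particle location.
import numpy as np

def kernel(x_eva, x_ref, eps):
    """Gaussian kernel psi(eps, ||x_eva - x_ref||) for 3D points, evaluated pairwise."""
    d = np.linalg.norm(x_eva[:, None, :] - x_ref[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2)

def kernel_dx(x_eva, x_ref, eps):
    """Derivative of the kernel with respect to the x coordinate of the evaluation point."""
    diff_x = x_eva[:, None, 0] - x_ref[None, :, 0]
    return -2.0 * eps**2 * diff_x * kernel(x_eva, x_ref, eps)

rng = np.random.default_rng(5)
x_ref = rng.uniform(0.0, 1.0, (100, 3))      # spatial reference points
lam = rng.standard_normal(100)               # coefficients of one velocity component
x_pseudo = rng.uniform(0.0, 1.0, (5000, 3))  # densely placed pseudo-particles

u_pseudo = kernel(x_pseudo, x_ref, 3.0) @ lam        # super-resolved velocity
dudx_pseudo = kernel_dx(x_pseudo, x_ref, 3.0) @ lam  # Eq. (14)-style gradient
print(u_pseudo.shape, dudx_pseudo.shape)
```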
## 4 Results and Discussion

We first use synthetic LPT data, generated by adding artificial noise to data from ground truth flows, to test our method. The ground truth data are time series of particle spatial coordinates, which are based on a 3D Taylor-Green vortex (TGV) (Taylor and Green, 1937) or a Direct Numerical Simulation (DNS) of a wake behind a cylinder (Khojasteh et al., 2022). The artificial noise was zero-mean Gaussian noise with standard deviation \(\sigma\) proportional to the spatial span of the domain in a direction. For example, in the \(x\) direction, the standard deviation \(\sigma\) of the noise was \(\sigma=\zeta L\), where \(\zeta\) was the noise level and \(L\) was the spatial span of the domain in the \(x\) direction. \(\zeta=0.1\%\) and \(\zeta=1.0\%\) were chosen in the current work to represent a medium and a high noise level in an LPT experiment, respectively. Details of the synthetic data generation can be found in Appendix A. We benchmark our method against several baseline algorithms. In the trajectory and velocity reconstruction, six algorithms, i.e., first- and second-order finite difference methods (1st and 2nd FDM), second-, third-, and fourth-order polynomial regressions (2nd, 3rd, and 4th POLY), and cubic basis splines (B-splines), are used as 'baselines' to compare against our method. The baseline algorithms are briefed in Appendix B. To assess reconstruction quality, relative errors and a normalized velocity divergence are introduced. Relative errors (\(\mathcal{E}\)) of reconstructed particle spatial coordinates, velocities, and velocity gradients are quantified as: \[\mathcal{E}=\frac{|\tilde{f}-f_{0}|}{\|f_{0}\|_{L^{\infty}(\Omega)}}\times 100\%,\] where \(\tilde{f}\) is the reconstruction result (e.g., particle locations, velocities, and velocity gradients), and \(f_{0}\) is the ground truth. A normalized velocity divergence is defined as: \[\|\nabla\cdot\tilde{\mathbf{U}}\|^{*}=\frac{\|\nabla\cdot\tilde{\mathbf{U}}\|_{L^{2}(\Omega)}}{\left\|\left|\frac{\partial\hat{u}}{\partial x}\right|+\left|\frac{\partial\hat{v}}{\partial y}\right|+\left|\frac{\partial\hat{w}}{\partial z}\right|\right\|_{L^{\infty}(\Omega)}},\] following Luthi (2002), where \(\tilde{\mathbf{U}}\) is the reconstructed velocity vector; \(\frac{\partial\hat{u}}{\partial x}\), \(\frac{\partial\hat{v}}{\partial y}\), and \(\frac{\partial\hat{w}}{\partial z}\) are either the ground-truth (if available) or the reconstructed velocity gradients.

### Validation based on Taylor-Green vortex

The validation based on the synthetic TGV data is presented in Figs. 4 - 7. The TGV synthetic data have 20,000 particles scattered in the domain \(\Omega\) of \(x\times y\times z\in[0,1]\times[0,1]\times[0,1]\), with a time interval \(\Delta t=0.1\) between two consecutive frames. As shown in Fig. 4, the reconstructed particle trajectories and velocity fields were almost identical to their ground truth, regardless of noise levels. In Fig. 4c2 - c3, only some minor distortions on the iso-surfaces appeared near the domain boundaries when high noise (\(\zeta=1.0\%\)) was added to generate the synthetic data. Figs. 5 and 6 present the relative errors of reconstructed trajectories and velocities, respectively. Although the reconstruction errors are relatively higher at the two ends of particle pathlines than in the other frames for all methods, our method outperformed the baseline algorithms, as the errors from our method were almost always lower than those of the baseline algorithms. The red lines in Fig.
5, which represent the errors in the trajectory reconstruction based on our method, lie below the green lines that denote the errors of the input data. This shows that our method can effectively mitigate noise in particle spatial coordinates. Iso-surfaces of the reconstructed strain- and rotation-rate tensors based on synthetic data with high noise (\(\zeta=1\%\)) are shown in Fig. 7. The major structures of the flow were smooth and recognizable, even though the iso-surfaces are slightly distorted near the domain boundaries and edges (lower two rows in Fig. 7). The reconstructed iso-surfaces for the medium-noise data (\(\zeta=0.1\%\)) were almost the same as the ground truth (not shown here for brevity). We assess the mean and standard deviation of the relative errors in the reconstructed velocity gradients. For noise level \(\zeta=0.1\%\), the mean and standard deviation of the errors were below 2.62% and 5.50% for all frames, respectively. The errors at the two ends of the pathlines (the boundaries in time) were higher than those in the frames in between. If the reconstruction results on the temporal boundaries were excluded, the overall reconstruction quality could be significantly improved. For example, after excluding the first and last frames, the mean and standard deviation of the errors were below 1.38% and 1.90%, respectively. For high-level artificial noise, \(\zeta=1.0\%\), the mean and standard deviation of the errors were below 4.16% and 4.65% for all frames, respectively. After excluding the first and last frames, they were below 3.74% and 3.91%, respectively. We emphasize that the absolute value of the normalized velocity divergence is almost always below \(5.7\times 10^{-7}\), regardless of the noise level in the synthetic data. This implies that our method effectively achieves divergence-free reconstruction.

Figure 4: 3D validation based on the synthetic TGV data. Left column: particle trajectories with temporal super-resolution. Middle column: particle velocity vector fields in the sixth of 11 frames. Right column: the iso-surfaces of coherent structures based on the Q-criterion (iso-value = 0.001) in the sixth of 11 frames with spatial super-resolution. Top row: ground truth; middle and bottom rows: reconstruction based on the synthetic data with noise levels of \(\zeta=0.1\%\) and 1.0%, respectively. The particle trajectories are colored by time, and the velocity fields and iso-surfaces are colored by the amplitude of velocity. One hundred particle trajectories and 2,000 velocity vectors out of 20,000 are shown in (a1) – (a3) and (b1) – (b3), respectively.

Figure 5: Reconstruction errors of particle coordinates in the TGV validation. From upper to lower rows: reconstruction based on synthetic data with 0.1% & 1.0% noise level, respectively. From left to right columns: reconstruction errors in spatial coordinates of \(x\), \(y\), and \(z\), respectively. Note that the green dashed & solid lines are overlapped in all sub-figures because the finite difference methods only evaluate velocities. Therefore, the particle locations from the synthetic data are directly used as the trajectory outputs of the 1st & 2nd FDMs.

Figure 6: Reconstruction errors of particle velocities in the TGV validation. From upper to lower rows: reconstruction based on synthetic data with the noise level of \(\zeta=0.1\%\) and \(1.0\%\), respectively. From left to right columns: reconstruction errors in velocity components of \(u\), \(v\), and \(w\), respectively.
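The relative-error and normalized-divergence metrics reported in this section can be computed directly from sampled fields; a minimal sketch follows, in which the discrete \(L^{2}\) norm is taken as a root-mean-square over the samples (an assumption about the discretization, not stated in the text).

```python
# A minimal sketch of the two quality metrics: the relative error E
# (normalized by the L-infinity norm of the ground truth) and the normalized
# velocity divergence, both evaluated on flat arrays of sampled quantities.
import numpy as np

def relative_error(f_rec, f_true):
    """Pointwise relative error in percent."""
    return np.abs(f_rec - f_true) / np.max(np.abs(f_true)) * 100.0

def normalized_divergence(dudx, dvdy, dwdz):
    """||div U||_L2 divided by the L-infinity norm of |du/dx| + |dv/dy| + |dw/dz|."""
    div = dudx + dvdy + dwdz
    l2 = np.sqrt(np.mean(div**2))          # discrete L2 norm as an RMS over samples
    linf = np.max(np.abs(dudx) + np.abs(dvdy) + np.abs(dwdz))
    return l2 / linf

# Tiny synthetic check: gradient samples constructed to be exactly divergence-free.
rng = np.random.default_rng(6)
dudx = rng.standard_normal(1000)
dvdy = rng.standard_normal(1000)
dwdz = -(dudx + dvdy)                      # enforce zero divergence
print(normalized_divergence(dudx, dvdy, dwdz))   # ~0 up to round-off
```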
Figure 7: The iso-surfaces of strain- and rotation-rate tensors (iso-value \(=\pm 0.50\)). Red and blue colors correspond to positive and negative iso-values, respectively. Upper two rows: the ground truth; lower two rows: reconstruction using the CLS-RBF PUM method. Reconstruction was in the sixth of 11 frames based on the synthetic data with 1.0% noise added to the raw LPT data of TGV. ### Validation based on DNS of a turbulent wake The validation based on the DNS synthetic data of the turbulent wake behind a cylinder is presented in Figs. 8 and 9. The DNS synthetic data have 105,000 particles scattered in the domain \(\Omega\) of \(x\times y\times z\in[6D,8D]\times[3D,5D]\times[2D,4D]\), with a time interval \(\Delta t=0.00375U/D\) between two successive frames, where \(U\) is free stream velocity and \(D\) is the diameter of the cylinder. As shown in Fig. 8, the reconstructed particle trajectories and velocity fields were almost identical to the ground truth. As illustrated in Fig. 9, similar to the validation based on the TGV, the relative errors of reconstructed trajectories and velocities were almost always lower than those of the baseline algorithms. Note, our method significantly outperformed the baseline algorithms at the two ends of particle pathlines and suppressed noise in particle spatial coordinates. The performance of our method on different flows (e.g., TGV or turbulent wake) with various noises (\(\zeta=0.1\%\) and \(\zeta=1.0\%\)) is consistent. The absolute value of normalized velocity divergence mostly was below \(8.6\times 10^{-5}\). Figure 8: 3D validation based on the synthetic DNS data. Upper row: the ground truth. Lower row: reconstruction based on the synthetic data with 0.1% artificial noise added. (a1) – (a2): particle trajectories. (b1) – (b2) & (c1) – (c2): particle velocity fields in the sixth of 11 frames. The particle trajectories & velocity vector fields are colored by particle velocities. 5,000 pathlines & 2,000 velocity vectors out of 105,000 are shown in particle trajectories & velocity fields, respectively. (c1) & (c2) are top views from the \(+z\) axis of (b1) & (b2), respectively. ### Experimental validation based on a pulsing jet Next, we validated our method using experimental LPT data from a low-speed pulsing jet. The experiment was conducted by Sakib (2022) at Utah State University, US. The jet flow had a Reynolds number \(Re_{\delta_{\nu}}=400\) based on the thickness of the Stokes boundary layer. The experimental facility consisted of a hexagonal water tank, a cylindrical piston, an impingement plate, and optical equipment. The synthetic pulsing jet was generated by driving the piston using an electromagnetic shaker, and pushing the water through a circular orifice until impinged on the plate. A dual cavity high-speed laser illuminated the measurement area with the dimensions of \(60\ \mathrm{mm}\times 57\ \mathrm{mm}\times 20\ \mathrm{mm}\). Four high-speed cameras recorded the jet flow and provided time-resolved LPT images. The raw experimental data were acquired using a volumetric system and commercial software LaVision DaVis 10 (Gottingen, Germany). The LPT module of DaVis 10 is based on the shake-the-box (Schanz et al., 2016) algorithm. The velocity data were post-processed by Vortex-In-Cell# (VIC#) algorithm (Jeon et al., 2022) to obtain velocity gradients on a structured mesh. 
About 7,100 particles with 11 frames were chosen from the original data set that had about 15,000 particles with 100 frames.2 This down-sampling led to a sparse data set to challenge our method. The reconstruction was carried out within the domain of \(x\times y\times z\in[-20,30]\times[-30,25]\times[-10,10]\ \mathrm{mm}^{3}\), and the time interval between two consecutive frames was \(5\times 10^{-4}\ \mathrm{s}\). The data reported an average nominal uncertainty of about 0.016% in the particle spatial coordinates. To further assess the robustness, we tested our method on artificially error-contaminated LPT data. These contaminated data had a noise level of \(\zeta=0.2\%\). The experimental validation results are shown in Figs. 10 and 11. In Fig. 10, the left column (i.e., Fig. 10a1 - c1) presents reconstructions obtained from DaVis 10. The central and right columns illustrate the CLS-RBF PUM reconstructions based on raw and contaminated LPT data, respectively. The top row (i.e., Fig. 10a1 - a3) shows reconstructions of particle trajectories. The virtual size of particles is proportional to their velocities for visualization purposes. The middle row illustrates reconstructed velocity fields. We only visualized the velocity vectors within the range of \(z\in[-3,+3]\) mm, which covered the jet core. To emphasize the directions of the particle velocities, the length of the vectors was normalized and projected on the \(z=0\) plane. The bottom row represents iso-surfaces of coherent structures based on the Q-criterion. Intersections between the iso-surfaces and the \(z=0\) plane are contours identifying the vortical region of the flow sliced at \(z=0\), and are overlaid on the quiver plots in the middle row (i.e., Fig. 10b1 - b3). As shown in Fig. 10a2 - a3, smooth particle trajectories were recovered by our method, whose profiles resembled those obtained from DaVis 10 (see Fig. 10a1), even when the particle coordinates were significantly contaminated by the artificial noise (see Fig. 10a3). In addition, two trailing jets (at \(y\approx-5\) and \(-15\) mm) were revealed in the wake of the leading pulsing jet (at \(y\approx 5\) mm). Note that our method was able to reconstruct trajectories with temporal super-resolution, where each pathline consists of 51 frames (see Fig. 10a2 - a3). Comparing Fig. 10b2 - b3 with Fig. 10b1, the velocity fields reconstructed by our method were visibly smoother in terms of the transition of the vector directions over space.

Figure 9: Reconstruction errors in the DNS validation. (a) – (c): reconstruction errors of spatial coordinates of \(x\), \(y\), and \(z\), respectively; (d) – (f): reconstruction errors in velocity \(u\), \(v\), and \(w\), respectively. The reconstruction is based on synthetic data with 0.1% noise level.

Figure 10: The experimental validation. The velocity fields and iso-surfaces are from the sixth of 11 frames. The magenta, blue, and gray contours in (b1) – (b3) show the reconstruction using VIC# (iso-value = 40,000 s\({}^{-2}\)), and our method based on raw (iso-value = 1,000 s\({}^{-2}\)) and contaminated (iso-value = 500 s\({}^{-2}\)) LPT data, respectively. The iso-surfaces in (c1) – (c3) are based on spatial super-resolution reconstruction, and their iso-values are the same as those used in (b1) – (b3). (a1) – (a3) and (b1) – (b3) are views from the \(+z\) axis; (c1) – (c3) are the zoomed-in views near the jet core. Particle trajectories, velocity fields, and iso-surfaces are colored by particle velocities.
This observation is more apparent near the core of the trailing jets. Furthermore, our method effectively captured the major structures of the flow (illustrated by the blue and gray contours in Fig. 10b1 - b3, and the coherent structures in Fig. 10c2 - c3). We can observe three vortical structures associated with the leading pulsing jet and the two trailing jets (see, e.g., Fig. 10a1). The normalized velocity divergence was below \(3.13\times 10^{-7}\) in all frames for our method (e.g., Fig. 10b2 - b3), regardless of whether artificial noise was added. This demonstrates that the divergence-free constraint was enforced. By contrast, VIC# had a normalized velocity divergence of about 3.18. Fig. 10c2 - c3 represents the reconstructed coherent structures based on the Q-criterion. One large and two small toroidal structures emerged in the domain, and their locations corresponded to the three high-speed areas observed in Fig. 10a1 - a3. This agreement between the coherent structures and the particle trajectory and velocity data further validates our method. Comparing the middle and right columns in Fig. 10, despite the overwhelming artificial noise added to the down-sampled data, no discernible difference was observed between the two reconstruction results. This indicates that our method is robust on noisy, sparse data. The iso-surfaces of the strain- and rotation-rate tensors are illustrated in Fig. 11. They show kinematics of fluid parcels that may not be visually apparent from the velocity or vorticity fields alone. As examples, we interpret two components of the strain-rate tensor. For \(\tilde{S}_{12}\), two major tubular structures with reversed colors adhere to each other and develop along the \(y\) axis, and two minor vortical structures wrap around the major ones. The reversed colors of the major structures indicate shear deformations near the jet core, which were caused by the radial velocity gradients of the jet. The wavy tubular structure with three bumps reflects the leading pulsing jet and the two trailing ones. The two minor structures suggest that fluid parcels experienced shear deformations in opposite directions compared to the closest major structure, generating reversed flows that brought fluid parcels back to the jet. Regarding \(\tilde{S}_{13}\), the staggered pattern parallel to the \(x\)-\(z\) plane indicates the shear deformation of fluid parcels in the front and back of the core of the leading jet. The fluid parcels at the leading front plane of the vortex rings tended to be elongated along the circumferential direction and compressed along the radial direction, while the fluid parcels just behind them underwent reversed deformations. This explains the forward movement of the vortex rings. Noting that the front patterns were larger than the back ones, one can tell from this single frame that the vortex ring was expanding.

## 5 Conclusion

In this paper, we propose the CLS-RBF PUM method, a novel 3D divergence-free Lagrangian flow field reconstruction technique. It can reconstruct particle trajectories, velocities, and differential quantities (e.g., pressure gradients, strain- and rotation-rate tensors, and coherent structures based on the Q-criterion) from raw Lagrangian Particle Tracking (LPT) data. This method integrates the Constrained Least Squares (CLS), a stable Radial Basis Function (RBF-QR), and the Partition-of-Unity Method (PUM) into one comprehensive reconstruction strategy. The CLS serves as a platform for LPT data regression and for enforcing physical constraints.
The RBF-QR approximates particle trajectories along pathlines, using the time as an independent variable, and approximates velocity fields for each frame, with the particle spatial coordinates being their independent variables. The PUM localizes the reconstruction to enhance computational efficiency and stability. The intuition behind the CLS-RBF PUM method is straightforward. By assimilating the velocity field reconstructed based on Lagrangian and Eulerian perspectives, we intrinsically incorporate the information in the temporal and spatial dimensions with physical constraints enforced to improve flow field reconstruction and offer several advantages. This method directly reconstructs flow fields at scattered data points without Lagrangian-Eulerian data conversions and can achieve super-resolution at any time and location and enforce physical constraints. The constraints are velocity solenoidal conditions for incompressible flows in the current work while accommodating other constraints as needed. Large-scale LPT data sets with a substantial number of particles and frames can be efficiently processed and parallel computing is achievable. It demonstrates high accuracy and robustness, even when handling highly contaminated data with low spatial and/or temporal resolution. The tests based on synthetic and experimental LPT data show the competence of the CLS-RBF PUM method. Validation based on synthetic data has exhibited the superior trajectory and velocity reconstruction performance of our method compared to various baseline algorithms. The tests based on a pulsing jet experiment further confirm the effectiveness of our method. In summary, the CLS-RBF PUM method offers a versatile solution for reconstructing Lagrangian flow fields based on the raw LPT data with accuracy, robustness, and physical constraints being satisfied, and can be the foundation of other downstream post-processing and data assimilation. ## Acknowledgments We thank Prof. Elizabeth Larsson at Uppsala University for discussing the RBF-QR and sharing codes with us. We are also thankful to Dr. Md Nazmus Sakib at Utah State University for conducting experiments and providing data and timely support. This project is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). ## Appendix A Synthetic Data ### Taylor-Green vortex (TGV) To generate the TGV synthetic data, we adopted a velocity field: \[\begin{split} u=&\alpha_{1}\cos\left(\omega(x-d_{x}) \right)\sin\left(\omega(y-d_{y})\right)\sin\left(\omega(z-d_{z})\right)\\ v=&\alpha_{2}\sin\left(\omega(x-d_{x})\right) \cos\left(\omega(y-d_{y})\right)\sin\left(\omega(z-d_{z})\right),\\ w=&\alpha_{3}\sin\left(\omega(x-d_{x})\right) \sin\left(\omega(y-d_{y})\right)\cos\left(\omega(z-d_{z})\right)\end{split} \tag{15}\] where \(\alpha_{1}=\alpha_{2}=0.5\), \(\alpha_{3}=-1\), and \(\omega=2\pi\), \(d_{x}=d_{y}=d_{z}=0.25\) were the offsets of the TGV flow structure in the \(x\), \(y\) and \(z\) directions, respectively. In the first snapshot, 20,000 particles were randomly placed in the domain \(\Omega\) of \(x\times y\times z\in[0,1]\times[0,1]\times[0,1]\). 
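A minimal sketch of this part of the synthetic-data generation is given below: it evaluates the TGV velocity field of Eq. (15) at randomly seeded particles and takes one forward-Euler step of the advection described next. Parameter values follow the text, while the random seed is an arbitrary choice.

```python
# A minimal sketch of Appendix A.1: seed particles in the unit cube, evaluate
# the Taylor-Green vortex velocity of Eq. (15), and advance one Euler step.
import numpy as np

def tgv_velocity(x, y, z, a=(0.5, 0.5, -1.0), omega=2.0 * np.pi, d=0.25):
    """Taylor-Green vortex velocity field of Eq. (15) with offsets d_x = d_y = d_z = d."""
    X, Y, Z = omega * (x - d), omega * (y - d), omega * (z - d)
    u = a[0] * np.cos(X) * np.sin(Y) * np.sin(Z)
    v = a[1] * np.sin(X) * np.cos(Y) * np.sin(Z)
    w = a[2] * np.sin(X) * np.sin(Y) * np.cos(Z)
    return u, v, w

rng = np.random.default_rng(8)
pos = rng.uniform(0.0, 1.0, (20000, 3))            # particles in the unit cube
u, v, w = tgv_velocity(pos[:, 0], pos[:, 1], pos[:, 2])

# One forward-Euler step with dt = 5e-5; the full march described below repeats this.
dt = 5e-5
pos_next = pos + dt * np.stack([u, v, w], axis=1)
```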
The particle locations in the next snapshot were calculated by the forward Euler method: \(\mathbf{x}_{\kappa+1}=\mathbf{x}_{\kappa}+\delta t\mathbf{U}_{\kappa}\), where \(\mathbf{x}_{\kappa}=(x_{\kappa},y_{\kappa},z_{\kappa})\) and \(\mathbf{x}_{\kappa+1}\) were the particle locations in the \(\kappa\)-th and \((\kappa+1)\)-th snapshots, respectively; \(\mathbf{U}_{\kappa}=(u_{\kappa},v_{\kappa},w_{\kappa})\) was the velocity calculated by Eq. (15) in the \(\kappa\)-th snapshot; \(\delta t\) was the time interval between snapshots and here we chose \(\delta t=5\times 10^{-5}\). After the integration, particle trajectories with \(2\times 10^{4}\) snapshots were down-sampled to 11 frames to generate the ground truth of the raw LPT data. Artificial noise (see Sect. 4 for details) was added to the particle spatial coordinates of the TGV-based ground truth. ### Direct Numerical Simulation (DNS) For the validation based on the DNS, synthetic data were generated by adding artificial noise to an available Lagrangian data set. More details about the data set can be found in Khojasteh et al. (2022) and we briefly summarize here. The DNS was conducted using the Incompact3D solver (Laizet and Lamballais, 2009) to simulate a flow with a free stream velocity \(U\) passing a cylinder with a diameter \(D\) at a Reynolds number of \(Re=3,900\). In the Lagrangian data sets, about 200,000 particles were scattered in the domain. The particle velocities were calculated by tri-linear interpolations using the nearest eight nodal data in space. The particle trajectories were integrated by a fourth-order Runge-Kutta method. The synthetic data had 105,000 particles that were down-sampled from the Lagrangian data set. These particles were distributed in a wake region, located \(0.5D\) downstream of the cylinder, with the dimensions of \(x\times y\times z\in[6D,8D]\times[3D,5D]\times[2D,4D]\). A total of 11 frames were selected from 350 DNS frames and these 11 frames had a time interval \(\Delta t=0.00375U/D\). The noise was defined the same way as that used in the TGV validation, with \(\zeta=0.1\%\) to represent a typical noise in LPT data. ## Appendix B Baseline Algorithms ### Finite difference methods We use the velocity component \(u\) calculation as an example. In a given frame, the 1st and 2nd order finite difference methods (FDMs) calculate velocities by the forward Euler method (i.e., Eq. (16a)) and central difference method (i.e., Eq. (16b)), respectively: \[\tilde{u}(t_{\kappa}) =\frac{\hat{x}_{\kappa+1}-\hat{x}_{\kappa}}{\Delta t}, \tag{16a}\] \[\tilde{u}(t_{\kappa}) =\frac{\hat{x}_{\kappa+1}-\hat{x}_{\kappa-1}}{2\Delta t}, \tag{16b}\] where \(\hat{x}_{\kappa}\) is the particle coordinate along the \(x\) direction in the \(\kappa\)-th frame from LPT experiments, \(\Delta t\) is the time interval between two consecutive frames, and \(t_{\kappa}\) is the time instant in the \(\kappa\)-th frame. The velocities of the first and last frames are calculated by the first-order finite difference method. Since the FDMs only evaluate particle velocities, the particle locations that are 'output' from the FDMs are assumed to be the same as those in the synthetic data, e.g., \(\tilde{x}(t_{\kappa})=\hat{x}(t_{\kappa})\). ### Polynomial regression We use trajectory and velocity reconstruction in the \(x\) direction as an example. 
First, the trajectory polynomial model function \(\tilde{x}(t)\) is given by \(\tilde{x}(t)=\sum_{j=0}^{m}p_{m,j}t^{j}\) where \(p_{m,j}\) is the polynomial coefficient, \(m\) is the order of polynomials, and \(t\) is the time. For instance, the trajectory and velocity model functions of the 2nd polynomial (2nd-POLY) are \[\tilde{x}(t) =p_{2,2}t^{2}+p_{2,1}t+p_{2,0}\] \[\tilde{u}(t) =\dot{\tilde{x}}(t)=2p_{2,2}t+p_{2,1}.\] The trajectory and velocity approximation functions of the other polynomials are similar to those in the \(x\) direction of the 2nd POLY. Next, we calculate the polynomial coefficient \(p_{m,j}\). A residual \(\mathcal{R}\) between measured particle locations \(\hat{x}_{\kappa}\) and the polynomial model function \(\tilde{x}(t_{\kappa})\) is minimized \(\min~{}\mathcal{R}=\sum_{\kappa=1}^{N_{\text{trj}}}\|\tilde{x}(t_{\kappa})- \hat{x}_{\kappa}\|^{2}\), where \(t_{\kappa}\) is the time instant in the \(\kappa\)-th frame. Setting the gradient of the residual \(\mathcal{R}\) regarding \(p_{m,j}\) to zero (i.e., \(\partial\mathcal{R}/\partial p_{m,j}=0\)) and the polynomial coefficient \(p_{m,j}\) can be explicitly solved. The polynomial coefficients in the \(y\) and \(z\) directions are the same as that in the \(x\) direction. Once all the coefficients are retrieved, the polynomial model functions can be constructed and used to approximate particle trajectories, velocities, and accelerations. ### Cubic B-splines The cubic B-spline algorithm is based on the MATLAB built-in functions from the Curve Fitting Toolbox(tm). We adopt a function spap2(piece,k,x,y) with piece=2 and k=4 to create a cubic spline with two pieces joined together, bypass setting knots. x and y are the given data points and their values, respectively. Then we use a function fnval(f,xe), in which f=spap2(piece,k,xe,y) is the spline function calculated above, and xe is the evaluation points. To calculate first-order derivatives for velocity as an example, we use fnder(f,d), where d=1. ## Appendix C Differential Quantity ### Strain-rate and rotation-rate tensor Strain-rate and rotation-rate tensors are kinematics quantities that describe the rate of change of a fluid parcel regarding deformation and rotation, respectively. The strain-rate tensor \(\mathbf{S}\) and rotation-rate tensor \(\mathbf{R}\) are the symmetric and anti-symmetric parts of the velocity gradient \(\nabla\mathbf{U}\), respectively: \(\nabla\mathbf{U}=\frac{1}{2}(\mathit{U}_{i,j}+\mathit{U}_{j,i})+\frac{1}{2}( \mathit{U}_{i,j}-\mathit{U}_{j,i})\), \(\mathbf{S}=S_{ij}=\frac{1}{2}(\mathit{U}_{i,j}+\mathit{U}_{j,i})\), and \(\mathbf{R}=R_{ij}=\frac{1}{2}(\mathit{U}_{i,j}-\mathit{U}_{j,i})\). Once all components of \(U_{i,j}\) are calculated in _Step 4_ of the CLS-RBF PUM method, \(S\) and \(R\) can be evaluated. ### Coherent structure based on Q-criterion The Q-criterion is a vortex identification method first proposed by Hunt et al. (1988) to identify vortical structures in the flow. The Q-criterion is given by (Hunt et al., 1988; Haller, 2005): \[Q=\frac{1}{2}(\|\mathbf{R}^{2}\|-\|\mathbf{S}^{2}\|)>0 \tag{17}\] where \(R\) is the rotation-rate tensor, and \(S\) is the strain-rate tensor. After \(S\) and \(R\) are calculated, vortices can be found using Eq. (17). Other criteria can be achieved similarly.
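As a reference for these definitions, the following minimal NumPy sketch (our illustration, not part of the CLS-RBF PUM implementation) builds \(\mathbf{S}\) and \(\mathbf{R}\) from the velocity-gradient components obtained in _Step 4_ and evaluates Q, reading Eq. (17) as half the difference of the squared Frobenius norms of the two tensors.

```python
import numpy as np

def strain_rotation_q(grad_u):
    """grad_u: 3x3 array of velocity-gradient components U_{i,j} at one point.
    Returns the strain-rate tensor S, the rotation-rate tensor R, and Q."""
    grad_u = np.asarray(grad_u, dtype=float)
    S = 0.5 * (grad_u + grad_u.T)                # symmetric part
    R = 0.5 * (grad_u - grad_u.T)                # anti-symmetric part
    Q = 0.5 * (np.sum(R ** 2) - np.sum(S ** 2))  # 0.5 * (|R|_F^2 - |S|_F^2)
    return S, R, Q

# A rotation-dominated velocity gradient gives Q > 0 and is flagged as vortical.
g = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
S, R, Q = strain_rotation_q(g)
print(Q)  # 1.0 for this purely rotational example
```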
2310.13035
Collatz conjecture becomes theorem
The Collatz hypothesis is a theorem of the algorithmic theory of natural numbers. We prove the (algorithmic) formula that expresses the halting property of the Collatz algorithm. The observation that Collatz's theorem cannot be proved in any elementary number theory completes the main result.
Grażyna Mirkowska, Andrzej Salwicki
2023-10-19T13:26:46Z
http://arxiv.org/abs/2310.13035v2
# Collatz conjecture becomes theorem ###### Abstract The Collatz hypothesis is a theorem of the algorithmic theory of natural numbers. We prove the (algorithmic) formula that expresses the halting property of Collatz algorithm. The calculus of programs is used in the proof. The observation that Collatz's theorem cannot be proved in any elementary number theory completes the main result. For the Collatz property can be falsified in non-standard models of such theories. **Key words:** algorithm of Collatz, halting property, Collatz tree, calculus of programs,algorithmic theory of numbers. MSC-class: 03D02 (Primary) 68Q02 (Secondary) ACM-class: F.3.1; D.2.4 ###### Contents * 1 Introduction * 2 Collatz tree * 3 Four algorithms, relatives of \(Cl\) algorithm * 3.1 Hotel Collatz * 4 On finite and infinite computations of Collatz algorithm * 4.1 Finite computations * 4.2 Infinite computations * 4.3 Collatz theorem * 4.4 A counterexample * 5 Final remarks * 6 Supplements * 6.1 A structure with counterexamples * 6.2 Presburger's arithmetic * 6.3 An introduction to the calculus of programs \(\mathcal{AL}\) * 6.4 An introduction to the algorithmic theory of numbers \(\mathcal{ATN}\) * 6.5 Proof of lemma 3 * 6.6 Proof of invariant of algorithm \(Gr3\) Introduction The \(3x+1\) problem remained open for over 80 years. It has been noticed in 1937 by Lothar Collatz. The problem became quite popular due to its wording, for it is short and easy to comprehend. Collatz remarked that for any given natural number \(n>0\), the sequence \(\{n_{i}\}\) defined by the following recurrence \[\left.\begin{array}{l}n_{0}=n\\ n_{i+1}=\begin{cases}n_{i}\div 2&\text{when $n_{i}$ is even}\\ 3\cdot n_{i}+1&\text{when $n_{i}$ is odd}\end{cases}\quad\text{ for }\ i\geq 0\end{array}\right\}\] (rec1) seem always reach the value 1. He formulated the following conjecture \[\text{for all $n$ exists $i$ such that $n_{i}=1$}\] (Collatz conjecture) One can give another formulation of the hypothesis of Collatz 1. The number of papers devoted to the problem surpasses 200, c.f. [1]. It is worthwhile to consult social media: wikipedia, youtube etc, there you can find some surprising ideas to prove the Collatz hypothesis as well as a technical analysis of the problem. Computers are used and are still crunching numbers in the search of an eventual counterexample to the Collatz conjecture. The reports on progress appear each year. We claim that the counterexample approach is pointless, i.e. the computers can be turned off. Namely, we shall prove that the program that searches a counterexample will never stop. Our goal will be achieved if we prove that for each number \(n\) the computation of the following \(Cl\) algorithm is finite. Footnote 1: Let \(f(n,0)\stackrel{{ df}}{{=}}n\), and \(f(n,i+1)\stackrel{{ df}}{{=}}\begin{cases}f(n,i)/2&\text{if $f(n,i)$ is even}\\ 3\cdot f(n,i)+1&\text{if $f(n,i)$ is odd}\end{cases}\). Now, conjecture reads \(\forall_{n}\,\exists_{i}\,f(n,i)=1\). \[\left\{\begin{array}{l}\textbf{while }n\neq 1\textbf{ do}\\ \textbf{if }even(n)\textbf{ then }n:=n\div 2\textbf{ else }n:=3n+1\textbf{ fi}\\ \textbf{od}\end{array}\right\}\] (Cl) We need the following items * a formula \(\Theta_{Cl}\) such that it expresses the termination property of program \(Cl\), * a definition of relation \(\mathcal{C}\) of logical consequence operation (provability) and * a verifiable proof \(\Pi\) of the halting formula \(\Theta_{Cl}\). Ah, we need also a specification of the domain in which the algorithm is to be executed, i.e. 
the axioms \(\mathcal{A}x\) of the algebraic structure of natural numbers. Question 1. How to express the termination property of a program \(K\) as a formula \(\Theta_{K}\) (i.e. a Boolean expression)? Note, there is no a universal algorithm that builds the halting formula of a given program \(K\) as an appropriate first-order logical formula \(\Theta_{K}\). The existence of such algorithm would contradict the theorem on incompleteness of arithmetics, cf. Kurt Godel. According to Godel, the property _to be a natural number_ is not expressible by any set of first-order formulas. The reader may wish to note, that halting property of the algoritm \[q:=0;\textbf{while }q\neq n\textbf{ do }q:=q+1\textbf{ od}\] is valid in a data structure\(\mathfrak{A}\) iff \(n\) is a standard (i.e. reachable) natural number. Therefore the halting property allow to define the set of natural numbers. In this situation it seems natural to pass from first-order language to another more expressive language. There are three candidates: 1\({}^{*}\): a second-order language admitting variables and quantifiers over sets, 2\({}^{*}\): the language of infinte disjunctions and conjunctions \(\mathcal{L}_{\omega_{1}\omega}\) and 3\({}^{*}\): language of algorithmic logic. Problem with second order logic is in lack of adequate definition of consequence operation. True, we can limit our considerations to the case of finite sets (_aka_, weak second order logic). Still we do not know a complete set of axioms and inference rules for the weak second-order logic. Applying second-order logic to program analysis results in a heavy overhead. Because, first you have to translate the semantic property of the program into a property of a certain sequence or set, prove this property and make a backward translation. The question of whether this approach is acceptable to software engineers seems to be appropriate. The language of infinite disjunctions and conjunctions is not an acceptable tool for software engeeners for the programs are of finite length. We shall use the language and the consequence operation offered by algorithmic logic i.e. calculus of programs. We enlarge the set of well formed expressions: beside terms and formulas of first order language we accept algorithms and we modify the definition of logical formulas. The simplest algorithmic formulas are of the form: \[\left\{\text{\it algorithm}\right\}(\text{\it formula}).\] As an example of an algorithmic formula consider the expression \[\left\{q:=0;\textbf{while}\ q\neq n\ \textbf{do}\ q:=q+1\ \textbf{od} \right\}(n=q) \tag{1}\] The latter formula is valid iff every element \(n\) can be reached from \(0\) by adding \(1\). Now our goal is to deduce the following statement \[\forall_{n\in N}\ \left\{\begin{array}{l}\textbf{while}\ n\neq 1\ \textbf{do}\\ \ \ \ \ \ \textbf{if}\ even(n)\ \textbf{then}\ n:=\frac{n}{2}\\ \ \ \ \ \ \textbf{else}\ n:=3n+1\ \textbf{fi}\\ algorithm is finite iff there exist three natural numbers \(x,y,z\) such that: a) the equation \(n\cdot 3^{x}+y=2^{z}\) is satisfied and b) the computation of another algorithm \(IC\), cf. page 14, is finite, the algorithm computes on triples\((x,y,z)\). It is worthwhile to mention that the subsequent triples are decreasing during computation. The proof we wrote in [20] is overly complicated. Here we show that the 4-argument relation \[\{n,x,y,z\}:\{IC\}(true).\] is elementary recursive, since it may be expressed by an arithmetic expression with operator \(\sum\). 
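To make the triple \((x,y,z)\) tangible, here is a small Python sketch (ours; it is not the algorithm \(IC\) mentioned above): it runs the Collatz iteration on \(n\), counts the multiplications \(x\) and the halvings \(z\), accumulates the correction term \(y\), and checks the equation \(n\cdot 3^{x}+y=2^{z}\) when the iteration stops at 1.

```python
def collatz_triple(n):
    """Run the Collatz iteration on n >= 1 and record the triple (x, y, z):
    x multiplications (3n+1 steps), z halvings, and the correction term y,
    so that n * 3**x + y == 2**z holds when the iteration stops at 1."""
    n0, x, y, z = n, 0, 0, 0
    while n != 1:
        if n % 2 == 0:
            n //= 2
            z += 1
        else:
            y = 3 * y + 2 ** z  # accumulate the correction before the 3n+1 step
            n = 3 * n + 1
            x += 1
    assert n0 * 3 ** x + y == 2 ** z
    return x, y, z

for n in range(1, 10_000):
    collatz_triple(n)  # the assertion holds for every n that reaches 1
```

For every \(n\) whose computation terminates, the returned triple satisfies the equation; for instance \(n=3\) yields \((x,y,z)=(2,5,5)\) with \(3\cdot 3^{2}+5=32=2^{5}\).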
The present paper gives simpler arguments that are easier to follow.

## 2 Collatz tree

**Definition 1**: _The Collatz tree \(\mathcal{DC}\) consists of a subset \(D\subset N\) of the set \(N\) of natural numbers together with a function \(f\) defined on the set \(D\setminus\{0,1\}\),_
\[\mathcal{DC}=\langle D,f\rangle\]
_where \(D\subset N,\ 1\in D,\ f\colon D\setminus\{0,1\}\to D\). The function \(f\) is determined as follows_
\[f(n)=\begin{cases}n\div 2&\text{when }n\mod 2=0\\ 3n+1&\text{when }n\mod 2=1\wedge n\neq 1\end{cases}\]
_and the set \(D\) is the least set containing the number 1 and closed with respect to the function \(f\),_
\[D=\{n\in N:\exists_{i\in N}\ f^{i}(n)=1\}.\]
As is easy to see, this definition is highly entangled: deciding whether the set \(D\) contains every natural number is equivalent to the Collatz problem.

**Conjecture 2**: _The Collatz tree contains all the reachable natural numbers._

Figure 1: A fragment of the Collatz tree, levels 4–15. It does not include levels 0–3, which consist of the elements 1, 2, 4, 8.

## 3 Four algorithms, relatives of the \(Cl\) algorithm

In this section we present an algorithm \(Gr\) equivalent to the algorithm \(Cl\) and three algorithms \(Gr1,Gr2,Gr3\) that are successive extensions of the \(Gr\) algorithm.

**Lemma 3**: _The following algorithm \(Gr\) is equivalent to the Collatz algorithm \(Cl\)._
\[Gr:\left\{\begin{array}{l}\textbf{while }even(n)\textbf{ do }n:=n\div 2\textbf{ od};\\ \textbf{while }n\neq 1\textbf{ do}\\ \quad n:=3*n+1;\\ \quad\textbf{while }even(n)\textbf{ do }n:=n\div 2\textbf{ od}\\ \textbf{od}\end{array}\right\}\]
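As a quick empirical illustration of Lemma 3 (not a proof; the proof is in subsection 6.5), the Python sketch below runs \(Cl\) and \(Gr\) side by side and checks that \(Gr\) visits exactly the odd values visited by \(Cl\), in the same order, so both terminate on the same inputs.

```python
def cl_values(n):
    """All values visited by the Collatz algorithm Cl, starting from n."""
    out = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        out.append(n)
    return out

def gr_values(n):
    """Odd values produced by Gr: strip factors of 2, then repeat 3n+1 and strip."""
    while n % 2 == 0:
        n //= 2
    out = [n]
    while n != 1:
        n = 3 * n + 1
        while n % 2 == 0:
            n //= 2
        out.append(n)
    return out

# Gr visits exactly the odd values of Cl, in the same order.
for n in range(1, 5_000):
    assert gr_values(n) == [v for v in cl_values(n) if v % 2 == 1]
```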
Every set_ \(W_{x}\) _may be partitioned as follow_ \(W_{x}=\bigcup\limits_{j=0}^{\infty}\;S_{o_{j}}\)_. If_ \(j\neq j^{\prime}\) _then_ \(S_{o_{j}}\cap S_{o_{j^{\prime}}}=\emptyset\)__ 5. _For every_ \(i,j\in Nat\) _if_ \(i\neq j\) _then_ \(W_{i}\cap W_{j}=\emptyset\)_._ 6. _For every_ \(n,j\in Nat\) _if_ \(n\in W_{j}\wedge j>1\wedge n\mod 2=1\) _then_ \(3n+1\in W_{j-1}\)_._ 7. _The sequence of sets_ \(\left\{\bigcup\limits_{i=0}^{j}\;W_{i}\right\}_{j\in N}\) _is monotone, increasing._ Figure 2: Strata \(W_{0}--W_{4}\) of Collatz tree Let \(s\) be a variable not occurring in algorithm \(Gr1\). The following lemma states the partial correctness of the algorithm \(Gr1\) w.r.t. precondition \(s=n\) and postcondition \(s\in W_{i}\). **Lemma 7**: _Algorithm \(Gr1\) computes the number \(i\) of storey \(W_{i}\) of number \(n\),_ \[\{Gr1\}(true)\implies\big{(}(s=n)\implies\{Gr1\}(s\in W_{i})\big{)}\] Next, we present another algorithm \(Gr2\) and a lemma. \begin{tabular}{|l|l|} \hline **var** & \(n,l,i,x,y,z\) :integer ; k,m :**arrayof** integer; \\ \cline{2-3} & \(i:=0;\;l:=0;\) \\ \(\Gamma_{2}\): & **while** even(n) **do** := n \(\div\) 2; \(l:=l+1\) **od** ; \\ & \(z\), \(k_{i}:=l\); \(m_{i}\):=n; \(y:=0;\) \\ \cline{2-3} \(Gr2\): & **while**\(m_{i}\not=\)**1** **do** \\ \cline{2-3} & \(n\):= **3*n+1**; \(i:=i+1\); \(l:=0\) ; \\ \(\Delta_{2}\): & **while** even(n) **do** := n \(\div\) 2; \(l:=l+1\) **od** ; \\ & \(k_{i}:=l\); \(m_{i}\):=n; \(z:=z+k_{i}\); \(y:=3*y+2^{z}\); \(x:=i\) \\ \cline{2-3} \cline Figure 3: Tree of triples (levels 4 – 15) Subsequent algorithm \(Gr3\) exposes the history of the calculations of \(x,y,z\). \[Gr3:\left[\begin{array}{c|c}\hline\mbox{\bf varn},i,aux,mn:integer;\ k,m,X,Y,Z: \mbox{\bf arrayofinteger};\\ \Gamma_{3}:\left[\begin{array}{l}i:=0;\,k_{i}:=exp(n,2);\,m_{i}:=\frac{n_{i} }{2^{k_{i}}};\,\,mn:=m_{i};\\ Z_{i}:=k_{0};\,Y_{i},X_{i}:=0;\end{array}\right]\\ \hline\mbox{\bf while }n\cdot 3^{i}+Y_{i}\neq 2^{Z_{i}}\mbox{\bf do}\\ \Delta_{3}:\left[\begin{array}{l}aux:=3*n_{i}+1;\quad\quad i:=i+1;\\ k_{i}:=exp(aux,2);\,m_{i}:=aux/2^{k_{i}};\,\,mn:=m_{i};\\ Y_{i}:=3Y_{i-1}+2^{Z_{i-1}};\,\,Z_{i}:=Z_{i-1}+k_{i};\,\,X_{i}:=i\\ \hline\mbox{\bf od}\end{array}\right]\end{array}\right]\] See some properties of the algorithm \(Gr3\). **Lemma 9**: _Both algorithms \(Gr2\) and \(Gr3\) are equivalent with respect to the halting property. For every element \(n\) after each \(i\)-th iteration of algorim \(Gr3\), the following formulas are satisfied_ \[\begin{array}{ll}\hline\varphi:\ n\cdot 3^{i}+Y_{i}=m_{i}\cdot 2^{Z_{i}}&X_{i}=i \\ Z_{i}=\sum\limits_{j=0}^{i}\,k_{j}&Y_{i}=\sum\limits_{j=0}^{i-1}\left(3^{i-1-j} \cdot 2^{Z_{j}}\right)\\ \hline\end{array}\] _where the sequences \(\{m_{i}\}\)and \(\{k_{i}\}\) are determined by the recurrence (rec2). in other words, the following formula is valid in the structure \(\mathfrak{N}\)_ \[\mathfrak{N}\models\Gamma_{3}\bigcap\left\{\mbox{\bf if }m_{i}\neq 1\mbox{ \bf then }\Delta_{3}\mbox{ \bf fi}\right\}\varphi\] **Remark 10**: _Hence, for every element \(n\) algorithm \(Gr3\) calculates an increasing, monotone sequence of triples \(\langle i\,(=X_{i}),Y_{i},Z_{i}\rangle\)._ **Remark 11**: _We can say informally that the algorithm \(Gr3\) performs as follow_ \[\begin{array}{l}\hline i:=0;\\ \mbox{\bf while }n\notin W_{i}\mbox{ \bf do }i:=i+1\mbox{ \bf od}\\ \hline\end{array}\] Note, \(\{Gr_{3}\}\,(n\in W_{i})\) ### Hotel Collatz Hotel contains rooms of any natural number. Let \(n=2^{i}\cdot(2j+1)\). 
It means that the room number \(n\) is located in tower number \(j\) on the floor number \(i\). Each tower is equipped with an elevator (shown as a green line). Moreover, each tower is connected to another by a staircase that connects numbers \(k=2j+1\) and \(3k+1\). This is shown as a red arrow \(\overline{\langle k,3k+1\rangle}\). Figure 4: Hotel Collatz **Definition 12** (Hotel Collatz): _The graph \(HC=\langle V,E\rangle\) is defined as follows \(V=N\) i.e. the set of vertices is the set of standard, reachable, natural numbers \(E=\{\overrightarrow{\langle k,p\rangle}:\exists_{p}k=p+p\}\cup\{\overrightarrow{ \langle k,3k+1\rangle}:\exists_{p}k=p+p+1\}\) are edges of the graph_ **Note.** Don't forget, our drawing is only a small fragment of the infinite HC structure. The picture shows a small part of red arrows. We drew only those red arrows that fit entirely on a page. **Conjecture 13**: _The hotel Collatz is an infinite, connected, acyclic graph, i.e. it is a tree. Number 1 is the root of the tree._ Making use of the definition 5 one can formulate the following **Conjecture 14**: _The set \(\overline{W}\stackrel{{ df}}{{=}}\bigcup\limits_{x\in N}W_{x}\) is a partition of the set \(N\) of nodes of Hotel Collatz._ ## 4 On finite and infinite computations of Collatz algorithm Question Can some computation contain both reachable and unreachable elements? No. The subset of reachable elements is closed with respect of division by 2 and multiplication by 3. The same observation applies to the set of unreachable elements. We know, cf. subsection 6.1 that computations of nonreachable elements are infinite. ### Finite computations Let \(\mathfrak{M}=\langle M;0,1,+,=\rangle\) be any algebraic structure that is a model of elementary theory of addition of natural numbers, c.f. subsection 6.2. **Denotation.** Let \(\theta(x,y)\) be a formula. The expression \((\mu x)\theta(x,y)\) denotes the least element \(x(y)\) such that the value of the formula is truth. Example. The formula \((\mu x)(x+x=y\lor x+x+1=y)\) defines the operation \(\div\), i.e. \(x=y\div 2\). The following lemma gathers the facts established earlier. **Lemma 15**: _Let \(n\) be an arbitrary element of the structure \(\mathfrak{M}\). The following conditions are equivalent_ * _The sequence_ \(n_{0}=n\) _and_ \(n_{i+1}=\begin{cases}n_{i}\div 2&\text{when}\ \ n_{i}\mod 2=0\\ 3n_{i}+1&\text{when}\ \ n_{i}\mod 2=1\end{cases}\) _determined by the recurrence (rec1) contains an element_ \(n_{j}=1\)__ * _The computation of the algorithm_ \(Cl\) _is finite._ * _The sequence_ \(m_{0}=\frac{n}{2^{n_{0}}}\) _and_ \(m_{i+1}=\frac{3m_{i}+1}{2^{k_{i}}}\) _determined by the recurrence (rec2) stabilizes, i.e. there exist_ \(l\) _such that_ \(m_{k}=1\) _for all_ \(k>l\)__ * _The computation of the algorithm_ \(Gr\) _is finite._ * _The computation of the algorithm_ \(Gr1\) _is finite and the subsequent values of the variables_ \(M_{i}\) _and_ \(K_{i}\) _satisfy the recurrence (rec2)._ * _The computation of the algorithm_ \(Gr2\) _is finite and the subsequent values of the variables_ \(m_{i}\) _and_ \(k_{i}\) _satisfy the recurrence (rec2). The formula_ \(n\cdot 3^{x}+y=m_{i}\cdot 2^{z}\) _holds after each iteration of_ _while instruction, i.e. it is the invariant of the program_ \(\Delta_{2}\)_. 
The final valuation of variables_ \(x,y,z\) _and_ \(n\) _satisfies the equation_ \(n\cdot 3^{x}+y=2^{z}\) _._ * _The computation of the algorithm_ \(Gr3\) _is finite._ _The subsequent values of the variables_ \(m_{i}\) _and_ \(k_{i}\) _satisfy the recurrence (rec2)._ _The subsequent values of the variables_ \(X_{i},Y_{i},Z_{i}\) _form a monotone, increasing sequence of triples._ _The formula_ \(n\cdot 3^{X_{i}}+Y_{i}=m_{i}\cdot 2^{Z_{i}}\) _is satisfied after each_ \(i\)_-th iteration of the program_ \(Gr3\)_, i.e. the value of the following expression_ \(\{\Gamma_{3};\,\Delta_{3}^{i}\}(X_{i}+Z_{i})\) _is the total number of operations executed. The value of the variable_ \(Y_{i}\) _encodes the history of the computation till the_ \(i\)_-th iteration of_ \(\Delta_{3}\)__ Suppose that for a given element \(n\) the computation algorithm \(Gr2\) is finite. Let \(\bar{x}=(\mu x)\Bigg{(}n\cdot 3^{x}+\left[\sum\limits_{j=0}^{x-1}\big{(}3^{x-1-j} \cdot 2^{\sum\limits_{l=0}^{j}k_{l}}\big{)}\right]=2^{\sum\limits_{i=0}^{x}k_{j}} \Bigg{)}\). Put \(\bar{y}=\sum\limits_{j=0}^{\bar{x}-1}\big{(}3^{\bar{x}-1-j}\cdot 2^{\sum \limits_{l=0}^{j}k_{l}}\big{)}\) and \(\bar{z}=\sum\limits_{j=0}^{\bar{x}}k_{j}\). We present the algorithm \(IC^{\prime}\), which is a slightly modified version of the algorithm \(IC\) devised in [10]. \[IC^{\prime}:\left\{\begin{array}{l}\mbox{\bf var x},\mbox{\rm y},\mbox{\rm z },k:integer,Err:Boolean;\\ Err:=false;\\ \mbox{\bf while}\mbox{\rm x}+\mbox{\rm y}+\mbox{\rm z}\neq 0\mbox{\rm\ \bf do}\\ \mbox{\bf\ \ **Lemma 17**: _For every element \(n\) the following conditions are equivalent_ 1. _computation of Collatz algorithm_ \(Cl\) _is finite,_ 2. _there exists the_ _LEAST _element_ \(x\) _such that the following equality holds_ \[n\cdot 3^{x}+\left(\sum\limits_{j=0}^{x-1}3^{x-1-j}\cdot 2\sum\limits_{i=0}^{j}k_{ i}\right)=2\sum\limits_{j=0}^{\sum\limits_{j=0}^{x}k_{j}}.\] (Mx) \[\mbox{where the sequence }\{k_{i}\}\mbox{ is determined by the element }n\mbox{ in accordance to the recurrence (rec2)}.\] **Proof.** The implication \((i)\Rightarrow(ii)\) follows from lemma 15(_vii_). Consider the inverse implication \((ii)\Rightarrow(i)\). If \((ii)\) holds, then the computation of program \(Gr_{3}\) reaches 1 after \(x\)-th iteration of internal instruction \(\Delta_{3}\). The value of \(m_{x}\) is then 1. The computation performs \(x\) multiplications and \(z=\sum\limits_{j=0}^{x}k_{j}\) divisions. The value of \(y=\left(\sum\limits_{j=0}^{x-1}\bigl{(}3^{x-1-j}\cdot 2\sum\limits_{i=0}^{j}k_{ i}\bigr{)}\right)\) codes the history of the computation. Which means: \(x_{0}\) is the number of multiplication by 3, \(z=\sum\limits_{j=0}^{x}k_{j}\) is the total number of divisions by 2 and for every \(0\leq j\leq x-1\) the number \(k_{j}\) is the number of divisions by 2 executed in between the \(j\)-th and \(j+1\)-th execution of multiplication by 3. The algorithm \(Cl\) executes \(x+z\) iterations. The lemma 17 gives the halting formula i.e. a satisfactory and necessary condition for the computation of the Collatz program to be finite. We shall summarize the considerations on finite computations in the following commutative diagram. ### Infinite computations Do infinite computations exist? There are two answers _yes_ and _no_. \begin{tabular}{l l} \hline Yes & Imagine your computer system is (_maliciously_) handled by a hacker. This \\ & can be done by preparing its hardware or software (e.g. someone modifies \\ & the class Int in Java). 
To hide the damage from the user, the hacker may \\ & come with a correct proof that all axioms of natural numbers (e.g. of \\ & Presburger's system) are valid. Yet, for many \(n\) execution of the Collatz \\ & algorithm will not terminate. See subsections 4.4 and 6.1. \\ No & If argument of Collatz procedure is a standard (i.e. reachable) natural \\ & number then the computation is finite. \\ \hline \end{tabular} Our aim is to prove the lemma **Lemma 18** (noCycle): _There is no reachable, natural number \(n\) such that Collatz computation for \(n\) is a cycle._ **Proof.** Suppose that there exists a cycle and certain reachable number \(n\) is in this cycle. Let \(q\) be the length of the cycle. Remark, in the following formulas (4) - (7) the precedent of the implication holds and implies the inequality in its successor. **Note**. Whenever the operator \(-\) of subtraction appears the first argument is bigger than the second one. Similarly, whenever the operator \(\div\) appears the dividend is bigger than divisor. In the sequel we make sure that \(a>b\) before writing \(a\div b\). \[n\cdot 3^{0}+0=n\cdot 2^{0}\wedge n>1 \tag{4}\] \[n\cdot 3^{q}+Y_{q}=n\cdot 2^{Z_{q}}\implies n>(2^{Z_{q}}-Y_{q})\div 3^{q} \tag{5}\] \[n\cdot 3^{2q}+Y_{2q}=n\cdot 2^{Z_{2q}}\implies n>(2^{Z_{2q}}-Y_{2q})\div 3^{2q} \tag{6}\] \[\cdots\] \[n\cdot 3^{r\cdot q}+Y_{r\cdot q}=n\cdot 2^{Z_{r\cdot q}}\implies n>(2^{Z_{r\cdot q}}-Y_{r\cdot q })\div 3^{r\cdot q}\qquad\mbox{where $r\in N$} \tag{7}\] \[\cdots\] From the equality \(n\cdot 3^{q}+Y_{q}=n\cdot 2^{Z_{q}}\) we infer inequality \(3^{q}<2^{Z_{q}}\). From inequality \(3^{q}<2^{Z_{q}}\) and equality \(3^{q}+(n-1)\cdot 3^{q}+Y_{q}=2^{Z_{q}}+(n-1)\cdot 2^{Z_{q}}\) we infer inequality \[3^{q}+Y_{q}<2^{Z_{q}} \tag{8}\] Since the computation is a cycle we have \(Z_{2q}=Z_{q}+Z_{q}\) and \(Y_{2q}=3^{q}\cdot Y_{q}+2^{Z_{q}}\cdot Y_{q}\). Similarly, \(Z_{(r+1)q}=Z_{rq}+Z_{q}\) and \(Y_{(r+1)q}=3^{q}\cdot Y_{rq}+2^{Z_{rq}}\cdot Y_{q}\). Next, we are going to prove that the sequence \[1,(2^{Z_{q}}-Y_{q})\div 3^{q},(2^{Z_{2q}}-Y_{2q})\div 3^{2q},\cdots,(2^{Z_{r\cdot q }}-Y_{r\cdot q})\div 3^{r\cdot q},(2^{Z_{(r+1)\cdot q}}-Y_{(r+1)\cdot q})\div 3^{(r+ 1)\cdot q},\cdots \tag{9}\] is infinite, increasing sequence of reachable, natural numbers. The proof is by induction with respect to \(r\). (**Base**). From inequality \(3^{q}+Y_{q}<2^{Z_{q}}\) we obtain \(3^{q}<2^{Z_{q}}-Y_{q}\) and \[1<(2^{Z_{q}}-Y_{q})\div 3^{q}.\] (**Induction step**). Without loss of the generality we can consider the step from \(r=1\) to \(r=2\). Once again we start with the valid inequality (8). \[3^{q}+Y_{q}<2^{Z_{q}}.\] Our next inequality is valid too \[3^{q}\cdot 2^{Z_{q}}+2^{Z_{q}}\cdot Y_{q}<2^{Z_{2q}}.\] Making use of remarks made earlier (just below the inequality (8)) we get \[(2^{Z_{q}}-Y_{q})\cdot 3^{q}<2^{Z_{2q}}-Y_{2q}\] which is equivalent to the inequality \[(2^{Z_{q}}-Y_{q})\div 3^{q}<(2^{Z_{2q}}-Y_{2q})\div 3^{2q}\] Hence, the induction step is proved. (The reader may wish to verify our reasoning for the step from \(r\)-th to \(r+1\)-th iteration of the cycle.) The sequence (9) is increasing, infinite. The number \(n\) is bigger than any element of this sequence. Therefore, the number \(n\) is bigger than any reachable number. This contradicts the assumption that number \(n\) is reachable. 
Now, we are going to prove the following **Lemma 19** (no Divergent computations): _If for certain number \(n\) computation of Collatz algorithm is infinite then \(n\) is not a reachable, natural number._ We remark that the semantical property: _for a given element \(n\) the computation of Collatz algorithm may be continued at will_, i.e. it is an infinite computation, is expressed by the following formula \[\left\{\Gamma_{3}\right\}\bigcap\left\{\Delta_{3}\right\}(mn\neq 1)\] (II) \[\text{where denotations }\Gamma_{3}\text{ and }\Delta_{3}\text{ and }mn\text{ were introduced on page \ref{eq:concon}, see program }Gr_{3}.\] We verify this assertion in a few steps * Define an infinite sequence of formulas \(\left\{\vartheta_{i}\right\}_{i=0}^{\infty}\) as follows \[\vartheta_{i}\stackrel{{ df}}{{=}}\left\{\Gamma_{3}\right\}\left\{ \Delta_{3}\right\}^{i}(m_{i}=1)\] * Let \(V\) be the set of all variables appearing in program \(Gr_{3}\). We put \(v_{0}(n)=n_{0}\), the values of the remaining variables are not important. For \(i=1,2,\dots\) we put \(v_{i}=(\left\{\Gamma_{3}\right\}\left\{\Delta_{3}\right\}^{i})_{\mathfrak{N}} (v_{0})\) * By the definition of semantics of general iteration quantifier we have the following equation \[\left(\left\{\Gamma_{3}\right\}\bigcap\left\{\Delta_{3}\right\}(mn\neq 1) \right)_{\mathfrak{N}}(v)=\mathbf{g.l.b.}\left\{(\vartheta_{i})_{\mathfrak{N}} \left(v_{0}\right)\right\}_{i\in N}=\mathbf{g.l.b.}\left\{(m_{i}\neq 1)_{ \mathfrak{N}}\left(v_{i}\right)\right\}_{i\in N}\] For it says: after every iteration of subprogram \(\left\{\Gamma_{3};\Delta_{3}^{i}\right\}\) of the program \(Gr_{3}\) the value of the variable \(mn\) is not equal \(1\). Now, he following formula \[\left\{i:=0;\right\}\bigcap\left\{i:=i+1\right\}(n\cdot 3^{i}+\sum_{j=0}^{i-1 }3^{i-1-j}\cdot 2^{\sum_{l=0}^{j}k_{l}}\neq 2^{\sum_{l=0}^{i}k_{l}})\] (II) where the values of \[k_{l}\] are defined by recurrence rec2 see page 6 does express the semantical property: _for every reachable, natural number \(i\) the inequality \((n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}\neq 2^{Z_{i}})\) holds._ By transposition, it is equivalent to following statement: _there is no reachable number \(i\) such that the equality \((n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}})\) holds._ In other words \[l.u.b.(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}})_{ \mathfrak{N}}(v)=truth\hskip 28.452756pt\text{where }i=1,2,\dots,\] Hence our goal is to prove the implication \((\Pi\implies\Xi)\). We begin with simple observation that every formula of the following set \(S_{a}\) is a tautology \[S_{a}\stackrel{{ df}}{{=}}\left\{\ (\left\{\Gamma_{3}\right\} \left\{\Delta_{3}\right\}^{r}(m_{i}\neq 1)\implies\left\{\Gamma_{3}\right\} \left\{\Delta_{3}\right\}^{r}(m_{i}\neq 1))\ \right\}_{r\in N}\] here \(\underline{r}=\underbrace{1+1+1+\cdots+1}_{r\ times}\) is any reachable, natural number. For every natural number \(r\) the following equivalence is a theorem of algorithmic theory of numbers \(\mathcal{ATN}\) \[\mathcal{ATN}\vdash\left(\ \left\{\Gamma_{3}\right\}\left\{\Delta_{3}\right\}^{r} \left(m_{i}=1\right)\Leftrightarrow\left\{\Gamma_{3}\right\}\left\{\Delta_{3} \right\}^{r}\left(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}} \right)\ \right)\] see lemma 15(_vii_), page 13. 
Hence the following set \(S_{b}\) consists of theorems of theory \(\mathcal{ATN}\) \[S_{b}\,\overset{\text{\tiny{\it def}}}{=}\left\{\ \left(\left\{\Gamma_{3} \right\}\left\{\Delta_{3}\right\}^{r}\left(m_{i}\neq 1\right)\Rightarrow \left\{\Gamma_{3}\right\}\left\{\Delta_{3}\right\}^{r}\left(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1 -j}\cdot 2^{Z_{j}}\neq 2^{Z_{i}}\right)\right)\,\right\}_{r\in N}\] Look at programs \(\left\{\Gamma_{3}\right\}\) and \(\left\{\Delta_{3}\right\}\), on page 11. Note that first program contains instruction \(i:=0\) and the second program contains the instruction \(i:=i+1\), other instructions can be dropped. Hence the set \(S_{c}\) of formulas \[S_{c}\,\overset{\text{\tiny{\it def}}}{=}\left\{\ \left\{\Gamma_{3} \right\}\left\{\Delta_{3}\right\}^{r}\left(n_{i}\neq 1\right)\Rightarrow \left\{i:=0\right\}\left\{i:=i+1\right\}^{r}\left(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1 -j}\cdot 2^{Z_{j}}\neq 2^{Z_{i}}\right)\right\}_{r\in N}\] contains only theorems of theory \(\mathcal{ATN}\). Now by induction with respect to \(r\) we can prove that every formula of the set \(S_{d}\) is a theorem of theory \(\mathcal{ATN}\). I.e. we introduce the universal iteration quantifier \(\bigcap\) in the precedent of every implication of the set \(S_{c}\) of theorems of theory \(\mathcal{ATN}\). \[S_{d}\,\overset{\text{\tiny{\it def}}}{=}\left\{\left\{\Gamma_{3}\right\} \bigcap\left\{\Delta_{3}\right\}\left(m_{i}\neq 1\right)\Rightarrow \left\{i:=0\right\}\left\{i:=i+1\right\}^{r}\left(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1 -j}\cdot 2^{Z_{j}}\neq 2^{Z_{i}}\right)\right\}_{r\in N}\] In the proof we used axiom \(Ax_{23}\) of calculus of programs, de Morgan's laws (case of infinite operations in a Boolean algebra) and propositional calculus, c.f. page 30. Now, we are ready to use the inference rule \(R_{5}\), see page 30 and to obtain the desired theorem of algorithmic theory \(\mathcal{ATN}\) \[\underbrace{\left\{\Gamma_{3}\right\}\bigcap\left\{\Delta_{3}\right\}\left(m_ {i}\neq 1\right)}_{\text{\tiny{\it n}}}\Rightarrow\underbrace{\left\{i:=0 \right\}\bigcap\left\{i:=i+1\right\}\left(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j} \cdot 2^{Z_{j}}\neq 2^{Z_{i}}\right)} \tag{10}\] which reads: _if for a given number \(n\) the computation of Collatz algorithm is infinite then the number \(n\) is an unreachable element of non-standard model \(\mathfrak{M}\) of Presburger arithmetic._ Why it is so? * the formula \(\Xi\) is eqivalent to \[\neg\left\{i:=0\right\}\bigcup\left\{i:=i+1\right\}\left(n\cdot 3^{i}+\sum_{j=0 }^{i-1}\,3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}}\right)\] * If the formula \(\Xi\) holds then the value of \(i\) such that the equation \(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}}\) is satisfied, must be an unreachable number. * Note, if the equation \(n\cdot 3^{i}+\sum_{j=0}^{i-1}3^{i-1-j}\cdot 2^{Z_{j}}=2^{Z_{i}}\) holds then the elements \(n\) and \(i\) are either both reachable or both unreachable numbers,c.f. remark 27 on page 24. ### Collatz theorem Till now we proved that Collatz conjecture is valid in the structure \(\mathfrak{N}\) of standard (reachable) natural numbers as it is witnessed by the following lemma. **Lemma 20**: _Let \(n\) be any standard element of the structure \(\mathfrak{N}\). The computation of Collatz algorithm \(Cl\) that begins with \(n\) is finite._ **Proof.** The proof follows immediately from the lemmas 17 and 19. 
**Corollary 21**: _Three conjectures 2, 13 and 14 formulated above are valid statements._ We are ready to prove the main result of this paper. **Theorem 22**: _The formula \(\Theta_{CL}\) is a theorem of the algorithmic theory of natural numbers \(\mathcal{ATN}\vdash\Theta_{CL}\)._ \[\underbrace{\{n:=1\}\bigcap\{n:=n+1\}}_{\forall_{n\in N,\;n\geq 1}}\left\{ \begin{array}{l}\textbf{while}\;n\neq 1\;\textbf{do}\\ \textbf{if}\;odd(n)\textbf{then}\\ n:=3*n+1\\ \textbf{else}\\ n:=n\div 2\\ \textbf{fi}\\ \textbf{od}\end{array}\right\}(n=1)\] ( \[\Theta_{CL}\] ) **Proof.** The following formula \(\Psi_{CL}\) is valid in the algebraic structure \(\mathfrak{N}\) i.e. in the standard model of theory \(\mathcal{ATN}\). \[\bigcup\left\{\begin{array}{l}\textbf{if}\;n\neq 1\;\textbf{then}\\ \textbf{if}\;odd(n)\textbf{then}\\ n:=3*n+1\\ \textbf{else}\\ n:=n\div 2\\ \textbf{fi}\\ \textbf{fi}\end{array}\right\}(n=1)\] ( \[\Psi_{CL}\] ) For we have established two facts * if the argument \(n\) of the Collatz program is an unreachable element then the computation is infinite, c.f. remark (27 on page 24), * if for a certain argument \(n\) the computation of the Collatz program is infinite then the element is an unreachable number c.f. (lemma 20). Now, we ought to precede the formula \(\Psi_{Cl}\) by a general quantifier \(\forall_{n\in N}\). However, the membership predicate \(\in\) does not belong to the language of any theory considered so far. Fortunately, the expression can be replaced by \[\underbrace{\{n^{\prime}:=1\}\bigcap\{n^{\prime}:=n^{\prime}+1\}\left(n=n^{ \prime}\wedge\alpha(n)\right)}_{\forall_{n\in N,\;n\geq 1}}\] For the correctness of this replacement you can consult [10], thm 6.1, page 15. Hence, we know that the following formula \[\underbrace{\{n^{\prime}:=1\}\bigcap\{n^{\prime}:=n^{\prime}+1\}}_{\forall_{ n\in N,\;n\geq 1}}\Big{(}\big{(}n=n^{\prime}\big{)}\wedge\bigcup\left\{ \begin{array}{l}\textbf{if}\;n\neq 1\;\textbf{then}\\ \textbf{if}\;odd(n)\textbf{then}\\ n:=3*n+1\\ \textbf{else}\\ n:=n\div 2\\ \textbf{fi}\\ \textbf{fi}\end{array}\right\}(n=1)\Big{)}\] ( \[\Theta^{\prime}_{CL}\] ) is valid in the structure \(\mathfrak{N}\) of standard, reachable natural numbers. Note, the formulas \((\Theta^{\prime}_{CL})\) and \((\Theta_{CL})\) are equivalent, they express the halting property of Collatz algorithm. Now, we shall use two meta-mathematical facts, c.f. [11] pages 94 and 155. \begin{tabular}{l l} _Completeness_ & For every consistent algorithmic theory \(\mathcal{T}\), for any formula \(\alpha\), the formula is a theorem of \(\mathcal{T}\) iff the formula \(\alpha\) is valid in every model of theory \(\mathcal{T}\). \\ _Categoricity_ & Every two models of the theory \(\mathcal{ATN}\) are isomorphic. \\ _thm_ & From these observations we obtain the desired conclusion \\ \end{tabular} \[\mathcal{ATN}\vdash\underbrace{\{n:=1\}\bigcap\{n:=n+1\}}_{\forall_{n\in N, \,n\geq 1}}\left\{\begin{array}{l}\textbf{while}\ n\neq 1\textbf{do}\\ \textbf{if}\ odd(n)\textbf{then}\\ n:=3*n+1\\ \textbf{else}\\ n:=n\div 2\\ \textbf{fi}\\ \textbf{od}\end{array}\right\}(n=1)\] ( \[\Theta_{CL}\] ) Someone may ask: why this awkward prefix in the formula \(\Theta_{CL}\)? Note the difference between phrases \(\forall_{n}\) and \(\forall_{n\in N}\). ### A counterexample We argue, that the formulation of the Collatz problem requires more precision. For there are several algebraic structures that can be viewed as structure of natural numbers of addition. 
Some of them admit infinite computations of Collatz algorithm. We recall less known fact: arithmetic (i.e. first-order theory of natural numbers) has standard (_Archimedean_) model \(\mathfrak{N}\) as well as another _non-Archimedean_ model \(\mathfrak{M}^{3}\). The latter structure allows for the existence of infinitely great elements. Goedel's incompleteness theorem shows that there is no elementary theory \(T\) of natural numbers, such that every model is isomorphic to the standard model. Two things are missing from the commonly accepted texts: 1) What do we mean by proof? 2) what properties of natural numbers can be used in the proof? We recall an algebraic structure \(\mathfrak{M}\) that models [10] all axioms of elementary theory of addition of natural numbers, yet it admits unreachable elements [11]. It means that the model contains element \(\varepsilon\), such that the computation of Collatz algorithm that starts with \(\varepsilon\) is infinite. **Example** of a finite execution \[\langle 13,0\rangle\stackrel{{\times 3\pm 1}}{{=}}{1}\, \langle 40,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 20,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 10,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 5,0\rangle\stackrel{{\times 3\pm 1}}{{=}}{1}\, \langle 16,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 4,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 2,0\rangle\stackrel{{\pm 2}}{{=}}{2}\, \langle 1,0\rangle\] **Example** of an infinite execution \[\langle 8,\frac{1}{2}\rangle\stackrel{{\times 2}}{{=}}\, \langle 4,\frac{1}{4}\rangle\stackrel{{\pm 2}}{{=}}\langle 2,\frac{1}{8} \rangle\stackrel{{\pm 2}}{{=}}\langle 1,\frac{1}{16}\rangle \stackrel{{\times 3\pm 1}}{{=}}\langle 4,\frac{3}{16}\rangle \stackrel{{\pm 2}}{{=}}\langle 2,\frac{3}{32}\rangle \stackrel{{\pm 2}}{{=}}\langle 1,\frac{3}{64}\rangle, \stackrel{{\times 2\pm 1}}{{=}}\langle 4,\frac{9}{64}\rangle \stackrel{{\pm 2}}{{=}}\langle 2,\frac{9}{128}\rangle \stackrel{{\pm 2}}{{=}}\cdots\] As you can guess, the data structure contains pairs \(\langle k,w\rangle\) where \(k\) is an integer and \(w\) is a non-negative, rational number. The addition operation is defined componentwise. An element \(\langle k,w\rangle\) is _even_ if k is even, otherwise is _odd._. A pair \(\langle k,w\rangle\) divided by 2 returns \(\langle k\div 2,w\div 2\rangle\). The reader may prefer to think of complex numbers instead of pairs, e.g. \((2+\frac{9}{128}i)\) may replace the pair \(\langle 2,\frac{9}{128}\rangle\). The following observation seems to be of importance:. **Remark 23**: _There exists an infinite computation \(\mathbf{c}\) of Collatz algorithm in the structure \(\mathfrak{M}\), such that the computation \(\mathbf{c}\) does not contain a cycle, and the sequence of pairs is not diverging into still growing pairs. The latter means, that there exist two numbers \(l_{1}\in\mathcal{Z}\) and \(l_{2}\in\mathcal{Q}\), such that for every step \(\langle k,v\rangle\) of computation \(\mathbf{c}\), the inequalities hold \(k<l_{1}\wedge v<l_{2}\)._ More details can be found in subsection 6.1. ## 5 Final remarks Our message does not limit itself to the proof of Collatz theorem. We show that the algorithmic language of program calculus is indispensable for expressing the semantic properties of programs. Halting property of program, axiomatic specification of data structure of natural numbers can not be expressed by (sets) of first-order formulas. 
We show the potential of the calculus of programs as a tool for

* the specification of semantical properties of software, and
* the verification of software against such specifications.

We hope the reader will forgive us a moment of insistence (is it propaganda?). The calculus of programs \(\mathcal{AL}\) is a handy tool, and there are good reasons to use it:

* The language of the calculus \(\mathcal{AL}\) contains algorithms (programs) and _algorithmic formulas_ besides terms and first-order formulas.
* Any semantical property of an algorithm can be _expressed_ by an appropriate algorithmic formula, be it termination, correctness or another property.
* Algorithmic formulas make it possible to create complete, categorical _specifications_ of data structures in the form of algorithmic theories.
* The calculus of programs \(\mathcal{AL}\) offers a _complete_ set of tools for proving theorems of algorithmic theories.

For over 50 years we have been studying the program calculus and using it in the specification and verification of software. Some examples can be found in [10], [14] and other publications. In spite of the appearance of \(\omega\)-rules one can verify proofs in an automatic way. Footnote 4: I.e. the inference rules with enumerably many premises.

### Historical remarks

Paul Erdős said of the Collatz conjecture: _"Mathematics may not be ready for such problems."_ We disagree. In our opinion a consortium of Alfred Tarski, Kurt Gödel and Stephen C. Kleene was able to solve the Collatz conjecture already in 1937. (Tarski was the advisor of M. Presburger and S. Jaskowski.)

* Mojzesz Presburger proved the completeness and decidability of the arithmetic of addition of natural numbers in 1929.
* In the same year Stanislaw Jaskowski found a non-standard model of Presburger's theory (see a note of A. Tarski of 1934).
* Kurt Gödel (1931) published his theorem on the incompleteness of Peano's theory. His result is a fact of logic, not of arithmetic.
* Thoralf Skolem (in 1934) wrote a paper on the non-characterization of the series of numbers by means of a finite or countably infinite number of statements with exclusively individual variables [15].
* Stephen C. Kleene showed (in 1936) that any recurrence that defines a computable function can be replaced by the operation of effective minimum (nowadays one can say that every recursive function on the integers is programmable by means of the **while** instruction); professors Rozsa Peter and Laszlo Kalmar (specialists in the theory of recursive functions) were able to point this out.

Andrzej Mostowski hoped that many arithmetic theorems independent of the Peano axioms would be found. The Collatz theorem is an example. The theorem on the termination of Euclid's algorithm is another example of a theorem which is valid yet unprovable in Peano theory. The law of Archimedes is yet another example. Note that both theorems need to be stated as algorithmic formulas; there is no first-order formula that expresses the termination property of Euclid's algorithm or the law of Archimedes.

### Further research

Perhaps you noticed that two parts of our proof are formal (subsection 6.6) or nearly formal (subsection 4.2) proofs. We hope that a complete, easily verifiable proof will be produced in the future.

#### Acknowledgments

Andrzej Szalas pointed out to us the lacunae in our proofs. Hans Langmaack and Wiktor Danko sent some comments. Antek Ciaputa helped with the calculations and with drawing Hotel Collatz.
## 6 Supplements For the reader's convenience, in this section we have included some definitions, some useful theorems, and samples of proofs in algorithmic natural number theory. ### A structure with counterexamples _where Collatz computations may be of infinite length_ Here we present some facts that are less known to the IT community. These facts may seem strange. The reader may doubt the importance of those facts. Yet, it is worth considering, non-standard data structures do exist, and this fact has ramifications. Strange as they seem, still it is worthwhile to be aware of their existence. Now, we will expose the algebraic structure \(\mathfrak{J}\), which is a model of the theory \(Ar\), i.e. all axioms of theory \(Ar\) are true in the structure \(\mathfrak{J}\). First we will describe this structure as mathematicians do, then we will write a class (i.e. a program module) implementing this structure. #### Mathematical description of Jaskowski's structure \(\mathfrak{J}\) is an algebraic structure \[\mathfrak{J}=\langle M;\,\underline{\mathbf{0}},\underline{\mathbf{1}},\oplus ;=\rangle\] (NonStandard) such that \(M\) is a set of complex numbers \(k+iw\), i.e. of pairs \(\langle k,w\rangle\), where element \(k\in\mathbb{Z}\) is an integer, and element \(w\in\mathbb{Q}^{+}\) is a rational, non-negative number \(w\geq 0\) and the following requirements are satisfied: * for each element \(k+iw\) if \(w=0\) then \(k\geq 0\), * \(\underline{\mathbf{0}}\stackrel{{ def}}{{=}}\boxed{\langle 0+ \underline{\mathbf{0}}\rangle}\), * \(\underline{\mathbf{1}}\stackrel{{ def}}{{=}}\boxed{\langle 1+ \underline{\mathbf{0}}\rangle}\), * the operation \(\oplus\) of addition is determined as usual \[(k+\imath w)\oplus(k^{\prime}+\imath w^{\prime})\stackrel{{\mbox{\scriptsize$ df$}}}{{=}}(k+k^{\prime})+\imath(w+w^{\prime}).\] * the predicate = denotes as usual identity relation. **Lemma 24**: _The algebraic structure \(\mathfrak{J}\) is a model of first-order arithmetic of addition of natural numbers \(\mathcal{T}\)._ The reader may check that every axiom of the \(\mathcal{T}\) theory (see definition32, p.25), is a sentence true in the structure \(\mathfrak{J}\), cf. next subsection 6.2. The substructure \(\mathfrak{N}\subset\mathfrak{J}\) composed of only those elements for which \(w=0\) is also a model of the theory \(\mathcal{T}\). It is easy to remark that elements of the form \(\langle k,0\rangle\) may be identified with natural numbers \(k\), \(k\in N\). Have a look at table 1 The elements of the structure \(\mathfrak{N}\) are called _reachable_, for they enjoy the following algorithmic property \[\forall_{n\in N}\,\{y:=\mathbf{0};\mbox{\bf while }y\neq n\ \mathbf{do}\ y:=y+ \mathbf{1}\ \mathbf{od}\}(y=n)\] The structure \(\mathfrak{J}\) is not a model of the \(\mathcal{ATN}\), algorithmic theory of natural numbers, cf. subsection 6.4. Elements of the structure \(\langle k,w\rangle\). such as \(w\neq\mathbf{0}\) are _unreachable_. i.e. for each element \(x_{0}=\langle k,w\rangle\) such that \(w\neq 0\) the following condition holds \[\neg\{y:=\mathbf{0};\mbox{\bf while }y\neq x_{0}\ \mathbf{do}\ y:=y+ \mathbf{1}\ \mathbf{od}\}(y=x_{0})\] The subset \(\mathfrak{N}\subset\mathfrak{J}\) composed of only those elements for which \(w=0\) is a model of the theory \(\mathcal{ATN}\) c.f. subsection 6.4. The elements of the structure \(\mathfrak{N}\) are called _reachable_. 
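To make the unreachable elements concrete, the following Python sketch (ours, independent of the class StrukturaM presented later in this supplement) represents elements \(k+\imath w\) of \(\mathfrak{J}\) as pairs of an integer and a Fraction, with parity and halving read off the integer part, and iterates the Collatz step on the unreachable element \(\langle 8,\frac{1}{2}\rangle\). The run reproduces the infinite execution listed in subsection 4.4 and never reaches \(\langle 1,0\rangle\).

```python
from fractions import Fraction

# Elements of the structure are pairs <k, w>: k an integer, w a non-negative rational.
ONE = (1, Fraction(0))

def is_even(e):
    k, w = e
    return k % 2 == 0          # parity is decided by the integer part k

def half(e):
    k, w = e
    return (k // 2, w / 2)     # <k, w> divided by 2

def triple_plus_one(e):
    k, w = e
    return (3 * k + 1, 3 * w)  # 3 * <k, w> + <1, 0>

n = (8, Fraction(1, 2))        # an unreachable element (w != 0)
for _ in range(9):
    n = half(n) if is_even(n) else triple_plus_one(n)
    print(n)                   # <4,1/4>, <2,1/8>, <1,1/16>, <4,3/16>, <2,3/32>, ...
assert n != ONE                # the guard n != 1 of the Collatz loop never becomes false
```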
A very important theorem of the foundations of mathematics is **Lemma 25**: _The structures \(\mathfrak{N}\) and \(\mathfrak{J}\) are not isomorphic._ For the proof see [10], p. 256. As we will see in a moment, this fact is also important for IT specialists. An attempt to visualize structure \(\mathfrak{M}\) is presented in the form of table 1. The universe of the structure \(\mathfrak{J}\) decomposes onto two disjoint subsets (one green and one red). Every element of the form \(\langle k,0\rangle\) (in this case \(k>0\)) represents the natural number \(k\). Such elements are called _reachable_ ones. Note, **Definition 26**: _An element \(n\) is a standard natural number (i.e. is reachable ) iff the program of adding ones to initial zero terminates_ \[n\in N\stackrel{{\mbox{\scriptsize$df$}}}{{\Leftrightarrow}} \{q:=\mathbf{0};\mbox{\bf while }q\neq n\ \mathbf{do}\ q:=q+\mathbf{1}\ \mathbf{od}\}(q=n)\] _or, equivalently_ \[n\in N\stackrel{{\mbox{\scriptsize$df$}}}{{\Leftrightarrow}} \{q:=\mathbf{0}\}\bigcup\{\mbox{\bf if }n\neq q\ \mbox{\bf then }q:=q+\mathbf{1}\ \mathbf{ fi}\}(q=n)\] Note that the subset that consists of all non-reachable elements is well separated from the subset of reachable elements. Namely, every reachable natural number is less that any unreachable one. Moreover, there is no least element in the set of unreachable elements. I.e. the principle of minimum does not hold in the structure \(\mathfrak{M}\). Moreover, for every element \(n\) its computation contains either only standard, reachable numbers or is composed of only unreachable elements. This remark will be of use in our proof. **Remark 27**: _For every element \(n\) the whole Collatz computation is either in green or in reed quadrant of the table 1._ Elements of the structure \(\mathfrak{M}\) are ordered as usual \[\forall_{x,y}\ x<y\stackrel{{\mbox{\tiny$\mbox{\tiny$\mbox{\tiny$ \mbox{\tiny$\mbox{\tiny$\mbox{\tiny$\mbox{\tiny$\mbox{\tiny$\mbox{\tiny$\mbox{\tiny$ \mbox{\tiny$\mbox{\tiny$\mbox{\mbox{\tiny$\mbox{\mbox{\tiny$}$}}}}}}}}}}}\to \exists_{x\neq\mathbf{0}}\ x+z=y.\] Therefore, each reachable element is smaller than every unreachable element. The order defined in this way is the lexical order. (Given two elements \(p\) and \(q\), the element lying higher is bigger, if both are of the same height then the element lying on the right is bigger.) The order type is \(\omega+(\omega^{*}+\omega)\cdot\eta\) **Remark 28**: _The subset of unreachable elements (red ones on the table 1) does not obey the principle of minimum._ Perhaps you have already noticed that the \(\mathfrak{M}\) is a computable structure. The following is a class that implements the structure \(\mathfrak{M}\). The implementation uses the integer type, we do not introduce rational numbers explicitly. 
\begin{tabular}{l} \hline unit StrukturaM: class; \\ unit Elm: class(k,li,mia: integer); \\ begin \\ if ma=0 then raise Error fi; \\ if li * ma <0 then raise Error fi; \\ if li=0 and k<0 then raise Error fi; \\ end Elm; \\ add: function(x,y:Elm): Elm; \\ begin \\ result := new Elm(x,k+y,k, x,ll*y,mia+x.mia*y,li, x,mia*y,mia ) \\ end add; \\ unit one : function:Elm; begin result:= new Elm(1,0,2) end one; \\ unit zero : function:Elm; begin result:= new Elm(0,0,2) end zero; \\ unit e: function(x,y:Elm): Boolean; \\ begin \\ result := (x,k=y,k) and (x,li*y,mia=x.mia*y,li ) \\ end eq; \\ end StrukturaM \\ \hline \end{tabular} The following lemma expresses the correctness of the implementation with respect to the axioms of Presburger arithmetic \(\mathcal{AP}\) (c.f. subsection 6.2) treated as a specification of a class (i.e. a module of program). **Lemma 29**: _The structure \(\mathfrak{E}=\langle E,add,zero,one,eq\rangle\) composed of the set \(E=\{o\ object:o\ inElm\}\) of objects of class Elm with the add operation is a model of the \(\mathcal{AP}\) theory,_ \[\mathfrak{E}\models\mathcal{AP}\] ### Infinite Collatz algorithm computation How to execute the Collatz algorithm in StructuraM? It's easy. ``` prefStrukturnMblock varn:Elm; unitodd:function(x:Elm)Boolean;...result:=(x:kmod2)=1...endodd; unitdiv2:function(x:Elm):Elm;... unit3xpl:function(n:Elm):Elm;...result:=add(n,add(n,add(n,one)));...end3xpl; beginn:=newElm(8,1,2); whilenoteq(n,one)do ifodd(n)then n:=3xpl(n)elsen:=div2(n) fi od odlock; endblock; ``` Below we present the computation of Collatz algorithm for \(n=\langle 8,\frac{1}{2}\rangle\). \[\langle 8,\frac{1}{2}\rangle,\,\langle 4,\frac{1}{4}\rangle,\langle 2,\frac{1}{8}\rangle, \,\langle 1,\frac{1}{16}\rangle,\,\langle 4,\frac{3}{16}\rangle,\,\langle 2, \frac{3}{32}\rangle,\,\langle 1,\frac{3}{64}\rangle,\,\langle 4,\frac{9}{64}\rangle, \langle 2,\frac{9}{128}\rangle,\cdots\] Note, the computation of algorithm \(Gr\) for the same argument, looks simpler \[\langle 8,\frac{1}{2}\rangle,\,\langle 4,\frac{1}{4}\rangle,\langle 2,\frac{1}{8} \rangle,\,\langle 1,\frac{1}{16}\rangle,\,\langle 1,\frac{3}{64}\rangle,\,\langle 1, \frac{9}{256}\rangle,\cdots\] None of the elements of the above sequence is a standard natural number. Each of them is unreachable. It is worth looking at an example of another calculation. Will something change when we assign \(\mathsf{n}\) a different object? e.g. \(\mathsf{n}\): = newElm (19,2,10)? 
\[\begin{array}{l}\langle 19,\frac{40}{20}\rangle,\,\langle 58,\frac{30}{20} \rangle,\,\langle 29,\frac{30}{80}\rangle,\,\langle 88,\frac{90}{40}\rangle,\, \langle 44,\frac{90}{22}\rangle,\,\langle 11,\frac{90}{20}\rangle,\,\langle 34, \frac{270}{80}\rangle,\,\langle 17,\frac{270}{80}\rangle,\\ \langle 52,\frac{80}{20}\rangle,\,\langle 26,\frac{305}{206}\rangle,\,\langle 13,\frac{405}{128} \rangle,\,\langle 40,\frac{1125}{128}\rangle,\,\langle 20,\frac{1215}{256}\rangle,\, \langle 10,\frac{2155}{512}\rangle,\langle 16,\frac{3645}{512}\rangle,\langle 8, \frac{3645}{1024}\rangle,\\ \langle 4,\frac{3645}{2048}\rangle,\,\langle 2,\frac{3645}{1096}\rangle,\langle 1,\frac{3645}{81 92}\rangle,\langle 4,\frac{3645}{8192}\rangle,\langle 2,\frac{3645}{2 48192}\rangle,\langle 1,\frac{3645}{48192}\rangle,\langle 4,\frac{3645}{48192}\rangle, \cdots\end{array}\] And one more computation \[\begin{array}{l}\langle 19,0\rangle,\,\langle 58,0\rangle,\langle 29,0 \rangle,\,\langle 88,0\rangle,\,\langle 44,0\rangle,\,\langle 22,0\rangle,\,\langle 11,0 \rangle,\,\langle 34,0\rangle,\,\langle 17,0\rangle,\langle 52,0\rangle,\langle 26,0 \rangle,\\ \langle 13,0\rangle,\,\langle 40,0\rangle,\langle 20,0\rangle,\langle 10,0\rangle,\langle 5,0 \rangle,\langle 16,0\rangle,\langle 8,0\rangle,\,\langle 4,0\rangle,\langle 2,0 \rangle,\langle 1,0\rangle.\end{array}\] **Corollary 30**: _The structure \(\mathfrak{M}\), which we have described in two different ways, is the model of the \(\mathcal{AP}\) theory with the non-obvious presence of unreachable elements in it._ **Corollary 31**: _The halting property of the Collatz algorithm cannot be proved from the axioms of the \(\mathcal{T}\) theory, nor from the axioms of \(\mathcal{AP}\) theory._ ### Presburger's arithmetic Presburger's arithmetic is another name of elementary theory of natural numbers with addition. We shall consider the following theory, cf. [11],[12] p. 239 and following ones. **Definition 32**: _Theory \(\mathcal{T}=\langle\mathcal{L},\mathcal{C},Ax\rangle\) is the system of three elements:_ * _is a language of first-order. The alphabet of this language consist of: the set_ \(V\) _of variables, symbols of operations:_ \(0,S,+\)_, symbol of equality relation_ \(=\)_, symbols of logical functors and quantifiers, auxiliary symbols as brackets... The set of well formed expressions is the union of_ \(\mathsf{t}\) _set_ \(T\) _of terms and the set of formulas_ \(F\)_. The set_ \(T\) _is the least set of expressions that contains the set_ \(V\) _and constants 0 and 1 and closed with respect to the rules: if two expressions_ \(\tau_{1}\) _and_ \(\tau_{2}\) _are terms, then the expression_ \((\tau_{1}+\tau_{2})\) _is a term too. The set_ \(F\) _of formulas is the least set of expressions that contains the equalities (i.e. 
the expressions of the form_ \((\tau_{1}=\tau_{2})\)_) and closed with respect to the following formation rules: if expressions_ \(\alpha\) _and_ \(\beta\) _are formulas, then the aexpression of the form_ \[(\alpha\vee\beta),\ (\alpha\wedge\beta),\ (\alpha\implies\beta),\ \neg\alpha\] _are also formulas, moreover, the expressions of the form_ \[\forall_{x}\,\alpha,\ \exists_{x}\,\alpha\] _where \(x\) is a variable and \(\alpha\) is a formula, are formulas too._ * _is the operation of consquence determined by axioms of first-order logic and the inference rules of the logic,_ * _is the set of formulas listed below._ \[\forall_{x}\ x+1\neq 0\] (a) \[\forall_{x}\,\forall_{y}\ x+1=y+1\implies x=y\] (b) \[\forall_{x}\ x+0=x\] (c) \[\forall_{x,y}\ (y+1)+x=(y+x)+1\] (d) \[\Phi(0)\wedge\forall_{x}\left[\Phi(x)\implies\Phi(x+1)\right] \implies\forall_{x}\Phi(x)\] (I) _The expression \(\Phi(x)\) may be replaced by any formula. The result is an axiom of theory This is the induction scheme. We augment the set of axioms adding four axioms that define a couple of useful notions._ \[even(x)\stackrel{{ df}}{{\equiv}}\exists_{y}\,x=y+y\] (e) \[odd(x)\stackrel{{ df}}{{\equiv}}\exists_{y}\,x=y+y+1\] (o) \[x\,div\,2=y\equiv(x=y+y\,\vee\,x=y+y+1)\] (D2) \[3x\stackrel{{ df}}{{=}}x+x+x \tag{3x}\] The theory \(\mathcal{T}^{\prime}\) obtained in this way is a conservative extension of theory \(\mathcal{T}\). Below we present another theory \(\mathcal{AP}\) c.f. [30], we shall use two facts: 1) theory \(\mathcal{AP}\) is complete and hence is decidable, 2) both theories are elementarily equivalent. **Definition 33**: _Theory \(\mathcal{AP}=\langle\mathcal{L},\mathcal{C},AxP\rangle\) is a system of three elements :_ * _is a language of first-order. The alphabet of this language contains the set_ \(V\) _of variables, symbols of functors :_ \(0,+\)_, symbol of equality predicate_ \(=\)_._ _The set of well formed-expressions is the union of set of terms_ \(T\) _and set of formulas_ \(F\)_. The set of terms_ \(T\) _is the least set of expressions that contains the set of variables_ \(V\) _and the expression_ \(0\) _and closed with respect to the following two rules: 1) if two expressions_ \(\tau_{1}\) _and_ \(\tau_{2}\) _are terms, then the expression_ \((\tau_{1}+\tau_{2})\) _is also a term, 2) if the expression_ \(\tau\) _is a term, then the expression_ \(S(\tau)\) _is also a term._ * _is the consequence operation determined by the axioms of predicate calculus and inference rules of first-order logic_ * _The set of axioms of the_ \(\mathcal{AP}\) _theory is listed below._ \[\forall_{x}\ x+1\neq 0\] (A) \[\forall_{x}\ x\neq 0\implies\exists_{y}x=y+1\] (B) \[\forall_{x,y}\ x+y=y+x\] (C) \[\forall_{x,y,z}\ x+(y+z)=(x+y)+z\] (D) \[\forall_{x,y,z}\ x+z=y+z\implies x=y\] (E) \[\forall_{x}\ x+0=x\] (F) \[\forall_{x,z}\ \exists_{y}\ (x=y+z\lor z=y+x)\] (G) \[\forall_{x}\exists_{y}\ (x=y+y\lor x=y+y+1)\] (H2) \[\forall_{x}\exists_{y}\ (x=y+y+y\lor x=y+y+y+1\lor x=y+y+y+1+1)\] (H3) \[\forall_{x}\exists_{y}\ \left(\begin{array}{c}x=\underbrace{y+y+\cdots+y} _{k}\vee\\ x=\underbrace{y+y+\cdots+y}_{k}+1\vee\\ x=\underbrace{y+y+\cdots+y}_{k}+\underbrace{1+1}_{2}\vee\\ \cdots\\ x=\underbrace{y+y+\cdots+y}_{k}+\underbrace{1+1+\cdots+1}_{k-2}\vee\\ x=\underbrace{y+y+\cdots+y}_{k}+\underbrace{1+1+\cdots+1}_{k-1}\end{array}\right)\] (Hk) \[\cdots\] _The axioms \(H2\) -\(Hk\)... may be given a shorter form. Let us introduce numerals, ie. 
the constants representing terms of the form_ \[\underline{\mathbf{2}}\stackrel{{ df}}{{=}}1+1,\qquad\underline{\mathbf{3}}\stackrel{{ df}}{{=}}1+1+1,\qquad\cdots,\qquad\underline{\mathbf{k}}\stackrel{{ df}}{{=}}\underbrace{1+1+\cdots+1}_{k},\qquad\cdots\] _Now, the axioms take the form_ \(\forall_{x}\ x\ mod\,\underline{\mathbf{2}}=\underline{\mathbf{0}}\lor x\ mod\,\underline{\mathbf{2}}=\underline{\mathbf{1}}\) (H2') \(\forall_{x}\ x\ mod\,\underline{\mathbf{3}}=\underline{\mathbf{0}}\lor x\ mod\,\underline{\mathbf{3}}=\underline{\mathbf{1}}\lor x\ mod\,\underline{\mathbf{3}}=\underline{\mathbf{2}}\) (H3') \(\ldots\) \(\forall_{x}\bigvee_{j=0}^{k-1}\ x\ mod\,\underline{\mathbf{k}}=\underline{\mathbf{j}}\) (Hk') Let us recall a couple of useful theorems. **F1**. Theory \(\mathcal{T}\) is elementarily equivalent to the theory \(\mathcal{AP}\) [10][21]. **F2**. Theory \(\mathcal{AP}\) is decidable [10]. **F3**. The computational complexity of theory \(\mathcal{AP}\) is double exponential, \(O(2^{2^{n}})\); this result is due to Fischer and Rabin, see [12]. **F4**. Theories \(\mathcal{T}\) and \(\mathcal{AP}\) have non-standard models, see section 6.1, p. 22. Now, we shall prove a couple of useful theorems of theory \(\mathcal{T}\). First, we shall show that the sentence \(\forall_{n}\exists_{x,y,z}\,n\cdot 3^{x}+y=2^{z}\) is a theorem of the theory \(\mathcal{T}\) of addition. The operations of multiplication and exponentiation are inaccessible in the theory \(\mathcal{T}\). However, we do not need them. We enrich the theory \(\mathcal{T}\) by adding two functions \(P2(\cdot)\) and \(P3(\cdot,\cdot)\), defined in this way. **Definition 34**: _Two functions are defined, \(P2\) (of one argument) and \(P3\) (of two arguments):_ \(P2(0)\stackrel{{ df}}{{=}}1\), \(P2(x+1)\stackrel{{ df}}{{=}}P2(x)+P2(x)\), \(P3(y,0)\stackrel{{ df}}{{=}}y\), \(P3(y,x+1)\stackrel{{ df}}{{=}}P3(y,x)+P3(y,x)+P3(y,x)\). **Lemma 35**: _The definitions given above are correct, i.e. the following sentences are theorems of the theory enriched by the two definitions:_ \(\mathcal{T}\vdash\forall_{x}\exists_{y}\,P2(x)=y\) _and_ \(\mathcal{T}\vdash\forall_{x,y,z}\,P2(x)=y\wedge P2(x)=z\implies y=z\). _Similarly, the sentences \(\forall_{y,x}\exists_{z}\,P3(y,x)=z\) and \(\forall_{y,x,z,u}\,P3(y,x)=z\wedge P3(y,x)=u\implies z=u\) are theorems of theory \(\mathcal{T}\)._ An easy proof goes by induction with respect to the value of the variable \(x\). In the proof of Lemma 36 below, we shall use the definition of the order relation \(a<b\stackrel{{ df}}{{=}}\exists_{c\neq 0}\ a+c=b\). Making use of the definitions of the functions \(P2\) and \(P3\), we shall write the formula \(P3(n,x)+y=P2(z)\), as it expresses the same content as the expression \(n\cdot 3^{x}+y=2^{z}\). **Lemma 36**: _The following sentence is a theorem of the theory \(\mathcal{T}\) enriched by the definitions of the \(P2\) and \(P3\) functions:_ \(\forall_{n}\exists_{x,y,z}\,P3(n,x)+y=P2(z)\) **Proof.** We begin by proving by induction that \({\cal T}\vdash\forall_{n}\,n<P2(n)\). It is easy to see that \({\cal T}\vdash 0<P2(0)\). We shall prove that \({\cal T}\vdash\forall_{n}\,(n<P2(n)\implies n+1<P2(n+1))\). The inequality \(n+1<P2(n+1)\) follows from the following two inequalities: \({\cal T}\vdash n<P2(n)\) and \({\cal T}\vdash 1<P2(n)\).
Hence the formula \(n+1<P2(n)+P2(n))\) is a theorem of theory \({\cal T}\). By definition \(P2(n)+P2(n)=P2(n+1)\). In the similar manner, we can prove the formula \({\cal T}\vdash\forall_{n}\,\forall_{x}\,P3(n,x)<P2(n+x+x)\) As a consequence we have \({\cal T}\vdash\forall_{n}\exists_{x,y,z}\ P3(n,x)+y=P2(z)\). **Lemma 37**: _Let \(\mathfrak{M}\) be any model of Presburger arithmetic. If there exists a triple \(\langle x,y,z\rangle\) of reachable elements such that it satisfies the equation \(P3(n,x)+y=P2(z)\) i.e. \(n\cdot 3^{x}+y=2^{z}\) then the element \(n\) is reachable._ **Proof.** If the following formulas are valid in the structure \(\mathfrak{M}\) \(\{q:=0;\)**while**\(q\neq x\)**do**\(q:=q+1\)**od\(\}(x=q)\), \(\{q:=0;\)**while**\(q\neq y\)**do**\(q:=q+1\)**od\(\}(y=q)\), \(\{q:=0;\)**while**\(q\neq z\)**do**\(q:=q+1\)**od\(\}(z=q)\) and the following equation is valid too \(P3(n,x)+y=P2(z)\) then it is easy to verify that the formula \(\{t:=0;\)**while**\(n\neq t\)**do**\(t:=t+1\)**od\(\}(t=n)\) is valid too. \begin{tabular}{l l} \hline Nr & Reason \\ 1 & a1=P2(z) is reachable \\ 2 & y+a2 =a1, & a2 is reachable and a2=2\({}^{2}\)-y \\ 3 & a3=P3(1,x) & is reachable, & a3=3\({}^{x}\) \\ 4 & \(\left\{\begin{array}{l}q:=1;a5:=a3;\\ \textbf{while}\ a5\neq a2\textbf{do}\\ q:=q+1;\\ a5:=a5+a3\end{array}\right\}(q*a3=a2)\) hence q=n \\ **od** \\ \hline \hline \end{tabular} ### An introduction to the calculus of programs \(\mathcal{AL}\) For the convenience of the reader we cite the axioms and inference rules of calculus of programs i.e. algorithmic logic \(\mathcal{AL}\). **Note**. _Every axiom of algorithmic logic is a tautology. Every inference rule of \(\mathcal{AL}\) is sound._[20] **Axioms** _axioms_ (\((\alpha\Rightarrow\beta)\Rightarrow((\beta\Rightarrow\delta)\Rightarrow( \alpha\Rightarrow\delta))\)) \(Ax_{2}\)\((\alpha\Rightarrow(\alpha\vee\beta))\) \(Ax_{3}\)\((\beta\Rightarrow(\alpha\vee\beta))\) \(Ax_{4}\)\(((\alpha\Rightarrow\delta)\ \Rightarrow((\beta\Rightarrow\delta)\ \Rightarrow\ ((\alpha\vee\beta) \Rightarrow\delta)))\) \(Ax_{5}\)\(((\alpha\wedge\beta)\Rightarrow\alpha)\) \(Ax_{6}\)\(((\alpha\wedge\beta)\Rightarrow\beta)\) \(Ax_{7}\)\(((\delta\Rightarrow\alpha)\Rightarrow((\delta\Rightarrow\beta)\Rightarrow( \delta\Rightarrow(\alpha\wedge\beta))))\) \(Ax_{8}\)\(((\alpha\Rightarrow(\beta\Rightarrow\delta))\Leftrightarrow((\alpha\wedge \beta)\Rightarrow\delta))\) \(Ax_{9}\)\(((\alpha\wedge\neg\alpha)\Rightarrow\beta)\) \(Ax_{10}\)\(((\alpha\Rightarrow(\alpha\wedge\neg\alpha))\Rightarrow\neg\alpha)\) \(Ax_{11}\)\((\alpha\vee\neg\alpha)\) _axioms_ _of predicate calculus_ \(Ax_{12}\)\(((\forall x)\alpha(x)\Rightarrow\alpha(x/\tau)))\) where term \(\tau\) is of the same type as the variable x \(Ax_{13}\)\((\forall x)\alpha(x)\Leftrightarrow\neg(\exists x)\neg\alpha(x)\) _axioms of calculus of programs_ \[Ax_{14}\ \ K(((\exists x)\alpha(x))\Leftrightarrow(\exists y)(K\alpha(x/y)) \qquad\text{ for }y\notin V(K)\] \[Ax_{15}\ \ K(\alpha\vee\beta)\Leftrightarrow((K\alpha)\vee(K\beta))\] \[Ax_{16}\ \ K(\alpha\wedge\beta)\Leftrightarrow((K\alpha)\wedge(K\beta))\] \[Ax_{17}\ \ K(\neg\alpha)\Rightarrow\neg(K\alpha)\] \[Ax_{18}\ \ ((x:=\tau)\gamma\Leftrightarrow(\gamma(x/\tau)\wedge(x:= \tau)true))\ \wedge((q:=\gamma)\gamma\Leftrightarrow\gamma(q/\gamma\prime))\] \[Ax_{19}\ \ \text{\bf begin}\ K;M\ \text{\bf end}\ \alpha \Leftrightarrow K(M\alpha)\] \[Ax_{20}\ \ \text{\bf if}\ \gamma\ \text{\bf then}\ K\ \text{\bf else}\ M\ \text{\bf fi}\ \alpha \Leftrightarrow((\neg\gamma\wedge 
M\alpha)\vee(\gamma\wedge K\alpha))\] \[Ax_{21}\ \ \text{\bf while}\ \ \gamma\ \text{\bf do}\ K\ \text{\bf end}\ \alpha\Leftrightarrow((\neg\gamma\wedge\alpha)\vee(\gamma\wedge K(\text{\bf while}\ \gamma\ \text{\bf do}\ K\ \text{\bf od}(\neg\gamma\wedge\alpha))))\] \[Ax_{22}\ \bigcap K\alpha\Leftrightarrow(\alpha\wedge(K\bigcap K\alpha))\] \[Ax_{23}\ \bigcup K\alpha\equiv(\alpha\vee(K\bigcup K\alpha))\] ### Inference rules #### 3.1.1 propositional calculus \[R_{1}\qquad\frac{\alpha,(\alpha\Rightarrow\beta)}{\beta}\qquad\text{ (also known as modus ponens)}\] \[\text{predicate calculus}\] \[R_{6}\qquad\frac{(\alpha(x)\ \Rightarrow\ \beta)}{((\exists x) \alpha(x)\ \Rightarrow\ \beta)}\] \[R_{7}\qquad\frac{(\beta\ \Rightarrow\ \alpha(x))}{(\beta \Rightarrow(\forall x)\alpha(x))}\] \[\text{calculus of programs }\text{AL}\] \[R_{2}\qquad\frac{(\alpha\Rightarrow\beta)}{(K\alpha\Rightarrow K\beta)}\] \[R_{3}\qquad\frac{\{s(\text{\bf if}\ \gamma\ \text{\bf then}\ K\ \text{\bf fi})^{i}(\neg\gamma \wedge\alpha)\Rightarrow\beta\}_{i\in N}}{(s(\text{\bf while}\ \gamma\ \text{\bf do}\ K\ \text{\bf od}\ \alpha)\Rightarrow\beta)}\] \[R_{4}\qquad\frac{\{(K^{i}\alpha\Rightarrow\beta)\}_{i\in N}}{( \bigcup K\alpha\Rightarrow\beta)}\] \[R_{5}\qquad\frac{\{(\alpha\Rightarrow K^{i}\beta)\}_{i\in N}}{( \alpha\Rightarrow\bigcap K\beta)}\] In rules \(R_{6}\) and \(R_{7}\), it is assumed that \(x\) is a variable which is not free in \(\beta\), i.e. \(x\notin FV(\beta)\). The rules are known as the rule for introducing an existential quantifier into the antecedent of an implication and the rule for introducing a universal quantifier into the successor of an implication. The rules \(R_{4}\) and \(R_{5}\) are algorithmic counterparts of rules \(R_{6}\) and \(R_{7}\). They are of a different character, however, since their sets of premises are infinite. The rule \(R_{3}\) for introducing a **while** into the antecedent of an implication of a similar nature. These three rules are called \(\omega\)-rules. The rule \(R_{1}\) is known as _modus ponens_, or the _cut_-rule. In all the above schemes of axioms and inference rules, \(\alpha\), \(\beta\), \(\delta\) are arbitrary formulas, \(\gamma\) and \(\gamma^{\prime}\) are arbitrary open formulas, \(\tau\) is an arbitrary term, \(s\) is a finite sequence of assignment instructions, and \(K\) and \(M\) are arbitrary programs. **Theorem 38** (_theorem on completeness of the calculus \(\mathcal{AL}\)_): _Let \(\mathcal{T}=\langle\mathcal{L},\mathcal{C},\mathcal{A}x\rangle\) be a consistent algorithmic theory, let \(\alpha\in\mathcal{L}\) be a formula. The following conditions are equivalent_ 1. _Formula_ \(\alpha\) _is a theorem of the theory T,_ \(\alpha\in\mathcal{C}(\mathcal{A}x)\)_,_ 2. _Formula_ \(\alpha\) _is valid in every model of the theory T,_ \(\mathcal{A}x\models\alpha\)_._ The proof may be found in [11]. ### An introduction to the algorithmic theory of numbers \(\mathcal{ATN}\) The language of algorithmic theory of natural numbers \(\mathcal{ATN}\) is very simple. Its alphabet contains one constant \(0\)_zero_, one one one-argument functor \(s\) and predicate = of equality. We shall write \(x+1\) instead of \(s(x)\). 
Axioms of \(\mathcal{ATN}\) were presented in the book [11] \[\begin{array}{llr}A_{1})&\forall x\ \{q:=0;\ \textbf{while}\ q\neq x\ \textbf{do}\ q:=s(q)\ \textbf{od}\}(q=x)&(R)\\ A_{2})&\forall x\ \ s(x)\neq 0&(N)\\ A_{3})&\forall x\,\forall y\ s(x)=s(y)\implies x=y&(J)\end{array}\] We can add another two-argument functor + and its definition \[A_{4})\quad\forall x\,\forall y\left\{\begin{array}{l}q:=0;\ w:=x;\\ \textbf{while}\ q\neq y\ \textbf{do}\\ \quad q:=s(q);\ w:=s(w)\\ \textbf{od}\end{array}\right\}(x+y=w)\qquad(D)\] The termination property of the program in \(A_{4}\) is a theorem of the \(\mathcal{ATN}\) theory, as are the formulas \(x+0=x\) and \(x+s(y)=s(x+y)\). #### A sample (11-15) of theorems of \(\mathcal{ATN}\) \[\mathcal{ATN}\vdash\exists_{x}\,\alpha(x)\Leftrightarrow\{x:=0\}\bigcup\{x:=x+1\}\alpha(x) \tag{11}\] \[\mathcal{ATN}\vdash\forall_{x}\,\alpha(x)\Leftrightarrow\{x:=0\}\bigcap\{x:=x+1\}\alpha(x) \tag{12}\] Law of Archimedes \[\mathcal{ATN}\vdash 0<x<y\implies\{a:=x;\ \textbf{while}\ a<y\ \textbf{do}\ a:=a+x\ \textbf{od}\}(a\geq y) \tag{13}\] Scheme of induction \[\mathcal{ATN}\vdash\Big{(}\alpha(x/0)\wedge\forall_{x}\big{(}\alpha(x)\Rightarrow\alpha(x/s(x))\big{)}\Big{)}\implies\forall_{x}\alpha(x) \tag{14}\] Correctness of Euclid's algorithm \[\mathcal{ATN}\vdash\left(\begin{array}{c}n_{0}>0\ \wedge\\ m_{0}>0\end{array}\right)\implies\left\{\begin{array}{l}n:=n_{0};\ m:=m_{0};\\ \textbf{while}\ n\neq m\ \textbf{do}\\ \quad\textbf{if}\ n>m\ \textbf{then}\ n:=n-m\\ \quad\textbf{else}\ m:=m-n\\ \quad\textbf{fi}\\ \textbf{od}\end{array}\right\}(n=\gcd(n_{0},m_{0})) \tag{15}\] The theory \(\mathcal{ATN}\) enjoys an important property of categoricity. **Theorem 39** (_meta_-theorem on categoricity of the \(\mathcal{ATN}\) theory): _Every model \(\mathfrak{A}\) of the algorithmic theory of natural numbers is isomorphic to the structure \(\mathfrak{N}\), cf. subsection 6.1._ ### Proof of lemma 3 Let \(P\) and \(P^{\prime}\) be two programs. Let \(\alpha\) be any formula. The semantic property _programs \(P\) and \(P^{\prime}\) are equivalent with respect to the postcondition \(\alpha\)_ is expressed by a formula of the form \(\,(\{P\}\alpha\Leftrightarrow\{P^{\prime}\}\,\alpha)\). We shall use the following tautology of the calculus of programs \(\mathcal{AL}\). \[\vdash\left(\left\{\begin{array}{l}\textbf{while}\ \gamma\ \textbf{do}\\ \quad\textbf{if}\ \delta\ \textbf{then}\ K\ \textbf{else}\ M\ \textbf{fi}\\ \textbf{od}\end{array}\right\}\alpha\Leftrightarrow\left\{\begin{array}{l}\textbf{while}\ \gamma\ \textbf{do}\\ \quad\textbf{while}\ \gamma\wedge\delta\ \textbf{do}\ K\ \textbf{od};\\ \quad\textbf{while}\ \gamma\wedge\neg\delta\ \textbf{do}\ M\ \textbf{od}\\ \textbf{od}\end{array}\right\}\alpha\right) \tag{16}\] We apply the axioms Ax20 and Ax21 \[\vdash\left(\left\{\begin{array}{l}\textbf{while}\ \gamma\ \textbf{do}\\ \quad\textbf{if}\ \delta\ \textbf{then}\ K\ \textbf{else}\ M\ \textbf{fi}\\ \textbf{od}\end{array}\right\}\alpha\Leftrightarrow\left\{\begin{array}{l}\textbf{if}\ \gamma\ \textbf{then}\\ \quad\textbf{while}\ \gamma\wedge\delta\ \textbf{do}\ K\ \textbf{od};\\ \quad\textbf{while}\ \gamma\wedge\neg\delta\ \textbf{do}\ M\ \textbf{od};\\ \quad\textbf{while}\ \gamma\ \textbf{do}\\ \qquad\textbf{while}\ \gamma\wedge\delta\ \textbf{do}\ K\ \textbf{od};\\ \qquad\textbf{while}\ \gamma\wedge\neg\delta\ \textbf{do}\ M\ \textbf{od}\\ \quad\textbf{od}\\ \textbf{fi}\end{array}\right\}\alpha\right) \tag{17}\] We can omit the instruction **if** (why?).
We swap the internal **while** instructions inside the outer **while** instruction and obtain \[\vdash\left(\left\{\begin{array}{l}\textbf{while}\ \gamma\ \textbf{do}\\ \quad\textbf{if}\ \delta\ \textbf{then}\ K\ \textbf{else}\ M\ \textbf{fi}\\ \textbf{od}\end{array}\right\}\alpha\Leftrightarrow\left\{\begin{array}{l}\textbf{while}\ \gamma\wedge\delta\ \textbf{do}\ K\ \textbf{od};\\ \textbf{while}\ \gamma\ \textbf{do}\\ \quad\textbf{while}\ \gamma\wedge\neg\delta\ \textbf{do}\ M\ \textbf{od};\\ \quad\textbf{while}\ \gamma\wedge\delta\ \textbf{do}\ K\ \textbf{od}\\ \textbf{od}\end{array}\right\}\alpha\right)\] This establishes the equivalence of the two programs with respect to any postcondition \(\alpha\).
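To make the equivalence used in this proof concrete, here is a minimal Python sketch (not part of the original development) comparing the two program schemes related by tautology (16), instantiated on the Collatz iteration with \(\gamma\): \(n\neq 1\), \(\delta\): \(n\) is even, \(K\): the halving step, and \(M\): the \(3n+1\) step. The function names are illustrative.

```python
def collatz_if_form(n):
    """while n != 1 do if even(n) then K else M fi od."""
    trace = []
    while n != 1:
        if n % 2 == 0:
            n = n // 2        # K
        else:
            n = 3 * n + 1     # M
        trace.append(n)
    return trace


def collatz_nested_form(n):
    """The transformed scheme of (16): repeat K while its guard holds,
    then M while its guard holds, inside an outer loop."""
    trace = []
    while n != 1:
        while n != 1 and n % 2 == 0:      # while gamma and delta do K od
            n = n // 2
            trace.append(n)
        while n != 1 and n % 2 != 0:      # while gamma and not delta do M od
            n = 3 * n + 1
            trace.append(n)
    return trace


if __name__ == "__main__":
    # Both schemes visit exactly the same sequence of values.
    for start in range(2, 1000):
        assert collatz_if_form(start) == collatz_nested_form(start)
    print(collatz_if_form(19))   # 58, 29, 88, 44, 22, 11, 34, 17, ...
```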
2308.11468
Multitemporal analysis in Google Earth Engine for detecting urban changes using optical data and machine learning algorithms
The aim of this work is to perform a multitemporal analysis using the Google Earth Engine (GEE) platform for the detection of changes in urban areas using optical data and specific machine learning (ML) algorithms. As a case study, Cairo City, in Egypt, has been identified as one of the five most populous megacities in the world over the last decade. Classification and change detection analysis of the region of interest (ROI) have been carried out from July 2013 to July 2021. Results demonstrate the validity of the proposed method in identifying changed and unchanged urban areas over the selected period. Furthermore, this work aims to highlight the growing significance of GEE as an efficient cloud-based solution for managing large quantities of satellite data.
Mariapia Rita Iandolo, Francesca Razzano, Chiara Zarro, G. S. Yogesh, Silvia Liberata Ullo
2023-08-22T14:29:19Z
http://arxiv.org/abs/2308.11468v1
Multitemporal analysis in Google Earth Engine for detecting urban changes using optical data and machine learning algorithms ###### Abstract The aim of this work is to perform a multitemporal analysis using the Google Earth Engine (GEE) platform for the detection of changes in urban areas using optical data and specific machine learning (ML) algorithms. As a case study, Cairo City, in Egypt, has been identified as one of the five most populous megacities in the world over the last decade. Classification and change detection analysis of the region of interest (ROI) have been carried out from July 2013 to July 2021. Results demonstrate the validity of the proposed method in identifying changed and unchanged urban areas over the selected period. Furthermore, this work aims to highlight the growing significance of GEE as an efficient cloud-based solution for managing large quantities of satellite data. Optical and SAR data classification, Machine Learning algorithms, change detection, Google Earth Engine, Earth Observation, Landsat-8. ## I Introduction Nowadays, good urban planning cannot be based only on classical methods; it is important to take advantage of the advanced techniques which are catching on in the remote sensing (RS) field. Among them, ML and Deep Learning (DL) techniques are increasingly playing a crucial role in handling huge amounts of satellite data and enhancing results in terms of classification analysis, land use monitoring, and natural phenomena detection [1, 2, 3], just to give some examples. Measuring and understanding the nature and extent of changes affecting urban and non-urban territories are crucial in order to determine their future expansion and impact in terms of environmental and economic issues. For this reason, extensive studies and research have been carried out on detecting urban changes and on using their results as valuable information to support governments and municipalities in making better decisions. In [4], change detection is applied to Newly Constructed Areas (NCA) as the first step in the development monitoring of urban areas, through an ML approach. Various spectral indices for a rapid and accurate built land classification are proposed in [5]. Their performance is examined and compared in the classification and detection of land changes when Landsat-7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat-8 OLI/TIRS (Operational Land Imager/Thermal Infrared Sensor) images are used. In [6], the authors deal with the change detection of the urban area of Bauchi, one of the cities in the northeastern part of Nigeria that has witnessed a huge expansion due to rapid urbanization. In particular, they compare three change detection approaches (supervised, unsupervised, and post-classification comparison, the latter for achieving a "from-to" evaluation) and demonstrate that the supervised classification produces the best results in terms of overall precision in the reference years. In our paper, we aim to perform a multitemporal analysis of optical satellite data for the detection of urban changes when specific ML algorithms are used and the GEE platform is employed. GEE has gained great attention for its built-in solutions, above all for what concerns ML and DL algorithms. Therefore, our work also aims to demonstrate the growing significance of GEE as an efficient cloud-based tool for processing huge amounts of satellite data, producing valuable results and useful information in the field of RS and urban planning.
The pivotal phase of our work remains the application of change detection to remote sensing images belonging to different time periods (almost ten years are considered) for the identification and discrimination of urban variations. Change detection makes it possible to keep track of modifications that would otherwise not be detectable from official documentation or through sample surveys. Cairo City, shown in Figure 1, has been chosen as the case study, and supervised ML algorithms on GEE have been selected. ## II GEE and Data sources In order to perform the supervised classification process and the subsequent change detection, GEE was used with the supervised ML techniques made available on the platform. The satellite images were also obtained through GEE, and imagery of the Landsat-8 mission was considered, a mission born from the collaboration between NASA and the U.S. Geological Survey. ### _Google Earth Engine_ GEE is a powerful web platform for the cloud-based processing of large-scale remote sensing data. It brings together satellite imagery from all over the world and makes it available online to scientists, independent researchers, and nations who want to use it to detect changes, monitor the atmosphere, map trends, and quantify differences in the Earth's surface. The advantage lies in its remarkable calculation speed, since the processing is outsourced to Google servers. Application programming interfaces (APIs) are available in JavaScript and Python with code editor features designed to make developing complex geospatial workflows quick and easy. There are different ML techniques that can be used in GEE, such as Supervised and Unsupervised Classification. In particular, in our investigation, we made use of the package handling supervised classification and we chose the CART "ee.Classifier.smileCart()" tool. The CART algorithm was first published by Leo Breiman in 1984 [7]; it is a decision tree model based on if-else rules, able to predict outcome values based on other quantities. The _SmileCart_ ML algorithm was therefore applied to Landsat-8 data related to the urban agglomeration of Cairo City, chosen as ROI (a sketch of this workflow in the GEE Python API is given below). ### _Landsat-8_ Landsat-8 was launched on February 11, 2013, and it is collecting about 740 scenes per day on the Worldwide Reference System-2 (WRS-2). With a revisit cycle of 16 days, it carries on board the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) instruments, featuring 11 bands, with the addition of a first ultra-blue band (0.43-0.45 micrometers) and a ninth band useful for detecting cirrus clouds (1.36-1.38 micrometers), and with the thermal band split into two separate bands, TIRS1 (10.60-11.19 micrometers) and TIRS2 (11.50-12.51 micrometers), at 100-meter resolution. ## III Region of Interest The area chosen for the specific case study is Cairo City, the capital of Egypt. It has about 18 million inhabitants living inside the urban area (spread over 453 \(km^{2}\)) and about 20.4 million residents in the adjacent metropolitan area, creating the Cairo mega-metropolis, Greater Cairo, which is the most populous African city after the Nigerian city of Lagos. This area has been chosen since the purpose of the proposed method is to offer a way of analyzing, over time, whether or not there have been changes in a certain urban area: whether the urban agglomeration has extended at some points or has instead remained unchanged, whether some urban areas have become non-urban and vice versa, or whether they have simply remained non-urban.
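As a rough illustration of the workflow described above (and detailed in the next section), the following minimal sketch uses the GEE Python API; the dataset identifier, band selection, sample points, and classifier settings are illustrative assumptions rather than the exact choices made in this work.

```python
import ee

ee.Initialize()

# ROI around Cairo; the point coordinates and buffer size are illustrative.
roi = ee.Geometry.Point([31.206, 30.248]).buffer(30000)

# Least cloudy Landsat-8 scene over the ROI for July 2013.
image_2013 = ee.Image(
    ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
      .filterBounds(roi)
      .filterDate('2013-07-01', '2013-07-31')
      .sort('CLOUD_COVER')
      .first())

bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']

# Labeled samples (0 = non-urban, 1 = urban); the geometries are placeholders.
training_points = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([31.24, 30.05]), {'class': 1}),
    ee.Feature(ee.Geometry.Point([31.10, 30.30]), {'class': 0}),
    # ... more labeled points ...
])

training = image_2013.select(bands).sampleRegions(
    collection=training_points, properties=['class'], scale=30)

# Train a CART classifier and classify the image.
classifier = ee.Classifier.smileCart().train(
    features=training, classProperty='class', inputProperties=bands)

classified_2013 = image_2013.select(bands).classify(classifier)
# The same trained classifier is then applied to the images of the other years.
```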
## IV Methodology The following procedure has been applied for the classification and change detection analysis of the chosen area. ### _Classification_ For classification purposes, useful guidelines can be found in [8] and [9], and the main steps considered in GEE are reported in the Block Diagram of Figure 2. In our case, the following description summarizes what was done: * the _ROI_, Cairo City, with coordinates [31.206, 30.248], has been selected; * a Landsat-8 image has been chosen for the oldest date of the chosen time period, that is July 2013, the month being selected to avoid problems related to the possible presence of clouds; from this image, a dataset is created for training the model. A feature collection (a collection of GeoJSON objects) has been created, assigning each sample a label, in particular: \(0\) for samples of the non-urban class, \(1\) for samples of the urban class. This phase in the block diagram is called _Dataset creation_; Fig. 1: Cairo City Fig. 2: Block Diagram for Classification steps in GEE * after instantiating the selected classifier (the _SmileCart_ classifier) and setting its parameters, the classifier has been trained with the training data. This phase in the block diagram is called the _Training Model_; * the trained model has been used for the prediction of all other images (so from 2013 to 2021). This phase is called _Prediction_, after which the results are the _Classification Maps_. ### _Change Detection_ Once the classification data for each year have been acquired and assessed, we proceeded with the change detection step by taking into account the oldest classification (2013) and the most recent one (2021), so as to detect changes or invariances over a fairly long period of time, using intermediate years for additional checks and comparisons. Four possible results have been considered (see the sketch below for how the corresponding change map can be computed): 1. the case where there has actually been an expansion of the urban agglomeration; in the resulting image this corresponds to the red color; 2. the case in which there was no expansion of the urban agglomeration, which remained the same as it was on the older date considered; in the resulting image this corresponds to the purple color; 3. the case where an originally urban environment became non-urban; in the resulting image this corresponds to the blue color; 4. finally, the case where an originally non-urban environment is still non-urban and undergoes no change; in the resulting image this corresponds to the green color. ## V Results and discussion We used Landsat-8 images separated by almost 10 years, whose qualitative evaluation allows us to establish the variations that occurred, if any. From the change detection images it has been possible, as discussed below, to validate the proposed methodology. Moreover, further analysis has been applied to smaller areas when some additional investigation was needed. We started from the simplest case, the non-change of one of the non-urban areas around the megalopolis considered: the river Nile and the land around it. In Figure 3 (a), referring to the year 2013 and acquired through Landsat-8, the river Nile can be easily distinguished. If we consider the same image but acquired in the year 2021 (Figure 3 (b)), no significant change related to the watercourse is noticed through visual inspection. As a result of the performed change detection, the area is mainly purple, with some red spots in other places that will be further analyzed.
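The four-class change map introduced in the Methodology section can be obtained with simple band arithmetic on the two classification maps. Below is a minimal sketch, continuing the previous one and assuming the 0/1 class encoding described above; the encoding of the output codes is an illustrative choice.

```python
# classified_2013 and classified_2021 are single-band images with values
# 0 (non-urban) or 1 (urban), produced as in the previous sketch.
change = classified_2013.multiply(2).add(classified_2021)

# Resulting codes and the colors used in the text:
#   0: non-urban -> non-urban  (green)
#   1: non-urban -> urban      (red)
#   2: urban     -> non-urban  (blue)
#   3: urban     -> urban      (purple)
vis_params = {'min': 0, 'max': 3, 'palette': ['green', 'red', 'blue', 'purple']}
```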
In the area surrounding the Nile, it is possible to verify that the overall method and the change detection image were able to convey a clear picture of what happened. As a result, we found that there were actually no changes concerning this area: from non-urban it continued to remain non-urban. We then considered the case in which one of the many areas surrounding the megalopolis went from non-urban to urban, namely, a real extension of the urban agglomeration occurred. The city of Asim was considered, taking into account first the image acquired by the satellite in 2013 (Figure 4 (a)) and then the image of the same area in 2021 (Figure 4 (b)). What can be noticed through visual inspection is that the urban agglomeration of Asim has expanded. So the non-urban area around it should turn red, as mentioned above. Checking with the proposed method and the change detection output, Figure 4 (c) shows that the city of Asim has effectively expanded: the whole area surrounding its borders has been colored red. This means that the surrounding non-urban area has undergone a change, turning into an urban area. As the penultimate case, we have considered an urban area surrounding the Cairo megalopolis that did not undergo any change, that is, urban continued to be urban. We chose to analyze the city of Tanta, again with an acquisition first in 2013 and then in 2021 (Figure 5 (a) and (b)). From a visual inspection of the satellite images, it can be said that between 2013 and 2021 there were no major changes and that the area of Tanta remained unchanged. As expected, the proposed method and the final change detection produced a final image colored entirely purple, in accordance with what was said before. In conclusion, as seen from Figure 5 (c), the city of Tanta in the chosen time frame has not undergone any change. However, a few blue and red spots are present, and we decided to further investigate the blue ones, representing a new situation among those already analyzed. In the final case, consideration is given to an urban agglomeration that has become non-urban within the chosen time frame. We decided to further analyze the portion of territory that in the previous figure was showing some blue pixels. Fig. 4: (a) Satellite image for 2013: the city of Asim. (b) Satellite image for 2021 of the same city. (c) Change detection image: the city of Asim has expanded (red circular crown). Fig. 3: (a) Satellite image for 2013: Nile river. (b) Satellite image for 2021 of the same river. (c) Change detection image: the Nile River and its surrounding area have not changed. A zoomed view of this area, belonging to the city of Tanta, was taken into consideration; acquiring the satellite image in 2013 (Figure 6 (a)), we could see that the area indicated within the yellow oval in the figure is remotely detected as an urban environment. The image for 2021 (Figure 6 (b)) cannot be considered the same, since it is remotely detected by the satellite as a non-urban environment. So, after performing the change detection, the area of interest is expected to be colored blue. From Figure 6 (c) it can be noted that the area of interest has undergone a change, just as expected. The urban area became non-urban and, for this reason, it is colored blue. With this last analysis, the case study ends.
## VI Conclusions With this study, we have presented a straightforward procedure to verify the existence of possible changes in the urban area of Cairo and to show how these changes can be kept under control to monitor urbanization and other modifications (or their absence) over about ten years of observations. From the results obtained, we can say that the performed analysis was successful because, point by point, in each of the four cases considered we have obtained conclusions that confirm the validity of this procedure. It is worth highlighting that all processing, including the ML steps, has been carried out inside the GEE platform, which also offers the advantage of cloud-based tools and services. Moreover, such an analysis is a small piece of a much larger project. Future work will extend the analysis to other parts of the world and make further comparisons to assess the proposed methodology in detail.
2307.08964
Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information
Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer $\mathbf{g}$ to tackle these challenging problems with $f$ as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function $f\circ \mathbf{g}$. The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to optimizer $\mathbf{g}$ during both training and testing. The training is further challenged by sparse gradients of $\mathbf{g}$, especially for combinatorial solvers. To address these challenges, we propose using a smooth and learnable Landscape Surrogate $M$ as a replacement for $f\circ \mathbf{g}$. This surrogate, learnable by neural networks, can be computed faster than the solver $\mathbf{g}$, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization. We test our approach on both synthetic problems, including shortest path and multidimensional knapsack, and real-world problems such as portfolio optimization, achieving comparable or superior objective values compared to state-of-the-art baselines while reducing the number of calls to $\mathbf{g}$. Notably, our approach outperforms existing methods for computationally expensive high-dimensional problems.
Arman Zharmagambetov, Brandon Amos, Aaron Ferber, Taoan Huang, Bistra Dilkina, Yuandong Tian
2023-07-18T04:29:16Z
http://arxiv.org/abs/2307.08964v2
Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information ###### Abstract Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning. By learning an optimizer \(\mathbf{g}\) to tackle these challenging problems with \(f\) as the objective, the optimization process can be substantially accelerated by leveraging past experience. The optimizer can be trained with supervision from known optimal solutions or implicitly by optimizing the compound function \(f\circ\mathbf{g}\). The implicit approach may not require optimal solutions as labels and is capable of handling problem uncertainty; however, it is slow to train and deploy due to frequent calls to optimizer \(\mathbf{g}\) during both training and testing. The training is further challenged by sparse gradients of \(\mathbf{g}\), especially for combinatorial solvers. To address these challenges, we propose using a smooth and learnable _Landscape Surrogate_\(\mathcal{M}\) as a replacement for \(f\circ\mathbf{g}\). This surrogate, learnable by neural networks, can be computed faster than the solver \(\mathbf{g}\), provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization. We test our approach on both synthetic problems, including shortest path and multidimensional knapsack, and real-world problems such as portfolio optimization, achieving comparable or superior objective values compared to state-of-the-art baselines while reducing the number of calls to \(\mathbf{g}\). Notably, our approach outperforms existing methods for computationally expensive high-dimensional problems. ## 1 Introduction Mathematical optimization problems in various settings have been widely studied, and numerous methods exist to solve them [24; 31]. Although the literature on this topic is immense, real-world applications consider settings that are nontrivial or extremely costly to solve. The issue often stems from uncertainty in the objective or in the problem definition. For example, combinatorial problems involving nonlinear objectives are generally hard to address, even if there are efficient methods that can handle special cases (e.g., \(k\)-means). One possible approach could be learning so-called _linear surrogate costs_[16] that guide an efficient linear solver towards high quality solutions for the original hard nonlinear problem. This automatically finds a surrogate mixed integer linear program (MILP), for which relatively efficient solvers exist [19]. Another example is the _smart predict+optimize_ framework (a.k.a. decision-focused learning) [13; 40] where some problem parameters are unknown at test time and must be inferred from the observed input using a model (e.g., neural nets). Despite having completely different settings and purposes, what is common among learning surrogate costs, smart predict+optimize, and other integrations of learning and optimization, is the need to learn a certain target mapping to estimate the parameters of a latent optimization problem. This makes the optimization problem well-defined, easy to address, or both. In this work, we draw general connections between different problem families and combine them into a unified framework. 
The core idea (section 3) is to formulate the learning problem via constructing a compound function \(f\circ\mathbf{g}\) that includes a parametric solver \(\mathbf{g}\) and the original objective \(f\). To the best of our knowledge, this paper is the first to propose a generic optimization formulation (section 3) for these types of problems. Minimizing this new compound function \(f\circ\mathbf{g}\) via gradient descent is a nontrivial task as it requires differentiation through the argmin operator. Although various methods have been proposed to tackle this issue [3; 2], they have several limitations. First, they are not directly applicable to combinatorial optimization problems, which have 0 gradient almost everywhere, and thus require various computationally expensive approximations [33; 40; 15; 38]. Second, even if the decision variables are continuous, the solution space (i.e., argmin) may be discontinuous. Some papers [12; 17] discuss the fully continuous domain but typically involve computing the Jacobian matrix, which leads to scalability issues. Furthermore, in some cases, an explicit expression for the objective may not be given, and we may only have black-box access to the objective function, preventing straightforward end-to-end backpropagation. These limitations motivate _Landscape Surrogate_ losses (LANCER), a unified model for solving coupled learning and optimization problems. LANCER accurately approximates the behavior of the compound function \(f\circ\mathbf{g}\), allowing us to use it to learn our target parametric mapping (see fig. 1). Intuitively, LANCER must be differentiable and smooth (e.g., neural nets) to enable exact and efficient gradient computation. Furthermore, we propose an efficient alternating optimization algorithm that jointly trains LANCER and the parameters of the target mapping. Our motivation is that training LANCER in this manner better distills task-specific knowledge, resulting in improved overall performance. Experimental evaluations (section 5) confirm this hypothesis and demonstrate the scalability of our proposed method. The implementation of LANCER can be found at [https://github.com/facebookresearch/LANCER](https://github.com/facebookresearch/LANCER). ## 2 Related work **Smart Predict+Optimize framework** considers settings where we want to train a target mapping which predicts latent components to an optimization problem to improve downstream performance. A straightforward and naive approach in P+O is to build the target machine learning model in a two-stage fashion: train the model on ground truth problem parameters using some standard measure of accuracy (e.g., mean squared error) and then use its prediction to solve the optimization problem. However, this approach is prone to produce highly suboptimal models [6; 40] since the learning problem does not capture the task-specific objectives. Instead, "smart" Predict+Optimize (SPO) [13] proposed to minimize the _decision_ regret, i.e., the error induced by the optimization solution based on the estimated problem parameters coming from the machine learning model. A variety of methods exist for learning such models in the continuous, and often convex, optimization setting. These approaches usually involve backpropagating through the solver [11; 12]. In some isolated and simple scenarios, an optimal solution exists [22], or the learning problem can be formulated via efficient surrogate losses [13]. 
Extending the SPO framework for combinatorial problems is challenging, and current methods rely on identifying heuristic gradients, often via continuous, primal, or dual relaxations [32; 40; 33; 30; 38; 15; 27]. Alternative approaches leverage specific structures of the optimization problem [9; 21; 39], such as being solvable via dynamic programming or graph partitioning. Figure 1: Overview of our proposed framework LANCER. We replace the non-convex and often non-differentiable function \(f\circ\mathbf{g}\) with landscape surrogate \(\mathcal{M}\) and use it to learn the target mapping \(\mathbf{c}_{\boldsymbol{\theta}}\). The current output of \(\mathbf{c}_{\boldsymbol{\theta}}\) is then used to evaluate \(f\) and to refine \(\mathcal{M}\). This procedure is repeated in alternating optimization fashion. Further work focuses on SPO for individual problems: Shah et al. [36] collect perturbed input instances and learn a separate locally convex loss for each instance. In contrast, we define the loss over the domain rather than per instance, allowing for generalization to unseen instances, and our bilevel optimization formulation enables more efficient training. Compared to prior works, our method generically applies to a wider variety of problem settings, losses, and problem formulations. See Table 1 for a direct comparison of the requirements and capabilities for different approaches. **Mixed integer nonlinear programming (MINLP)** LANCER can be considered a solver for constrained combinatorial problems with nonlinear and nonconvex objectives (MINLP), a challenging class of optimization problems [4]. Apart from SurCo [16], which we will discuss in detail in section 3, specialized solvers exist that deal with various MINLP formulations [7; 1; 19]. These specialized solvers generally require an analytical form for the objective and cannot handle black-box objectives. Additionally, these methods are not designed to handle general-purpose large-scale problems without making certain approximations or using specialized modeling techniques, such as linearizing the objective function or applying domain-specific heuristics. Lastly, some previous work directly predicts solutions to economic dispatch problems [8], uses reinforcement learning to build solutions to linear combinatorial problems [23], or uses reinforcement learning for ride hailing problems [42]. These approaches are designed for specific application domains or are tailored to linear optimization settings where the reward for a single decision variable is its objective coefficient, which is not trivially applicable in nonlinear settings. **Optimization as a layer** The optimization-as-a-layer family of methods considers the composition of functions where one (or more) of the functions is defined as the solution to a mathematical optimization problem [3; 2; 17; 18]. Since both P+O and SurCo can be formulated as a nesting of a target mapping with the argmin operator (i.e., an optimization solver), one can leverage approaches from this literature. The core idea here is based on using the implicit function theorem to find the necessary gradients and backpropagate through the solver. Similar concepts exist for combinatorial optimization [15; 38; 27; 30; 28], where gradient non-existence is tackled through improved primal or dual relaxations.
## 3 A Unified Training Procedure

In this work, we focus on solving the following optimization problems: \[\min_{\mathbf{x}}f(\mathbf{x};\mathbf{z})\qquad\mathrm{s.t.}\quad\mathbf{x}\in\Omega \tag{1}\] where \(f\) is the function to be optimized (linear or nonlinear), \(\mathbf{x}\in\Omega\) are the decision variables that must lie in the feasible region, typically specified by (non)linear (in)equalities and possibly integer constraints, and \(\mathbf{z}\in\mathcal{Z}\) is the problem description (or problem features). For example, if the task encoded by \(f\) is to find a shortest path in a graph, then \(\mathbf{x}\) is the path to be optimized, and \(\mathbf{z}\) represents the pairwise distances (or features used to estimate them) in the formulation. Ideally, we would like to have an optimizer that can (1) deal with the complexity of the loss function landscape (e.g., highly nonlinear objective \(f\), complicated and possibly combinatorial domain \(\Omega\)), (2) leverage past experience in solving similar problems, and (3) deal with a partial information setting, in which only an observable problem description \(\mathbf{y}\) can be seen but not the true problem description \(\mathbf{z}\) when the decision is made at test time. To design such an optimizer, we consider the following setting: assume that for the training instances, we have access to the full problem descriptions \(\{\mathbf{z}_{i}\}\subseteq\mathcal{Z}\), as well as the observable descriptions \(\{\mathbf{y}_{i}\}\subseteq\mathcal{Y}\), while for the test instance, we only know its observable description \(\mathbf{y}_{\mathrm{test}}\), but not its full description \(\mathbf{z}_{\mathrm{test}}\). Note that such a setting naturally incorporates optimization under uncertainty, in which a decision needs to be made without full information, while the full information can be obtained in hindsight (e.g., portfolio optimization). Given this setting, we propose the following general _training_ procedure on a training set \(\mathcal{D}_{\mathrm{train}}:=\{(\mathbf{y}_{i},\mathbf{z}_{i})\}_{i=1}^{N}\) to learn a good optimizer: \[\min_{\boldsymbol{\theta}}\mathcal{L}(Y,Z):=\sum_{i=1}^{N}f\left(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{y}_{i});\mathbf{z}_{i}\right) \tag{2}\] Here \(\mathbf{g}_{\boldsymbol{\theta}}:\mathcal{Y}\mapsto\Omega\) is a _learnable solver_ that returns a high quality solution for objective \(f\) _directly_ from the observable problem description \(\mathbf{y}_{i}\). \(\boldsymbol{\theta}\) are the learnable solver's parameters. Once \(\mathbf{g}_{\boldsymbol{\theta}}\) is learned, we can solve new problem instances with observable description \(\mathbf{y}_{\mathrm{test}}\) by either calling \(\mathbf{x}_{\mathrm{test}}=\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{y}_{\mathrm{test}})\) to get a reasonable solution, or by continuing to optimize Eqn. 1 using \(\mathbf{x}=\mathbf{x}_{\mathrm{test}}\) as an initial solution. Theoretically, if the problem description \(\mathbf{z}\) is fully observable (i.e., \(\mathbf{y}=\mathbf{z}\)), the optimization oracle \(\arg\min_{\mathbf{x}\in\Omega}f(\mathbf{x};\mathbf{z})\) solves Eqn. 2. However, solving it may be computationally intractable even with full information, as in nonlinear combinatorial optimization. Our proposed training procedure is general and covers many previous works that rely on either fully or partially observed problem information. **In Smart Predict+Optimize (P+O)**, \(f\) belongs to a specific function family (e.g., linear or quadratic programs).
The full problem description \(\mathbf{z}\) includes objective coefficients, but we only have access to noisy versions of them in \(\mathbf{y}\). Then the goal in P+O is to identify a mapping \(\mathbf{c}_{\boldsymbol{\theta}}\) (e.g. a neural net) so that a downstream solver outputs a high quality solution: \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{y})=\arg\min_{\mathbf{x}\in\Omega}f(\mathbf{x};\mathbf{c}_{\boldsymbol{\theta}}(\mathbf{y}))\). Here \(\arg\min_{\mathbf{x}\in\Omega}f\) can often be solved with standard approaches, and the main challenge is to estimate the problem description accurately (w.r.t. eq. (2)). Note that other P+O formulations can be encompassed within our framework in eq. (2). For instance, the regret-based formulation described in SPO [13] can be represented as \(\max_{\hat{\mathbf{x}}\in\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{y})}f\left(\hat{\mathbf{x}};\mathbf{z}\right)-f^{*}\) where \(f^{*}\) is the optimal loss that is independent of \(\boldsymbol{\theta}\).

**Learning surrogate costs for MINLP** When \(f\) is a general nonlinear objective (but \(\mathbf{y}=\mathbf{z}\) is fully observed), computing \(\arg\min_{\mathbf{x}\in\Omega}f\) also becomes non-trivial, especially if \(\mathbf{x}\) is in combinatorial spaces. Such problems are commonly referred to as mixed integer nonlinear programming (MINLP). To leverage the power of linear combinatorial solvers, SurCo [16] sets the learnable solver to be \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{y})=\arg\min_{\mathbf{x}\in\Omega}\mathbf{x}^{\top}\mathbf{c}_{\boldsymbol{\theta}}(\mathbf{y})\), which is a linear solver and does not include the nonlinear function \(f\) at all. Intuitively, this models the complexity of \(f\) by the learned _surrogate cost_ \(\mathbf{c}_{\boldsymbol{\theta}}\), which is parameterized by a neural network. Surprisingly, this works quite well in practice [16].

## 4 Lancer: Learning Landscape Surrogate Losses

Variations of training objective Eqn. 2 have been proposed to learn \(\boldsymbol{\theta}\) in one way or another. This includes derivative-based approaches discussed in section 2 as well as domain-specific methods that learn \(\boldsymbol{\theta}\) efficiently and avoid backpropagating through the solver, e.g., SPO+ [13]. While these are valid approaches, at each step of the training process, we need to call a solver to evaluate \(\mathbf{g}_{\boldsymbol{\theta}}\), which can be computationally expensive. Furthermore, \(\mathbf{g}_{\boldsymbol{\theta}}\) is learned via gradient descent of Eqn. 2, which involves backpropagating through the solver. One issue of this procedure is that the gradient is non-zero only at certain locations (i.e., when changes in the coefficients lead to changes in the optimal solution), which makes the gradient-based optimization difficult. One question arises: can we model the composite function \(f\circ\mathbf{g}_{\boldsymbol{\theta}}\) jointly? The intuition here is that while \(\mathbf{g}_{\boldsymbol{\theta}}\) can be hard to compute, \(f\circ\mathbf{g}_{\boldsymbol{\theta}}\) can be smooth to model, since \(f\) can be smooth around the solution provided by \(\mathbf{g}_{\boldsymbol{\theta}}\).
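To make the object being modeled concrete, the following is a minimal, self-contained sketch (not the authors' released code) of the composite pipeline \(f\circ\mathbf{g}_{\boldsymbol{\theta}}\): a stand-in predictor produces costs from features, a generic LP solver plays the role of \(\mathbf{g}\), and the true objective \(f\) is evaluated at the returned solution. The toy knapsack-style relaxation, the random linear "predictor", and the quadratic \(f\) are illustrative assumptions.

```python
# Minimal sketch of the composite pipeline f(g_theta(y); z).
# All shapes, the random stand-in for c_theta, and the toy objective f are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_items, feat_dim = 10, 5
W = rng.normal(size=(n_items, feat_dim))       # stand-in for a learned predictor c_theta

def g(c_hat, weights, capacity):
    """Solver: maximize c_hat^T x subject to weights^T x <= capacity, 0 <= x <= 1."""
    res = linprog(-c_hat, A_ub=weights[None, :], b_ub=[capacity],
                  bounds=[(0.0, 1.0)] * len(c_hat), method="highs")
    return res.x

def f(x, z):
    """True task objective evaluated on the chosen solution (toy nonlinear value)."""
    return -(z @ x) + 0.1 * (x @ x)

y = rng.normal(size=feat_dim)                  # observable description
z = rng.normal(size=n_items) + 1.0             # hidden true values, seen only in training
weights = rng.uniform(0.1, 1.0, size=n_items)

c_hat = W @ y                                  # predicted costs c_theta(y)
x_star = g(c_hat, weights, capacity=2.0)       # the argmin step that is hard to differentiate
print("decision loss f(g(c_theta(y)); z) =", f(x_star, z))
```

Training \(\mathbf{c}_{\boldsymbol{\theta}}\) end-to-end would require differentiating through the solver call inside `g`, which is precisely the step the surrogate introduced next sidesteps.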
If we model \(f\circ\mathbf{g}_{\boldsymbol{\theta}}\) locally by a _landscape surrogate_ model \(\mathcal{M}\), and optimize directly on the local landscape of \(\mathcal{M}\), then the target mapping \(\mathbf{c}_{\boldsymbol{\theta}}\) can be trained without running expensive solvers: \[\min_{\boldsymbol{\theta}}\mathcal{M}(Y,Z):=\sum_{i=1}^{N}\mathcal{M}\left( \mathbf{c}_{\boldsymbol{\theta}}(\mathbf{y}_{i});\mathbf{z}_{i}\right). \tag{3}\] Note that \(\mathcal{M}\) directly depends on \(\mathbf{c}_{\boldsymbol{\theta}}\) (not on \(\mathbf{g}_{\boldsymbol{\theta}}\)). Obviously, \(\mathcal{M}\) cannot be any arbitrary function. Rather it should satisfy certain conditions: 1) capture a task-specific loss \(f\circ\mathbf{g}_{\boldsymbol{\theta}}\); 2) be differentiable and smooth. Differentiability allows us to train our target model \(\mathbf{c}_{\boldsymbol{\theta}}\) in end-to-end fashion (assuming \(\mathbf{c}\) is itself differentiable). The primary advantage is that we can _avoid backpropagating through the solver or even through \(f\)_ (e.g., multi-armed bandits). Moreover, \(\mathcal{M}_{\mathbf{w}}\) is typically high dimensional (e.g., a neural net) and potentially can make the learning problem for \(\mathbf{c}_{\boldsymbol{\theta}}\) much easier. The question is how to obtain such a model \(\mathcal{M}\)? One way is to parameterize it and formulate the learning problem: \[\begin{split}\min_{\mathbf{w}}\|&\mathcal{M}_{ \mathbf{w}}(Y,Z;\boldsymbol{\theta}^{*})-\mathcal{L}(Y,Z;\boldsymbol{\theta}^{* })\|\\ &\text{s.t.}\quad\boldsymbol{\theta}^{*}\in\operatorname*{argmin}_{ \boldsymbol{\theta}}\mathcal{M}_{\mathbf{w}}(Y,Z;\boldsymbol{\theta}).\end{split} \tag{4}\] Here, we add \(\boldsymbol{\theta}\) as an argument to explicitly emphasize the dependence of the loss function on the target mapping \(\mathbf{c}\). Note that \(\mathcal{M}\) does not need to be accurate over the entire domain, but only needs to be accurate around the optimal solution \(\mathbf{\theta}^{*}\). In other words, \(\mathcal{M}_{\mathbf{w}}\) serves as a _surrogate loss_ that approximates \(\mathcal{L}\) in a certain _landscape_: \(\mathcal{M}_{\mathbf{w}}(Y,Z,\mathbf{\theta})\sim\mathcal{L}(Y,Z,\mathbf{\theta})\). Notice that Eqn. 4 is an instance of _bi-level optimization_. Established methods from the bi-level optimization literature, such as [17; 41], could potentially be used, but most of them still rely on \(\nabla_{\mathbf{\theta}}\mathcal{L}\) (or even \(\nabla_{\mathbf{\theta}}^{2}\)), which involves differentiating through the solver. To overcome this issue, we propose a simple and generic algorithm 1, which is based on alternating optimization (high-level idea is depicted in fig. 1). The core idea is to simultaneously learn both mappings (\(M_{\mathbf{w}}\) and \(\mathbf{c}_{\mathbf{\theta}}\)) to explore different solution spaces. By improving our target model \(\mathbf{c}_{\mathbf{\theta}}\), we obtain better estimates of the surrogate loss around the solution, and a better estimator \(M_{\mathbf{w}}\) leads to better optimization of the desired loss \(\mathcal{L}\). The use of alternating optimization helps both mappings reach a common goal. ``` 1:Input: \(\mathcal{D}_{\mathsf{train}}\leftarrow\{\mathbf{y}_{i},\mathbf{z}_{i}\}_{i=1}^ {N}\), solver \(\mathbf{g}\), objective \(f\), target model \(\mathbf{c}_{\mathbf{\theta}}\); 2:Initialize \(\mathbf{c}_{\mathbf{\theta}}\) (e.g. 
random, warm start); 3:for\(t=1\dots T\)do 4:\(\bullet\) w-step (fix \(\mathbf{\theta}\) and optimize over \(\mathbf{w}\)): 5:for\((\mathbf{y}_{i},\mathbf{z}_{i})\in\mathcal{D}_{\mathsf{train}}\)do 6: evaluate \(\hat{\mathbf{c}}_{i}=\mathbf{c}_{\mathbf{\theta}}(\mathbf{y}_{i})\); 7: evaluate \(\hat{f}_{i}=f(\mathbf{g}(\hat{\mathbf{c}}_{i});\mathbf{z}_{i})\); 8: add \((\hat{\mathbf{c}}_{i},\mathbf{z}_{i},\hat{f}_{i})\) to \(\mathcal{D}\); 9:endfor 10: solve \(\min_{\mathbf{w}}\sum_{i\in\mathcal{D}}\left\|\mathcal{M}_{\mathbf{w}}(\hat{ \mathbf{c}}_{i},\mathbf{z}_{i})-\hat{f}_{i}\right\|\) via supervised learning; 11:\(\bullet\)\(\mathbf{\theta}\)-step (fix \(\mathbf{w}\) and optimize over \(\mathbf{\theta}\)): 12: solve \(\min_{\mathbf{\theta}}\sum_{i\in\mathcal{D}_{\mathsf{train}}}\mathcal{M}_{\mathbf{ w}}(\mathbf{c}_{\mathbf{\theta}}(\mathbf{y}_{i}),\mathbf{z}_{i})\) via supervised learning. 13:endfor ``` **Algorithm 1** Pseudocode for simultaneously learning LANCER and target model \(\mathbf{c}_{\mathbf{\theta}}\). Note that the algorithm may vary slightly based on setting (e.g., P+O and variations of SurCo ), see Appendix B. Note that the Algorithm 1 avoids backpropagating through the solver or even through \(f\). The only requirement is evaluating the function \(f\) at the solution of \(\mathbf{g}\), which can be achieved by blackbox solver access. As a result, this approach eliminates the complexity and computational expense associated with computing derivatives of combinatorial solvers, making it a more efficient and practical solution as shown in Table 1. ### Reusing landscape surrogate model \(\mathcal{M}_{\mathbf{w}}\) Once Algorithm 1 finishes, we usually discard \(\mathcal{M}_{\mathbf{w}}\) as it is an intermediate result of the algorithm, and we only retain \(\mathbf{c}_{\mathbf{\theta}}\) (and solver \(\mathbf{g}\)) for model deployment. However, we have found through empirical exploration that the learned surrogate loss \(\mathcal{M}_{\mathbf{w}}\) can be _reused_ for a range of problems, increasing the versatility of the approach. This is particularly advantageous for _SurCo_ setting, where we handle one instance at a time. In this scenario, we utilize the _trained_\(\mathcal{M}_{\mathbf{w}}\) for unseen test instances by executing only the \(\mathbf{\theta}\)-step of Algorithm 1. The main advantage of this extension is that it eliminates the need for access to the solver \(\mathbf{g}\), leading to significant deployment runtime improvements. ### Reusing past evaluations of \(f\circ\mathbf{g}_{\mathbf{\theta}}\) In LANCER, the learning process of \(\mathcal{M}_{\mathbf{w}}\) is solely reliant on \(\mathcal{D}\) and is independent of the current state of \(\mathbf{c}_{\mathbf{\theta}}\). Put simply, to effectively learn \(\mathcal{M}_{\mathbf{w}}\), we only need the inputs and outputs of \(f\circ\mathbf{g}_{\mathbf{\theta}}\), namely \(\mathbf{c}_{\mathbf{\theta}}(\mathbf{y}_{i})\), \(Z\), and the corresponding objective value \(\hat{\mathbf{f}}\). Interestingly, we can cache the predicted descriptions themselves, \(\mathbf{c}_{\mathbf{\theta}}(\mathbf{y}_{i})\), without the need for the model \(\mathbf{\theta}\) or problem information. This caching mechanism allows us to reuse the data \((\mathbf{c}_{\mathbf{\theta}}(\mathbf{y}_{i}),\mathbf{z},\hat{\mathbf{f}})\) from previous iterations (\(1\dots T-1\)) as-is. 
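To make Algorithm 1 and the caching just described concrete, here is a rough, runnable sketch of the alternating loop (an illustration under assumed toy data, tiny MLPs, and arbitrary hyperparameters; it is not the released implementation). The \(\mathbf{w}\)-step fits \(\mathcal{M}_{\mathbf{w}}\) by regression on the cached \((\hat{\mathbf{c}},\mathbf{z},\hat{f})\) tuples, and the \(\boldsymbol{\theta}\)-step trains \(\mathbf{c}_{\boldsymbol{\theta}}\) through \(\mathcal{M}_{\mathbf{w}}\) alone, so no gradients of the solver or of \(f\) are needed.

```python
# Sketch of Algorithm 1: alternating w-step / theta-step with a replay-style cache.
# Toy linear task, network sizes, and hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d_y, N = 8, 4, 64                           # decision dim, feature dim, train size
Y = rng.normal(size=(N, d_y)).astype(np.float32)
Z = (rng.normal(size=(N, n)) + 1.0).astype(np.float32)   # hidden true costs

def g(c_hat):        # black-box solver: min c_hat^T x  s.t.  sum(x) <= 3, 0 <= x <= 1
    res = linprog(c_hat, A_ub=np.ones((1, n)), b_ub=[3.0],
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

def f(x, z):         # task objective, only ever *evaluated* (never differentiated)
    return float(z @ x)

c_theta = nn.Sequential(nn.Linear(d_y, 32), nn.ReLU(), nn.Linear(32, n))
M_w = nn.Sequential(nn.Linear(2 * n, 64), nn.Tanh(), nn.Linear(64, 1))
opt_theta = torch.optim.Adam(c_theta.parameters(), lr=1e-2)
opt_w = torch.optim.Adam(M_w.parameters(), lr=1e-2)
buffer = []          # cache of (c_hat, z, f_hat) tuples, reused across iterations

for t in range(5):
    with torch.no_grad():                      # collect solver evaluations
        C_hat = c_theta(torch.from_numpy(Y)).numpy()
    for c_hat, z in zip(C_hat, Z):
        buffer.append((c_hat, z, f(g(c_hat), z)))
    Cb = torch.tensor(np.array([b[0] for b in buffer]))
    Zb = torch.tensor(np.array([b[1] for b in buffer]))
    Fb = torch.tensor([b[2] for b in buffer]).unsqueeze(1)
    for _ in range(200):                       # w-step: fit M_w to samples of f o g
        opt_w.zero_grad()
        loss_w = ((M_w(torch.cat([Cb, Zb], dim=1)) - Fb) ** 2).mean()
        loss_w.backward(); opt_w.step()
    for _ in range(50):                        # theta-step: descend the surrogate only
        opt_theta.zero_grad()
        pred = c_theta(torch.from_numpy(Y))
        loss_t = M_w(torch.cat([pred, torch.from_numpy(Z)], dim=1)).mean()
        loss_t.backward(); opt_theta.step()
    print(f"iter {t}: surrogate fit {loss_w.item():.3f}, surrogate loss {loss_t.item():.3f}")
```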
By adopting this practice, we enhance and diversify the available training data for \(\mathcal{M}_{\mathbf{w}}\), which proves particularly advantageous for neural networks. This bears resemblance to the _replay buffer_ [26] commonly found in the Reinforcement Learning literature.

## 5 Experiments

We validate our approach (LANCER) in two settings: smart predict+optimize and learning surrogate costs for MINLP. For each setting, we study a range of problems, including linear, nonlinear, combinatorial, and others, encompassing both synthetic and real-world scenarios. Overall, LANCER _exhibits superior or comparable objective values while maintaining efficient runtime_. Additionally, we perform ablation studies, such as re-using \(\mathcal{M}\).

### Synthetic data

#### 5.1.1 Combinatorial optimization with linear objective

The shortest path (SP) and multidimensional knapsack (MKS) are both classic problems in combinatorial optimization with broad practical applications. In this setting, we consider a scenario where problem parameters \(\mathbf{z}\), such as graph edge weights and item prices, cannot be directly observed during test time, and instead need to be estimated from \(\mathbf{y}\) via the learnable mapping \(\mathbf{z}=\mathbf{c}_{\boldsymbol{\theta}}(\mathbf{y})\). That is, we consider the smart P+O setting.

**Setup** We use the standard linear program (LP) formulation for SP and a mixed integer linear program (MILP) formulation for MKS. Observed features \(\mathbf{y}\) and the corresponding ground truth problem descriptions \(\mathbf{z}\) for SP are generated using the same procedure as in [37]: grid size of \(5\times 5\), feature dimension of \(\mathbf{y}\in\mathbb{R}^{5}\) (obtained using a random linear mapping from \(\mathbf{z}\)), 1000 instances for both train and test. For MKS, we increased the knapsack dimension to 5 and capacity to 45, and we set the number of items to 100. Moreover, we use a randomly initialized MLP (1 ReLU hidden layer) to generate features of dimension \(\mathbf{y}\in\mathbb{R}^{256}\). As for the baselines, apart from the naive 2-stage approach, we have SPO+ [13] and DBB [33], both implemented in the PyEPO [37] library. Furthermore, we added LODLs, a novel method from Shah et al. [36]. We explored all important hyperparameters for all methods as thoroughly as we could on a fixed cross-validation set. For LANCER, we use an MLP with 2 tanh hidden layers of size 200 for the surrogate model \(\mathcal{M}\). We set \(T=10\), and the number of updates for \(\mathcal{M}\) and \(\mathbf{c}_{\boldsymbol{\theta}}\) is at most 10. We use SCIP [1] to solve the LP for the shortest path and the MILP for the knapsack. Further details can be found in Appendix C.1.1.

**Results** The results are summarized in fig. 2. We report the normalized regret as described in [37]. The findings indicate that LANCER and SPO+ consistently outperform the two-stage baseline, particularly when considering the warm start. As SPO+ is specifically designed for linear programs, it provides informative gradients, making it a robust baseline. Even in MKS, where theorems proposed in [13] are no longer applicable, SPO+ performs decently with minimal tuning effort.
\begin{table} \begin{tabular}{l|c|c c c c} \hline \hline Feature \textbackslash{} Method & LANCER & SurCo [16] & LODLs [36] & SPO+ [13] & DiffOpt [15; 2; 33] & Exact [1; 35] \\ \hline On-the-fly opt & \(+\) & \(+\) & \(-\) & \(-\) & \(-\) & \(+\) \\ Nonlinear \(f\) & \(+\) & \(+\) & \(+\) & \(-\) & \(+\) & \(+\) \\ Blackbox \(f\) & \(+\) & \(-\) & \(+\) & \(-\) & \(-\) & \(-\) \\ \(\partial f\) not required & \(+\) & \(-\) & \(+\) & \(+\) & \(-\) & \(+\) \\ \(\partial\mathbf{g}_{\boldsymbol{\theta}}\) not required & \(+\) & \(-\) & \(+\) & \(+\) & \(-\) & \(+\) \\ Generalization & \(+\) & \(+\) & \(-\) & \(+\) & \(+\) & \(-\) \\ Few fast solver calls & \(\pm\) & \(\pm\) & \(-\) & \(\pm\) & \(\pm\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 1: Conceptual comparison of methods from related literature. Capabilities on the left are present or not for Methods on the top. \(\pm\) is given when this capability depends empirically on the problem at hand.

Figure 2: Normalized test regret (lower is better) for different P+O methods: 2-stage, SPO+ [13], DBB [33], LODLs [36] and ours (LANCER). Overlaid dark green bars (right) indicate that the method warm started from the solution of 2stg. DBB performs considerably worse on the right benchmark and is cut off on the \(y\)-axis.

The DBB approach, however, demonstrates unsatisfactory default performance but can yield favorable outcomes with proper initialization and tuning (see the right plot). Interestingly, the other P+O baselines, initialized randomly, were unable to outperform a naive 2stg in both benchmarks. LANCER achieves superior performance in both tasks, with a noticeable advantage in MKS. This may be attributed to the high dimension of the MKS problem and the large feature space (\(\mathbf{y}\)). One possible explanation is that the sparse gradients of the derivative-based method make the learning problem harder, whereas LANCER models the landscape of \(f\circ g\), providing informative gradients for \(\mathbf{c_{\theta}}\).

#### 5.1.2 Combinatorial optimization with nonlinear objective

In this section, we apply LANCER for solving mixed integer nonlinear programs (MINLP). Specifically, we transform a combinatorial problem with a nonlinear objective into an instance of MILP via learning linear surrogate costs as described in Ferber et al. [16]. Note that in this setting, we assume that the full problem description \(\mathbf{y}=\mathbf{z}\) is given and fully observable (in contrast to the P+O setting). We begin by examining on-the-fly optimization, where each problem is treated independently. In this scenario, the cost vector \(\mathbf{c_{\theta}}(\mathbf{y})\) simplifies to a constant value \(\mathbf{c}\). SurCo is then responsible for directly training the cost vector \(\mathbf{c}\) of the linear surrogate. As we lack a distribution of problems to train \(\mathcal{M}\), specific adaptations to Algorithm 1 are necessary, which are outlined in detail in Appendix B. We refer to this version as LANCER-zero to be consistent with SurCo-zero.

**Setup** Nonlinear shortest path problems arise when the objective is to maximize the probability of reaching a destination before a specified time in graphs with random edges [25, 14].
The problem formulation is similar to the standard linear programming (LP) formulation of the shortest path, as described in section 5.1.1, with a few adjustments: 1) the weight of each edge follows a normal distribution, i.e., \(w_{e}\sim\mathcal{N}(\mu_{e},\sigma_{e})\); 2) the objective is to maximize the probability that the sum of weights along the shortest path is below a threshold \(W\), which can be expressed using the standard Gaussian cumulative distribution function (CDF), \(P(\sum_{e\in E}w_{e}\leq W)=\Phi\left((W-\sum_{e\in E}\mu_{e})/\sqrt{\sum_{e\in E}\sigma_{e}}\right)\) where \(E\) is the set of edges belonging to the shortest path. We use \(5\times 5\) and \(15\times 15\) grid graphs with 25 draws of edge weights. We set the threshold \(W\) to three different values corresponding to loose, normal, and tight deadlines. The remaining settings are adapted from Ferber et al. [16], and additional details can be found in Appendix C.1.2.

**Results** Fig. 3 illustrates the performance of different methods for both grid sizes. SCIP directly formulates the MINLP to maximize the CDF, resulting in an optimal solution. However, this approach is not scalable for larger problems and is limited to smaller instances like the \(5\times 5\) grid. The heuristic method assigns each edge weight as \(w_{e}=\mu_{e}+\gamma\sigma_{e}\), where \(\gamma\) is a user-defined hyperparameter, and employs standard shortest path algorithms (e.g., Bellman-Ford). As the results indicate, this heuristic approach produces highly suboptimal solutions. SurCo-zero and LANCER-zero demonstrate similar performance, with LANCER-zero being superior in almost all scenarios.

Figure 3: Results on stochastic shortest path using different grid sizes: 5x5 (left) and 15x15 (right). We report avg objective values (higher is better) on three settings described in [16]. For grid size of 15x15, SCIP [1] was unable to finish within the 30 min time limit.

### Real-world use case: quadratic and broader nonlinear portfolio selection

#### 5.2.1 The quadratic programming (QP) formulation

In this study, we tackle the classical quadratic Markowitz [29] portfolio selection problem. We use real-world data from Quandl [34] and follow the setup described in Shah et al. [36]. The prediction task leverages each stock's historical data \(\mathbf{y}\) to forecast future prices \(\mathbf{z}\), which are then utilized to solve the QP (i.e., the P+O setting). The predictor is an MLP with 1 hidden layer of size 500.

**Setup** We follow a similar setup to that described in [36], except for a fix in the objective's quadratic term that slightly affects the decision error's magnitude. More details can be found in Appendix C.2.1. We compare LANCER against two-stage and LODLs. However, SPO+ is not applicable in this nonlinear setting, and DBB's performance is notably worse since it is designed for purely combinatorial problems. Additionally, we report the optimal solution (using the ground truth values of \(\mathbf{z}\)) and the Melding Decision Focused Learning (MDFL) [40] method, which leverages the implicit function theorem to differentiate through the KKT conditions. For our LANCER implementation, we use an MLP with 2 hidden layers for \(\mathcal{M}\) and update each of the parametric mappings 5 times per iteration with a total of \(T=8\) iterations. We use cvxpy [10] to solve the QP.

**Results** Table 2 summarizes our results. We report the normalized decision loss (i.e., normalized Eqn. (2)) on test data.
Since the problem is smooth and exact gradients can be calculated, MDFL achieves the best performance closely followed by the LANCER. The remaining results are in agreement with [36]. While LANCER does not achieve the best overall performance, it does so using a significantly smaller number of calls to a solver, as we discuss in more detail in Section 5.3. #### 5.2.2 Combinatorial portfolio selection with third-order objective The convex portfolio optimization problem discussed in the previous section 5.2.1 is unable to capture desirable properties such as logical constraints [5, 15], or higher-order loss functions [20] that integrate metrics like co-skewness to better model risk. We use the Quandl [34] data (see Appendix C.2.2 for details on setup) and similar to section 5.1.2, we assume that the full problem description \(\mathbf{z}\) is given at train/test time. **Results** are shown in fig. 4. We first tried to solve the given MINLP exactly via SCIP. However, it fails to produce the optimal solution within a 1 hour time limit and we report the best incumbent feasible solution. MIQP (blue squares) and MILP (blue triangles) approximations overlook the co-skewness and non-linear terms, respectively. Comparing their performance, MIQP exhibits a \(2\times\) lower loss than the MILP baseline but in a significantly longer runtime. For LANCER and SurCo, we present results for two scenarios: learning the linear cost vector \(\mathbf{c}\) directly (_zero_) for each instance, and a parameterized version \(\mathbf{c}_{\theta}(\mathbf{z})\) (_prior_). The main distinction of "prior" is that no learning occurs during test time, as we directly map the problem descriptor \(\mathbf{z}\) to a linear cost vector and solve the MILP. Consequently, the deployment runtime is similar to that of the MILP approximation, but LANCER-prior produces slightly superior solutions. Remarkably, LANCER-zero achieves significantly better loss values, surpassing all other methods. Although it takes longer to run, the runtime remains manageable, and importantly, the solution quality improves with an increasing number of iterations. \begin{table} \begin{tabular}{l c} \hline Method & Test DL \\ \hline Random & 1 \\ Optimal & 0 \\ \hline 2–Stage & 0.57 \(\pm\) 0.02 \\ LODLs [36] & 0.55 \(\pm\) 0.02 \\ MDFL [40] & 0.52 \(\pm\) 0.01 \\ **LANCER** & **0.53 \(\pm\) 0.02** \\ \hline \end{tabular} \end{table} Table 2: Portfolio selection normalized test decision loss (lower is better). Figure 4: Objective (lower is better) and deployment runtime for combinatorial portfolio selection problem. For LANCER–zero and SurCo–zero, numbers at each point correspond to the number of iterations. ### Computational efficiency Comparing baseline methods, including LANCER, we find that querying solver \(\mathbf{g}_{\theta}\) is the primary computational bottleneck. To evaluate this aspect, we empirically analyze different algorithms on various benchmarks in the P+O domain. The results, depicted in fig. 5, highlight that LODLs require sampling a relatively large number of points per training instance, leading to potentially time-consuming solver access. On the other hand, gradient-based methods like DBB, MDFL, and SPO+ typically solve the optimization problem 1-2 times per update but require more iterations to converge. In contrast, LANCER accesses the solver in the \(\mathbf{w}\)-step, with the number of accesses proportional to the training set size and a small total number of alternating optimization iterations. 
Moreover, we leverage saved solutions from previous iterations, akin to a replay buffer, when fitting \(\mathcal{M}\). These combined factors allow us to achieve favorable results with a small value of \(T\). ### Reusing landscape surrogate \(\mathcal{M}_{\mathbf{w}}\) In this scenario, we introduce a dependency of \(\mathcal{M}_{\mathbf{w}}\) on both the predicted linear cost (\(\mathbf{c}\)) and the problem descriptor (\(\mathbf{y}\)), as described in section 4.1. This enables us to reuse \(\mathcal{M}_{\mathbf{w}}\) for different problem instances without retraining, and eliminate the dependency on the solver \(\mathbf{g}_{\theta}\), giving LANCER a substantial runtime acceleration. To validate this hypothesis, we pretrain \(\mathcal{M}_{\mathbf{w}}\) using 200 instances of the stochastic shortest path on a \(15\times 15\) grid by providing concatenated \((\mathbf{c},\mathbf{y})\) as input. We apply LANCER-zero to the same test set as before and present results in fig. 3, demonstrating comparable performance between these two approaches, with "reused \(\mathcal{M}_{\mathbf{w}}\)" being much faster. ## 6 Conclusion This paper makes a dual contribution. Firstly, we derive a unified training procedure to address various coupled learning and optimization settings, including smart predict+optimize and surrogate learning. This is significant as it advances our understanding of learning-integrated optimization under partial information. Secondly, we propose an effective and powerful method called LANCER to tackle this training procedure. LANCER offers several advantages over existing literature, such as versatility, differentiability, and efficiency. Experimental results validate these advantages, leading to significant performance improvements, especially in high-dimensional spaces, both in problem description and feature space. One potential drawback is the complexity of tuning \(\mathcal{M}\), requiring model selection and training. However, future research directions include addressing this drawback and exploring extensions of LANCER, such as applying it to fully black box \(f\) scenarios. Figure 5: Trade-off curves between black-box solver calls (MILP or QP) vs decision loss (or regret) on P+O problems. Point labels (e.g. 1,4,7) correspond to the epoch; except for LODLs, where they correspond to the number of samples per instance. Each algorithm uses a different number of BB calls per epoch. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline Method & \multicolumn{2}{c|}{Loose Deadline} & \multicolumn{2}{c|}{Normal Deadline} & \multicolumn{2}{c}{Tight Deadline} \\ & obj. & time (s) & obj. & time (s) & obj. & time (s) \\ \hline LANCER–zero & 0.556 \(\pm\) 0.006 & 61.1 \(\pm\) 3.2 & 0.497 \(\pm\) 0.004 & 62.3 \(\pm\) 2.9 & 0.434 \(\pm\) 0.005 & 62.8 \(\pm\) 2.7 \\ LANCER–reused \(\mathcal{M}_{\mathbf{w}}\) & 0.556 \(\pm\) 0.007 & 2.9 \(\pm\) 0.3 & 0.496 \(\pm\) 0.004 & 2.7 \(\pm\) 0.6 & 0.432 \(\pm\) 0.004 & 2.5 \(\pm\) 0.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of reusing \(\mathcal{M}\) on stochastic shortest path problem from fig. 3 (\(15\times 15\) grid). Here, “reused \(\mathcal{M}_{\mathbf{w}}\)” has limited access to the solver \(\mathbf{g}\), and thus is much faster while retaining solution quality.
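As an illustrative footnote to sections 4.1 and 5.4 (a sketch under assumed dimensions and a surrogate that would be pretrained in practice; this is not the released code), reusing a trained \(\mathcal{M}_{\mathbf{w}}\) at deployment reduces to a solver-free \(\boldsymbol{\theta}\)-step: the linear cost for a new instance is optimized directly through the frozen surrogate and only then handed to a single MILP call.

```python
# Deployment-time reuse of a pretrained landscape surrogate M_w(c, y):
# optimize the surrogate cost c for one unseen instance without any solver access.
# Dimensions and the randomly initialized M_w stand in for a pretrained model.
import torch
import torch.nn as nn

n, d_y = 20, 8
M_w = nn.Sequential(nn.Linear(n + d_y, 64), nn.Tanh(), nn.Linear(64, 1))
M_w.requires_grad_(False)                      # freeze the (pretrained) surrogate

y_test = torch.randn(d_y)                      # observable description of the new instance
c = torch.zeros(n, requires_grad=True)         # linear surrogate cost to be learned
opt = torch.optim.Adam([c], lr=0.05)

for _ in range(200):                           # theta-step only: no g, no f
    opt.zero_grad()
    loss = M_w(torch.cat([c, y_test])).squeeze()
    loss.backward()
    opt.step()

# The optimized c is then passed once to a standard MILP solver,
# x_star = argmin_{x in Omega} c^T x, to produce the final decision.
print("surrogate-predicted objective:", loss.item())
```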
2304.11965
Optimal work fluctuations for finite-time and weak processes
The optimal protocols for the irreversible work achieve their maximum usefulness if their work fluctuations are the smallest ones. In this work, for classical and isothermal processes subjected to finite-time and weak drivings, I show that the optimal protocol for the irreversible work is the same for the variance of work. This conclusion is based on the fluctuation-dissipation relation $\overline{W}=\Delta F+\beta \sigma_W^2/2$, extended now to finite-time and weak drivings. To illustrate it, I analyze a white noise overdamped Brownian motion subjected to an anharmonic stiffening trap for fast processes. By contrast with the already known results in the literature for classical systems, the linear-response theory approach of the work probabilistic distribution is not a Gaussian reduction.
Pierre Nazé
2023-04-24T10:01:28Z
http://arxiv.org/abs/2304.11965v2
# Optimal work fluctuations for finite-time and weak processes

###### Abstract

The optimal protocols for the irreversible work achieve their maximum usefulness if their work fluctuations are the smallest ones. In this work, for isothermal processes subjected to finite-time and weak drivings, I show using linear-response theory that the optimal protocol for the irreversible work is the same as that for the variance of the work. This conclusion is based on the work fluctuation-dissipation theorem \(\overline{W}=\Delta F+\beta\sigma_{W}^{2}/2\), extended now to finite-time and weak drivings. To illustrate such a relation, I analyze the example of an overdamped Brownian motion subjected to an anharmonic stiffening trap and white noise for fast processes. By contrast with the already known results in the literature for classical systems, the linear-response approach to the work probability distribution is not a Gaussian reduction.

_Introduction.--_ The optimization of the thermodynamic work of driving processes is a practical example where averages and fluctuations work side-by-side. Finding a protocol for the external parameter of the system that leads to the minimal value of the average thermodynamic work is most valuable when its fluctuations are equally minimal. A few remarkable works in the last decades have unveiled relations between these concepts, involving averages and fluctuations of the thermodynamic work. In the classical realm, Jarzynski [1] found out from his famous equality that systems with time-dependent quadratic potentials, starting their driving process in contact with a heat bath of temperature \(\beta^{-1}\), obey the following fluctuation-dissipation theorem \[\overline{W}=\Delta F+\frac{\beta}{2}\sigma_{W}^{2}, \tag{1}\] where \(\overline{W}\) is the average work, \(\sigma_{W}^{2}\) is the variance of the work, and \(\Delta F\) is the difference of Helmholtz free energy between the final and initial equilibrium states of the process. Some years later, Speck and Seifert [2] deduced the same result, but now for slowly-varying processes, that is, processes whose rate is slow compared to the relaxation rate of the system. Very recently, in the quantum realm, Miller and coauthors found out that such a fluctuation-dissipation theorem fails when the coherence of quantum systems is added [3]. From the point of view of optimization, in these regimes where the fluctuation-dissipation theorem holds, the minimal work is also the most precise one. The objective of this work is to derive the same result for isothermal, finite-time, and weak driving processes, where the rate of the process is arbitrary and the perturbation of the external parameter is small compared to its initial value. To accomplish that, I generalize the work fluctuation-dissipation theorem (1) to this regime using linear-response theory. To illustrate such a relation, I analyze the case of an overdamped Brownian motion subjected to an anharmonic stiffening trap and white noise for fast processes. Finally, as a main consequence of our findings, the Euler-Lagrange equation that determines the optimal protocol for the minimization of the variance of the work is the same as the one that minimizes the irreversible work.

_Preliminaries.--_ I start by defining notations and developing the main concepts to be used in this work.
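Before doing so, it is worth recording the short argument behind Eq. (1) in the Gaussian case (a standard computation, included here only for illustration). If the work distribution is Gaussian, as it is for the time-dependent quadratic potentials mentioned above, then Jarzynski's equality \(\left\langle e^{-\beta W}\right\rangle=e^{-\beta\Delta F}\) gives \[\left\langle e^{-\beta W}\right\rangle=\exp\left(-\beta\overline{W}+\frac{\beta^{2}}{2}\sigma_{W}^{2}\right)=e^{-\beta\Delta F}\quad\Longrightarrow\quad\overline{W}=\Delta F+\frac{\beta}{2}\sigma_{W}^{2},\] which is precisely the fluctuation-dissipation theorem (1); the task below is to recover the same relation for finite-time, weak drivings, where the work distribution need not be Gaussian.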
Consider a classical system with a Hamiltonian \(\mathcal{H}(\mathbf{z}(\mathbf{z_{0}},t),\lambda(t))\), where \(\mathbf{z}(\mathbf{z_{0}},t)\) is a point in the phase space \(\Gamma\) evolved from the initial point \(\mathbf{z_{0}}\) until time \(t\), with \(\lambda(t)\) being a time-dependent external parameter. During a switching time \(\tau\), the external parameter is changed from \(\lambda_{0}\) to \(\lambda_{0}+\delta\lambda\), with the system being in contact with a heat bath at inverse temperature \(\beta\equiv\left(k_{B}T\right)^{-1}\), where \(k_{B}\) is Boltzmann's constant. The average work performed on the system during this interval of time is \[\overline{W}\equiv\int_{0}^{\tau}\left\langle\overline{\partial_{\lambda}\mathcal{H}}(t)\right\rangle_{0}\dot{\lambda}(t)dt, \tag{2}\] where \(\partial_{\lambda}\) is the partial derivative with respect to \(\lambda\) and the superscripted dot denotes the total time derivative. The generalized force \(\left\langle\overline{\partial_{\lambda}\mathcal{H}}\right\rangle_{0}\) is calculated using the average \(\overline{\cdot}\) over the stochastic path and the average \(\left\langle\cdot\right\rangle_{0}\) over the initial canonical ensemble. The external parameter can be expressed as \[\lambda(t)=\lambda_{0}+g(t)\delta\lambda, \tag{3}\] where, to be consistent with the values of the external parameter at the beginning and end of the process, the protocol \(g(t)\) must satisfy the following boundary conditions \[g(0)=0,\quad g(\tau)=1. \tag{4}\] We also consider that \(g(t)\equiv g(t/\tau)\), which means that intervals of time are measured in units of the switching time. Linear-response theory aims to express average quantities up to first order in some perturbation parameter, considering how this perturbation affects both the observable to be averaged and the averaging process [4]. In our case, we consider that the parameter does not considerably change
2303.04471
Non-convexity of extremal length
With respect to every Riemannian metric, the Teichm\"uller metric, and the Thurston metric on Teichm\"uller space, we show that there exist measured foliations on surfaces whose extremal length functions are not convex. The construction uses harmonic maps to $\mathbb{R}$-trees and minimal surfaces in $\mathbb{R}^n.$
Nathaniel Sagman
2023-03-08T09:43:47Z
http://arxiv.org/abs/2303.04471v2
# Non-convexity of extremal length ###### Abstract. With respect to every Riemannian metric, the Teichmuller metric, and the Thurston metric on Teichmuller space, we show that there exist measured foliations on surfaces whose extremal length functions are not convex. The construction uses harmonic maps to \(\mathbb{R}\)-trees and minimal surfaces in \(\mathbb{R}^{n}.\) ## 1. Introduction Let \(\Sigma_{g}\) be a closed oriented surface of genus \(g\geq 2,\) and let \(\mathbf{T}_{g}\) be the Teichmuller space of marked Riemann surface structures on \(\Sigma_{g}.\) To any measured foliation \(\mathcal{F}\) on \(\Sigma_{g}\) we can associate the extremal length function \(\mathbf{EL}_{\mathcal{F}}:\mathbf{T}_{g}\rightarrow(0,\infty).\) Extremal length functions play a large role in Teichmuller theory. See, for instance, Kerckhoff's formula [8, Theorem 4] and the Gardiner-Masur compactification [6]. Liu-Su proved that \(\mathbf{EL}_{\mathcal{F}}\) is plurisubharmonic, and Miyachi proved the stronger result that it is log-plurisubharmonic (see [11] and [17]). Note that convexity with respect to a Riemannian metric implies plurisubharmonicity. Rafi-Lenhzen proved that, on Teichmuller geodesics, extremal length is \(K\)-quasi-convex, but they also constructed a Teichmuller geodesic along which the extremal length is not convex [10]. Continuing in this direction, Bourque-Rafi proved that the Teichmuller metric admits non-convex balls by finding foliations and geodesics where the extremal length is not convex under any reparametrization [3] (see especially Lemma 1.2 in [3]). In this note, we extend the non-convexity result of Rafi-Lenhzen [10]. Let \(\mathcal{C}\) denote the class of (possibly asymmetric) Finsler metrics on \(\mathbf{T}_{g}\) such that for every point \(S\) in \(\mathbf{T}_{g}\) and every tangent vector \(\mu\) at that point, there is a \(C^{2}\) geodesic starting at \(S\) and tangent to \(\mu\) at time zero. \(\mathcal{C}\) includes every Riemanian metric, notably the Weil-Petersson metric, but also the Teichmuller metric and the Thurston metric. Rafi-Lenhzen build an explicit foliation and a Teichmuller ray that has pieces along which the slope of the extremal length function decreases. In contrast, we show that convexity fails at an infinitesimal level. **Theorem 1.1**.: _For all \(g\geq 2\) and \(m\in\mathcal{C}\), there exists a measured foliation \(\mathcal{F}\) on \(\Sigma_{g}\) with real analytic extremal length function and a geodesic \(t\mapsto S_{t}\) for \(m\) with the property that_ \[\frac{d^{2}}{dt^{2}}|_{t=0}\mathbf{EL}_{\mathcal{F}}(S_{t})<0.\] _In particular, \(\mathbf{EL}_{\mathcal{F}}\) is not convex with respect to \(m\)._ As noted in [13], it follows from the main result of [13] that with respect to every Riemannian metric on \(\mathbf{T}_{g},\) the energy functional for harmonic maps associated with a Fuchsian representation can be non-convex. By the paper [23], the same result holds for (non-Fuchsian) Hitchin representations. We prove Theorem 1.1 by interpreting extremal length as an energy. Drawing from recent work on minimal surfaces (see [12],[13],[14],[15],[23]), we establish a link between non-convexity of extremal length and instability of minimal surfaces in \(\mathbb{R}^{n}.\) One of the main takeaways of the proof is that a destabilizing variation of an equivariant minimal surface in \(\mathbb{R}^{n}\) produces a foliation (or even a number of foliations) whose extremal length can be lowered to second order. 
And although it's probably difficult in practice, if one has the explicit minimal surface data, then one could compute quantities associated with the extremal length (see Remark 3.13). It would require some care, but one could try to use minimal surfaces to construct a foliation and a geodesic (for some metric) such that, in restriction to the geodesic, the extremal length has a local maximum at time zero. This would imply that the extremal length is not convexoidal for the metric. In fact, it is conjectured in [2] that the extremal length systole attains a local maximum at the regular octahedron punctured at its vertices, which would imply that Voronoi's criterion fails for the extremal length systole, and moreover that extremal length is not convexoidal for any metric in \(\mathcal{C}\) (see [1, Definition 1.4 and Proposition 1.5] for definitions and justification). Finally, let us remark that a number of questions remain open related to convexity in Teichmuller geometry. It is not known if the Teichmuller metric convex hull of \(3\) points in \(\mathbf{T}_{g}\) can be all of \(\mathbf{T}_{g}\). While sufficiently small Teichmuller balls are always convex (the analogous fact holds for any Finsler metric), it is unclear if sets of the form \(\{S\in\mathbf{T}_{g}:\mathbf{E}\mathbf{L}_{\mathcal{F}}(S)<\alpha\}\), referred to as horoballs in [3], are convex for \(\alpha\) small. Our proof of Theorem 1.1 suggests a new way to probe the convexity question for such horoballs. ### Acknowledgements I'd like to thank Kasra Rafi and Maxime Fortier Bourque for discussion on this topic. I am funded by the FNR grant O20/14766753, _Convex Surfaces in Hyperbolic Geometry_. ## 2. Preliminaries ### Measured foliations Let \(\mathcal{S}\) be the set of non-trivial homotopy classes of simple closed curves on \(\Sigma_{g}\), and \(\mathbb{R}^{\mathcal{S}}\) the product space with the weak topology. Any \(\gamma\in\mathcal{S}\) determines a point in \(\mathbb{R}^{\mathcal{S}}\) through the intersection number, \[\gamma\mapsto(i(\gamma,\alpha))_{\alpha\in\mathcal{S}}. \tag{1}\] A weighted multicurve is a formal positive linear combination of classes in \(\mathcal{S}\). We extend the intersection number to the space of weighted multicurves \(\mathcal{W}\mathcal{S}\) by \[i\Big{(}\sum_{j=1}^{n}a_{j}\gamma_{j},\sum_{k=1}^{m}b_{k}\alpha_{k}\Big{)}= \sum_{j=1}^{n}\sum_{k=1}^{m}a_{j}b_{k}i(\gamma_{j},\alpha_{k}),\] which as above yields an embedding from \(\mathcal{W}\mathcal{S}\) into \(\mathbb{R}^{\mathcal{S}}\) via the same map (1). To us, the space of measured foliations \(\mathcal{MF}\) is the closure of \(\mathcal{W}\mathcal{S}\) in \(\mathbb{R}^{\mathcal{S}}\). Note that the intersection number extends continuously to \(\mathcal{MF}\)[5]. Alternatively, a measured foliation \(\mathcal{F}\) is a singular foliation on \(\Sigma_{g}\), the singularities being \(k\)-prongs, \(k\geq 3\), equipped with a transverse measure: an absolutely continuous measure defined on arcs transverse to the foliation and which is invariant under leaf-preserving isotopy. Two measured foliations are measure equivalent if they differ by a leaf-preserving isotopy and Whitehead moves. See [5, Expose 5] for the precise definitions. The intersection function is defined on simple closed curves by integration against the transverse measure. Let \(S\) be a Riemann surface structure on \(\Sigma_{g}\). The vertical (resp. 
horizontal) foliation of a holomorphic quadratic differential \(\phi\) on \(S\) is the singular foliation whose leaves are the integral curves of the line field on \(S\backslash\phi^{-1}(0)\) on which \(\phi\) is a negative (resp. positive) real number. The singularities are indeed prongs at the zeros, with a zero of order \(k\) corresponding to a prong with \(k+2\) segments. Both foliations come with transverse measures determined by \(|\mathrm{Re}\sqrt{\phi}|\) and \(|\mathrm{Im}\sqrt{\phi}|\) respectively. In this paper, we will always use the vertical foliation. The Hubbard-Masur theorem asserts that on the given Riemann surface \(S\), every measured foliation \(\mathcal{F}\) is measure equivalent to one arising from the construction above [7]. We refer to the corresponding differential \(\phi\) as the Hubbard-Masur differential.

### Extremal length (and its regularity)

Let \(S\) be a Riemann surface structure on \(\Sigma_{g}\), and \(A\subset S\) a doubly connected domain, conformally equivalent to an annulus \(\{z\in\mathbb{C}:1<|z|<R\}.\) The modulus of \(A\) is the quantity \[\mathrm{Mod}(A)=\frac{1}{2\pi}\log R.\] **Definition 2.1**.: The extremal length of a homotopically non-trivial simple closed curve \(\gamma\) with respect to \(S\) is \[\mathrm{EL}(S,\gamma)=\inf_{A}\mathrm{Mod}(A)^{-1},\] where the infimum is taken over all doubly connected domains \(A\) homotopic to \(\gamma.\) Given a weighted multicurve \(\gamma=\sum_{j=1}^{n}a_{j}\gamma_{j},\) we define \[\mathrm{EL}(S,\gamma)=\sum_{j=1}^{n}a_{j}^{2}\mathrm{EL}(S,\gamma_{j}).\] Kerckhoff showed that the map \(\mathrm{EL}(S,\cdot)\) extends continuously to all measured foliations, defining a map \(\mathrm{EL}(S,\cdot):\mathcal{MF}\to(0,\infty)\) [8]. Fix a measured foliation \(\mathcal{F}\) on \(\Sigma_{g}.\) We define the extremal length function on Teichmuller space, \(\mathbf{EL}_{\mathcal{F}}:\mathbf{T}_{g}\to(0,\infty)\), by \[\mathbf{EL}_{\mathcal{F}}(S)=\mathrm{EL}(S,\mathcal{F}).\] In terms of the Hubbard-Masur differential \(\phi\), the extremal length is the \(L^{1}\) norm: \[\mathbf{EL}_{\mathcal{F}}(S)=\int_{S}|\phi|.\] Recall that the tangent space of \(\mathbf{T}_{g}\) at a surface \(S\) identifies with the vector space of harmonic Beltrami forms on \(S\). By direct computation, \(\mathbf{EL}_{\mathcal{F}}\) is \(C^{1}\), and the derivative is given by \[d(\mathbf{EL}_{\mathcal{F}})_{S}(\mu)=-4\mathrm{Re}\int\phi\mu.\] From our understanding, it is unknown if \(\mathbf{EL}_{\mathcal{F}}\) is \(C^{2}\), and we'll have to address this point in the main proof. Royden's computation in [22, Lemma 1] seems relevant to this problem, and the papers [19] and [20] suggest that it is at most \(C^{2}.\) Around a point in \(\mathbf{T}_{g}\) where all zeros of the Hubbard-Masur differential are simple, \(\mathbf{EL}_{\mathcal{F}}\) is real analytic (see [16]). This condition is generic, and guaranteed when \(\mathcal{F}\) has only 3-pronged singularities and no saddle connections. We do not pursue the general regularity question in the current paper.

### Harmonic maps

We plan to interpret extremal length in terms of harmonic maps to \(\mathbb{R}\)-trees. As above, let \(S\) be a Riemann surface structure on \(\Sigma_{g}\) and \(\nu\) a smooth metric that is conformal with respect to the complex structure.
Let \((M,d)\) be a complete and non-positively curved (NPC) length space equipped with an action \(\rho:\pi_{1}(\Sigma_{g})\to\mathrm{Isom}(M,d).\) Let \(\tilde{S}\) be the universal cover and \(h:\tilde{S}\to(M,d)\) a \(\rho\)-equivariant and Lipschitz map. Korevaar-Schoen [9, Theorem 2.3.2] associate a locally \(L^{1}\) measurable metric \(g=g(h)\), defined on pairs of Lipschitz vector fields. If \(h\) is a \(C^{1}\) map to a smooth Riemannian manifold \((M,\sigma)\), and the distance \(d\) is induced by a Riemannian metric \(\sigma\), then \(g(h)\) is represented by the pullback metric \(h^{*}\sigma\). Since \(\rho\) is acting by isometries, the tensor \(g(h)\) descends to \(S\). Henceforth we consider it a function on \(S\). The energy density is the locally \(L^{1}\) function \[e(h)=\frac{1}{2}\text{trace}_{\nu}g(h).\] The total energy is \[\mathcal{E}(S,h)=\int_{S}e(h)dA,\] where \(dA\) is the area form of \(\nu\). The measurable 2-form \(e(h)dA\) does not depend on the choice of compatible metric \(\nu\), but only on the complex structure. **Definition 2.2**.: \(h\) is harmonic if it is a critical point for the energy \(h\mapsto\mathcal{E}(S,h).\) Let \(g_{ij}(h)\) be the components of \(g(h)\) in a holomorphic local coordinate \(z=x_{1}+ix_{2}.\) The Hopf differential of \(h\) is the measurable tensor on \(S\) given in the local coordinate by \[\phi(h)(z)=\frac{1}{4}(g_{11}(h)(z)-g_{22}(h)(z)-2ig_{12}(h)(z))dz^{2}. \tag{2}\] In the Riemannian setting, (2) is \[\phi(h)(z)=h^{*}\sigma\Big{(}\frac{\partial}{\partial z},\frac{\partial}{ \partial z}\Big{)}(z)dz^{2}.\] When \(h\) is harmonic, even in the metric space setting, the Hopf differential is represented by a holomorphic quadratic differential. Assume that \(\rho\) has the following property: for any Riemann surface \(S\) representing a point in \(\mathbf{T}_{g}\), there is a unique \(\rho\)-equivariant harmonic map \(h:\tilde{S}\to(M,d)\). The energy functional on Teichmuller space \(\mathbf{E}_{\rho}:\mathbf{T}_{g}\to[0,\infty)\) is defined by \[\mathbf{E}_{\rho}(S)=\mathcal{E}(S,h).\] When \(\mathbf{E}_{\rho}\) is \(C^{1}\) and the associated harmonic map has Hopf differential \(\phi\), the derivative in the direction of a harmonic Beltrami form \(\mu\) is \[d(\mathbf{E}_{\rho})_{S}(\mu)=-4\text{Re}\int\phi\mu. \tag{3}\] See [25] for the proof in the metric space context. ### \(\mathbb{R}\)-trees dual to foliations **Definition 2.3**.: An \(\mathbb{R}\)-tree is a length space \((T,d)\) such that any two points are connected by a unique arc, and every arc is a geodesic, isometric to a segment in \(\mathbb{R}\). It is a basic exercise to show that \(\mathbb{R}\)-trees are complete and NPC length spaces. We concern ourselves with a particular class of actions on \(\mathbb{R}\)-trees, obtained as follows. Let \(\mathcal{F}\) be a measured foliation on \(S\) with Hubbard-Masur differential \(\phi\). Lifting \(\mathcal{F}\) to the universal cover \(\tilde{S}\), we define an equivalence relation on \(\tilde{S}\) by \(x\sim y\) if \(x\) and \(y\) lie on the same leaf. The quotient space \(\tilde{S}/\sim\) is denoted \(T\). Pushing the transverse measure down via the projection \(\pi:\tilde{S}\to T\) yields a distance function \(d\) that turns \((T,d)\) into an \(\mathbb{R}\)-tree, with an induced action \(\rho:\pi_{1}(\Sigma_{g})\to\text{Isom}(T,d).\) Under this distance, the projection map \(\pi:\tilde{S}\to(T,d)\) is \(\rho\)-equivariant and harmonic, and the Hopf differential is exactly \(\phi/4\) (see [26, Section 3]). 
The energy density of \(\pi\) can be described explicitly: at a point \(p\in\tilde{S}\) on which \(\phi(p)\neq 0\), \(\pi\) locally isometrically factors through a segment in \(\mathbb{R}\). In a small neighbourhood around that point, \(g(h)\) is represented by the pullback metric of the locally defined map to \(\mathbb{R}\). Therefore, it can be computed that \[e(\pi)=\nu^{-1}|\phi|/2. \tag{4}\] Similarly, this provides one way to compute \(\phi(\pi)=\phi/4\). In view of (4), we will always rescale the metric on \(T\) from \((T,d)\) to \((T,2d).\) In this normalization, the total energy is \[\mathcal{E}(S,\pi)=\int_{S}|\phi|. \tag{5}\] Keeping \(\rho\) and varying the source Riemann surface, \(\rho\)-equivariant harmonic maps always exist and are unique [27], and hence there is an energy functional \(\mathbf{E}_{\rho}\). From the formula (5), we deduce the following. **Proposition 2.4**.: _Let \(\mathcal{F}\) be a measured foliation with Hubbard-Masur differential \(\phi\), and \(\rho\) the action on the \(\mathbb{R}\)-tree dual to \(\mathcal{F}\). As functions on \(\mathbf{T}_{g},\)\(\mathbf{E}_{\rho}=\mathbf{E}\mathbf{L}_{\mathcal{F}}.\)_ Accordingly, the same discussion on regularity from Section 2.2 applies to \(\mathbf{E}_{\rho}.\) From this point on, we will think about extremal length solely in terms of harmonic maps to \(\mathbb{R}\)-trees. ## 3. Non-convexity Let \(m\) be a metric distance function on \(\mathbf{T}_{g}\) in which \((\mathbf{T}_{g},m)\) is a length space. We say that a function \(F:\mathbf{T}_{g}\rightarrow\mathbb{R}\) is convex with respect to \(m\) if for all geodesics \(c:[0,1]\rightarrow\mathbf{T}_{g},\) the function \(F\circ c:[0,1]\rightarrow\mathbb{R}\) is convex. If \(c\) is \(C^{2}\) and \(F\) is \(C^{2}\) around the image of \(c\), then \(F\circ c\) is convex if and only if the second derivative is non-negative at all points. After discussing metrics on \(\mathbf{T}_{g}\) in 3.1, we recall constructions from [15] relating variations of minimal surfaces in \(\mathbb{R}^{n}\) to variations of minimal maps to products of \(\mathbb{R}\)-trees. We will then use minimal surfaces in \(\mathbb{R}^{n}\) to find a harmonic map to an \(\mathbb{R}\)-tree (or, a measured foliation) whose energy (extremal length) can be lowered to second order. ### Metrics on \(\mathbf{T}_{g}\) Recall the class of metrics \(\mathcal{C}\) from the introduction. Let's briefly justify that the Teichmuller metric \(d_{T}\) and the Thurston metric \(d_{Th}\) are contained in \(\mathcal{C}.\) This may follow from a general theory of asymmetric Finsler metrics with certain properties, but we couldn't find a source and we prefer to be hands-on. **Definition 3.1**.: The Teichmuller metric is defined by \(d_{T}(S,S^{\prime})=\inf_{g}\log K(g),\) where \(K(g)\) is the maximum quasiconformal dilatation of a quasiconformal map \(g:S\to S^{\prime}.\) Recall that at a Riemann surface \(S\), \(T_{S}\mathbf{T}_{g}\) identifies with the space of harmonic Beltrami forms on \(S\). If \(\mu\) is any such Beltrami form, away from the finite zero set of \(\mu\) we can locally choose a coordinate \(z=x+iy\) in which \(\mu=d\overline{z}/dz.\) The Teichmuller mapping in the direction of \(\mu\) at scale \(K\) is defined in such a coordinate by \[f_{\mu,K}(x,y)=K^{1/2}x+K^{-1/2}y \tag{6}\] We define \(f_{\mu,K}\) globally by doing (6) over the local patches, and extending to all of \(S\) by continuity. 
The Teichmuller ray \(K\mapsto f_{\mu,K}\) is a geodesic for \(d_{T}\) tangent to \(\mu\) at \(K=0.\) **Definition 3.2**.: The Thurston metric is defined by \(d_{Th}(S,S^{\prime})=\inf_{g}\log\operatorname{Lip}(g),\) where \(\operatorname{Lip}(g)\) is the Lipschitz constant of a Lipschitz map \(g\) taking \(S\) to \(S^{\prime}.\) We couldn't find a clean statement in the literature about the existence of Thurston geodesics in a given tangent direction. The result can probably be established through the constructions of Thurston's original paper [24], but for the sake of brevity we'll instead cite the recent work of Pan and Wolf [18]. For any Riemann surface \(S\) and projective measured lamination on \(S\), Pan and Wolf construct a "harmonic stretch line," which is a Thurston geodesic that in some sense solves an energy-minimization problem. Every unit tangent vector to \(\mathbf{T}_{g}\) at \(S\) is tangent to a harmonic stretch line [18, Remark 1.12]. Moreover, harmonic stretch lines are special examples of "piecewise harmonic stretch lines," which are, as stated in Theorem 1.7 of [18], real analytic paths in Teichmuller space. ### Harmonic functions Let \(S\) be a Riemann surface structure on \(\Sigma_{g}\) and \(\phi\) a holomorphic quadratic differential. Let \((T,2d)\) be the dual \(\mathbb{R}\)-tree with action \(\rho\) and harmonic projection map \(\pi:\tilde{S}\to(T,2d)\). Assume that \(\phi\) is the square of an abelian differential \(\alpha\). The cohomology class of the harmonic \(1\)-form \(\mathrm{Re}\alpha\) determines a representation \(\chi:\pi_{1}(\Sigma_{g})\to(\mathbb{R},+)\), and integrating from a basepoint \(p\in\tilde{S}\) yields a \(\chi\)-equivariant harmonic function \[h:\tilde{S}\to\mathbb{R},\;f_{i}(z)=\int_{p}^{z}\mathrm{Re}\alpha.\] We can compute directly that \(\phi(h)=\phi\) and that for any choice of conformal metric on \(S\), \(e(h)=e(\pi)\). Geometrically, \(h\) is related to \(\pi\) through the folding map \(p\), which is a map \(p:(T,d)\to\mathbb{R}\) satisfying \(h=p\circ\pi\) and restricting to an isometry on geodesic segments of \((T,d)\) (see [14, Section 4]). For any Riemann surface, \(\chi\)-equivariant harmonic functions exist, but they are unique only up to translations of \(\mathbb{R}^{n}\). Nevertheless, the energy density is independent of the choice of harmonic function, and it is possible to choose the harmonic functions locally to vary real analytically with the choice of Riemann surface (see [4, Section 5]). Thus, as in 2.3, we may define a (real analytic) energy functional \(\mathbf{E}_{\chi}\) on \(\mathbf{T}_{g}\). In general, there is a degree \(2\) branched covering \(\tau:C\to S\) in which \(\phi\) lifts to a square \(\tilde{\phi}=\alpha^{2}\), disconnected if \(\phi\) is already a square, and which has the universal property that any other branched cover on which \(\phi\) lifts to a square must factor through \(\tau\). We can repeat the above construction on \(C\), obtaining an equivariant harmonic map to an \(\mathbb{R}\)-tree \(\pi^{\prime}:\tilde{C}\to(T^{\prime},2d^{\prime})\) that folds onto an equivariant harmonic function \(h\) from \(\tilde{C}\) to \(\mathbb{R}\). \(C\) comes with a holomorphic involution that negates \(\alpha\). The involution leaves the energy densities of \(\pi^{\prime}\) and \(f\) invariant, so that they descend all the way to \(S\), where they agree with that of the map to the original \(\mathbb{R}\)-tree. 
### Minimal maps Classically, a minimal map to a Riemannian manifold is a harmonic and conformal immersion. An immersion is conformal precisely when the Hopf differential vanishes identically. For an NPC space \((M,d)\), we make the following definition. **Definition 3.3**.: \(h:\tilde{S}\to(M,d)\) is minimal if it is harmonic and \(\phi(h)=0\). In the presence of a \(C^{1}\) energy functional \(\mathbf{E}_{\rho}\), by (3), \(h\) is minimal if and only if \(S\) is a critical point of \(\mathbf{E}_{\rho}\). For equivariant maps to \(\mathbb{R}^{n}\), minimal maps are also critical points of the area functional \[f\mapsto A(f)=\int_{\Sigma_{g}}dA_{f},\] where \(dA_{f}\) is the area form of the pullback of the Euclidean metric by \(f\). We record the following consequence of the definitions. Let \((X,d)\) be a product of NPC spaces \((M_{i},d_{i})\). **Proposition 3.4**.: \(h:\tilde{S}\to(X,d)\) _is harmonic if and only if every component map \(h_{i}:\tilde{S}\to(M_{i},d_{i})\) is harmonic. Moreover, \(\phi(h)=\sum_{i=1}^{n}\phi(h_{i})\)._ Let \(\phi_{1},\dots,\phi_{n}\) be holomorphic quadratic differentials on \(S\) and let \((M,d)\) be the product of the dual \(\mathbb{R}\)-trees, with product action \(\rho\) and product of projection maps \(\pi=(\pi_{1},\dots,\pi_{n}):\tilde{S}\to(M,d).\) The energy functional \(\mathbf{E}_{\rho}\) is the sum of the component energy functionals. Similar to the previous subsection, there is a degree \(2^{n}\) branched covering \(\tau:C\to S\) on which each \(\phi_{i}\) lifts to a square, with the analogous universal property, and which comes with \(n\) commuting holomorphic involutions that each negate a \(1\)-form. 
There is a product representation \(\chi=(\chi_{1},\dots,\chi_{n}):\pi_{1}(C)\to(\mathbb{R}^{n},+)\) with energy functional \(\mathbf{E}_{\chi}\) and equivariant harmonic function \(h=(h_{1},\dots,h_{n}):\tilde{C}\to\mathbb{R}^{n}\). Proposition 3.4 gives the observation below. **Proposition 3.5**.: \(h\) _is minimal if and only if \(\pi\) is minimal if and only if \(\sum_{i=1}^{n}\phi_{i}=0\)._ Let us thus assume the \(\phi_{i}\)'s sum to zero. The main input toward Theorem 1.1 is Proposition 3.6 below, which is used to turn variations of \(h\) into variations of \(\pi\). We will need to restrict to a class of variations. Set \(\operatorname{Var}_{\tau}(h)\) to be the space of \(C^{\infty}\) functions \(\dot{h}:\tilde{C}\to\mathbb{R}^{n}\) that are invariant under \(\pi_{1}(C)\) and the lifts to \(\tilde{C}\) of the \(n\) holomorphic involutions. Of course, such \(\dot{h}\) is equivalent to a function on \(S\). **Proposition 3.6** (Propositions 4.4, 4.6, and 5.1 of [15]).: _Let \(\dot{h}\in\operatorname{Var}_{\tau}(h)\). For every \(\epsilon>0\), there exists a \(C^{\infty}\) path of Riemann surfaces \(t\mapsto C_{t}\) and \(C^{\infty}\) paths of \(C^{\infty}\) maps \(t\mapsto f_{i}^{t}:C\to C_{t}\) starting at the identity such that_ \[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathbf{E}_{\chi_{i}}(C_{t},h_{i}\circ (\tilde{f}_{i}^{t})^{-1})\leq\frac{d^{2}}{dt^{2}}|_{t=0}A(h_{t})+\epsilon, \tag{7}\] _where \(\tilde{f}_{i}^{t}\) is the lift to \(\tilde{C}\). The Riemann surfaces \(C_{t}\) descend through the branched cover \(\tau\) to Riemann surface structures \(S_{t}\) on \(S\). Similarly, the \(f_{i}^{t}\)'s descend to \(f_{i}^{t}:S\to S_{t}\)._ Very briefly: without perturbing the area too much, one can modify \(\dot{h}\) to be zero in a neighbourhood of the zeros of the \(\phi_{i}\)'s. There is a canonical way to pull such a variation back to \(n\) vector fields on the surface \(\tilde{C}\), which then generate flows \(t\mapsto f_{i}^{t}\). With respect to the conformal structure of the image of \(h_{t}=(h_{1}\circ(f_{1}^{t})^{-1},\dots,h_{n}\circ(f_{n}^{t})^{-1})\), which we label \(C_{t}\), the energy of \(h_{t}\) is equal to its area. By invariance properties of \(\dot{h}\), everything can be chosen to descend to \(\Sigma_{g}\). Such a self-maps variation gives a variation of \(\pi\), \[\pi_{t}=(\pi_{1}\circ(f_{1}^{t})^{-1},\dots,\pi_{n}\circ(f_{n}^{t})^{-1}). \tag{8}\] If \(\tilde{f}_{i}^{t}\) also denotes the lift to \(\tilde{S}\), then we can see by the local isometric factoring described in 2.4, or the folding map of 3.2, \[e(\pi_{i}\circ(\tilde{f}_{i}^{t})^{-1})=e(h_{i}\circ(\tilde{f}_{i}^{t})^{-1}) \tag{9}\] (note that both densities descend to \(S\), which is where the equation (9) is defined). Finally, we will want destabilizing variations. **Definition 3.7**.: \(h\) is \(\tau\)-unstable if there exists \(\dot{h}\in\operatorname{Var}_{\tau}(h)\) such that \[\frac{d^{2}}{dt^{2}}|_{t=0}A(h+t\dot{h})<0.\] For all \(g\geq 3\) and \(n\geq 3\), we can take \(C=S\): one can find abelian differentials on \(S\) whose squares sum to \(0\) that give an unstable minimal map. One way to produce such a map is to lift an unstable minimal surface in the \(3\)-torus to the universal covers. For \(g=2\), it turns out that any equivariant minimal surface is stable. 
However, we proved **Theorem 3.8** (Section 5.3 in [15]).: _There exists a Riemann surface \(S\) of genus \(2\) with \(\phi_{1},\dots,\phi_{n},\)\(n\geq 3,\) summing to \(0\) and which give a non-trivial branched cover \(\tau:C\to S\) and a \(\tau\)-unstable minimal map \(h:\tilde{C}\to\mathbb{R}^{n}\)._ ### Proof of Theorem 1.1 Resuming the setup from Section 3.3, assume that \(h\) is destabilized by some \(\dot{h}\in\mathrm{Var}_{\tau}(h)\). Choose \(\epsilon>0\) small enough so that \[\frac{d^{2}}{dt^{2}}|_{t=0}A(h+t\dot{h})+\epsilon<0.\] Applying Proposition 3.6, defining \(\pi_{t}\) as in (8) and using (9), we arrive at the following. **Proposition 3.9**.: _There exist \(C^{\infty}\) paths of \(C^{\infty}\) maps to Riemann surfaces \(t\mapsto f_{i}^{t}:S\to S_{t}\) starting at the identity such that_ \[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1})\leq 2^{-n}\frac{d^{2}}{dt^{2}}|_{t=0}A(h+t\dot{h})+\epsilon<0. \tag{10}\] All recorded examples of \(\tau\)-unstable maps, in particular the examples from Theorem 3.8, come from differentials with even order zeros. Recalling our regularity concerns from Section 2.2, we need the Proposition below. Say that a holomorphic quadratic differential \(\phi\) is generic if it has only simple zeros. Generic quadratic differentials on \(S\) form an open and dense subset. **Proposition 3.10**.: _For all \(g\geq 2,\,n\geq 3,\) we can choose generic holomorphic quadratic differentials \(\phi_{i}\) that give maps to \(\mathbb{R}\)-trees \(\pi_{i},\) and \(C^{\infty}\) paths \(t\mapsto f_{i}^{t}:S\to S_{t}\) starting at the identity such that (10) holds:_ \[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1})<0.\] Morally, we're using that instability in the sense of (10) is an open property. To formalize the argument, we borrow a formula from Reich-Strebel. **Proposition 3.11** (Equation 1.1 in [21] and Proposition 3.1 in [15]).: _Let \(\pi:\tilde{S}\to(T,2d)\) be any equivariant map to an \(\mathbb{R}\)-tree with Hopf differential \(\phi\) and let \(f:S\to S^{\prime}\) be any quasiconformal map to another Riemann surface \(S^{\prime}\) with lift \(\tilde{f}\) to \(\tilde{S}\) and Beltrami form \(\mu\)._ \[\mathcal{E}(S^{\prime},\pi\circ\tilde{f}^{-1})-\mathcal{E}(S,\pi)=-4\text{Re}\int_{S}\phi\cdot\frac{\mu}{1-|\mu|^{2}}dxdy+4\int_{S}|\phi|\cdot\frac{|\mu|^{2}}{1-|\mu|^{2}}dxdy. \tag{11}\] Proof of Proposition 3.10.: Begin with the data \(\phi_{i},f_{i}^{t}\) from Proposition 3.9. The \(\phi_{i}\)'s may not be generic, but we know that (10) holds. 
Let \(\mu_{i}^{t}\) be the Beltrami form of \(f_{i}^{t},\) and \(\alpha_{i}\) the \(C^{\infty}\)\((1,-1)\)-form and \(\beta_{i}\) the \(C^{\infty}\) function on \(S\) described by \[\alpha_{i}(z)=\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\frac{\mu_{i}^{t}(z)}{1-|\mu_ {i}^{t}(z)|^{2}},\;\beta_{i}(z)=\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\frac{|\mu_ {i}^{t}(z)|^{2}}{1-|\mu_{i}^{t}(z)|^{2}}.\] By the formula (11), \[\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t },\pi_{i}\circ(f_{i}^{t})^{-1}) =\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\sum_{i=1}^{n}\Big{(}\mathcal{E }(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1})-\mathcal{E}(S,\pi_{i})\Big{)}\] \[=\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}4\sum_{i=1}^{n}\Big{(}-\text{Re }\int_{S}\phi_{i}\cdot\frac{\mu_{i}^{t}}{1-|\mu_{i}^{t}|^{2}}dxdy+\int_{S}|\phi _{i}|\cdot\frac{|\mu_{i}^{t}|^{2}}{1-|\mu_{i}^{t}|^{2}}dxdy\Big{)}.\] Taking the derivative into the integral, \[0>\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t },\pi_{i}\circ(f_{i}^{t})^{-1})=4\text{Re}\sum_{i=1}^{n}\Big{(}-\int_{S}\phi_{ i}\cdot\alpha_{i}+\int_{S}|\phi_{i}|\cdot\beta_{i}dA\Big{)}. \tag{12}\] We perturb \(\phi_{1},\ldots,\phi_{n-1}\) ever so slightly to be generic, and then redefine \[\phi_{n}:=-\phi_{1}-\cdots-\phi_{n-1},\] which is, of course, very close to the original \(\phi_{n}\). If \(\phi_{n}\) is not generic, then we perturb it to be. By openness, if our perturbation of \(\phi_{n}\) is sufficiently small, then if we redefine \(\phi_{n-1}\) so that the sum of the \(\phi_{i}\)'s is again zero, \(\phi_{n-1}\) will still be generic. Since \(\alpha_{i}\) and \(\beta_{i}\) are uniformly bounded, it is clear from the right hand side of (12) that if the perturbations are small enough, then both sides of (12) remain negative. So, we can take these new \(\phi_{i}\)'s to be our holomorphic quadratic differentials, and keep the same Riemann surfaces \(S_{t}\) and paths of \(C^{\infty}\) maps \(t\mapsto f_{i}^{t}:S\to S_{t}\). Proof of Theorem 1.1.: We find an \(\mathbb{R}\)-tree such that energy can be decreased to second order. We will then invoke Proposition 2.4 to say that the same happens for the extremal length of the associated foliation. Our starting point is Theorem 3.8: we fix a \(\tau\)-unstable minimal map from a branched cover of a Riemann surface \(S\) of genus \(g\). By Proposition 3.10, we can adjust the Hopf differentials of the component maps to obtain generic quadratic differentials \(\phi_{1},\ldots,\phi_{n}\) that yield an action \(\rho=(\rho_{1},\ldots,\rho_{n})\) on a product of \(\mathbb{R}\)-trees and an equivariant minimal map \(\pi=(\pi_{1},\ldots,\pi_{n})\), as well as paths of \(C^{\infty}\) maps \(t\mapsto f_{1}^{t},\ldots,f_{n}^{t}:S\to S_{t}\) starting at the identity such that \[\frac{d^{2}}{dt^{2}}\bigg{|}_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t},\pi_{i} \circ(f_{i}^{t})^{-1})<0. \tag{13}\] Since the \(\phi_{i}\)'s are generic, their energy functionals on \(\mathbf{T}_{g}\) are all real analytic. By the definition of minimality and (3), \[\frac{d}{dt}|_{t=0}\mathbf{E}_{\rho}(S_{t})=0\text{ and }\frac{d}{dt}\bigg{|}_{ t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1})=0. \tag{14}\] Since harmonic maps minimize energy, for all \(t\), \[\mathbf{E}_{\rho}(S_{t})=\sum_{i=1}^{n}\mathbf{E}_{\rho_{i}}(S_{t})\leq\sum_{ i=1}^{n}\mathcal{E}(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1}). 
\tag{15}\] It follows from (13), (14), and (15) that \[\frac{d^{2}}{dt^{2}}|_{t=0}\mathbf{E}_{\rho}(S_{t})\leq\frac{d^{2}}{dt^{2}} \bigg{|}_{t=0}\sum_{i=1}^{n}\mathcal{E}(S_{t},\pi_{i}\circ(f_{i}^{t})^{-1})<0. \tag{16}\] By the first equation in (14), the left hand side of (16) does not depend on the specific path in Teichmuller space, but only on the initial tangent vector \(\mu\). Thus, if we fix a metric \(m\) in \(\mathcal{C}\), we can replace the path with the geodesic for \(m\) starting at \(S\) and tangent to \(\mu\) at time zero, say \(t\mapsto S_{t}^{\prime}\). Finally, since the energy splits into the energies of the component maps, for (16) to hold along our path there must be at least one component representation \(\rho_{i}\) such that \[\frac{d^{2}}{dt^{2}}|_{t=0}\mathbf{E}_{\rho_{i}}(S_{t}^{\prime})<0.\] If \(\mathcal{F}\) is the measured foliation corresponding to \(\rho_{i}\), then by Proposition 2.4, \[\frac{d^{2}}{dt^{2}}|_{t=0}\mathbf{E}\mathbf{L}_{\mathcal{F}}(S_{t}^{\prime}) <0.\] This completes the proof. **Remark 3.12**.: One could instead ask about convexity with respect to a connection: a function is convex with respect to a connection if the restriction to every geodesic for that connection is a convex function. This setting is considered in [1]. Our proof shows that extremal length is not convex for any connection that admits \(C^{2}\) geodesics through every tangent vector of \(\mathbf{T}_{g}\). **Remark 3.13**.: In principle, one can write down the tangent vectors explicitly. One begins with a \(\tau\)-unstable minimal surface and a destabilizing variation. For example, one could work in genus \(3\), and take \(C=S\) and any non-planar equivariant minimal map from \(\tilde{S}\to\mathbb{R}^{3}\) with its destabilizing unit normal variation (see [15, Section 5.3]). The proofs of Propositions 4.4, 4.6, and 5.1 in [15] explain how to build the flows \(f_{1}^{t},f_{2}^{t},f_{3}^{t}.\) One can then try to compute the corresponding path \(t\mapsto S_{t}\) and take the derivative at time zero, although the computation may be involved.
2302.05446
Robust Scheduling with GFlowNets
Finding the best way to schedule operations in a computation graph is a classical NP-hard problem which is central to compiler optimization. However, evaluating the goodness of a schedule on the target hardware can be very time-consuming. Traditional approaches as well as previous machine learning ones typically optimize proxy metrics, which are fast to evaluate but can lead to bad schedules when tested on the target hardware. In this work, we propose a new approach to scheduling by sampling proportionally to the proxy metric using a novel GFlowNet method. We introduce a technique to control the trade-off between diversity and goodness of the proposed schedules at inference time and demonstrate empirically that the pure optimization baselines can lead to subpar performance with respect to our approach when tested on a target model. Furthermore, we show that conditioning the GFlowNet on the computation graph enables generalization to unseen scheduling problems for both synthetic and real-world compiler datasets.
David W. Zhang, Corrado Rainone, Markus Peschl, Roberto Bondesan
2023-01-17T18:59:15Z
http://arxiv.org/abs/2302.05446v2
# Robust Scheduling with GFlowNets ###### Abstract Finding the best way to schedule operations in a computation graph is a classical NP-hard problem which is central to compiler optimization. However, evaluating the goodness of a schedule on the target hardware can be very time-consuming. Traditional approaches as well as previous machine learning ones typically optimize proxy metrics, which are fast to evaluate but can lead to bad schedules when tested on the target hardware. In this work, we propose a new approach to scheduling by sampling proportionally to the proxy metric using a novel GFlowNet method. We introduce a technique to control the trade-off between diversity and goodness of the proposed schedules at inference time and demonstrate empirically that the pure optimization baselines can lead to subpar performance with respect to our approach when tested on a target model. Furthermore, we show that conditioning the GFlowNet on the computation graph enables generalization to unseen scheduling problems for both synthetic and real-world compiler datasets. ## 1 Introduction Efficient execution of computation graphs is paramount to many scientific and industrial applications, with deep learning being a prominent example (Amodei & Hernandez, 2018). Scheduling is the action of assigning operations to the available compute resources, such as threads, cores, or nodes in a cluster (Kwok & Ahmad, 1999; Hennessy & Patterson, 2011; Pinedo, 2012). Unfortunately, finding the schedule with the shortest possible _makespan_ (start-to-end runtime) is in general NP-hard (Papadimitriou & Steiglitz, 1998). As a result, domain experts have come up with heuristics that are tailored to specific problem instances (Ibarra & Kim, 1977). Machine learning approaches promise the possibility to automate this process allowing for fast adaptation to new graph distributions (Wang & O'Boyle, 2018; Bengio et al., 2021). In this work, we consider the problem of scheduling a set of operations with precedence constraints on a fixed number of homogeneous devices, i.e., any operation can run on any device and the runtime is the same on all devices. Evaluating the makespan of a schedule involves running all operations in the computation graph on some target hardware. This can be very resource intensive, especially when the computation graph includes lengthy operations, the evaluated schedule is inefficient, or the intended target hardware is a cluster with many nodes. Heuristic optimizers, like genetic algorithms (Hou et al., 1994), or machine learning (Mao et al., 2019) approaches further exacerbate this problem because they require many evaluations to converge (Chen et al., 2018). Proxies are a much faster alternative that estimates the makespan using a simplified model of the hardware. However, this comes at the cost of discrepancies between the proxy makespan and the one observed on the hardware; as a result, performant solutions on the proxy might ultimately be unsatisfactory once tested on the target. Nonetheless, proxies remain a good indicator for most schedules and are essential due to their efficiency. We aim to learn a scheduler that can be trained using the proxy, whilst being robust to its inaccuracies. 
The common approach to scheduling problems (and combinatorial optimization problems in general) is to look for the single best schedule that minimizes a makespan measure which can be an analytical proxy (Paliwal et al., 2020), the output of a simulator (Zhou et al., 2020), or even the real makespan on hardware (Khadka et al., 2021). We propose a different philosophy: generate a set of candidate schedules that have a low makespan according to the proxy and are diverse. By hav ing multiple good schedules that are significantly different, we can reduce the impact of systematic errors in the proxy, and hope for robust performance on the target. Our goal is to learn a generative model that assigns higher probability to low-makespan schedules, and importantly can also discover the different modes associated with local optima of the makespan cost. Generative Flow Networks (GFlowNets) have recently been introduced as a method for learning a stochastic policy that can piece-by-piece construct discrete and composite objects, proportional to a given reward (Bengio et al., 2021). By computing the reward from the proxy-makespan we can use GFlowNets to sample a diverse set of candidate schedules. **Our main contributions are: 1.** We introduce an alternative to the pure proxy optimization viewpoint of scheduling that achieves better robustness to proxy errors, by generating multiple candidate schedules to evaluate directly on the target hardware. 2. We extend GFlowNets to generate schedules conditioned on a computation graph. Additionally, we introduce a method to control diversity and goodness at inference time, without the need for retraining. These contributions may be of general interest, beyond the scheduling problem. 3. We empirically demonstrate the robustness of our method to proxy errors and verify the generalization ability on a diverse set of synthetic and real-world computation graphs. ## 2 Robust scheduling In this section, we first provide a definition of the scheduling problem we consider in this work. Then, we discuss how a proxy simulates the schedule execution as well as the difficulties of specifying a reliable proxy. Finally, we describe our proposed generative scheduling framework. ### Problem definition In scheduling, we are given a computation graph \(G_{C}=(O,P)\) that is a direct acyclic graph (DAG) consisting of operations (nodes) \(o\in O\) and precedence constraints (edges) \(p\in P\) that encode a partial order in which the operations need to be executed. In particular, the edge \(p_{ij}\) encodes that operation \(o_{i}\) needs to finish before \(o_{j}\) can start, for example because \(o_{j}\) requires the output of \(o_{i}\) as input. Our task is to run all operations on a set of _devices_\(\mathcal{D}=\{d_{1},\dots,d_{m}\}\), without violating the precedence constraints. In addition to the precedence constraints, the devices can only run one operation at a time. We can then view scheduling as performing two distinct tasks: assign a device to each operation, and determine a (complete) order among all operations on the same device that is compatible with the precedence constraints encoded in \(G_{C}\). We can model the schedule as a chain of operations for each device, where the chain denotes the order in which the operations run on that device. See Figure 1 for a visual example of the chain graphs. Our aim is to find the schedule with the lowest makespan for some target hardware. ### Target model vs. 
proxies The makespan of any schedule can be evaluated on the target hardware by running all the operations in the specified order and on the specified devices. However, this can take up significant time and compute resources when the computation graph is large, has costly operations, or the target hardware is a cluster with many nodes. In addition to this, when optimizing the makespan one needs to evaluate many different schedules, further exacerbating the resource requirements.

Figure 1: Full pipeline of our generative scheduling approach. Conditioned on the computation graph we generate multiple candidate schedules using GFlowNet, filter for the best \(k\) with the proxy and pick the best performing one out of the \(k\) that we check on the target. Here we illustrate the pipeline for \(k=2\) and two devices, \(d_{1},d_{2}\).

A _proxy_ is any tool that allows one to estimate the makespan of a given schedule, without having to run the schedule on the target hardware. Proxies come with significant speed advantages, which remedy the problems mentioned above. However, this comes at the cost of possible mistakes in the estimation of the makespan and relative comparison of schedules. Mistakes can occur for example when the task durations are not accurately profiled, memory movements are too complex to fully model, or additional hardware-specific features are changed. Ideally, we would like to rely on a proxy for the majority of the schedule evaluations, and only evaluate a small fraction of promising schedules on the target hardware. This approach differs from previous works, that either evaluate every schedule on the target (Khadka et al., 2021), leading to very long optimization times, or evaluate everything on the proxy (Paliwal et al., 2020), which is susceptible to modeling failures. Next, we describe how the proxy we use in this work assigns start times to each operation given a schedule and estimates the makespan based on those. We recall that a schedule is an order of operations for each device, which can be represented by one chain graph per device. For each of these, let us denote with \(C_{d}\), \(d\in\mathcal{D}\) the set of edges of the chain graph for device \(d\) and with \(D\coloneqq\bigcup_{k=1}^{m}C_{d_{k}}\) the set of all device constraints. The operations correspond to graph nodes and are labeled in the same way as in \(G_{C}\). No other operation can run on the same device during the _runtime_ or _duration_ \(\rho_{i}\) of operation \(o_{i}\). In practice, \(\rho_{i}\) is estimated directly on the hardware in a profiling stage that precedes scheduling. We denote the start time of \(o_{i}\) as \(\tau_{i}\) and can thus express the precedence constraints as: \[\tau_{j}\geq\tau_{i}+\rho_{i},\quad\forall(i,j)\in P\cup D \tag{1}\] An operation cannot start unless all of those that produce its inputs and all of those that precede it on its assigned device have finished first. To ensure that these constraints are satisfied the proxy assigns each operation \(o_{i}\) the start time \[\tau_{i}=\max_{k}\{\tau_{k}+\rho_{k}|(k,i)\in P\cup D\} \tag{2}\] If a node has no parents in \(P\cup D\) the proxy assigns the start time \(\tau_{i}=0\). The start times of all \(o_{i}\in O\) can be computed by assigning a start time to a node whenever it has no parents or all its parents have an assigned start time. If the graph \((O,P\cup D)\) is a DAG, then this algorithm is guaranteed to assign start times that satisfy Equation 2. 
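For illustration, this start-time assignment can be realized by a single topological traversal of \((O,P\cup D)\). The sketch below is our own minimal rendering of that procedure; the data structures (an edge list and a duration array indexed by operation id) are assumptions, not the paper's implementation.

```python
# Minimal sketch of the proxy's start-time assignment (Equation 2).
# Assumptions: operations are the ids 0..num_ops-1, duration[i] is rho_i, and
# `edges` contains both the precedence edges P and the device-chain edges D.
from collections import defaultdict, deque

def assign_start_times(num_ops, edges, duration):
    children = defaultdict(list)
    indegree = [0] * num_ops
    for i, j in edges:                 # edge (i, j): o_i must finish before o_j starts
        children[i].append(j)
        indegree[j] += 1

    start = [0.0] * num_ops            # tau_i = 0 for nodes without parents
    ready = deque(i for i in range(num_ops) if indegree[i] == 0)
    processed = 0
    while ready:                       # a node is processed once all of its parents are done
        i = ready.popleft()
        processed += 1
        for j in children[i]:
            # Equation 2: tau_j = max over parents k of (tau_k + rho_k)
            start[j] = max(start[j], start[i] + duration[i])
            indegree[j] -= 1
            if indegree[j] == 0:
                ready.append(j)

    if processed < num_ops:
        raise ValueError("(O, P u D) has a cycle; the schedule is infeasible")
    return start
```

The makespan estimate introduced next is then just the span of the resulting intervals \([\tau_{i},\tau_{i}+\rho_{i}]\).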
The proxy then estimates the _makespan_\(T\) of the schedule \(x\) as: \[T(x)\coloneqq\max_{i}(\tau_{i}+\rho_{i})-\min_{i}(\tau_{i}) \tag{3}\] Optimizing this cost over the set of all possible schedules is already a very rich problem, and yet, we made significant simplifying assumptions in the construction of the proxy. In particular, we assume perfectly estimated runtimes, and in Equation 2 we effectively assume that an operation can start as soon as all of the operations producing its inputs finish, meaning that data can be moved between devices instantaneously (zero latency) independently of their size (infinite bandwidth). These assumptions might be unrealistic, depending on the specific target devices (Valiant, 1990). ### Generative scheduling Our aim is to come up with a good set of candidate schedules to be tested on the target hardware while relying only on the proxy for generating this set. While the proxy is imperfect, it still offers good guidance for most schedules; thus, we would like to include schedules that perform well according to the proxy. Nevertheless, we also know that systematic errors in the proxy can cause it to incorrectly predict a low makespan for some schedules. Therefore, we would like the set of candidate schedules to be diverse, while still high-quality from the point of view of the proxy. If we had a ranking over all the schedules according to the proxy, we could just go through the list top-to-bottom, and add a schedule to the batch whenever it is significantly different from the previous ones. A full ranking like this is infeasible to construct, but we can instead learn a generative model that samples higher ranked schedules with higher probability. When generating schedules we need to satisfy the precedence and device constraint outlined in Section 2.1. To avoid generating invalid schedules we construct a schedule in a step-by-step process: start with an empty schedule at the initial state \(s_{0}\), and at each step add an operation to the partial schedule until the schedule contains all operations at the terminal state \(s_{n}\). At each intermediate state \(s_{t}\), an action \(a_{t}\) consists in picking an operation and assigning it to one of the devices, leading to a new state \(s_{t+1}\). We define the set of valid actions at every step \(t\) in a way such that the precedence constraints are automatically satisfied. In particular, adding an operation \(o_{t}\) is a valid action if and only if \(\forall k:(k,t)\in P\), \(o_{k}\) is already in the partial schedule at state \(s_{t}\). This is a sufficient condition for the final "schedule" graph \((O,P\cup D)\) to be a DAG, implying that the constructed schedule is feasible. The final states represent full schedules \(x\), for which we can compute the makespan \(T(x)\) with the proxy, given the runtimes \(\{\rho_{i}\}_{i=1}^{n}\). We compute the relative _speedup_ compared to the makespan on a single device as \(U(x){=}\sum_{i}\rho_{i}/T(x)\), from which we compute the reward as we present in the next section. ## 3 Generative Flow Networks for Scheduling GFlowNets (Bengio et al., 2021; 20) are methods for training a stochastic policy to sample discrete and composite objects proportionally to a given reward. Each object \(x\) is generated incrementally by a sequence of actions. In the previous section, we discussed how to limit the action space to guarantee that we sample valid schedules. 
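To make the action-space restriction of Section 2.3 concrete, the following sketch (our own illustration, not the paper's code) enumerates the valid actions from a partial schedule; an operation may be placed on any device as soon as all of its predecessors in \(G_{C}\) have been placed.

```python
# Sketch of the valid-action set for the step-by-step schedule construction.
# An action is a pair (operation, device); parents[j] lists the operations that must
# precede o_j according to the precedence edges P, and `placed` is the set of
# operations already contained in the partial schedule.
def valid_actions(parents, placed, num_devices):
    actions = []
    for op, preds in parents.items():
        if op not in placed and all(p in placed for p in preds):
            actions.extend((op, d) for d in range(num_devices))
    return actions
```

Appending the chosen operation to the tail of its device's chain keeps the combined graph \((O,P\cup D)\) acyclic, which is exactly the condition the proxy of Section 2.2 requires.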
After a brief introduction to GFlowNets, the following sections will present our proposed extensions that include a new training objective that is suitable for learning conditional GFlowNets and a method for controlling the selectivity of samples at inference time. ### Background We start by introducing some notation. We denote by \(\mathbf{s}{=}(s_{0},s_{1},\dots,s_{n})\) a _trajectory_ that consists of a sequence of states \(s_{t}\). In the case of scheduling, trajectories start with the empty schedule \(s_{0}\), followed by partial schedules, and end with a complete schedule \(s_{n}\). We denote by \(\mathcal{T}\) the set of all such trajectories and by \(\mathcal{T}_{x}\) the set of trajectories that end at \(x\). Based on this, we define a flow function \(F:\mathcal{T}\rightarrow\mathbb{R}^{+}\) and its associated normalized probability distribution \(P(\mathbf{s})=F(\mathbf{s})/Z\), \(Z=\sum_{\mathbf{s}\in\mathcal{T}}F(\mathbf{s})\). A flow function that fulfills the condition: \(R(x)=\sum_{\mathbf{s}\in\mathcal{T}_{x}}F(\mathbf{s})\) (every terminal state has a total flow matching its reward), results in a probability over schedules \(P(x)=\sum_{\mathbf{s}\in\mathcal{T}_{x}}F(\mathbf{s})/Z\) that is proportional to the reward \(P(x)\propto R(x)\), and further entails that \(Z=\sum_{x}R(x)\). For any Markovian flow, we can decompose the probability of a trajectory in terms of the forward probability: \[P(\mathbf{s})=\prod_{t=1}^{n}P_{F}(s_{t}|s_{t-1}) \tag{4}\] This way, we can generate trajectories \(\mathbf{s}\) by sampling a sequence of actions starting from \(s_{0}\). In Section 2.3 we described how to limit the action space appropriately to guarantee that every sampled schedule is valid. Similarly, we can define a backward probability \(P_{B}\) that factorizes the trajectory probability conditioned on a terminal state: \[P(\mathbf{s}|s_{n}=x)=\prod_{t=1}^{n}P_{B}(s_{t-1}|s_{t}) \tag{5}\] The training objectives considered in previous works aim to achieve a consistent flow (Bengio et al., 2021; Malkin et al., 2022), where consistency means that the flow estimated for the forward direction should equal the flow for the backward direction. A consistent flow \(F(\mathbf{s})\) for trajectories \(\mathbf{s}\in\mathcal{T}_{x}\) can then be written in terms of \(P_{F}\) and \(P_{B}\) and has to fulfill the equality: \[Z\prod_{t=1}^{n}P_{F}(s_{t}|s_{t-1})=R(x)\prod_{t=1}^{n}P_{B}(s_{t-1}|s_{t}) \tag{6}\] Based on this equation, Malkin et al. (2022) propose to estimate \(Z\), \(P_{F}\), and \(P_{B}\) by optimizing the _trajectory balance_ loss which is the squared difference between the logarithms of the l.h.s. and the r.h.s. of Equation 6. ### Log-partition variance loss In order to apply the trajectory balance loss in the conditional case, we would need to learn an additional regression model that estimates the log-partition function \(\log Z\) conditioned on \(G_{C}\). Training such a network accurately is difficult but crucial for learning the probabilities \(P_{F}\). In particular, a wrong estimation of \(\log Z\) can incorrectly change the direction of the gradients of the loss function. We explain why this occurs in Appendix B. In practice, we found this approach to perform poorly when different computation graphs had large differences in their \(\log Z\) value. 
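For reference, here is a minimal sketch (ours, not the authors' code) of the trajectory balance objective of Equation 6 in log form; in the conditional setting, the scalar \(\log Z\) below would have to be produced by an additional network that reads \(G_{C}\), which is precisely the estimate that causes the difficulties described above.

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Squared difference of the logarithms of the two sides of Equation 6 for one trajectory.
    log_Z: scalar estimate of the log-partition function (a learned parameter, or, in the
    conditional case, the output of a regression network conditioned on the computation graph).
    log_pf, log_pb: (n_steps,) tensors of log P_F(s_t|s_{t-1}) and log P_B(s_{t-1}|s_t)."""
    return (log_Z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
```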
Instead, we can rewrite Equation 6 to implicitly estimate \(\log Z\) based on the forward and backward flows of a single trajectory \(\mathbf{s}\), where \(P_{F}\) and \(P_{B}\) are neural networks with parameters \(\mathbf{\theta}\): \[\zeta(\mathbf{s};\mathbf{\theta})=\log R(x)+\sum_{t=1}^{n}\log P_{B}(s_{t-1}|s_{t};\mathbf{\theta})-\sum_{t=1}^{n}\log P_{F}(s_{t}|s_{t-1};\mathbf{\theta}) \tag{7}\] In the optimal case, \(\zeta(\mathbf{s};\mathbf{\theta})\) is equal to the true \(\log Z\), which is the same for all trajectories corresponding to the same computation graph \(G_{C}\). Thus, our optimization goal turns into minimizing the variance of \(\zeta(\mathbf{s};\mathbf{\theta})\) over different trajectories \(\mathbf{s}\) with the loss \[\mathcal{L}_{\mathrm{V}}(\mathbf{s};\mathbf{\theta})=(\zeta(\mathbf{s};\mathbf{\theta})-\mathbb{E}_{\mathbf{s}}\left[\zeta(\mathbf{s};\mathbf{\theta})\right])^{2} \tag{8}\] In practice, we use the training distribution to estimate \(\mathbb{E}_{\mathbf{s}}\left[\zeta(\mathbf{s})\right]\) with a mini-batch of sampled trajectories. For more details on the training process, we refer to Appendix C. We note that by optimizing the log-partition variance loss in Equation 8, one only needs to parametrize the forward and backward probabilities \(P_{F}\) and \(P_{B}\). This is similar to the non-forward trajectory loss mentioned in the appendix by Malkin et al. (2022), which also does not involve learning any state flows, including the initial flow \(Z\). However, our loss does not mix forward and backward steps from different trajectories and directly optimizes the consistency of the total flow \(Z\) for each trajectory associated with a given computation graph \(G_{C}\). ### Temperature-conditioned Topoformer Reward temperature. We compute the reward as a function of the speedup. In particular, we choose \(\log R(x;m,\sigma)=(U(x)-m)/\sigma\) where \(U(x)\) is the speedup of the schedule \(x\), \(m\) is the number of devices, and \(\sigma\in\mathbb{R}^{+}\) plays the role of a temperature. The temperature allows us to concentrate the distribution on the modes and control the selectivity of the generator. This is useful since there can be many more schedules with low speedup when compared to good ones. For example, when simply setting the reward equal to the speedup, we observed that finding schedules with high speedup requires a prohibitively large number of samples. We expect this temperature term to allow trade-offs between diversity and shifting the mean of the sampling distribution towards better schedules. Previous works on GFlowNets apply a constant temperature value during training and at inference time (Bengio et al., 2021; Jain et al., 2022). This can lead to low performance (when set too high), and low diversity or unstable training (when set too low). Furthermore, different computation graphs can have different ideal temperature values, making this approach less suitable when learning conditional GFlowNets. Instead, we propose to learn a single model for multiple different reward functions \(R(x;m,\sigma)\), by conditioning the policy networks (\(P_{F}\) and \(P_{B}\)) on the temperature \(\sigma\). Approximating the temperature-conditioned policy with a neural network is feasible because flows for a given temperature can be continuously morphed into flows for any other temperature. 
Since our reward \(R(x;m,\sigma)\) is continuous with respect to the temperature \(\sigma\), we expect the change of flow for different temperatures to be learnable by a neural network. We provide a proof for the following theorem in Appendix A. **Theorem 1** (Flow Continuity).: _Let \(\{R_{i}\}_{i=1}^{\infty}\) be a sequence of non-negative reward functions such that for all terminal states \(x\), \(R_{i}(x)\to R(x)\) as \(i\to\infty\). Then, for any flow \(F^{R}\) with reward \(R\), there exists a sequence of flow functions \(\{F^{R_{i}}\}_{i=1}^{\infty}\) with \(F^{R_{i}}(\mathbf{s})\to F^{R}(\mathbf{s})\) for all \(\mathbf{s}\in\mathcal{T}\)._ The output policy changes more rapidly as a function of the temperature for values close to 0 than for larger values. To account for this, we use the logarithm of the temperature as input to the policy instead. During training, we sample temperatures from the log-uniform distribution with support between \([\log\sigma_{min},\log\sigma_{max}]\), where \(\sigma_{min}\) is a minimum temperature that is necessary for numerical stability. In comparison to sampling from \(\mathcal{U}(\sigma_{min},\sigma_{max})\), this avoids oversampling from high temperature regions that have little difference in the resulting flow network. At inference time, we choose how close the samples are to the mode by adjusting the \(\sigma\). Topoformer architecture.For the neural network architecture of our policy, we use the Topoformer (Gagrani et al., 2022), which has been recently introduced for learning topological orderings of computation graphs. It builds on the Transformer encoder (Vaswani et al., 2017) and additionally masks the multi-head attention depending on the topology of the computation graph. Both forward and backward policies use separate MLP heads on top of a shared Topoformer encoder. Taking inspiration from the successful use of time conditioning in diffusion models (Song et al., 2020; Ho et al., 2020), we add temperature conditioning by first embedding the temperature using an MLP to produce \(e_{\sigma}\), and then reuse the embedding in every first linear layer block of the Topoformer: \[\texttt{lin}(h,e_{\sigma})=\texttt{lin}_{\text{scale}}(e_{\sigma})\odot \texttt{lin}(h)+\texttt{lin}_{\text{shift}}(e_{\sigma}) \tag{9}\] Here \(\texttt{lin}_{\text{scale}}\) and \(\texttt{lin}_{\text{shift}}\) are linear layers and \(\odot\) is the elementwise multiplication (Perez et al., 2018). In contrast to diffusion models, we observe better performance on large temperature ranges with the ReLU (Nair and Hinton, 2010) activation function. We hypothesize that this is connected to the monotonicity of the underlying policy function with respect to decreasing temperatures (see Corollary 1 in the Appendix) and the propensity for linear extrapolation of ReLU MLPs (Xu et al., 2020). For a detailed description of the neural network architecture, we refer to Appendix D. Sub-graph training.Training with a full computation graph might not always be necessary and we hypothesize that learning on sub-graphs can lead to policies that generalize to the full computation graph. This can be seen as a form of data augmentation and increases the amount of training data, while simultaneously improving the training time. We shall use sub-graph training for the larger graphs that we study in this work. 
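To make Sections 3.2 and 3.3 concrete, the sketch below combines the log-partition variance loss of Equations 7 and 8 with the temperature-modulated linear block of Equation 9. Tensor shapes, layer sizes, and the batching scheme are our assumptions, not the exact Topoformer implementation.

```python
import torch
import torch.nn as nn

def log_partition_variance_loss(log_pf, log_pb, log_reward):
    """Sketch of Equations 7-8 for a mini-batch of trajectories sampled for the same
    computation graph. log_pf, log_pb: (batch, n_steps) per-step log-probabilities of
    P_F and P_B; log_reward: (batch,) values of log R(x; m, sigma)."""
    zeta = log_reward + log_pb.sum(dim=1) - log_pf.sum(dim=1)   # Equation 7
    return ((zeta - zeta.mean()) ** 2).mean()                   # Equation 8, averaged over the batch

class TemperatureConditionedLinear(nn.Module):
    """Sketch of Equation 9: lin(h, e_sigma) = lin_scale(e_sigma) * lin(h) + lin_shift(e_sigma)."""
    def __init__(self, dim, temp_dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.lin_scale = nn.Linear(temp_dim, dim)
        self.lin_shift = nn.Linear(temp_dim, dim)

    def forward(self, h, e_sigma):
        # h: (batch, dim) hidden features; e_sigma: (batch, temp_dim) embedding of
        # log(sigma), produced once by a small MLP and shared by all conditioned blocks.
        return self.lin_scale(e_sigma) * self.lin(h) + self.lin_shift(e_sigma)
```

During training, \(\log\sigma\) would be drawn uniformly from \([\log\sigma_{min},\log\sigma_{max}]\), embedded once, and passed to every conditioned block, as described above.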
## 4 Related work Reinforcement learning for scheduling.Reinforcement learning has been the predominant machine learning approach to optimize the makespan for computation graph schedules (Addanki et al., 2019; Paliwal et al., 2020; Zhang et al., 2020). The rewards used include simple analytical proxies of the makespan (Paliwal et al., 2020; Zhou et al., 2020), but also more refined proxies which incorporate modeling of memory movements (Addanki et al., 2019). Khadka et al. (2021) directly train on the target hardware, but consider only a few computation graphs, and do not show generalization to unseen ones. Addanki et al. (2019) use a sophisticated simulator of the makespan which is customized to the target hardware. Similar to our work, Zhang et al. (2020) also construct the schedule piece-by-piece. Instead of finding a single (local) mode of the proxy, our work proposes to learn the full distribution over the proxy to improve the robustness against inaccuracies in the proxy. Generative Flow Networks.GFlowNets have been applied to generating small molecules (Bengio et al., 2021), Bayesian networks (Deleu et al., 2022), discrete images (Zhang et al., 2022), and biological sequences (Jain et al., 2022). We extend its application to scheduling, a classical combinatorial optimization problem. Similar to previous works, our state-action space is also a DAG, hence training the policy with maximum entropy reinforcement learning methods is inadequate (Bengio et al., 2021). Our robust scheduling approach shares the same motivation as methods in drug discovery which leverage cheap proxies to generate multiple candidates to be evaluated in the true environment (Bengio et al., 2021; Jain et al., 2022). Conditional GFlowNets have previously only been theoretically discussed by Bengio et al. (2021). We enable training conditional GFlowNets with our proposed log-partition variance loss and empirically demonstrate generalization to unseen computation graphs. Note that this differs from previous work that tests the generalization of GFlowNets to unseen data (Nica et al., 2022). To control the selectiveness of the generator, previous works augment the reward with a fixed temperature (Bengio et al., 2021; Deleu et al., 2022; Jain et al., 2022). Instead, we condition the policy neural network on the temperature term which allows us to tune the selectiveness of the generator at inference time. ## 5 Experiments In this section, we evaluate different aspects of our generative scheduling approach by incrementally adding complexity to the computation graph dataset. First, we restrict training and evaluation to a single computation graph, which corresponds to the same unconditional setting considered by previous works on GFlowNets (Bengio et al., 2021; Deleu et al., 2022; Jain et al., 2022). Next, we train with multiple computation graphs and evaluate on unseen ones. To the best of our knowledge, this is the first time that the generalization of conditional GFlowNets to unseen conditioning is tested empirically. Finally, we verify the generalization ability on real-world computation graphs of neural networks that are being used in a diverse set of AI products. Experimental setup.In all experiments, we only use the node time duration as a feature of the computation graph. For simplicity and ease of reproducibility, we avoid any complicated heuristics to add extra features. All our experiments are based on four homogenous devices, which implies that the speedup is upper bounded by 4. 
In practice, most computation graphs have a lower maximal possible speedup due to their precedence constraints. Candidate sampler. We consider two heuristic and two neural methods for generating candidate schedules. The first is our GFlowNet approach described in Section 3, from which we generate 1000 samples at temperature \(\sigma=0.005\) and take the top 100 following the proxy; the other three are:

* Critical path-based list scheduling, a heuristic algorithm for scheduling on homogeneous devices (Micheli, 1994). List scheduling first forms a topological order of the operations and then assigns them in that order one by one to a free device. In our implementation, we use the Critical Path method (Micheli, 1994) to order the operations. It ensures that operations on the time critical path are scheduled first. This method produces a single schedule.
* Biased Random Key Genetic Algorithm (BRKGA) (Goncalves and Resende, 2011), a genetic algorithm that has previously shown good performance on scheduling tasks (Paliwal et al., 2020). We use the top 100 schedules from the final population as the candidate schedules.
* Proximal Policy Optimization (PPO) (Schulman et al., 2017), a deep reinforcement learning method that has been successfully applied to scheduling problems (Zhou et al., 2020). PPO also trains a stochastic policy, which makes it a natural choice for comparison with GFlowNets (Bengio et al., 2021). We employ the same definitions of states, actions, and reward function (with temperature \(\sigma=0.25\); lower was not beneficial) as the GFlowNet approach. To ensure that PPO keeps exploring even after finding a local optimum, we employ entropy regularization and decay both the entropy coefficient (Ahmed et al., 2019) and the learning rate to ensure convergence to a good solution. Same as for GFlowNets, we sample 1000 schedules and pick the top 100 as the candidate schedules.

Metrics. We measure the performance in terms of the speedup \(U(x)\). For the diversity, we report three different measures: graph-edit distance (GED), the L2 distance between the proxy start-times (\(d_{\text{inv}}\)), and the L2 distance between the proxy start-times concatenated with the device placement (\(d_{\text{sen}}\)). For diversity, we report the average pairwise distances over the top 100 candidate schedules (a short sketch of this computation appears below). See Appendix E.2 for more details on diversity measures. ### Proxy errors: diversity for robust scheduling We examine how differences between the proxy and the target performance model can affect the final runtime. To do so, we first focus on a single computation graph that is used both for training and testing to avoid possible confounding factors that may happen in the generalization setting. Based on the possible reasons for proxy errors discussed in Section 2.2, we design three different target models that each reflect a different setting. In the first setting, node durations are incorrectly profiled (Noisy Runtimes). In the second and third settings, the target models the memory movement across devices with a linear model (Valiant, 1990), which can be either bottlenecked by limited bandwidth (Bandwidth Limited) or by high latency (Latency Limited). The linear model has been shown to be a good makespan estimator for certain devices (Hockney, 1994; Culler et al., 1993). We refer to Appendix E.3 for more details.

Figure 2: Correlation between proxy and target speedup for different target environments. Modes with varying performance can be observed for a fixed proxy speedup. 
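The start-time based diversity measures mentioned above reduce to an average pairwise L2 distance over per-schedule descriptor vectors; a minimal sketch of that computation (our reading of the text, not the paper's exact implementation):

```python
import numpy as np

def avg_pairwise_l2(descriptors):
    """Average pairwise L2 distance over schedule descriptors.
    For d_inv a descriptor is the vector of proxy start times; for d_sen it is the
    start times concatenated with the device placement (assumed encoding)."""
    X = np.asarray(descriptors, dtype=float)
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```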
In Figure 2, we show the correlation between the proxy and the different target environments. For all three targets, the proxy is highly correlated but can have target speedups that differ by a factor of up to \(\times 2\) for the schedules with high proxy speedups. We report the speedups and diversity measures in Table 1. The results highlight that any method that can generate multiple good candidate schedules achieves higher speedups on the target environments than list scheduling, which only produces a single unique schedule. Furthermore, if the candidate schedules are more diverse -- as is the case for GFlowNet -- the target performance is also better on average. PPO and BRKGA exhibit high variability in performance between different runs, where a few runs end up with high speedups on some targets, and other runs result in much lower target speedups. In contrast, the GFlowNet model is consistent over the different random seeds, both in terms of diversity and speedup. The results confirm our hypothesis that a diverse set of candidate schedules with high average proxy speedups can improve robustness towards a misspecified proxy. ### Generalizing to unseen computation graphs Next, we evaluate how well our conditional GFlowNet can generalize to unseen computation graphs. We train and evaluate on a diverse set of synthetic computation graphs sampled from different random graph distributions. In particular, we train on graphs of size 50 sampled from the random graph distributions (a) Erdos-Renyi (Erdos et al., 1960), and (b) Layered Graphs (Gagrani et al., 2022) and evaluate, in addition to (a) and (b), on stochastic block model (Holland et al., 1983), Watts-Strogatz (Watts & Strogatz, 1998), and Barabasi-Albert (Albert & Barabasi, 2002). For details on the generative process of the computation graphs, we refer to Appendix E.5. In Table 2, we demonstrate that both PPO and the conditional GFlowNet are able to generalize to previously unseen computation graphs, regardless of whether they originate from the same random graph distribution. Next, we ablate our proposed temperature conditioning method by generating 1000 samples at different temperature values. In Figure 3, we observe that decreasing the temperature does indeed shift the sample distribution to the right and also sharpens it when the temperature approaches zero. Notably, the temperature \(\sigma=0.005\) is not in the training distribution, which demonstrates that the model can extrapolate to temperature values outside of the training range. Surprisingly, we observe that training with a variable temperature can improve the performance further than is possible with a fixed temperature, which we demonstrate in Appendix F. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Speedup} & \multicolumn{3}{c}{Diversity} \\ \cline{2-7} & Proxy & Noisy & Bandwidth & Latency & & \\ & & Runtimes & Limited & Limited & & \\ \hline List scheduling & 3.23\(\pm\)0.00 & 2.75\(\pm\)0.00 & 1.02\(\pm\)0.00 & 1.74\(\pm\)0.00 & 0 & 0 & 0 \\ BRKGA & 3.22\(\pm\)0.00 & 2.86\(\pm\)0.15 & 1.29\(\pm\)0.45 & 1.80\(\pm\)0.34 & 55.92\(\pm\)2.56 & 22.83\(\pm\)2.39 & 56.21\(\pm\)1.50 \\ PPO & 3.28\(\pm\)0.07 & 3.07\(\pm\)0.09 & 1.38\(\pm\)0.49 & 1.87\(\pm\)0.38 & 85.08\(\pm\)3.54 & 31.71\(\pm\)0.05 & 105.64\(\pm\)0.08 \\ GFFlowNet & 3.21\(\pm\)0.02 & 3.05\(\pm\)0.04 & 1.78\(\pm\)0.03 & 2.11\(\pm\)0.03 & 94.79\(\pm\)0.15 & 42.08\(\pm\)0.33 & 115.98\(\pm\)0.09 \\ \hline \hline \end{tabular} \end{table} Table 1: Robustness results on a single computation graph. We compare different methods for generating candidate schedules. Higher diversity correlates with better robustness against a mismatch of the proxy and the target, with GFlowNet achieving the best diversity and the best target performance on average. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{Speedup} & \multicolumn{3}{c}{Diversity} \\ \cline{2-7} & Proxy 1 & Proxy 100 & GED & \(d_{\text{inv}}\) & \(d_{\text{len}}\) \\ \hline List scheduling & 3.44 & 3.44 & 0 & 0 & 0 \\ BRKGA & 3.46 & 3.45 & 46.59 & 12.75 & 40.11 \\ PPO & 3.48 & 3.46 & 69.54 & 13.45 & 80.84 \\ GFlowNet & 3.46 & 3.41 & 92.02 & 24.27 & 90.17 \\ \hline \hline \end{tabular} \end{table} Table 2: Generalization to different random graph distributions. We report the speedup and diversity for the top 100 schedules. PPO and GFlowNet are trained on graphs from the Erdos-Renyi and Layered Graph distribution and we report the average performance over all random graph distributions. The Speedup Proxy 100 column reports the average proxy speedup over the top 100 schedules. ### Real world computation graphs Finally, we verify the generalization ability on a small set of real-world computation graphs used for the commercial development of our artificial intelligence hardware and software products (see Appendix E.6 for details). We report the speedup on the same target models used in Section 5.1 to assess robustness on unseen real-world computation graphs. To speed up training, we apply the graph subsampling strategy presented in Section 3.3 to randomly pick between 25 to 75 nodes at every training step. In Table 3, we observe that the conditional GFlowNet retains the benefits of high diversity and robustness to misspecifications in the proxy even when applied to graphs not seen during training and of larger sizes. PPO shows unstable training behavior and the reward training curve does not converge, despite using the same hyperparameters that worked for the previous two experiments. We conjecture that this is due to the inhomogeneous maximum possible speedup of the training graphs that lead to different reward scales per training graph. In comparison, GFlowNet still converges as before without any changes to the hyperparameters. Note that while PPO exhibits higher diversity than compared to BRKGA, it still underperforms BRKGA due to low average proxy speedups. This highlights that high diversity alone is not sufficient, otherwise, a uniform distribution as the forward policy would already suffice. We ablate our proposed log-partition variance loss by comparing it against the trajectory balance loss that uses a Topoformer to predict \(\log Z\) given a computation graph. 
Learning such a model is difficult due to large differences in the output space of different computation graphs that arise from the differences in the number of nodes, which in turn impedes the training progress of the policy network. We confirm in Appendix E.6 that our proposed loss function remedies the slow start problem of the baseline and achieves a higher speedup in the end. ## 6 Conclusion We have empirically demonstrated how the conventional optimization approach to scheduling, which optimizes a proxy of the real makespan, is brittle to modeling failures in the proxy itself. Our proposed approach evaluates multiple schedules on the target and thereby achieves more robustness to discrepancies between the proxy and the target. We demonstrated that GFlownets can sample a diverse set of candidate schedules that achieve better target performance than alternative methods which achieve lower diversity. Further, we showed that conditioning on temperature allows a trade-off between diversity and proxy performance, and that conditional GFlowNets can generalize to unseen computation graphs. Interesting future directions include scaling up our method to larger graphs and integrating scheduling heuristics to speed up training. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{4}{c}{Speedup} & \multicolumn{4}{c}{Diversity} \\ \cline{2-7} & Proxy 1 & Proxy 100 & Noisy & Bandwidth & Latency & & & \\ & & Runtimes & Limited & Limited & & & \\ \hline List scheduling & 2.74\(\pm\)0.00 & 2.74\(\pm\)0.00 & 2.51\(\pm\)0.00 & 0.89\(\pm\)0.00 & 1.43\(\pm\)0.00 & 0 & 0 & 0 \\ BRKGA & 2.59\(\pm\)0.18 & 2.58\(\pm\)0.18 & 2.46\(\pm\)0.16 & 1.55\(\pm\)0.17 & 1.80\(\pm\)0.18 & 52.32\(\pm\)21.59 & 17.14\(\pm\)15.17 & 42.64\(\pm\)12.23 \\ PPO & 2.41\(\pm\)0.20 & 2.23\(\pm\)0.27 & 2.28\(\pm\)0.26 & 0.91\(\pm\)0.20 & 1.43\(\pm\)0.10 & 53.05\(\pm\)7.27 & 42.70\(\pm\)3.44 & 64.92\(\pm\)4.08 \\ GFlowNet & 2.71\(\pm\)0.03 & 2.66\(\pm\)0.01 & 2.71\(\pm\)0.01 & 1.73\(\pm\)0.01 & 1.95\(\pm\)0.03 & 87.95\(\pm\)0.13 & 26.56\(\pm\)0.36 & 91.33\(\pm\)0.15 \\ \hline \hline \end{tabular} \end{table} Table 3: Generalization on real-world graphs. We train on a small set of real-world graphs and evaluate on unseen ones. GFlowNet retains a high diversity and exhibits consistently better performances than the baselines on the target models. PPO uses the same hyperparameters as in the previous experiments but does not manage to converge on this dataset. Figure 3: Empirical reward distribution on a 25-node Erdos-Renyi graph for different inference temperatures in the conditional GFlowNet. Lower temperatures allocate more probability mass to better schedules.
2305.10952
Actor-Critic Methods using Physics-Informed Neural Networks: Control of a 1D PDE Model for Fluid-Cooled Battery Packs
This paper proposes an actor-critic algorithm for controlling the temperature of a battery pack using a cooling fluid. This is modeled by a coupled 1D partial differential equation (PDE) with a controlled advection term that determines the speed of the cooling fluid. The Hamilton-Jacobi-Bellman (HJB) equation is a PDE that evaluates the optimality of the value function and determines an optimal controller. We propose an algorithm that treats the value network as a Physics-Informed Neural Network (PINN) to solve for the continuous-time HJB equation rather than a discrete-time Bellman optimality equation, and we derive an optimal controller for the environment that we exploit to achieve optimal control. Our experiments show that a hybrid-policy method that updates the value network using the HJB equation and updates the policy network identically to PPO achieves the best results in the control of this PDE system.
Amartya Mukherjee, Jun Liu
2023-05-18T13:21:38Z
http://arxiv.org/abs/2305.10952v1
Actor-Critic Methods using Physics-Informed Neural Networks: Control of a 1D PDE Model for Fluid-Cooled Battery Packs ###### Abstract This paper proposes an actor-critic algorithm for controlling the temperature of a battery pack using a cooling fluid. This is modeled by a coupled 1D partial differential equation (PDE) with a controlled advection term that determines the speed of the cooling fluid. The Hamilton-Jacobi-Bellman (HJB) equation is a PDE that evaluates the optimality of the value function and determines an optimal controller. We propose an algorithm that treats the value network as a Physics-Informed Neural Network (PINN) to solve for the continuous-time HJB equation rather than a discrete-time Bellman optimality equation, and we derive an optimal controller for the environment that we exploit to achieve optimal control. Our experiments show that a hybrid-policy model that updates the value network using the HJB equation and updates the policy network identically to PPO achieves the best results in the control of this PDE system. **keywords:** Actor-Critic Method, Physics-Informed Neural Network, Fluid Cooled Battery Packs, Hamilton Jacobi Bellman Equation ## 1 Introduction In recent years, there has been a growing interest in Reinforcement Learning (RL) for continuous control problems. RL has shown promising results in environments with unknown dynamics through a balance of exploration in the environment and exploitation of the learned policies. Since the advent of REINFORCE with Baseline, the value network in RL algorithms has shown to be useful towards finding optimal policies as a critic network (Sutton & Barto (2018)). This value network continues to be used in state-of-the-art RL algorithms today. Proximal Policy Optimization (PPO) is an actor-critic method introduced by Schulman et al. (2017). It limits the update of the policy network to a trust region at every iteration. This ensures that the objective function of the policy network is a good approximation of the true objective function and forces smooth and reliable updates to the value network as well. In discrete-time RL, the value function estimates returns from a given state as a sum of the returns over time steps. This value function is obtained by solving the Bellman Optimality Equation. On the other hand, in continuous-time RL, the value function estimates returns from a given state as an integral over time. This value function is obtained by solving a partial differential equation (PDE) known as the Hamilton-Jacobi-Bellman (HJB) equation Munos (1999). Both equations are difficult to solve analytically and numerically, and therefore the RL agent must explore the environment and make successive estimations. The introduction of physics-informed neural networks (PINNs) by Raissi et al. (2019) has led to significant advancements in scientific machine learning. PINNs leverage auto-differentiation to compute derivatives of neural networks with respect to their inputs and model parameters exactly. This enables the laws of physics (described by ODEs or PDEs) governing the dataset of interest to act as a regularization term for the neural network. As a result, PINNs outperform regular neural networks on such datasets by exploiting the underlying physics of the data. Control of PDEs is considered to be challenging compared to control of ODEs. Works such as Vazquez & Krstic (2017) introduced the backstepping method for the boundary control of reaction-advection-diffusion equations using kernels. 
For PDE control problems where the control input is encoded in the PDE, the HJB equation has been used (Sirignano & Spiliopoulos (2018),Kalise & Kunisch (2017)). Works from control of ODEs have been used by writing the PDE as an infinite-dimensional ODE. To the best of our knowledge, this paper is the first to explore the intersection between PINNs and RL in a PDE control problem. We discretize the PDE as an ODE to derive an HJB equation. In order to force the convergence of the value network in PPO towards the solution of the HJB equation, we utilize PINNs to encode this PDE and train the value network. Upon deriving the HJB equation, we also derive an optimal controller. We introduce two algorithms: HJB value iteration and Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO), that train the value function using the HJB equation and use the optimal controller. The HJBPPO algorithm shows superior performance compared to PPO and HJB value iteration on the PackCooling environment. ## 2 Preliminaries ### The 1D pack cooling problem The 1D system for fluid-cooled battery packs was introduced by Kato & Moura (2021) and is modeled by the following coupled PDE: \[u_{t}(x,t)= -D(x,t)u_{xx}(x,t)+h(x,t,u(x,t))+\frac{1}{R(x,t)}(w-u) \tag{1}\] \[w_{t}=-\sigma(t)w_{x}+\frac{1}{R(x,t)}(u-w), \tag{2}\] with the following boundary conditions: \[u_{x}(0,t)=u_{x}(1,t)=0 \tag{3}\] \[w(0,t)=U(t) \tag{4}\] where \(u(x,t)\) is the heat distribution across the battery pack, \(w(x,t)\) is the heat distribution across the cooling fluid, \(D(x,t)\) is the thermal diffusion constant across the battery pack, \(R(x,t)\) is the heat resistance between the battery pack and the cooling fluid, \(h(x,t,u)\) is the internal heat generation in the battery pack, \(U(t)\) is the temperature of the cooling fluid at the boundary, and \(\sigma(t)\) is the transport speed of the cooling fluid, which will be the controller in this paper. The objective of the control problem in this paper is to determine \(\sigma(t)\) such that \(u(x,t)\) is as close to zero as possible. The transport speed \(\sigma(t)\) is strictly non-negative so the cooling fluid travels only in the positive \(x\)-direction. We restrict \(\sigma(t)\) to \([0,1]\). ### Hamilton-Jacobi-Bellman equation To achieve optimal control for the 1D PDE pack cooling problem, we will utilize works from control theory for ODEs. Consider a controlled dynamical system modeled by the following equation: \[\dot{x}=f(x,\sigma),\quad x(t_{0})=x_{0}, \tag{5}\] where \(x(t)\) is the state and \(\sigma(t)\) is the control input. In control theory, the optimal value function \(V^{*}(x)\) is useful towards finding a solution to control problems (Munos et al. (1999)): \[V^{*}(x)=\sup_{\sigma}\frac{1}{\Delta t}\int_{t_{0}}^{\infty}\gamma^{\frac{t }{\Delta t}}L(x(\tau;t_{0},x_{0},\sigma(\cdot)),\sigma(\tau))d\tau, \tag{6}\] where \(L(x,\sigma)\) is the reward function, \(\Delta t\) is the time step size for numerical simulation, and \(\gamma\) is the discount factor. The following theorem introduces a criteria for assessing the optimality of the value function (Liberzon (2012), Kamalapurkar et al. (2018)). **Theorem 2.1**.: _A function \(V(x)\) is the optimal value function if and only if:_ 1. \(V\in C^{1}(\mathbb{R}^{n})\) _and_ \(V\) _satisfies the Hamilton-Jacobi-Bellman (HJB) Equation_ \[(\gamma-1)V(x)+\sup_{\sigma\in U}\{L(x,\sigma)+\gamma\Delta t\nabla_{x}V^{T}(x )f(x,\sigma)\}=0\] (7) _for all_ \(x\in\mathbb{R}^{n}\)_._ 2. 
_For all_ \(x\in\mathbb{R}^{n}\)_, there exists a controller_ \(\sigma^{*}(\cdot)\) _such that:_ \[(\gamma-1)V(x)+L(x,\sigma^{*}(x))+\gamma\Delta t\nabla_{x}V^{T}(x )f(x,\sigma^{*}(x))\] \[=(\gamma-1)V(x)+\sup_{\hat{\sigma}\in U}\{L(x,\hat{\sigma})+ \gamma\Delta t\nabla_{x}V^{T}(x)f(x,\hat{\sigma})\}.\] (8) The proof of part 1 of this theorem is in Appendix B. The HJB equation will be used in this paper to determine a new loss function for the value network \(V(x)\) in this pack cooling problem and an optimal controller \(\sigma^{*}(t)\). ## 3 Related work The HJB equation we intend to solve is a first-order quasi-linear PDE. The use of HJB equations for continuous RL has sparked interest in recent years among the RL community as well as the control theory community and has led to promising works. Kim et al. (2021) introduced an HJB equation for Q Networks and used it to derive a controller that is Lipschitz continuous in time. This algorithm has shown improved performance over Deep Deterministic Policy Gradient (DDPG) in three out of the four tested MuJoCo environments without the need for an actor network. Wiltzer et al. (2022) introduced a distributional HJB equation to train the FD-WGF Q-Learning algorithm. This models return distributions more accurately compared to Quantile Regression TD (QTD) for a particle-control task. Finite difference methods are used to solve this HJB equation numerically. Furthermore, the authors mentioned the use of auto-differentiation for increased accuracy of the distributional HJB equation as a potential area for future research in their conclusion. The use of neural networks to solve the HJB equation has been an area of interest across multiple research projects. Jiang et al. (2016) uses a structured Recurrent Neural Network to solve the HJB equation and achieve optimal control for the Dubins car problem. Tassa and Erez (2007) uses the Pineda architecture (Pineda (1987)) to estimate partial derivatives of the value function with respect to its inputs. They used the iterative least squares method to solve the HJB equation. This algorithm shows convergence in several control problems without the need for an initial stable policy. RL for PDE control is a challenging field that has been of interest to the machine learning community lately. Farahmand et al. (2017) introduces the Deep Fitted Q Iteration to solve a boundary control problem for a 2D convection-diffusion equation. The model stabilizes the temperature in the environment without encoding any knowledge of the governing PDE. Sirignano and Spiliopoulos (2018) develops the DGM algorithm to solve PDEs. They use auto-differentiation to compute first-order derivatives and Monte Carlo methods to estimate higher-order derivatives. This algorithm was used to solve the HJB equation to control a stochastic heat equation and achieved an error of 0.1%. Kalise and Kunisch (2017) approximates the solution to the HJB equation using polynomials. This was used to control a semilinear parabolic PDE. PINNs have been used for the control of dynamical systems in recent works. Antonelo et al. (2021) uses a PINN for model predictive control of a dynamical system over a long time interval. The PINN takes the initial condition, the control input, and the spatial and temporal coordinates as input and estimates the trajectory of the dynamical system while repeatedly shifting the time interval towards zero to allow for long-range interval predictions. Nicodemus et al. 
(2022) uses a PINN-based model predictive control for the tracking problem of a multi-link manipulator. Djeumou et al. (2022) uses a PINN to incorporate partial knowledge about a dynamical system such as symmetry and equilibrium points to estimate the trajectory of a controlled dynamical system. The use of a PINN to solve the HJB equation for the value network was done by Nakamura-Zimmerer et al. (2020) in an optimal feedback control problem setting. The paper achieves results similar to that of the true optimal control function in high-dimensional problems. ## 4 HJB control of the pack cooling problem In this section, we will connect the pack cooling PDE model with the HJB equation to derive a new loss function for the value network \(V(u,w)\) using the HJB equation and an optimal controller. The HJB equation has been useful in finding optimal controllers for systems modeled by ODEs. In Kalise & Kunisch (2017), the controlled PDE system has been discretized in space to form an ODE that can be used in the HJB equation. Similarly, to form the HJB equation for this paper, we need to write equations 1 and 2 as an ODE. ### ODE discretization of PDE We can write equations 1 and 2 as an ODE by discretizing it in the \(x\) variable. By letting \(\Delta x=\frac{1}{N_{x}}\) where \(N_{x}\) is the number of points we choose to discretize the system along the x-axis, we arrive at a \(2N_{x}\) dimensional ODE: \[\dot{\hat{U}}=-DA\hat{U}+h(\hat{U})+\frac{1}{R}(\hat{W}-\hat{U}) \tag{9}\] \[\dot{\hat{W}}=-\sigma(t)B\hat{W}+\frac{1}{R}(\hat{U}-\hat{W}), \tag{10}\] where \[\hat{W}(t)=\begin{pmatrix}w(x_{1},t)\\ \vdots\\ w(x_{N_{x}},t)\end{pmatrix},\hat{U}(t)=\begin{pmatrix}u(x_{1},t)\\ \vdots\\ u(x_{N_{x}},t)\end{pmatrix},\] and \(A\hat{U}\) is a second-order discretization of \(u_{xx}\), e.g., \[[A\hat{U}]_{k}=\frac{u(x_{k+1},t)-2u(x_{k},t)+u(x_{k-1},t)}{\Delta x^{2}},\] \(B\hat{W}\) is a second-order discretization of \(w_{x}\), e.g., \[[B\hat{W}]_{k}=\frac{w(x_{k+1},t)-w(x_{k-1},t)}{2\Delta x}.\] ### Derivation of the optimal controller The ODE system derived in section 4.1 can be used in the HJB equation to determine a loss function and an optimal controller. **Theorem 4.1**.: _Let \(u(\cdot,t),w(\cdot,t)\in L_{2}[0,1]\). With \(\sigma(t)\in[0,1]\) and the reward function \(L(U_{t},W_{t},\sigma_{t})=-||U_{t+1}||_{2}^{2}\Delta x\), the HJB equation for the 1D pack cooling problem is:_ \[(\gamma-1)V-||u(\cdot,t+\Delta t)||^{2}\] \[+\langle V_{u}(u(\cdot,t),w(\cdot,t)),u_{t}(\cdot,t)\rangle\] \[+\frac{1}{R}\langle V_{w}(u(\cdot,t),w(\cdot,t)),u(\cdot,t)-w( \cdot,t)\rangle\] \[+\max(0,-\langle V_{w}(u(\cdot,t),w(\cdot,t)),w_{x}(\cdot,t) \rangle)=0 \tag{11}\] _where \(||\cdot||\) is the \(L_{2}[0,1]\) norm and \(\langle\cdot,\cdot\rangle\) is the \(L_{2}[0,1]\) inner product._ The proof of this theorem is in Appendix C. Theorem 2.1 shows that there exists a controller that satisfies equation 8. This allows us to determine an optimal controller, as shown in the following corollary: **Corollary 4.2**.: _Let \(w(\cdot,t)\in L_{2}[0,1]\). 
With \(\sigma(t)\in[0,1]\) and the reward function \(L(U_{t},W_{t},\sigma_{t})=-||U_{t+1}||_{2}^{2}\Delta x\), provided the optimal value function \(V^{*}(u,w)\) with \(V^{*}_{w}(\cdot,t)\in L_{2}[0,1]\), the optimal controller for the 1D pack cooling problem is:_ \[\sigma^{*}(t)=\begin{cases}1,&\langle V^{*}_{w}(u(\cdot,t),w(\cdot,t)),w_{x}( \cdot,t)\rangle<0,\\ 0,&\text{otherwise},\end{cases} \tag{12}\] _where \(\langle\cdot,\cdot\rangle\) is the \(L_{2}[0,1]\) inner product._ The proof of this corollary is in Appendix D. These results will be used in our algorithms to achieve optimal control of the pack cooling problem. ## 5 Algorithm For the control of the PDE, we introduce two algorithms. The first algorithm, called HJB Value Iteration, uses only a value network and exploits the HJB equation and optimal controller derived in Theorem 4.1 and Corollary 4.2. The second algorithm, called HJBPPO, is a hybrid-policy model that uses policy network updates from PPO and value network updates from HJB Value Iteration. To define these algorithms, we first define two loss functions. The first loss function is derived from the proof of theorem 4.1. \[MSE_{f}=\frac{1}{T}\sum_{t=0}^{T-1} ((\gamma-1)V(\hat{U}_{t},\hat{W}_{t})\] \[-||\hat{U}_{t+1}||_{2}^{2}\Delta x\] \[+\nabla_{U}V^{T}(\hat{U}_{t},\hat{W}_{t})\hat{\hat{U}}_{t}\Delta t\] \[+\frac{1}{R}\nabla_{W}V^{T}(\hat{U}_{t}-\hat{W}_{t})\Delta t\] \[+\max(0,-\nabla_{W}V^{T}B\hat{W}))^{2}\Delta t \tag{13}\] The second loss function provides an initial condition. At \(u(x,T)=0,w(x,T)=-R(x,t)=-2\), we have: \(u(x,T)=0\) and \(u_{t}(x,T)=0\). As a result, we have \(L(0,-R(x,t))=0\) and \(L_{t}(0,-R(x,t))=0\). This shows us that \(u(x,T)=0,w(x,T)=-R(x,t)\) is considered a stable point that maximizes the reward. Thus, we choose to let \(V(0,-R(x,t))=0\) be the Dirichlet boundary condition for the HJB equation. This leads to the second loss function: \[MSE_{u}=(V(0,-R(x,t)))^{2}. \tag{14}\] Since the value function achieves its global maximum at \(u(x,T)=0,w(x,T)=-2\), this means the derivatives of \(V\) must be zero along all directions. Thus, we choose to let \(\frac{\partial V}{\partial n}=0\) at \(u(x,T)=0,w(x,T)=-R(x,t)\) along every normal be the Neumann boundary condition for the HJB equation. This leads to the third loss function: \[MSE_{n}=||\nabla_{U}V(0,-R(x,t))||_{2}^{2}+||\nabla_{W}V(0,-R(x,t))||_{2}^{2} \tag{15}\] We derived an optimal controller in corollary 4.2. Gym environments recommend that actions be in the range \([-1,1]\). We can use the proof of the optimal controller in Appendix D to derive a way of selecting actions: \[a_{t}=-\text{sign}(\nabla_{W}V^{T}B\hat{W}(t)) \tag{16}\] The algorithms introduced in this paper will focus on minimizing both of the loss functions defined above and using the optimal controller. ``` 1:Initiate value network parameter \(\phi\) 2:Run the control as given in equation (16) in the environment for \(T\) timesteps and observe samples \(\{(s_{t},a_{t},R_{t},s_{t+1})\}_{t=1}^{T}\). 3:Compute the value network loss as: \(J(\phi)=MSE_{f}+MSE_{u}+MSE_{n}\) described in equations (13), (14), and (15) 4:Update \(\phi\leftarrow\phi-\alpha_{2}\nabla_{\phi}J(\phi)\) 5:Run steps 2-4 for multiple iterations ``` **Algorithm 1** HJB Value Iteration ### HJB value iteration The HJB Value Iteration trains the loss function without the need for an actor-network. We treat the value network as a PINN, using auto-differentiation to estimate gradient vectors to compute the loss in equation 13 and the control in equation 16. 
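To make the PINN treatment of the value network concrete, the following is a minimal single-time-step sketch (not the authors' implementation; shapes are simplified to 1-D tensors of length \(N_x\), and all function and variable names are illustrative) of how the auto-differentiated gradients \(\nabla_{U}V\) and \(\nabla_{W}V\) can be assembled into the residual inside equation 13 and the controller of equation 16.

```python
import torch

def value_grads(V, U, W):
    # Treat the value network as a PINN: differentiate V(U, W) w.r.t. its
    # inputs with autograd rather than with finite differences.
    U = U.detach().requires_grad_(True)
    W = W.detach().requires_grad_(True)
    v = V(U, W)
    grad_U, grad_W = torch.autograd.grad(v.sum(), (U, W), create_graph=True)
    return v, grad_U, grad_W

def hjb_residual(V, U_t, W_t, U_next, dU_dt, B, R, gamma, dt, dx):
    # Single-sample version of the residual inside MSE_f (equation 13).
    v, gU, gW = value_grads(V, U_t, W_t)
    adv = gW @ (B @ W_t)                       # <V_w, w_x> term
    res = ((gamma - 1.0) * v
           - U_next.pow(2).sum() * dx          # reward L = -||U_{t+1}||^2 dx
           + gU @ dU_dt * dt
           + (1.0 / R) * gW @ (U_t - W_t) * dt
           + torch.clamp(-adv, min=0.0))       # sup over sigma in [0, 1]
    return res

def hjb_controller(V, U_t, W_t, B):
    # Equation 16: a_t = -sign(grad_W V^T B W), i.e. sigma = 1 when <V_w, w_x> < 0.
    _, _, gW = value_grads(V, U_t, W_t)
    return -torch.sign(gW @ (B @ W_t)).detach()
```

The full loss \(MSE_f\) would then be the mean of the squared residuals over the trajectory, scaled by \(\Delta t\), with \(MSE_u\) and \(MSE_n\) added as in Algorithm 1.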
At every time step, it uses the controller given in equation 16. It updates the value network using the loss functions as shown above. The method is provided in Algorithm 1. ### Hjbppo HJBPPO is an algorithm that combines policy optimization from PPO with HJB value iteration. This is implemented by modifying the PPO implementation by Barhate (2021). To facilitate exploration of the environment and exploitation of the models, we introduce an action selection method that uses the policy network and equation 16 with equal probability, as shown in Algorithm 2. Upon running the policy \(\pi_{\theta}\), we sample from a distribution \(N(\mu,s)\) where \(\mu\) is the output from the policy network. We initiate \(s\) to \(0.3\) and decrease it by \(0.01\) every \(1000\) episodes until it reaches \(0.1\). After sampling an action from the normal distribution, we clip it between \(-1\) and \(1\). This action selection method ensures that we select actions that are not only in \(\{-1,1\}\) but also in \([-1,1]\). It introduces a new method of exploration of the environment by choosing from two different methods of action selection. Actions selected using equation 16 are also stored in the memory buffer and are used to train the policy network \(\pi_{\theta}\). The method is provided in Algorithm 3. We will train PPO, HJB value iteration, and HJBPPO on the PackCooling environment and compare these algorithms. ## 6 Results ### Training To ensure the reproducibility of our results, we have posted our code in the following link: [https://github.com/amartyamukherjee/PPO-PackCooling](https://github.com/amartyamukherjee/PPO-PackCooling). We posted our hyperparameters in Appendix E. The details of the implementation of the PackCooling gym environment are posted in Appendix A. The code was run using Kaggle CPUs. Each algorithm was trained for a million timesteps. Training each algorithm took approximately 5 hours. ``` 1:Initiate policy network parameter \(\theta\) and value network parameter \(\phi\) 2:Run action selection as given in algorithm 2 in the environment for \(T\) timesteps and observe samples \(\{(s_{t},a_{t},R_{t},s_{t+1})\}_{t=1}^{T}\). 3:Compute the advantage \(A_{t}\) 4:Compute \(r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{\text{old}}}(a_{t} |s_{t})}\) 5:Compute the objective function of the policy network: \[L(\theta)=\frac{1}{T}\sum_{t=0}^{T-1}\min[r_{t}(\theta)A_{t},\text{clip}(r_{t}( \theta),1-\epsilon,1+\epsilon)A_{t}],\] 6:Update \(\theta\leftarrow\theta+\alpha_{1}\nabla_{\theta}L(\theta)\) 7:Compute the value network loss as: \(J(\phi)=MSE_{f}+MSE_{u}+MSE_{n}\) described in equations (13), (14), and (15) 8:Update \(\phi\leftarrow\phi-\alpha_{2}\nabla_{\phi}J(\phi)\) 9:Run steps 2-8 for multiple iterations ``` **Algorithm 3** HJBPPO ### Reward Curves The reward curves have been plotted in Figure 1, comparing PPO, HJB value iteration, and HJBPPO. Each algorithm was run for 5 different seeds. We plotted the mean reward over each seed and over 20 consecutive episodes, and shaded the area 0.2 standard deviations from the mean. HJB value iteration shows the worst performance, as its rewards decrease past PPO after training for multiple episodes. PPO shows a rapid increase in average rewards after the first episode and a slow increase in average rewards afterward. HJBPPO shows the best performance in the graph, achieving the highest average reward in each episode and an increase in average rewards after training for multiple episodes. 
The significantly higher average reward in HJBPPO in the first episode shows that the action selection method described in Algorithm 2 provides a robust strategy to explore the environment and train the models. The higher average rewards are due to the exploitation of the dynamics of the environment as done by the HJB equation. ### Trajectories The plots of the trajectories have been posted in Appendix F. After training for a million timesteps, we tested our models on the PackCooling environment and produced the plots. These plots were generated using the rendering feature explained in section A.4. The trajectory of HJB value iteration shows the worst results. \(\sigma(t)\) returns 1.0 only once. It achieves a cumulative reward of \(-7294.51\). Thus, the input of the cooling fluid from the boundary is minimal. As a result of the internal heat generation in the battery pack, \(u(x,t)\) reaches high values of roughly 5 at \(t=10\), and as a result, \(w(x,t)\) also reaches high values of roughly 4. This shows that the training of the value function in HJB value iteration is inadequate and we have not arrived at an optimal controller for the pack cooling problem. This is because exploration of the environment was at a minimum, as we only exploited equation 16 at each time step. The trajectory of PPO shows that the values \(\sigma(t)\) takes at every timestep have a large variance with its neighboring timesteps. It achieves a cumulative reward of \(-3970.02\). Control of the temperature of the battery pack has been achieved as \(u(x,t)\) takes values between \(-2\) and \(2\) at \(t=10\). The trajectory of u(x,t) with HJBPPO shows it takes values between \(-2\) and \(2\) at \(t=10\). The values \(\sigma(t)\) takes at every timestep have a lower variance with its neighboring timesteps compared to PPO. It achieves a cumulative reward of \(-881.55\). For \(t\in[4,6]\), \(u(x,t)\) shows an increasing trend towards \(u=2\). In response, the controller \(\sigma(t)\) took values closer to 1.0 to allow for greater input of cooling fluid from the boundary so that \(u(x,t)\) decreases towards zero. Due to higher average rewards as shown in Figure 1, this shows that a model that exploits the dynamics of the environment to return a controller shows improved performance compared to a model that returns noisy control centered at \(\sigma=0.5\). ## 7 Conclusion In this paper, we have introduced two algorithms that use PINNs to solve the pack cooling problem. This paper combines PINNs with RL in a PDE control setting. In the HJB value iteration algorithm, the HJB equation is used to introduce a loss function and a controller using a value network. The HJBPPO algorithm is a hybrid-policy model that combines the training of the value network from HJB value iteration and the training of the policy network from PPO. HJBPPO shows an overall improvement in performance compared to PPO due to its ability to exploit the physics of the environment to improve the learning curve of the agent. ## 8 Future research Despite showing an overall improvement in the reward curves, the HJBPPO algorithm leaves room for improved RL algorithms using PINNs. In this paper, we computed the HJB equation by expressing the PDE as an ODE by discretizing in \(x\). This was possible because the pack cooling problem was modeled by 1D PDEs. Currently existing works such as (Sirignano & Spiliopoulos (2018)) and (Kalise & Kunisch (2017)) solve the HJB equation for 1D PDEs by discretizing it in \(x\). 
It will be interesting to see how HJB control can be extended to higher dimensional PDEs. The goal of PINNs is to solve PDEs without the need for numerical methods. In this paper, we solved the pack cooling problem numerically using the Crank-Nicolson method and the method of characteristics. An area for further research may be the use of PINNs to solve for the HJB equation and the PDE that governs the dynamics of the system. In the PackCooling environment, the HJBPPO algorithm showed an improvement compared to PPO. But this is due to the fact that we knew the dynamics of the system, thus allowing for the physics of the environment to be exploited. The environments give all the details of the state needed to choose an action. One limitation of HJBPPO is that it may not perform well in partially observable environments because the estimate of the dynamics of the system may be inaccurate. Deep Transformer Q Network (DTQN) was introduced by Esslinger et al. (2022) and achieves state-of-the-art results in many partially observable Figure 1: Reward curves of PPO (red), HJB value iteration (blue), and HJBPPO (green) averaged over 5 seeds. Shaded area indicates 0.2 standard deviations. environments. A potential area for further research may be the introduction of an HJB equation that facilitates partial observability. The DTQN algorithm may be improvised by incorporating this HJB equation using PINNs.
2307.06949
HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
Personalization has emerged as a prominent aspect within the field of generative AI, enabling the synthesis of individuals in diverse contexts and styles, while retaining high-fidelity to their identities. However, the process of personalization presents inherent challenges in terms of time and memory requirements. Fine-tuning each personalized model needs considerable GPU time investment, and storing a personalized model per subject can be demanding in terms of storage capacity. To overcome these challenges, we propose HyperDreamBooth-a hypernetwork capable of efficiently generating a small set of personalized weights from a single image of a person. By composing these weights into the diffusion model, coupled with fast finetuning, HyperDreamBooth can generate a person's face in various contexts and styles, with high subject details while also preserving the model's crucial knowledge of diverse styles and semantic modifications. Our method achieves personalization on faces in roughly 20 seconds, 25x faster than DreamBooth and 125x faster than Textual Inversion, using as few as one reference image, with the same quality and style diversity as DreamBooth. Also our method yields a model that is 10000x smaller than a normal DreamBooth model. Project page: https://hyperdreambooth.github.io
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, Kfir Aberman
2023-07-13T17:59:47Z
http://arxiv.org/abs/2307.06949v1
# HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ###### Abstract Personalization has emerged as a prominent aspect within the field of generative AI, enabling the synthesis of individuals in diverse contexts and styles, while retaining high-fidelity to their identities. However, the process of personalization presents inherent challenges in terms of time and memory requirements. Fine-tuning each personalized model needs considerable GPU time investment, and storing a personalized model per subject can be demanding in terms of storage capacity. To overcome these challenges, we propose HyperDreamBooth--a hypernetwork capable of efficiently generating a small set of personalized weights from a single image of a person. By composing these weights into the diffusion model, coupled with fast finetuning, HyperDreamBooth can generate a person's face in various contexts and styles, with high subject details while also preserving the model's crucial knowledge of diverse styles and semantic modifications. Our method achieves personalization on faces in roughly 20 seconds, **25x** faster than DreamBooth and **125x** faster than Textual Inversion, using as few as _one_ reference image, with the same quality and style diversity as DreamBooth. Also our method yields a model that is **10000x** smaller than a normal DreamBooth model. Project page: [https://hyperdreambooth.github.io](https://hyperdreambooth.github.io) Figure 1: Using only a _single_ input image, _HyperDreamBooth_ is able to personalize a text-to-image diffusion model **25x** faster than DreamBooth [25], by using (1) a HyperNetwork to generate an initial prediction of a subset of network weights that are then (2) refined using fast finetuning for high fidelity to subject detail. Our method both _conserves model integrity and style diversity_ while closely approximating the subject’s essence and details. Introduction Recent work on text-to-image (T2I) personalization [25] has opened the door for a new class of creative applications. Specifically, for face personalization, it allows generation of new images of a specific face or person in different styles. The impressive diversity of styles is owed to the strong prior of pre-trained diffusion model, and one of the key properties of works such as DreamBooth [25], is the ability to implant a new subject into the model without damaging the model's prior. Another key feature of this type of method is that subject's essence and details are conserved even when applying vastly different styles. For example, when training on photographs of a person's face, one is able to generate new images of that person in animated cartoon styles, where a part of that person's essence is preserved and represented in the animated cartoon figure - suggesting some amount of visual semantic understanding in the diffusion model. These are two core characteristics of DreamBooth and related methods, that we would like to leave untouched. Nevertheless, DreamBooth has some shortcomings: size and speed. For size, the original DreamBooth paper finetunes all of the weights of the UNet and Text Encoder of the diffusion model, which amount to more than 1GB for Stable Diffusion. In terms of speed, mouthstanding inference speed issues of diffusion models, training a DreamBooth model takes about 5 minutes for Stable Diffusion (1,000 iterations of training). This limits the potential impact of the work. 
In this work, we want to address these shortcomings, without altering the impressive key properties of DreamBooth, namely _style diversity_ and _subject fidelity_, as depctied in Figure 1. Specifically, we want to _conserve model integrity_ and _closely approximate subject essence_ in a fast manner with a small model. Our work proposes to tackle the problems of **size** and **speed** of DreamBooth, while preserving **model integrity**, **editability** and **subject fidelity**. We propose the following contributions: * a personalized text-to-image model, where the customized part is roughly 100KB of size. This is achieved by training a DreamBooth model in a low-dimensional weight-space generated by a random orthogonal incomplete basis inside of a low-rank adaptation [16] weight space. * New _HyperNetwork_ architecture that leverages the Lightweight DreamBooth configuration and generates the customized part of the weights for a given subject in a text-to-image diffusion model. These provide a strong directional initialization that allows us to further finetune the model in order to achieve strong subject fidelity within a few iteration. Our method is **25x** faster than DreamBooth while achieving similar performances. * We propose the technique of _rank-relaxed finetuning_, where the rank of a LoRA DreamBooth model is relaxed during optimization in order to achieve higher subject fidelity, allowing us to initialize the personalized model with an initial approximation using our HyperNetwork, and then approximate the high-level subject details using rank-relaxed finetuning. One key aspect that leads us to investigate a HyperNetwork approach is the realization that in order to be able to synthesize specific subjects with high fidelity, using a given generative model, we have to "modify" its output domain, and insert knowledge about the subject into the model, namely by modifying the network weights. ## 2 Related Work Text-to-Image ModelsSeveral recent models such as Imagen [26], DALL-E2 [22], Stable Diffusion (SD) [24], Muse [8], Parti [33] etc. demonstrate excellent image generation capabilities given a text prompt. Some Text-to-Image (T2I) models such as Stable Diffusion and Muse also allows conditioning the generation with a given image via an encoder network. Techniques such as ControlNet [35] propose ways to incorporate new input conditioning such as depth. Test text and image based conditioning in these models do not capture sufficient subject details. Given the relatively small size of SD, for the ease of experimentation, we demonstrate our HyperDreamBooth on SD model. But the proposed technique is generic and can be applicable to any T2I model. Personalization of Generative ModelsGiven one or few subject images, the aim of personalized generation is to generate images of that particular subject in various contexts. Earlier works in this space use GANs to edit a given subject image into new contexts. Pivotal tuning [23] proposes to finetune a GAN with an inverted latent code. The work of [21] proposes to finetune StyleGAN using around 100 face images to obtain a personalized generative prior. Casanova et al. [7] proposes to condition a GAN using an input image to generate variations of that input image. All these GAN based techniques suffer from either poor subject fidelity or a lack of context diversity in the generated images. 
HyperNetworks were introduced as an idea of using an auxiliary neural network to predict network weights in order to change the functioning of a specific neural network [13]. Since then, they have been used for tasks in image generation that are close to personalization, such as inversion for StyleGAN [4], similar to work that seeks to invert the latent code of an image in order to edit that image in the GAN latent space [3]. T2I Personalization via FinetuningMore recently, several works propose techniques for personalizing T2I models resulting in higher subject fidelity and versatile text based contextualization of a given subject. Textual Inversion [11] proposes to optimize an input text embedding on the few subject images and use that optimized text embedding to generate subject images. [30] propose a richer textual inversion space capturing more subject details. DreamBooth [25] proposes to optimize the entire T2I network weights to adapt to a given subject resulting in higher subject fidelity in output images. Several works propose ways to optimize compact weight spaces instead of the entire network as in DreamBooth. CustomDiffusion [19] proposes to only optimize cross-attention layers. SVDiff [14] proposes to optimize singular values of weights. LoRa [2; 16] proposes to optimize low-rank approximations of weight residuals. StyleDrop [28] Figure 2: **HyperDreamBooth Training and Fast Fine-Tuning.** Phase-1: Training a hypernetwork to predict network weights from a face image, such that a text-to-image diffusion network outputs the person’s face from the sentence _“a [v] face”_ if the predicted weights are applied to it. We use pre-computed personalized weights for supervision, using an L2 loss, as well as the vanilla diffusion reconstruction loss. Phase-2: Given a face image, our hypernetwork predicts an initial guess for the network weights, which are then fine-tuned using the reconstruction loss to enhance fidelity. proposes to use adapter tuning [15] and finetunes a small set of adapter weights for style personalization. DreamArtist [10] proposes a one-shot personalization techniques by employing a positive-negative prompt tuning strategy. Most of these finetuning techniques, despite generating high-quality subject-driven generations, are slow and can take several minutes for every subject. Fast T2I PersonalizationSeveral concurrent works propose ways for faster personalization of T2I models. The works of [12] and [31] propose to learn encoders that predicts initial text embeddings following by complete network finetuning for better subject fidelity. In contrast, our hypernetwork directly predicts low-rank network residuals. SuTI [9] proposes to first create a large paired dataset of input images and the corresponding recontexualized images generated using standard DreamBooth. It then uses this dataset to train a separate network that can perform personalized image generation in a feed-forward manner. Despite mitigating the need for finetuning, the inference model in SuTI does not conserve the original T2I model's integrity and also suffers from a lack of high subject fidelity. InstantBooth [27] and Taming Encoder [17] create a new conditioning branch for the diffusion model, which can be conditioned using a small set of images, or a single image, in order to generate personalized outputs in different styles. Both methods need to train the diffusion model, or the conditioning branch, to achieve this task. 
These methods are trained on large datasets of images (InstantBooth 1.3M samples of bodies from a proprietary dataset, Taming Encoder on CelebA [20] and Getty [1]). FastComposer [32] proposes to use image encoder to predict subject-specific embeddings and focus on the problem of identity blending in multi-subject generation. The work of [5] propose to guide the diffusion process using face recognition loss to generate specific subject images. In such guidance techniques, it is usually difficult to balance diversity in recontextualizations and subject fidelity while also keeping the generations within the image distribution. Face0 [29] proposes to condition a T2I model on face embeddings so that one can generate subject-specific images in a feedforward manner without any test-time optimization. Celeb-basis [34] proposes to learn PCA basis of celebrity name embeddings which are then used for efficient personalization of T2I models. In contrast to these existing techniques, we propose a novel hypernetwork based approach to directly predict low-rank network residuals for a given subject. ## 3 Preliminaries **Latent Diffusion Models (LDM)**. Text-to-Image (T2I) diffusion models \(\mathcal{D}_{\theta}(\epsilon,\mathbf{c})\) iteratively denoises a given noise map \(\epsilon\in\mathbb{R}^{h\times w}\) into an image \(I\) following the description of a text prompt \(T\), which is converted into an input text embedding \(\mathbf{c}=\Theta(T)\) using a text encoder \(\Theta\). In this work, we use Stable Diffusion [24], a specific instantiation of LDM [24]. Briefly, LDM consists of 3 main components: An image encoder that encodes a given image into latent code; a decoder that decodes the latent code back to image pixels; and a U-Net denoising network \(\mathcal{D}\) that iteratively denoises a noisy latent code. See [24] for more details. **DreamBooth**[25] provides a network fine-tuning strategy to adapt a given T2I denoising network \(\mathcal{D}_{\theta}\) to generate images of a specific subject. At a high-level, DreamBooth optimizes all the diffusion network weights \(\theta\) on a few given subject images while also retaining the generalization ability of the original model with class-specific prior preservation loss [25]. In the case of Stable Diffusion [24], this amounts to finetuning the entire denoising UNet has over 1GB of parameters. In addition, DreamBooth on a single subject takes about 5 minutes with 1K training iterations. **Low Rank Adaptation (LoRA)**[16; 2] provides a memory-efficient and faster technique for DreamBooth. Specifically, LoRa proposes to finetune the network weight residuals instead of the entire weights. That is, for a layer \(l\) with weight matrix \(W\in\mathbb{R}^{n\times m}\), LoRa proposes to finetune the residuals \(\Delta W\). For diffusion models, LoRa is usually applied for the cross and self-attention layers of the network [2]. A key aspect of LoRa is the decomposition of \(\Delta W\) matrix into low-rank matrices \(A\in\mathbb{R}^{n\times r}\) and \(B\in\mathbb{R}^{r\times m}\): \(\Delta W=AB\). The key idea here is that \(r<<n\) and the combined number of weights in both \(A\) and \(B\) is much lower than the number of parameters in the original residual \(\Delta W\). Priors work show that this low-rank residual finetuning is an effective technique that preserves several favorable properties of the original DreamBooth while also being memory-efficient as well as fast, remarkably even when we set \(r=1\). 
For stable diffusion 1.5 model, LoRA-DreamBooth with \(r=1\) has approximately 386K parameters corresponding to only about 1.6MB in size. ## 4 Method Our approach consists of 3 core elements which we explain in this section. We begin by introducing the concept of the Lightweight DreamBooth (LiDB) and demonstrate how the Low-Rank decomposition (LoRa) of the weights can be further decomposed to effectively minimize the number of personalized weights within the model. Next, we discuss the HyperNetwork training and the architecture the model entails, which enables us to predict the LiDB weights from a single image. Lastly, we present the concept of rank-relaxed fast fine-tuning, a technique that enables us to significantly amplify the fidelity of the output subject within a few seconds. Fig. 2 shows the overview of hypernetwork training followed by fast fine-tuning strategy in our HyperDreamBooth technique. ### Lightweight DreamBooth (LiDB) Given our objective of generating the personalized subset of weights directly using a HyperNetwork, it would be beneficial to reduce their number to a minimum while maintaining strong results for subject fidelity, editability and style diversity. To this end, we propose a new low-dimensional weight space for model personalization which allows for personalized diffusion models that are 10,000 times smaller than a DreamBooth model and more than 10 times smaller than a LoRA DreamBooth model. Our final version has only 30K variables and takes up only 120 KB of storage space. The core idea behind Lightweight DreamBooth (LiDB) is to further decompose the weight-space of a rank-1 LoRa residuals. Specifically, we do this using a random orthogonal incomplete basis within the rank-1 LoRA weight-space. We illustrate the idea in Figure 3. The approach can also be understood as further decomposing the Down (\(A\)) and Up (\(B\)) matrices of LoRA into two matrices each: \(A=A_{\text{aux}}A_{\text{train}}\) with \(A_{\text{aux}}\in\mathbb{R}^{n\times a}\) and \(A_{\text{train}}\in\mathbb{R}^{a\times r}\) and \(B=B_{\text{train}}B_{\text{aux}}\) with \(B_{\text{train}}\in\mathbb{R}^{r\times b}\) and \(B_{\text{aux}}\in\mathbb{R}^{b\times m}\). where the _aux_ layers are randomly initialized with row-wise orthogonal vectors and are frozen; and the train layers are learned. Two new hyperparameters are introduced: \(a\) and \(b\), which we set experimentally. Thus the weight-residual in a LiDB linear layer is represented as: \[\Delta Wx=A_{\text{aux}}A_{\text{train}}B_{\text{train}}B_{\text{aux}}, \tag{1}\] where \(r<<\text{min}(n,m)\), \(a<n\) and \(b<m\). \(A_{\text{aux}}\) and \(B_{\text{aux}}\) are randomly initialized with orthogonal row vectors with constant magnitude - and frozen, and \(B_{\text{train}}\) and \(A_{\text{train}}\) are learnable. Surprisingly, we find that with \(a=100\) and \(b=50\), which yields models that have only 30K trainable variables and are 120 KB in size, personalization results are strong and maintain subject fidelity, editability and style diversity. We show results for personalization using LiDB in the experiments section. Figure 3: **Lightweight DreamBooth:** we propose a new low-dimensional weight-space for model personalization generated by a random orthogonal incomplete basis inside LoRA weight-space. This achieves models of roughly 100KB of size (**0.01%** of original DreamBooth and **7.5%** of LoRA DreamBooth size) and, surprisingly, is sufficient to achieve strong personalization results with solid editability. 
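As a minimal sketch of equation 1 (illustrative only, not the released implementation; the orthogonal initialization below is a rough stand-in for the paper's row-wise orthogonal auxiliary bases), a LiDB residual for a single linear layer keeps the random auxiliary factors frozen and exposes only the small \(A_{\text{train}}\) and \(B_{\text{train}}\) matrices as trainable parameters.

```python
import torch
import torch.nn as nn

class LiDBLinear(nn.Module):
    """Frozen base linear layer plus a Lightweight-DreamBooth-style residual:
    delta_W = A_aux @ A_train @ B_train @ B_aux, where only A_train and
    B_train are learnable (these are what the hypernetwork predicts)."""

    def __init__(self, base: nn.Linear, a: int = 100, b: int = 50, r: int = 1):
        super().__init__()
        n, m = base.out_features, base.in_features
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Random (semi-)orthogonal, frozen auxiliary bases.
        self.register_buffer("A_aux", torch.empty(n, a))
        self.register_buffer("B_aux", torch.empty(b, m))
        nn.init.orthogonal_(self.A_aux)
        nn.init.orthogonal_(self.B_aux)
        # Tiny trainable factors; LoRA-style init so delta_W starts at zero
        # but gradients still flow through A_train.
        self.A_train = nn.Parameter(torch.randn(a, r) * 0.01)
        self.B_train = nn.Parameter(torch.zeros(r, b))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta_W = self.A_aux @ self.A_train @ self.B_train @ self.B_aux  # (n, m)
        return self.base(x) + x @ delta_W.t()
```

With \(a=100\), \(b=50\) and \(r=1\), the trainable part of each adapted layer is only \(a\cdot r + r\cdot b\) numbers, which is what makes the roughly 30K-variable, 120 KB personalized models described above possible.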
### HyperNetwork for Fast Personalization of Text-to-Image Models We propose a HyperNetwork for fast personalization of a pre-trained T2I model. Let \(\tilde{\theta}\) denote the set of all LiDAR residual matrices: \(A_{\text{train}}\) and \(B_{\text{train}}\) for each of the cross-attention and self-attention layers of the T2I model. In essence, the HyperNetwork \(\mathcal{H}_{\eta}\) with \(\eta\) parameters takes the given image \(\mathbf{x}\) as input and predicts the LiDAR low-rank residuals \(\hat{\theta}=\mathcal{H}_{\eta}(\mathbf{x})\). The HyperNetwork is trained on a dataset of domain-specific images with a vanilla diffusion denoising loss and a weight-space loss: \[L(\mathbf{x})=\alpha||\mathcal{D}_{\hat{\theta}}(\mathbf{x}+\epsilon,\mathbf{c })-\mathbf{x}||_{2}^{2}+\beta||\hat{\theta}-\theta||_{2}^{2}, \tag{2}\] where \(\mathbf{x}\) is the reference image, \(\theta\) are the pre-optimized weight parameters of the personalized model for image \(\mathbf{x}\), \(\mathcal{D}_{\theta}\) is the diffusion model (with weights \(\theta\)) conditioned on the noisy image \(\mathbf{x}+\epsilon\) and the supervisory text-promt \(\mathbf{c}\), and finally \(\alpha\) and \(\beta\) are hyperparameters that control for the relative weight of each loss. Fig. 2 (top) illustrates the hypernetwork training. Supervisory Text PromptWe propose to eschew any type of learned token embedding for this task, and our hypernetwork acts solely to predict the LiDAR weights of the diffusion model. We simply propose to condition the learning process "a [V] face" for all samples, where [V] is a rare identifier described in [25]. At inference time variations of this prompt can be used, to insert semantic modifications, for example "a [V] face in impressionist style". HyperNetwork ArchitectureConcretely, as illustrated in Fig. 4, we separate the HyperNetwork architecture into two parts: a ViT image encoder and a transformer decoder. We use a ViT-H for the encoder architecture and a 2-hidden layer transformer decoder for the decoder architecture. The transformer decoder is a strong fit for this type of weight prediction task, since the output of a diffusion UNet or Text Encoder is sequentially dependent on the weights of the layers, thus in order to personalize a model there is interdependence of the weights from different layers. In previous work [13; 4], this dependency is not rigorously modeled in the HyperNetwork, whereas with a transformer decoder with a positional embedding, this positional dependency is modeled - similar to dependencies between words in a language model transformer. To the best of our knowledge this is the first use of a transformer decoder as a HyperNetwork. Iterative PredictionWe find that the HyperNetwork achieves better and more confident predictions given an iterative learning and prediction scenario [4], where intermediate weight predictions are fed to the HyperNetwork and the network's task is to improve that initial prediction. We only perform the image encoding once, and these extracted features \(\mathbf{f}\) are then used for all rounds of iterative prediction for the HyperNetwork decoding transformer \(\mathcal{T}\). This speeds up training and inference, and we find that it does not affect the quality of results. 
Specifically, the forward pass of \(\mathcal{T}\) becomes: \[\hat{\theta}_{k}=\mathcal{T}(\mathbf{f},\hat{\theta}_{k-1}), \tag{3}\] where \(k\) is the current iteration of weight prediction, and terminates once \(k=s\), where \(s\) is a hyperparameter controlling the maximum amount of iterations. Weights \(\theta\) are initialized to zero for \(k=0\). Trainable linear layers are used to convert the decoder outputs into the final layer weights. We use the CelebAHQ Figure 4: **HyperNetwork Architecture: Our hypernetwork consists of a Visual Transformer (ViT) encoder that translates face images into latent face features that are then concatenated to latent layer weight features that are initiated by zeros. A Transformer Decoder receives the sequence of the concatenated features and predicts the values of the weight features in an iterative manner by refining the initial weights with delta predictions. The final layer weight deltas that will be added to the diffusion network are obtained by passing the decoder outputs through learnable linear layers.** dataset [18] for training the HyperNetwork, and find that we only need 15K identities to achieve strong results, much less data than other concurrent methods. ### Rank-Relaxed Fast Finetuning We find that the initial HyperNetwork prediction is in great measure directionally correct and generates faces with similar semantic attributes (gender, facial hair, hair color, skin color, etc.) as the target face consistently. Nevertheless, fine details are not sufficiently captured. We propose a final fast finetuning step in order to capture such details, which is magnitudes faster than DreamBooth, but achieves virtually identical results with strong subject fidelity, editability and style diversity. Specifically, we first predict personalized diffusion model weights \(\hat{\theta}=\mathcal{H}(\mathbf{x})\) and then subsequently finetune the weights using the diffusion denoising loss \(L(\mathbf{x})=||\mathcal{D}_{\hat{\theta}}(\mathbf{x}+\epsilon,\mathbf{c})- \mathbf{x}||_{2}^{2}\). A key contribution of our work is the idea of _rank-relaxed_ finetuning, where we relax the rank of the LoRA model from \(r=1\) to \(r>1\) before fast finetuning. Specifically, we add the predicted HyperNetwork weights to the overall weights of the model, and then perform LoRA finetuning with a new higher rank. This expands the capability of our method of approximating high-frequency details of the subject, giving higher subject fidelity than methods that are locked to lower ranks of weight updates. To the best of our knowledge we are the first to propose such rank-relaxed LoRA models. We use the same supervision text prompt "a [V] face" this fast finetuning step. We find that given the HyperNetwork initialization, fast finetuning can be done in 40 iterations, which is **25x** faster than DreamBooth [25] and LoRA DreamBooth [2]. We show an example of initial, intermediate and final results in Figure 5. ## 5 Experiments We implement our HyperDreamBooth on the Stable Diffusion v1.5 diffusion model and we predict the LoRa weights for all cross and self-attention layers of the diffusion UNet as well as the CLIP text encoder. For privacy reasons, all face images used for visuals are synthetic, from the SFHQ dataset [6]. For training, we use 15K images from CelebA-HQ [18]. 
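To summarize the two-phase pipeline just described in sketch form (an illustration only; the encoder, decoder, denoising loss and LoRA plumbing are passed in as callables because their exact interfaces are not given in the text), phase one iterates the decoder prediction of equation 3, and phase two runs roughly 40 steps of rank-relaxed finetuning with the plain denoising loss after the predicted LiDB residual has been folded into the model.

```python
import torch

def hypernet_predict(vit_encoder, transformer_decoder, image, num_params: int, s: int = 2):
    """Iterative weight prediction (equation 3): the image is encoded once,
    then the weight estimate is refined s times by the transformer decoder."""
    with torch.no_grad():
        f = vit_encoder(image)                  # latent face features
        theta_hat = torch.zeros(num_params)     # theta_0 initialized to zero
        for _ in range(s):
            theta_hat = transformer_decoder(f, theta_hat)
    return theta_hat

def rank_relaxed_finetune(model, lora_params, denoising_loss, batch,
                          iters: int = 40, lr: float = 1e-4):
    """Rank-relaxed fast finetuning: `model` is assumed to already carry the
    hypernetwork-predicted residual; `lora_params` are the parameters of a
    freshly attached LoRA with rank r > 1, trained on the reconstruction loss
    ||D(x + eps, c) - x||^2 supplied by `denoising_loss`."""
    opt = torch.optim.AdamW(lora_params, lr=lr)
    for _ in range(iters):
        loss = denoising_loss(model, batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```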
### Subject Personalization Results Our method achieves strong personalization results for widely diverse faces, with performance that is identically or surpasses that of the state-of-the art optimization driven methods [25; 11]. Moreover, we achieve very strong editability, with semantic transformations of face identities into highly different domains such as figurines and animated characters, and we conserve the strong style prior of the model which allows for a wide variety of style generations. We show results in Figure 6. Figure 5: **HyperNetwork + Fast Finetuning** achieves strong results. Here we show, for each reference (row), outputs from the initial hypernetwork prediction (HyperNetwork Prediction column), as well as results after HyperNetwork prediction and fast finetuning (HyperNetwork + Fast Finetuning). We also show generated results without the HyperNetwork prediction component, demonstrating its importance. Given the statistical nature of HyperNetwork prediction, some samples that are OOD for the HyperNetwork due to lighting, pose, or other reasons, can yield suboptimal results. Specifically, we identity three types of errors that can occur. There can be (1) a semantic directional error in the HyperNetwork's initial prediction which can yield erroneous semantic information of a subject (wrong eye color, wrong hair type, wrong gender, etc.) (2) incorrect subject detail capture during the fast finetuning phase, which yields samples that are close to the reference identity but not similar enough and (3) underfitting of both HyperNetwork and fast finetuning, which can yield low editability with respect to some styles. Figure 6: **Results Gallery: Our method can generate novel artistic and stylized results of diverse subjects (depicted in an input image, left) with considerable editability while maintaining the integrity to the subject’s key facial characteristics. The output images were generated with the following captions (top-left to bottom-right): “_An Instagram selfie of a [V] face”, “A Pixar character of a [V] face”, “A [V] face with bark skin”, “A [V] face as a rock star”_. Rightmost: “_A professional shot of a [V] face”_. ### Comparisons Qualitative ComparisonsWe compare our method to both Textual Inversion [11] and DreamBooth [25] using the parameters proposed in both works, with the exception that we increase the number of iterations of DreamBooth to 1,200 in order to achieve improved personalization and facial details. Results are shown in Figure 7. We observe that our method outperforms both Textual Inversion and DreamBooth generally, in the one-input-image regime. Quantitative Comparisons and AblationsWe compare our method to Textual Inversion and DreamBooth using a face recognition metric ("Face Rec." using an Inception ResNet, trained on VGGFace2), and the DINO, CLIP-I and CLIP-T metrics proposed in [25]. We use 100 identities from CelebAHQ [18], and 30 prompts, including both simple and complex style-modification and recontextualization prompts for a total of 30,000 samples. We show in Table 1 that our approach obtains the highest scores for all metrics. One thing to note is that face recognition metrics are relatively weak in this specific scenario, given that face recognition networks are only trained on real images and are not trained to recognize the \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Face Rec. 
\(\uparrow\) & DINO \(\uparrow\) & CLIP-I \(\uparrow\) & CLIP-T \(\uparrow\) \\ \hline Ours & **0.655** & **0.473** & **0.577** & **0.286** \\ DreamBooth & 0.618 & 0.441 & 0.546 & 0.282 \\ Textual Inversion & 0.623 & 0.289 & 0.472 & 0.277 \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparisons.** We compare our method for face identity preservation (Face Rec.), subject fidelity (DINO, CLIP-I) and prompt fidelity (CLIP-T) to DreamBooth and Textual Inversion. We find that our method preserves identity and subject fidelity more closely, while also achieving a higher score in prompt fidelity. Figure 7: **Qualitative Comparison:** We compare random generated samples for our method (HyperDreamBooth), DreamBooth and Textual Inversion for two different identities and five different stylistic prompts. We observe that our method generally achieves very strong editability while preserving identity, generally surpassing competing methods in the single-reference regime. same person in different styles. In order to compensate for this, we conduct a user study described further below. We also conduct comparisons to more aggressive DreamBooth training, with lower number of iterations and higher learning rate. Specifically, we use 400 iterations for DreamBooth-Agg-1 and 40 iterations for DreamBooth-Agg-2 instead of 1200 for DreamBooth. We increase the learning rate and tune the weight decay to compensate for the change in number of iterations. Note that DreamBooth-Agg-2 is roughly equivalent to only doing fast finetuning without the hypernetwork component of our work. We show in Table 2 that more aggressive training of DreamBooth generally degrades results when not using our method, which includes a HyperNetwork initialization of the diffusion model weights. Finally, we show an ablation study of our method. We remove the HyperNetwork (No Hyper), only use the HyperNetwork without finetuning (Only Hyper) and also use our full setup without iterative HyperNetwork predictions (k=1). We show results in Table 3 and find that our full setup with iterative prediction achieves best subject fidelity, with a slightly lower prompt following metric. User StudyWe conduct a user study for face identity preservation of outputs and compare our method to DreamBooth and Textual Inversion. Specifically, we present the reference face image and two random generations using the same prompt from our method and the baseline, and ask the user to rate which one has most similar face identity to the reference face image. We test a total of 25 identities, and query 5 users per question, with a total of 1,000 sample pairs evaluated. We take the majority vote for each pair. We present our results in Table 4, where we show a strong preference for face identity preservation of our method. \begin{table} \begin{tabular}{l c} \hline \hline Method & Identity Fidelity \(\uparrow\) \\ \hline Ours & **0.648** \\ DreamBooth & 0.233 \\ Undecided & 0.119 \\ \hline Ours & **0.706** \\ Textual Inversion & 0.216 \\ Undecided & 0.078 \\ \hline \hline \end{tabular} \end{table} Table 4: **User Study**. Since face recognition networks are not trained to recognize the same face with different styles and can sometimes fail catastrophically, we conduct a user study for identity fidelity in our stylized generations and compare one-to-one against DreamBooth and Textual Inversion. Users generally prefer images generated by our approach. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Face Rec. 
\(\uparrow\) & DINO \(\uparrow\) & CLIP-I \(\uparrow\) & CLIP-T \(\uparrow\) \\ \hline Ours & **0.655** & **0.473** & **0.577** & 0.286 \\ DreamBooth & 0.618 & 0.441 & 0.546 & 0.282 \\ DreamBooth-Agg-1 & 0.615 & 0.323 & 0.431 & **0.313** \\ DreamBooth-Agg-2 & 0.616 & 0.360 & 0.467 & 0.302 \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparisons with DreamBooth. We compare our method to DreamBooth with differently tuned hyperparameters to close the optimization time gap. We find that by increasing the learning rate and decreasing the number of iterations there is degradation of results, and DreamBooth does not achieve results similar to our method. DreamBooth-Agg-1 uses 400 iterations and DreamBooth-Agg-2 uses 40 iterations instead of the normal 1200 for our vanilla DreamBooth.** \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Face Rec. \(\uparrow\) & DINO \(\uparrow\) & CLIP-I \(\uparrow\) & CLIP-T \(\uparrow\) \\ \hline Ours & **0.655** & **0.473** & **0.577** & 0.286 \\ No Hyper & 0.647 & 0.392 & 0.498 & **0.299** \\ Only Hyper & 0.631 & 0.414 & 0.501 & 0.298 \\ Ours (k=1) & 0.648 & 0.464 & 0.570 & 0.288 \\ \hline \hline \end{tabular} \end{table} Table 3: **HyperNetwork Ablation**. We ablate several components of our approach, including not using the hypernetwork component at test-time (No Hyper), only using the hypernetwork prediction without fast finetuning (Only Hyper) and using our full method without iterative prediction (k=1). We show that our full method performs best for all fidelity metrics, although No Hyper achieves slightly better prompt following. ## 6 Societal Impact This work aims to empower users with a tool for augmenting their creativity and ability to express themselves through creations in an intuitive manner. However, advanced methods for image generation can affect society in complex ways [26]. Our proposed method inherits many possible concerns that affect this class of image generation, including altering sensitive personal characteristics such as skin color, age and gender, as well as reproducing unfair bias that can already be found in pre-trained model's training data. The underlying open source pre-trained model used in our work, Stable Diffusion, exhibits some of these concerns. All concerns related to our work have been present in the litany of recent personalization work, and the only augmented risk is that our method is more efficient and faster than previous work. In particular, we haven't found in our experiments any difference with respect to previous work on bias, or harmful content, and we have qualitatively found that our method works equally well across different ethnicities, ages, and other important personal characteristics. Nevertheless, future research in generative modeling and model personalization must continue investigating and revalidating these concerns. ## 7 Conclusion In this work, we have presented _HyperDreamBooth_ a novel method for fast and lightweight subject-driven personalization of text-to-image diffusion models. Our method leverages a HyperNetwork to generate Lightweight DreamBooth (LiDB) parameters for a diffusion model with a subsequent fast rank-relaxed finetuning that achieves a significant reduction in size and speed compared to DreamBooth and other optimization-based personalization work. We have demonstrated that our method can produce high-quality and diverse images of faces in different styles and with different semantic modifications, while preserving subject details and model integrity.
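As a supplement to the quantitative comparisons reported above, the sketch below shows one way the image-alignment (CLIP-I) and prompt-alignment (CLIP-T) scores could be computed with an off-the-shelf CLIP model. The checkpoint name and helper functions are illustrative assumptions rather than the exact evaluation code used in the paper; the DINO score is computed analogously, with a self-supervised ViT backbone in place of the CLIP image encoder.

```python
# Hedged sketch of CLIP-I / CLIP-T style metrics; checkpoint is an assumption.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def image_embeddings(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

@torch.no_grad()
def text_embedding(prompt):
    inputs = processor(text=[prompt], return_tensors="pt", padding=True)
    feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def clip_i(real_paths, generated_paths):
    # Mean pairwise cosine similarity between real and generated image embeddings.
    real, gen = image_embeddings(real_paths), image_embeddings(generated_paths)
    return (gen @ real.T).mean().item()

def clip_t(prompt, generated_paths):
    # Mean cosine similarity between the prompt embedding and generated image embeddings.
    txt, gen = text_embedding(prompt), image_embeddings(generated_paths)
    return (gen @ txt.T).mean().item()
```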
2310.05657
A Closer Look into Automatic Evaluation Using Large Language Models
Using large language models (LLMs) to evaluate text quality has recently gained popularity. Several prior works explore the idea of using LLMs for evaluation, but they differ in some details of the evaluation process. In this paper, we analyze LLM evaluation (Chiang and Lee, 2023) and G-Eval (Liu et al., 2023), and we discuss how these details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between ChatGPT's ratings and human ratings and pushes the state-of-the-art (SoTA) correlations on two meta-evaluation datasets.
Cheng-Han Chiang, Hung-yi Lee
2023-10-09T12:12:55Z
http://arxiv.org/abs/2310.05657v1
# A Closer Look into Automatic Evaluation Using Large Language Models ###### Abstract Using large language models (LLMs) to evaluate text quality has recently gained popularity. Some prior works explore the idea of using LLMs for evaluation, while they differ in some details of the evaluation process. In this paper, we analyze _LLM evaluation_Chiang and Lee (2023)1 and _G-Eval_Liu et al. (2023), and we discuss how those details in the evaluation process change how well the ratings given by LLMs correlate with human ratings. We find that the auto Chain-of-Thought (CoT) used in G-Eval does not always make G-Eval more aligned with human ratings. We also show that forcing the LLM to output only a numeric rating, as in G-Eval, is suboptimal. Last, we reveal that asking the LLM to explain its own ratings consistently improves the correlation between the ChatGPT and human ratings and pushes state-of-the-art (SoTA) correlations on two meta-evaluation datasets. Footnote 1: In this paper, the term _LLM evaluation_ is used to refer to the specific method proposed by Chiang and Lee (2023). ## 1 Introduction Large language models (LLMs) trained with task instructions and human feedback can follow natural language instructions to complete a task Askell et al. (2021); Sanh et al. (2022); Wei et al. (2022); Ouyang et al. (2022). Recently, the instruction-following ability of LLMs makes them promising candidates for automatic evaluation Chiang and Lee (2023); Liu et al. (2023); Wang et al. (2023); Huang et al. (2023). By simply instructing the LLMs on how to rate and giving the LLMs the sample to be rated, the LLM can follow the instructions and provide a rating of the sample. Chiang and Lee (2023) propose _LLM evaluation_Liu et al. (2023) propose _G-Eval_; both of which use LLMs to evaluate samples by giving the LLM instructions, and they both show that some LLMs can yield evaluation results that are aligned to the evaluation results of humans. Still, LLM evaluation and G-Eval differ in some specific design choices in the evaluation procedure. Since Chiang and Lee (2023) and Liu et al. (2023) use distinct tasks, it is hard to know how the differences between LLM evaluation and G-Eval affect the evaluation results. This makes practitioners in the future hard to determine how to conduct an automatic evaluation using LLMs. Given that LLM evaluation and G-Eval have already received significant attention shortly after publication, these methods will likely revolutionize the evaluation in NLP. Therefore, conducting a detailed analysis of these approaches is essential and timely. This paper aims to identify the crucial components in LLM evaluation and G-Eval that contribute to stronger correlations with human ratings. Based on our analysis, we provide guidelines on how to use LLMs for automatic evaluations. We have the following findings: * Auto-CoT (proposed by G-Eval) does not always improve the correlation between LLM and human ratings. * Making the LLMs output only a single numeric rating is suboptimal. * Asking the LLMs to rationalize their own ratings significantly improves the correlation between the LLMs' ratings and human ratings. * On two datasets, we improve the best correlation that ChatGPT's rating can achieve, and some correlations even exceed prior SoTA correlations obtained using the ratings of GPT-4 in Liu et al. (2023). 
## 2 Experiment Setup Our paper studies what components in LLM evaluation and G-Eval make the ratings generated by LLM correlate with human ratings better, and we aim to improve the correlation. ### LLM as an Automatic Evaluation Metric Both LLM evaluation [12] and G-Eval [13] propose to ask LLMs to rate a sample regarding some attributes of the sample (e.g., fluency, grammaticality) using a \(k\)-point Likert scale. They give the LLMs (1) **descriptions of the rating task**, (2) the **definition and rating criteria** of the attribute to be rated, (3) the **sample to be rated**, and (4) **a sentence that prompts the LLM to give the rating2**. The LLM outputs a sequence containing the rating. Unless specified, we follow prior works to sample \(N=20\) sequences from the LLM and average those ratings as the final rating. While the two methods share the core concept, they differ in two details. Footnote 2: In our paper, we use different highlight colors to represent different parts of the prompt, as shown in the above text. Additionally, we use cyan to represent the parts generated by **auto Chain-of-Thought** **Difference 1: Auto Chain-of-Thought** The task descriptions and rating criteria in LLM evaluation and G-Eval are all human-written. However, Liu et al. (2023) argue that some evaluated attributes require more than simple definition and evaluation criteria, so they use LLMs to determine the evaluation steps. Specifically, they concatenate the task description, definition, and criteria of the attributes and append a line "Evaluation steps:" to prompt the LLM. The LLM then generates an ordered list containing the step-by-step evaluation steps. They dub this process _auto chain-of-thought (CoT)_. G-Eval uses human-written task instructions and auto-CoT-generated evaluation steps to prompt the LLM to rate the sample. **Difference 2: Prompts for Output** At the end of the input to LLMs, G-Eval uses the prompt "{placeholder} (score only):" to restrict the LLM to output **only the numeric rating**; the placeholder will be replaced by the evaluated attributes. In contrast, LLM evaluation uses the following question to ask the LLM to assign the rating: "How {placeholder} is the sample? (on a scale of 1-k, with 1 being the lowest)". The LLM's **output form is not restricted**. ### Meta-Evaluating an Evaluation Metric Given a sample, an evaluation metric assigns it a rating. To evaluate an evaluation metric, we need a dataset containing human ratings for samples in the dataset. We calculate the correlation coefficient between the ratings obtained by the evaluation metric and the human ratings. A higher correlation indicates the evaluation metric better aligns with human ratings. We adopt Pearson \(r\) and Kendall's \(\tau\) as they are widely used in meta-evaluations [1, 13, 14, 15]. **In our paper, all the _correlation_ refers to the correlation coefficient between the ratings of LLM and human ratings.** Details on the calculation of correlation coefficients are in Appendix C. We use **SummEval**[13] and **Topical-Chat**[12, 14] as the meta-evaluation datasets, following Liu et al. (2023). SummEval is a meta-evaluation dataset for summarization derived from the CNN/DailyMail dataset [10]. Each summary in SummEval is rated by humans based on the _coherence_, _consistency_, _fluency_ of the summary, and _relevance_ between the summary and the source document. 
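To make the rating and meta-evaluation protocol described above concrete, the sketch below samples \(N\) ratings for one sample, averages them, and correlates metric ratings with human ratings. The `query_llm` helper is a hypothetical stand-in for whichever chat API is used (here, ChatGPT); rating parsing, the choice of \(k\), and the aggregation of correlations are simplified relative to the actual experiments.

```python
# Simplified sketch of the LLM-rating protocol and meta-evaluation.
import re
from statistics import mean
from scipy.stats import pearsonr, kendalltau

def query_llm(prompt: str, n: int = 20, temperature: float = 1.0) -> list[str]:
    # Hypothetical stand-in: should return n sampled completions for one prompt.
    raise NotImplementedError("replace with an actual LLM API call")

def parse_rating(text: str, k: int = 5):
    # Take the first integer in 1..k that appears in the completion.
    for tok in re.findall(r"\d+", text):
        if 1 <= int(tok) <= k:
            return int(tok)
    return None

def llm_rating(prompt: str, n: int = 20, k: int = 5) -> float:
    # Sample N completions and average the parsed ratings, as in both
    # LLM evaluation and G-Eval.
    ratings = [r for r in (parse_rating(c, k) for c in query_llm(prompt, n)) if r is not None]
    return mean(ratings)

def meta_evaluate(llm_scores, human_scores):
    # Correlation between metric ratings and human ratings for one attribute.
    r, _ = pearsonr(llm_scores, human_scores)
    tau, _ = kendalltau(llm_scores, human_scores)
    return r, tau
```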
Topical-Chat is a dataset that evaluates the quality of a response given the dialogue history and a piece of knowledge relating to the dialogue. We follow Zhong et al. (2022) to evaluate the _naturalness_, _coherence_, _engagingness_, and _groundedness_ (whether the response is grounded on the provided knowledge) of the response. The dataset details are in Appendix E. ### Large Language Models An LLM used as an evaluation metric should be affordable and accessible to whoever wants to use it. Based on this principle, we use ChatGPT (gpt3.5-turbo-0613) [1] for evaluation since it has lower cost and improved performance compared with other GPT-3.5 models. ChatGPT is also used in LLM evaluation and G-Eval. While Liu et al. (2023) further use GPT-4 [1] in their experiments, we cannot use GPT-4 in our experiments since most people, including us, have limited or no access to GPT-4, making it utterly unsuitable as an evaluation metric. In our preliminary experiments, we also try to use the best open LLM (at the time of writing this manuscript) on Open LLM leaderboard, the falcon-40b-instruct model [1], but we find it cannot follow the instructions and rate the samples very well. Hence, we exclude open LLMs in our paper. ## 3 Better Usage of LLM for Evaluation ### Is Auto CoT Always Useful? Liu et al. (2023) shows that adding the evaluation steps generated by auto CoT improves the correla tion on SummEval when using GPT-4 for evaluation. By scrutinizing their results, we find that the correlations when using auto CoT and not using it often differ by less than 0.02. This raises two questions: (1) Is this difference statistically significant? (2) Does auto CoT yield higher correlations for different LLMs and datasets? To answer these questions, we use ChatGPT to rate the samples in SummEval and Topical-Chat using two sets of prompts, one with the evaluation steps generated using auto CoT and one without those evaluation steps. In this experiment, we follow G-Eval and restrict ChatGPT to output only a numeric score. Following Graham and Baldwin (2014), we use William's test for significance to see if the Pearson's \(r\) of using and not using auto CoT is statistically significantly different. We try to follow the prompts used in G-Eval when possible; still, we have to construct some prompts since Liu et al. (2023) only release part of the prompts and some of which are problematic. We list all the prompts and how they are obtained in Appendix F. The experiment results for SummEval are shown in the block in blue in Table 1. We also list the best results of G-Eval using GPT-4 from Liu et al. (2023) in the first row of Table 1 only for reference. Comparing our results with GPT-4 is unfair since we use ChatGPT, which is weaker than GPT-4. **A more reasonable baseline for our paper is the "_auto CoT + score only_" using ChatGPT on the second row**, which is the method proposed by G-Eval and shows the highest correlation that ChatGPT can achieve in Liu et al. (2023). The numbers here differ from results in Liu et al. (2023) because we carefully reproduce their results ourselves. Back to Table 1, we can see that auto CoT leads to higher correlations for _coherence_, _consistency_, and _relevance_. By William's test, these higher correlations reach statistical significance with \(p\)-values less than \(0.05\). However, using auto CoT results in a lower Pearson's \(r\) for _fluency_, and this inferiority in Pearson's \(r\) is also statistically significant. 
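For reference, the significance test used throughout this section compares two dependent correlations that share the human ratings as the common variable (Williams' test, as in Graham and Baldwin, 2014). The sketch below uses the standard Williams (1959)/Steiger (1980) formulation; it is an illustration and should be checked against a statistics reference before reuse.

```python
# Hedged sketch of Williams' test for two dependent correlations.
import math
from scipy.stats import t as t_dist

def williams_test(r12: float, r13: float, r23: float, n: int):
    """r12: corr(metric A, human); r13: corr(metric B, human);
    r23: corr(metric A, metric B); n: number of rated samples."""
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    t_stat = (r12 - r13) * math.sqrt(
        ((n - 1) * (1 + r23))
        / (2 * K * (n - 1) / (n - 3) + rbar**2 * (1 - r23) ** 3)
    )
    # One-sided p-value for "metric A correlates more strongly than metric B".
    p_one_sided = 1 - t_dist.cdf(t_stat, df=n - 3)
    return t_stat, p_one_sided
```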
The results for Topical-Chat are illustrated in Table 2. For Topical-Chat, the Pearson's \(r\) of using and not using auto CoT are very close for all four attributes except _groundedness_, with differences less than \(0.025\), and these differences are not statistically significant. For _groundedness_, auto CoT even drastically decreases the correlation. In summary, using auto CoT does not yield consistent and meaningful improvements compared with not using CoT. This should not be surprising since the evaluation steps generated with auto CoT often merely paraphrases the evaluation criterion and instructions given to the LLM. ### Prompt for Outputs In this section, we explore if the difference in how ChatGPT is prompted to output makes it's ratings better aligned with human ratings. We use two sets of prompts that share the same task descriptions and evaluation criteria but differ in how they prompt the LLM to generate the output. One uses "score only", as in G-Eval. The other replaces the "score only" with "How {(placeholder}) is the sample? (on a scale of 1-k, with 1 being the lowest)", as in LLM evaluation. We call the latter prompts _free text_ since they do not \begin{table} \begin{tabular}{c|c c|c c c c c c c} \hline \multirow{2}{*}{**Sec.**} & \multicolumn{2}{c|}{**Ablations**} & \multicolumn{2}{c}{**Coherence**} & \multicolumn{2}{c}{**Consistency**} & \multicolumn{2}{c}{**Fluency**} & \multicolumn{2}{c}{**Relevance**} \\ \cline{2-10} & **CoT** & **Output** & \(r\) & \(\tau\) & \(r\) & \(\tau\) & \(r\) & \(\tau\) & \(r\) & \(\tau\) \\ \hline \hline GPT-4\({}^{\dagger}\) & \(?^{\ddagger}\) & _Score only_ & 0.581 & 0.463 & 0.575 & 0.419 & 0.6 & 0.457 & 0.599 & 0.409 \\ \hline \multirow{3}{*}{3.1} & ✓ & \multirow{3}{*}{_Score only_} & 0.45 & 0.359 & 0.37 & 0.286 & 0.319 & 0.203 & 0.403 & 0.327 \\ & ✗ & & 0.344 & 0.248 & 0.328 & 0.185 & **0.361** & 0.177 & 0.353 & 0.248 \\ \cline{2-10} & ✗ & & 0.344 & 0.248 & 0.328 & 0.185 & **0.361** & 0.177 & 0.353 & 0.248 \\ \cline{2-10} & ✗ & _Free Text_ & **0.46** & 0.342 & **0.476** & 0.334 & **0.477** & 0.273 & 0.324 & 0.228 \\ \cline{2-10} & ✗ & _Rate-explain_ & **0.557** & 0.44 & **0.473** & 0.337 & **0.451** & 0.306 & **0.509** & 0.348 \\ \cline{2-10} & ✗ & _Analyze-rate_ & **0.635** & 0.476 & **0.537** & 0.34 & **0.479** & 0.302 & **0.444** & 0.305 \\ \hline \end{tabular} \end{table} Table 1: The Pearson’s \(r\) and Kendall’s \(\tau\) correlation coefficient between LLMs’ ratings and human ratings for SummEval. All the results in this table, except the first row, are from ChatGPT. We consider _auto CoT + score only_ using ChatGPT proposed in G-Eval as the baseline of this paper. We **boldface** the Pearson’s \(r\) statistically significantly higher than the baseline (except GPT-4). \(\dagger\): results from Liu et al. (2023). Some numbers are different because we re-calculate the correlations based on the GPT-4 responses Liu et al. (2023) released. \(\ddagger\): The results of GPT-4 cannot serve as a reasonable comparison since we find something odd in the prompts Liu et al. (2023) use, which we elaborate in Appendix A. restrict the output form. The results for SummEval are shown in the yellow blocks in Table 1, and the results for Topical-Chat are shown in Table 2. We find that allowing ChatGPT to respond to the question freely yields Pearson's \(r\) and Kendall's \(\tau\) much higher than restricting the model to output a single numeric score for almost all attributes of both datasets. 
The higher Pearson's \(r\) of _free text_ compared with _score only_ is statistically significant. The only exception is the _relevance_ of SummEval, where _free text_ yields slightly lower correlations. Initially, we thought ChatGPT aligns better with human ratings in _free text_ because it can generate natural language explanations to justify their rating, making the ratings more correlated with human ratings. However, we observe that the responses of ChatGPT when prompted with _free text_ mostly contain a single numeric rating, which is the same behavior when it is instructed by _score only_. This means that what the model is _allowed to generate_ is more important than what it _really generates_. The above observations make us curious if the correlations can be higher if ChatGPT is instructed to justify its ratings. Inspired by chain-of-thought in Wei et al. (2022) and Kojima et al. (2022) (not the auto CoT in G-Eval), we ask ChatGPT to provide their reasoning and rationales on the ratings. Instead of asking ChatGPT to output only a score, we construct two types of prompts that ask ChatGPT to rationalize its decision. The first type of prompt, called _analyze-rate_, asks ChatGPT to analyze the samples regarding the evaluated criteria first and give the rating. The second type of prompt, called _rate-explain_, asks ChatGPT to provide the numeric ratings first and explain why it gives such a rating. _analyze-rate_ is more like the zero-shot chain-of-thought (Kojima et al., 2022). Refer to Appendix F.1.1 for the exact prompts we use. The results of asking ChatGPT to explain/analyze how they rate the sample are shown in the last two rows in Table 1 and Appendix Table 2. We find that for all attributes of both datasets, _rate-explain_ and _analyze-rate_ both lead to correlations stronger than or at least comparable to the correlation of asking ChatGPT to output only a numeric rating (_score only_). By asking ChatGPT to explain/analyze, we improve the best correlations that can be achieved by ChatGPT in Liu et al. (2023) (the _Auto-CoT + score only_). Moreover, when asked to explain/analyze when rating, ChatGPT's correlation can be better than or comparable to the state-of-the-art correlation coefficients obtained from GPT-4 in Liu et al. (2023) for _coherence_ of SummEval and three attributes of Topical-Chat. We hypothesize that some attributes (e.g., _coherence_ for SummEval) are harder for ChatGPT to rate, so the correlations for these attributes show a larger improvement when ChatGPT explains how it rates the sample. In _rate-explain_, the output of ChatGPT contains a numeric rating followed by some explanations. As an auto-regressive language model, ChatGPT cannot depend on the explanation when generating the rating due to causal attention. If we stop the generation after ChatGPT generates the ratings, the output of _rate-explain_ will only contain the ratings, just like the output forms in _score only_. Although the ratings in _rate-explain_ do not depend on ChatGPT's rationales for the ratings, the ratings still correlate better with human ratings, compared with the ratings in _score only_. 
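Before turning to why this happens, the four output formats compared in this section can be summarized as suffixes appended to the shared task description and criteria. In the sketch below, the _score only_ and _free text_ wordings are quoted from the text, while the _rate-explain_ and _analyze-rate_ wordings are illustrative paraphrases (the exact prompts are listed in Appendix F.1.1) and should be treated as assumptions.

```python
# Illustrative prompt construction for the four output formats.
def build_prompt(task_description: str, criteria: str, sample: str,
                 attribute: str, k: int, output_format: str) -> str:
    suffixes = {
        "score_only": f"{attribute} (score only):",
        "free_text": (f"How {attribute} is the sample? "
                      f"(on a scale of 1-{k}, with 1 being the lowest)"),
        # Paraphrased, not the exact appendix wording:
        "rate_explain": (f"Rate the {attribute} of the sample on a scale of 1-{k} "
                         f"(1 is the lowest), then explain your rating."),
        "analyze_rate": (f"First analyze the sample with respect to {attribute}, "
                         f"then give a rating on a scale of 1-{k} (1 is the lowest)."),
    }
    return "\n\n".join([task_description, criteria, sample, suffixes[output_format]])
```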
We think this is because when ChatGPT knows it needs to explain the ratings, it tends to generate ratings that are easier for it to explain, and a rating that is more \begin{table} \begin{tabular}{c|c c|c c c c c c c} \hline \multirow{2}{*}{**Sec.**} & \multicolumn{2}{c|}{**Ablations**} & \multicolumn{2}{c}{**Naturalness**} & \multicolumn{2}{c}{**Coherence**} & \multicolumn{2}{c}{**Engagingness**} & \multicolumn{2}{c}{**Groundedness**} \\ \cline{2-10} & **CoT** & **Output** & \(r\) & \(\tau\) & \(r\) & \(\tau\) & \(r\) & \(\tau\) & \(r\) & \(\tau\) \\ \hline \hline \multirow{3}{*}{3.1} & ✓ & \multirow{3}{*}{_Score only_} & 0.393 & 0.358 & 0.468 & 0.391 & 0.549 & 0.513 & 0.311 & 0.566 \\ & ✗ & & 0.408 & 0.331 & 0.443 & 0.404 & 0.557 & 0.535 & 0.358 & 0.582 \\ \hline \multirow{3}{*}{3.2} & ✗ & _Score only_ & 0.408 & 0.331 & 0.443 & 0.404 & 0.557 & 0.535 & **0.358** & 0.582 \\ & ✗ & _Free Text_ & **0.464** & 0.476 & 0.524 & 0.426 & **0.611** & 0.557 & **0.563** & 0.666 \\ \cline{1-1} & ✗ & _Rate-explain_ & **0.524** & 0.47 & 0.477 & 0.416 & 0.567 & 0.524 & **0.58** & 0.693 \\ \cline{1-1} & ✗ & _Analyze-rate_ & **0.573** & 0.47 & 0.486 & 0.416 & **0.628** & 0.524 & **0.725** & 0.693 \\ \hline \end{tabular} \end{table} Table 2: The Pearson’s \(r\) and Kendall’s \(\tau\) correlation coefficient between LLMs’ ratings and human ratings for Topical-Chat. All the results in this table, except the first row, are from ChatGPT. We **boldface** the Pearson’s \(r\) statistically significantly higher than _auto CoT + score only_. We underline the Pearson’s \(r\) comparable _auto CoT + score only_. aligned to humans' rating is easier for ChatGPT to explain. ### Empirical Guidelines Based on the analysis and results in this section, we provide the following guideline: **Always ask ChatGPT to explain/analyze when rating.** We do not see _rate-explain_ to be significantly better (or worse) than _analyze-rate_, so it is hard to determine which one to use. A valid method is sampling some ratings using _rate-explain_ and sampling some ratings using _analyze-rate_ and averaging the ratings from the two prompts as the final rating. Using auto CoT is optional since it does not always lead to higher correlations with human ratings. We also find that using auto CoT does not always improve the correlations when ChatGPT is asked to explain; this result is shown in Appendix Table 3. ### Robustness of the Guidelines LLMs are notorious for their performance fluctuation due to the input prompts, and the sequence generated by LLMs can be different when changing the hyperparameters used in decoding. To verify the validity of our empirical guidelines, we conduct the following two sets of experiments: (1) we vary the temperature used in sampling the output from ChatGPT, and (2) we vary the prompt given to ChatGPT. #### 3.4.1 Varying the Temperature We check if our guideline holds if we change the temperature \(T\) during generation. We compare Pearson's \(r\) when using the method proposed in G-Eval (Auto-CoT + score only) with _rate-explain_ and _analyze-rate_ under different temperatures used when generating the output from ChatGPT. We follow Chiang and Lee (2023) and use two temperatures: \(0.7\) and \(0.3\). The results are shown in Appendix Table 5 and summarized as follows: First, when fixing the sampling temperature, we find that _rate-explain_ and _analyze-rate_ always achieve a higher correlation compared with G-Eval. 
This supports our guideline that _"asking the LLM to explain/analyze outperforms the method proposed in G-Eval."_ Next, we observe that the correlation of G-Eval when \(T=0.3\) is much lower than that of \(T=1.0\). This shows that G-Eval is not robust to sampling temperature. Contrarily, we find that the correlations obtained by _rate-explain_ and _analyze-rate_ do not significantly change for different sampling temperatures for almost all cases. This shows that _rate-explain_ and _analyze-rate_ are more robust than G-Eval with respect to the sampling temperature. #### 3.4.2 Changing the Prompts We check if our guideline holds if we change the prompt given to ChatGPT. In this experiment, we changed the prompts to ChatGPT by appending some instructions before the descriptions of the rating task. We tried with two prompts: (1) the HHH prompts and (2) the human annotator prompts. The HHH prompt is designed by Bai et al. (2022) to align the output of LLMs to be more harmless, honest, and helpful. The human annotator prompt is inspired by Chiang and Lee (2023), who use a similar prompt to make the LLM behave as a human annotator. These two prompts will be inserted before the prompt we originally used in our paper. We use these two prompts to inject persona into the LLM. This is inspired by Zeng et al. (2023), which shows that the output of GPT3 can be different when prompted with a different persona. The prompts are detailed in Appendix F.3. The results are shown in Table 6 and summarized as follows: _rate-explain_ and _analyze-rate_ consistently outperform the G-eval when using the human annotator prompts and the HHH prompts. This indicates that our guidelines are robust toward different prompts. We also find that the correlations of G-Eval significantly drop when adding the human-annotator prompts or HHH prompts. On the other hand, the correlation for _rate-explain_ and _analyze-rate_ do not significantly decrease when adding the human-annotator prompt and the HHH prompt. This shows that asking the LLM to explain is more robust to the variation of the prompts. ## 4 Conclusion We study how to better use ChatGPT as an automatic evaluation tool by scrutinizing LLM evaluation and G-Eval. We provide concrete guidelines and show that by using those guidelines, the correlations of several evaluated attributes given by ChatGPT, a publicly usable model, can be higher than or comparable to the ratings given by GPT-4, a highly restricted and pricey model. We also show that the evaluation results based on our guidelines improve the best correlation that ChatGPT's rating can achieve. We believe our results and guidelines help future researchers better use LLMs for evaluation. ### Limitations There are three main limitations of this paper. 1. We only use ChatGPT to conduct the experiments in this paper. We explain why we chose ChatGPT in Section 2.3. We believe that using ChatGPT is already enough since we show that the correlations obtained by using ChatGPT are already comparable to or better than the previous SoTA results obtained by GPT-4. 2. We only conduct analysis using two tasks, while we know that NLP has more diverse tasks. We do not guarantee that our observations can generalize to all the other datasets. We recommend the users verify the effectiveness of using LLM to evaluate the tasks of interest. 3. We cannot fairly compare our results with Liu et al. (2023), the previous SoTA results, due to multiple reasons. We explain those reasons in Appendix A. 
## Ethics Statement Our paper follows the ACL Code of Ethics. We do not see a particular harmful outcome of our paper. The code and datasets for reproducing our experiments can be found at https://github.com/d223302/A-Closer-Look-To-LLM-Evaluation/. ## Acknowledgements We want to thank the reviewers for providing detailed feedback and actionable suggestions, which helped us strengthen our paper. We also want to thank the senior committee members for monitoring the reviewing process. Cheng-Han Chiang is supported by a Ph.D. scholarship program by Delta Electronics.
2308.00565
AOSoar: Autonomous Orographic Soaring of a Micro Air Vehicle
Utilizing wind hovering techniques of soaring birds can save energy expenditure and improve the flight endurance of micro air vehicles (MAVs). Here, we present a novel method for fully autonomous orographic soaring without a priori knowledge of the wind field. Specifically, we devise an Incremental Nonlinear Dynamic Inversion (INDI) controller with control allocation, adapting it for autonomous soaring. This allows for both soaring and the use of the throttle if necessary, without changing any gain or parameter during the flight. Furthermore, we propose a simulated-annealing-based optimization method to search for soaring positions. This enables for the first time an MAV to autonomously find a feasible soaring position while minimizing throttle usage and other control efforts. Autonomous orographic soaring was performed in the wind tunnel. The wind speed and incline of a ramp were changed during the soaring flight. The MAV was able to perform autonomous orographic soaring for flight times of up to 30 minutes. The mean throttle usage was only 0.25% for the entire soaring flight, whereas normal powered flight requires 38%. Also, it was shown that the MAV can find a new soaring spot when the wind field changes during the flight.
Sunyou Hwang, Bart D. W. Remes, Guido C. H. E. de Croon
2023-08-01T14:09:19Z
http://arxiv.org/abs/2308.00565v1
# AOSoar: Autonomous Orographic Soaring of a Micro Air Vehicle ###### Abstract Utilizing wind hovering techniques of soaring birds can save energy expenditure and improve the flight endurance of micro air vehicles (MAVs). Here, we present a novel method for fully autonomous orographic soaring without a priori knowledge of the wind field. Specifically, we devise an Incremental Nonlinear Dynamic Inversion (INDI) controller with control allocation, adapting it for autonomous soaring. This allows for both soaring and the use of the throttle if necessary, without changing any gain or parameter during the flight. Furthermore, we propose a simulated-annealing-based optimization method to search for soaring positions. This enables for the first time an MAV to autonomously find a feasible soaring position while minimizing throttle usage and other control efforts. Autonomous orographic soaring was performed in the wind tunnel. The wind speed and incline of a ramp were changed during the soaring flight. The MAV was able to perform autonomous orographic soaring for flight times of up to 30 minutes. The mean throttle usage was only 0.25% for the entire soaring flight, whereas normal powered flight requires 38%. Also, it was shown that the MAV can find a new soaring spot when the wind field changes during the flight. ## I Introduction Flight endurance is one of the major factors holding back the real-world application of micro air vehicles (MAVs). For low size, weight, and power (SWaP) MAVs, flight endurance mainly depends on the power density of the battery, which is limited [1, 2], without fundamental progress on the horizon. One way to improve flight endurance is to exploit energy from the environment. Birds like albatrosses, vultrues, ospreys, and kestrels are well known for their ability to actively use the wind to minimize their energy expenditure to fly longer distances or time [3, 4, 5]. For example, vultrues utilize energy from rising air columns created by uneven ground heating, called thermals [6]. Thermal soaring of unmanned aerial vehicles (UAVs) has been studied in various aspects, not only through manually developed guidance and control strategies [7, 8] but also through reinforcement learning of detecting and exploiting thermals [9, 10]. Another type of soaring is orographic soaring. Kestrels are often observed hovering at a position over a dune without flapping their wings, which is called wind-hovering [11]. This is a good example of the orographic soaring, using the updraft generated by obstacles such as hills, mountains, and buildings when the wind hits the obstacle. Wind-hovering can be useful for remaining in a single place for observation, but also for prolonging the flight range. For example, gulls appear to plan their path to exploit orographic soaring on the way to their destination to save energy [12, 13]. In this paper, we solely focus on orographic soaring. Wind fields around obstacles and flight conditions for orographic soaring were analyzed using simulations and measurement data [14, 15, 16, 17]. These studies introduced unique opportunities and possibilities for exploiting orographic updrafts using MAVs. The feasibility and strategies of orographic soaring were also discussed. However, there was no actual flight demonstration performed in these studies. Orographic soaring of MAVs in a real-world environment has been demonstrated in only a few studies [18, 19]. 
However, the method in [18] was based on a priori knowledge of the entire wind field to generate a pre-defined trajectory to the soaring spot, while the method in [19] required manually positioning the MAV at a precise initial soaring position before switching on the autonomous soaring controller. To achieve a fully autonomous soaring, considerable challenges remain. In practice, accurately predicting or measuring the wind field is not feasible. Moreover, the MAV has to be able to explore and look for a feasible soaring position autonomously. In this paper, we demonstrate for the first time the autonomous orographic soaring of an MAV without a priori knowledge of the wind field nor precise initial positioning of the MAV by a human pilot. To achieve this, we present (i) a local search algorithm to find a soaring position and (ii) an INDI controller with control allocation that enables the MAV Fig. 1: Autonomous orographic soaring of a micro air vehicle. Without a priori knowledge of the wind field, the MAV successfully performs autonomous soaring in the wind tunnel using little to zero throttle throughout the flight. The MAV can find a new position to soar when environmental conditions vary without any human intervention or manual parameter changes. In this picture, the increasing slope angle changes the wind field, and the MAV autonomously finds a new soaring position. It moves to the front and downward because of the combination of changes in the updraft and the MAV’s sink rate. to use the same controller setting during the entire flight. An important advantage of the proposed control method is that the MAV does not need to switch controllers between soaring and navigation and can use the throttle whenever necessary. We demonstrate the proposed method with a real-world flight in a wind tunnel. Moreover, we validate the versatility of the proposed methods by changing the wind speed and updraft during the autonomous soaring flight. The paper is structured as follows: In section II, an INDI-based soaring controller and searching method are presented. In section III, we introduce the MAV and the wind tunnel test setups. In section IV, the results from an autonomous soaring flight in the wind tunnel are presented. We discuss the flight test in section V. Finally, we draw conclusions and suggest future research directions in section VI. ## II Methods There are many challenges for autonomous orographing. In this article, we focused on two main aspects. The first one is the controller. There is a unique challenge for orographic soaring because the aim is to maintain the position without using the throttle. Also, it is only feasible in a small updraft region of the wind field with high wind speed. Previous studies used a glider plane without a motor or changed controller or gains when the MAV enters the soaring mode. However, for sake of control fluidity, it would be desirable to use a single controller. We adopted INDI with control allocation to enable using a single controller during the whole flight, regardless of navigation or soaring. We present our control method in section II-A. The second aspect is to find feasible soaring positions. For fully autonomous flight, the MAV should be able to find where it can soar. In previous studies, it was determined either from a priori knowledge of the wind field or by a human pilot. We present a method for the MAV to autonomously find feasible soaring positions based on simulated annealing in section II-B. 
### _INDI controller with allocation_ Traditional PID controllers were used in most previous research, however, many of the authors have mentioned the need for a more advanced controller, especially for gust rejection. Therefore, we adopted an INDI controller for soaring flights. INDI is good at disturbance rejection and requires little model information [20, 21, 22]. INDI is an incremental form of nonlinear dynamic inversion. It controls angular acceleration \(\dot{\omega}\) in an incremental way. The only required knowledge is the control effectiveness \(G\), mapping an increment in the control input \(u\) to a resulting rotational acceleration increment \(\dot{\omega}-\dot{\omega}_{0}\) : \[\dot{\omega}=\dot{\omega}_{0}+G(u-u_{0}) \tag{1}\] \[u=u_{0}+G^{-1}(\nu-\dot{\omega}_{0})\] Where \(\nu\) is the virtual control vector, and subscript 0 indicates a time in the past. The control effectiveness depends on the inertia of the vehicle. However, directly measuring the inertia can be challenging. Alternatively, it can be estimated from flight test data with actuator inputs with angular acceleration. Using the flight test data, the control effectiveness matrix(G) can be estimated by dividing angular acceleration by a control input vector(\(u\)). Practically, we conducted several manual outdoor flight tests to log radio control input commands and angular accelerations by post-processing the inertial measurement units reacting to the radio input. The control effectiveness was calculated by dividing the change of angular acceleration by the change of input commands from radio control, for each pitch, roll, and yaw axis at various airspeeds. After that, the effectiveness values were fitted as a quadratic function and scheduled by airspeed measurement, because the effectiveness of control surfaces depends on the dynamic pressure \(q=\frac{1}{2}\rho V^{2}\). Figure 2 shows the overview of the soaring controller. An INDI controller is used for both the inner loop and the outer loop. For the outer loop, linear acceleration error is calculated from position error and fed into the INDI outer loop. The MAV's position(\(\xi\)), velocity(\(\dot{\xi}\)), and a reference position are passed to a PD controller, and a linear acceleration reference(\(\ddot{\xi}_{ref}\)) goes into the INDI outer loop [22]. Then, pitch, roll references, and thrust increment are calculated from the outer loop. \(K_{\xi}\) and \(K_{\xi}\) are the gains for position error and velocity error, respectively. \[\ddot{\xi}_{ref}=K_{\xi}(K_{\xi}(\xi_{ref}-\xi)-\dot{\xi}) \tag{2}\] To keep the heading towards the wind, we calculate a yaw(\(\psi\)) reference. We set a virtual waypoint at 15 meters into the center of the cross-section of the wind tunnel. The yaw reference is calculated based on the error between the MAV's position and the virtual waypoint. Let \(x_{ref},y_{ref}\) be the x and y position of the virtual waypoint, and \(x,y\) the longitudinal and lateral position of the MAV. Then the yaw reference is determined by the following equation: \[\psi_{ref}=\text{atan}(\frac{y_{ref}-y}{x_{ref}-x}) \tag{3}\] The attitude reference(\(\eta_{ref}=[\dot{\phi}_{ref}~{}\dot{\theta}_{ref}~{}\dot{\psi}_{ref}]^{T}\)) is passed to an inner loop PD controller. Then, the angular acceleration reference(\(\dot{\omega}_{ref}\)) calculated from the PD controller and the thrust increment are passed into the inner loop INDI controller. 
\[\dot{\omega}_{ref}=K_{\omega}(K_{\eta}(\eta_{ref}-\eta)-\omega) \tag{4}\] The throttle is useful to navigate to the soaring region or to deal with a strong gust. It is also necessary to use throttle when the MAV cannot fly without power due to the wind conditions. However, changing gains or switching the controller from one to another in flight to enable or disable the throttle is not desirable. We utilized control allocation to cope with this problem. An INDI controller with control allocation allows the MAV to use the same controller and parameters throughout the flight, but seamlessly cut off or increase the throttle when desired. In particular, this makes it unnecessary to change anything when switching between navigating, searching for a soaring spot, and soaring flight. Control allocation of INDI was originally developed to prevent actuator saturation for over-actuated vehicles, because of aggressive yaw control behaviour. The weighted least square (WLS) algorithm was integrated with the INDI controller for inner loop control allocation in a paper by Smeur et al. [23]. Here, we use control allocation for achieving as still soaring as possible. This implies a preference for reducing throttle and minimizing accelerations. This leads to the following control allocation cost function: \[\begin{split} C_{alloc}(u_{wls})=\|W_{u}(u_{wls}-u_{p,wls})\|^{2} +\\ \gamma\|W_{v}(G_{o}u_{wls}-v_{wls})\|^{2}\end{split} \tag{5}\] Where \(u_{wls}\) is a control increment vector, \(W_{u}\) is a weighting matrix for the control inputs, \(W_{v}\) is a weighting matrix for the control objective, \(G_{o}\) is the control effectiveness matrix for the outer loop, \(v_{wls}\) is the virtual control increment command for the outer loop, \(u_{p}\) is the preferred control increment vector, and \(\gamma\) is a scale factor. In this study, control allocation is used to prioritize the controls to minimize thrust for the outer loop controller. We set a higher priority to control pitch than the thrust by adjusting weight for the WLS optimization. Therefore, for \(W_{v}\), we choose the weights to be 1, 100, 1 for roll(_rad_), pitch(_rad_), and thrust in a range of [0, 9600] respectively. For \(W_{u}\), we choose 1 for all axes, \(\gamma\) is \(10^{6}\), and \(u_{p}\) is a zero vector. ### _AOSearch: Autonomous Orographic Search for a soaring location_ To exploit updrafts, MAVs have to find feasible soaring locations autonomously. If the environment is static and a prior knowledge of the wind field is provided, it can be calculated from the MAV model and wind speed by finding an equilibrium. However, in the real world, it is difficult to measure the entire wind field. Furthermore, the wind speed may change during the flight. Thus, we developed an algorithm to find the soaring location which is applicable in a non-static environment, without any prior knowledge of the wind field. Simulated annealing is a local meta-heuristic search technique [24, 25]. It attempts to minimize a cost function by taking steps to neighboring positions in the search space. It either accepts or rejects the solution based on an acceptance probability which usually decreases to zero as the "temperature" decreases. Initially, steps that increase the cost are allowed, but eventually, simulated annealing becomes a greedy algorithm. Based on simulated annealing, we implemented _AOSearch_: Autonomous Orographic Search algorithm. 
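To summarize the controller in code before turning to the search procedure, the sketch below implements the incremental law of Eq. (1) and an unconstrained solution of the allocation objective in Eq. (5). The flight code solves Eq. (5) with a bounded weighted-least-squares (active-set) solver and uses flight-identified effectiveness matrices; the matrices and increments below are placeholders, not the flight-tested values.

```python
# Minimal numpy sketch of INDI (Eq. 1) and the outer-loop allocation (Eq. 5).
import numpy as np

def indi_increment(u0, omega_dot_0, nu, G):
    """Eq. (1): u = u0 + G^{-1} (nu - omega_dot_0)."""
    return u0 + np.linalg.pinv(G) @ (nu - omega_dot_0)

def allocate_outer_loop(G_o, v_wls, u_p, W_u, W_v, gamma=1e6):
    """Unconstrained minimizer of
    ||W_u (u - u_p)||^2 + gamma * ||W_v (G_o u - v_wls)||^2
    (actuator bounds handled by the real WLS solver are omitted here)."""
    A = np.vstack([W_u, np.sqrt(gamma) * W_v @ G_o])
    b = np.concatenate([W_u @ u_p, np.sqrt(gamma) * W_v @ v_wls])
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

# Example with the weighting described above: pitch is weighted 100x more than
# roll and thrust, so the allocator prefers pitching over using the throttle.
W_v = np.diag([1.0, 100.0, 1.0])     # roll, pitch, thrust objective weights
W_u = np.eye(3)
u_p = np.zeros(3)                    # preferred increment: do nothing
G_o = np.eye(3)                      # placeholder outer-loop effectiveness
v_wls = np.array([0.0, 0.1, -0.2])   # placeholder virtual control increment
du = allocate_outer_loop(G_o, v_wls, u_p, W_u, W_v)
```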
In our case, there could be environmental changes at any time during the flight, regardless of the progress of the search. For example, wind speed can be changed over time. Therefore, the temperature does not decrease at each step. The temperature is set to zero, so the search algorithm only accepts better solutions. When the MAV finds a position at which it can soar, the search is finished. Hence, we implemented a threshold cost. If the value of the cost function is lower than the threshold at a certain position, it is considered converged and does not move to a new neighbour. When the value of the cost function exceeds the threshold because of an environmental change, the MAV restarts the search until it finds a new position that satisfies the threshold condition. We want to minimize energy expenditure as well as stay in the same position as much as possible just like kestrels hovering without moving its head for observation. In our case, thrust is the primary source of energy consumption to minimize. Pitch rate is also contributing because controlling the elevator spends energy. Furthermore, we aim for wind-hovering, which means that the MAV maintains its position. So, we want to minimize both horizontal and vertical speed. The cost function(\(C_{search}\)) captures all requirements for still, orographic soaring, with the MAV able to keep its downward view as static as possible with minimum throttle usage, also meaning minimal position and pitch changes. Hence, it is a function of thrust(\(T\)[\(\%\)]), horizontal and vertical ground speed(\(\dot{x}\), \(\dot{z}\)[\(m/s\)]), and pitch rate(\(\theta\)[\(rad/s\)]) with a gain for each parameters. \[C_{search}=k_{1}T+k_{2}|\dot{x}|+k_{3}|\dot{z}|+k_{4}|\dot{\theta}| \tag{6}\] The gains were \(k_{1}\)=9.6, \(k_{2}\)=1.6, \(k_{3}\)=1.0, and \(k_{4}\)=10. The threshold value was set to 43. The gains and threshold value were determined empirically, by observing the values when Fig. 2: A schematic overview of the soaring controller. \(\xi\) is the position, \(\eta\) is the attitude, \(\omega\) is the angular rate, \(\psi\) is the yaw angle, \(T\) is the thrust, and \(u\) is actuator commands. Subscript \(ref\) means reference, and subscript \(f\) is for filtered signals. the MAV was soaring in a stable manner. _AOSearch:_ the autonomous orographic search method is described in algorithm 1. First, calculate the cost function. Based on the value of the cost function, a step size(\(S\)) is selected. If the value is lower than the threshold, the MAV stays at the current position(\(Pos(s_{new})\)). If the value of the cost function has increased compared to the previous position(\(Pos(s)\)), go back to the previous position. If the value has decreased, keep the same direction. If it has just come back from a previous position, pick a new random neighbour. To get a random neighbour \(s_{new}\), pick a random direction(\(Dir(s_{new})\)) among four direction vectors of [x, z]: forward[1, 0], backward[-1, 0], up[0, 1], down[0, -1]. Then, a new soaring position is: \(Pos(s_{new})=Pos(s)+Dir(s_{new})\times S\). Repeat the process until the cost function value becomes less than the threshold. The step size (S) is set to four steps depending on the value of the cost function. As the value gets lower, the step size decreases from 0.3 \(m\) to 0.05 \(m\). The logic behind this is that the MAV takes a bigger step to explore the wind field when the energy consumption at the current position is high. 
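A compact sketch of one AOSearch update is given below, using the gains, threshold, and step-size schedule described here and formalized just after this paragraph (the full procedure is listed in Algorithm 1). Position setpoints are restricted to the vertical (x, z) plane, and the bookkeeping of the "returned" flag follows the description in the text; this is an illustration rather than the onboard implementation.

```python
# Sketch of one AOSearch step (see Algorithm 1).
import random

K = (9.6, 1.6, 1.0, 10.0)                    # k1..k4 from Eq. (6)
THRESHOLD = 43.0
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # forward, backward, up, down

def cost(T, xdot, zdot, thetadot):
    k1, k2, k3, k4 = K
    return k1 * T + k2 * abs(xdot) + k3 * abs(zdot) + k4 * abs(thetadot)

def step_size(c):
    if c >= 3.0 * THRESHOLD:
        return 0.30
    if c >= 2.0 * THRESHOLD:
        return 0.20
    if c >= 1.5 * THRESHOLD:
        return 0.10
    return 0.05

def aosearch_step(state, c_new, c_old):
    """state holds the current setpoint, last direction and a 'returned' flag;
    returns the next position setpoint."""
    x, z = state["pos"]
    if c_new < THRESHOLD:                 # good enough: stay and soar here
        return state["pos"]
    if c_new < c_old:                     # improved since the last move
        if state["returned"]:
            state["dir"] = random.choice(DIRS)
            state["returned"] = False
        # otherwise keep the same direction
    else:                                 # got worse: go back
        state["dir"] = tuple(-d for d in state["dir"])
        state["returned"] = True
    s = step_size(c_new)
    dx, dz = state["dir"]
    state["pos"] = (x + dx * s, z + dz * s)
    return state["pos"]
```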
When the value of the cost function is low at the current position, the MAV tries to fine-tune its position. \[S=\begin{cases}0.3,&\text{for }C\geq 3\times threshold\\ 0.2,&\text{for }3\times threshold>C\geq 2\times threshold\\ 0.1,&\text{for }2\times threshold>C\geq 1.5\times threshold\\ 0.05,&\text{otherwise}\end{cases}\] ``` 1:\(C(s_{new})\gets k_{1}T+k_{2}|i|+k_{3}|\hat{e}|+k_{4}|\hat{\theta}|\)\(\triangleright\) Calculate cost function 2:S \(\leftarrow\) Calculate a step size 3:if\(C(s_{new})<threshold\)then 4: Stay at the current position 5:else 6:if\(C(s_{new})<C(s)\)then 7:if returned is True then 8:\(Dir(s_{new})\gets random\)\(\triangleright\) pick a random direction 9:\(returned\leftarrow False\) 10:else 11:\(Dir(s_{new})\gets Dir(s)\)\(\triangleright\) keep the same direction 12:endif 13:else 14:\(Dir(s_{new})\gets-Dir(s)\)\(\triangleright\) go back to the previous position 15:\(returned\leftarrow True\) 16:endif 17:\(Pos(s_{new})=Pos(s)+Dir(s_{new})\times S\) 18: Move to a new soaring position \(Pos(s_{new})\) 19:endif ``` **Algorithm 1** AOSearch ## III Hardware and test setup ### _Eclipson model C 3d-printed model plane_ An Eclipson model C [26] airplane was used for the flight tests. It is a 3d-printed plane, which makes it easy to replace parts in case of a crash. It was printed with lightweight polylactic acid (LW-PLA) to reduce weight and increase aerodynamic performance. Unlike other studies that used a glider or a flying wing with flaps, we chose a 5-channel model plane to have more control. It has four servos for control surfaces (elevator, left aileron, right aileron, and rudder) and an electric motor at the front. Especially the rudder is useful to keep its heading toward the wind while maintaining its lateral position to stay in a small updraft region. The throttle is also necessary to navigate and fly safely in case of a strong gust or sudden environmental changes. The MAV is shown in Figure 3. It has a wingspan of 1100\(mm\), 18\(dm^{2}\) wing surface area, and an aspect ratio of 6.9. A Pixhawk 4 with Paparazzi autopilot open-source software [27] was used. Pixhawk 4 board has a processor and an inertial measurement unit. A GPS module and optical trackers were used for outdoor and indoor localization, respectively. An airspeed sensor was mounted under the wing and calibrated in the wind tunnel. The weight of the aircraft with electronics was 595 grams excluding battery, and a total of 716 grams including a 1.5\(A\) Lithium-Polymer battery. ### _Wind tunnel test setup_ The TU Delft open jet facility (OJF) is a wind tunnel with a \(2.85m\times 2.85m\) cross-section. We installed a ramp in front of the wind outlet to generate an orographic updraft. The OJF and the slope are shown in Fig. 4. The slope angle and wind speed were adjustable during the flight. For safety reasons, a rope system was attached to the ceiling. An Opti-track system installed in the wind tunnel was calibrated before the flight test. In order to get insight into the wind field with this setup, a computational fluid dynamics (CFD) simulation was performed using ANSYS fluent [28]. The contours of horizontal and vertical wind speeds are shown in Fig.5. The strongest updraft occurs close to the end of the slope, and the wind speed decreases near the slope because of the boundary friction. Fig. 4: TU Delft OJF and the slope setting for generating orographic updraft. The cross-section of the wind tunnel is \(2.85m\times 2.85m\). A wooden plate (\(2.44m\times 2.44m\)) was used for the slope. 
The slope was placed in front of the outlet of the wind tunnel. Fig. 3: Eclipson model C 3D-printed airplane. It has \(1.1m\) wingspan and a weight of 716\(g\) including a 1.5\(A\) Li-Po battery. Pixhawk4 is equipped with Paparazzi open-source autopilot software. An airspeed sensor is mounted under the wing. GPS sensor and opti-track markers are used for localization. ### _Glide polar_ To measure the sink rate, a series of outdoor flight tests was performed. The MAV had the same hardware and weight setting with the indoor tests for consistency. We flew the MAV manually on a calm day, and retrieved sink rate and airspeed data from flight logs during parts of the flight where the motor was turned off. The data points and the glide polar are shown in Fig. 6. The data was divided into two sections and fitted with a fourth-order polynomial function. The sink rate sharply increases around 9.8 m/s of airspeed. This is because the propeller started windmilling at that airspeed, generating additional drag. ## IV Test results ### _Autonomous Soaring_ Indoor flight tests were performed to demonstrate autonomous soaring using the proposed methods in this paper. We first started the flight in manual mode for safety reasons until the nominal wind speed reached 8.0 \(m/s\). Once the wind speed was stabilized, we switched on the autonomous flight mode. The MAV was fully autonomous from that point. It first hovered at a standby position and stabilized itself. Then it started the local search and autonomously moved its target soaring position until it converged according to algorithm 1. The standby position was set where requires approximately 20% of the throttle for hover. During the flight, we changed the environmental conditions, either the wind speed or the slope angle. Everything was running onboard, except position measurements from the opti-track. Data from onboard sensors was written on an SD card using a high speed flight logger and retrieved after the flight. There were two experimental cases: one in which we changed the wind speed with a fixed slope angle, and another in which we changed the slope angle with a fixed wind speed. In both cases, the MAV was able to successfully soar using very little energy without any prior knowledge of the wind field. Furthermore, it was shown that autonomous soaring in a changing environment is possible by using the proposed methods. #### Iv-C1 Changing nominal wind speed The first case is to change the wind speed during soaring. We started autonomous soaring at a wind speed of 8.5 m/s, and then slowly increased it to 9.8 m/s, and decreased it again. When the wind speed changes, the MAV either stays at the same position if possible, or tries to find another position. If the current position becomes not feasible for soaring due to the change of the wind field, it restarts searching for a new feasible soaring position. Figure 7 shows the wind speed, horizontal and vertical position, and throttle usage during the flight. Figure 8 shows the trajectory during the autonomous search from an initial position at 8.5 \(m/s\) wind speed. The throttle started of at 20%, but after the autonomous search for a soaring location turned on at 352 seconds, the MAV found a good soaring location in 185 seconds, achieving zero throttle at 537 seconds. From that time on, the throttle usage was very low. At 2244 seconds, we retook manual control to make the MAV land. Manual control is shown by the shaded regions. The wind speed was changed over time, which led to changes in position. 
There is no clear proportional relationship between wind speed and the positions. We will analyze the chosen soaring positions further in the next section. The flight time was a total of 30 minutes excluding a short manual flight after launch and landing. During the soaring, the mean throttle usage was 0.25 %. Note that it is a significant decrease in throttle usage compared to non-soaring flights. 38% of the throttle was required for the MAV to hover in the wind tunnel without a ramp at 8.5 m/s wind speed, and 30 to 50% of the throttle was normally used during outdoor flights. #### Iv-C2 Varying slope angle In the second case we changed the slope angle during soaring. The procedure remained the same as the first case, but instead of changing the wind speed, we changed the slope inclination while the nominal wind speed was fixed at 8.5 m/s. The slope angle, the MAV's horizontal and vertical position, and the throttle usage are shown in figure 9. The slope angle was set from 22.1 degrees to 25.2 degrees. The step size of the slope angle was not consistent because of the practical difficulty of moving the ramp precisely during the flight. The throttle usage was 25% at the start. We started the autonomous search at 1440 seconds. The MAV achieved zero throttle Fig. 5: Contours of horizontal and vertical wind speed over a slope in the wind tunnel. The slope angle was set to 23.2 degrees and the wind speed from the OJF was set to 8.5 m/s. Fig. 6: Glide polar of the Eclipson model C airplane. The data was divided into two sections and fitted with a fourth-order polynomial. Additional drag was generated at around 9.8 m/s of the airspeed because of the propeller windmilling. flight at 1572 seconds, 132 seconds after starting the search. At 2950 seconds, we retook manual control for landing. With a slope angle of 22.1 degrees, the inclination was small, so less updraft was generated than other conditions. Because of that, the combination of the horizontal and vertical wind was not favorable to hover stationary. Nevertheless, it was still able to soar using zero throttle at 22.1 degrees slope angle, allowing some movement. The soaring position moved frontward when the slope inclination increased, and it moved backward again when the slope angle decreased. This is because the feasible soaring region pushed forward as the slope angle increased. We will analyze the change of the wind field and chosen positions in the next section. The soaring flight time was a total of 25 minutes, excluding short manual flights after launch and before landing. During the soaring, the mean throttle usage was 0.25 %. Note that in both test cases, the flight was stopped before using up all the battery because of the size of the flight log file and limited onboard memory. ## V Analysis and Discussion For further analysis, we ran a CFD simulation for each combination of wind speed and slope angle. Based on the wind field calculated by the CFD simulation and the MAV's sink rate, a feasible soaring region can be determined. The MAV can soar where the updraft and sink rate are balanced (i.e. excess updraft = updraft - sink rate = 0), which is a white region in figures 10 and 11. The trajectory shows that the MAV mostly stayed in the region where the updraft and sink rate is balanced. The plots show that the MAV generally resides inside or very close to the white areas, in which the predicted excess updraft is zero. 
The MAV does occasionally fly in locations with slightly non-zero excess updraft, which we attribute to imperfections in the predictions. These imperfections can have various sources: differences between the CFD simulation and the real world, imperfect sink-rate measurement and fitting, airspeed sensor measurement errors, and steady-state errors of the OJF wind speed controller within a 0.1 m/s range. One thing we noticed is that the vertical position of the MAV went down as the wind speed increased. Although this may seem counter-intuitive, the glide polar explains the behavior. The minimum sink rate of the MAV occurs at around 9.1 m/s of airspeed. When the airspeed gets higher than that, the sink rate increases significantly. As a result, more updraft is required for the MAV to stay aloft, and the MAV lowers its altitude to find a stronger updraft. In the case of a varying slope angle, the position of the MAV moved forward as the inclination increased. This is because more updraft is generated with a larger slope angle, so the position that balances the sink rate and the updraft moved forward. In both cases, the throttle usage and energy consumption decreased significantly after switching on the soaring mode. We set the standby position where the MAV can hover using approximately 20% of the throttle. During the soaring flight, the throttle usage dropped and remained close to 0% almost all the time. The mean throttle usage of the entire soaring flight was 0.25% for both cases, compared to 38% for a nominal flight. There were a few moments when the MAV used throttle because it was necessary to avoid a stall or recover its position. Note that the feasible soaring region is only 10 to 20 cm in vertical extent, which makes it very hard to maintain the position within the region, even for a human pilot. Also, we changed the wind field during the flight, so the MAV sometimes had to overcome a sudden change using the throttle.

Fig. 7: Horizontal and vertical position, and throttle usage of the MAV during the flight according to the change of the wind speed from the wind tunnel. Fig. 8: Flight trajectory during the autonomous search, from 352 to 537 seconds in case 1. The MAV started searching at the standby position (STBDY), marked as a black dot. It found a good soaring location and achieved zero throttle in 185 seconds. Fig. 9: Horizontal and vertical position, and throttle usage of the MAV during the flight according to the change of the slope angle. Fig. 10: Case 1: autonomous soaring in changing wind speed from 8.5 to 9.8 m/s. Contours of excess updraft and the MAV trajectory are shown for each nominal wind speed. In the last plot, the range of sink rate for each case is shown on the glide polar. Fig. 11: Case 2: autonomous soaring at varying slope angles from 22.1 to 25.2 degrees. The contours of excess updraft and the MAV trajectory for each slope angle are shown. In the last plot, the glide polar with the range of the sink rate for each case is presented. The sink rates stayed in similar ranges because the nominal wind speed was fixed at 8.5 m/s in this case.

## VI Conclusion In this paper, we demonstrated the first autonomous orographic soaring in a real-world environment without a priori knowledge of the wind field, pre-defined trajectory planning, or manual initialization by a human pilot. Fully autonomous orographic soaring was performed in the wind tunnel with changing environment settings. 
With a combination of the local search algorithm and the INDI controller with control allocation, the MAV was able to soar autonomously with almost zero throttle in the updraft for over 25 minutes of flight time. Furthermore, we verified that the MAV can find a new soaring position when the updraft changes by using the proposed search method. For future work, performing an outdoor flight test will be the next step. The proposed methods in this paper are applicable to outdoor flights because they do not depend on any pre-measured environmental condition. However, the MAV will have to be more aware of its environment, as the wind may change direction and it will have to sense and avoid obstacles during the search. Using additional sensors can be helpful for recognizing the surroundings.
2304.00868
Theoretical insights on structural, electronic and thermoelectric properties of inorganic biphenylene: non-benzenoid Boron nitride
First-principles calculations predict a stable boron-nitride analogue of the biphenylene carbon network (BPN), named the inorganic biphenylene network (I-BPN). A comparison between BPN and I-BPN has been carried out to examine the stability of the I-BPN monolayer. We calculate the formation energy, the phonon dispersion, and the mechanical parameters (Young's modulus and Poisson's ratio) to assess mechanical stability. It is found that the stability of I-BPN is comparable with that of BPN. The lattice transport properties reveal that the phonon thermal conductivity of I-BPN is roughly an order of magnitude lower than that of BPN. The electronic band structure reveals that I-BPN is a semiconductor with an indirect bandgap of 1.88 eV, with the valence band maximum (VBM) at Y and the conduction band minimum (CBM) at the X high-symmetry point. In addition, the thermoelectric parameters, such as the Seebeck coefficient, show the highest peak value of 0.00292 V/K at 324 K. Electronic transport properties reveal that I-BPN is highly anisotropic along the x- and y-axes. Furthermore, the thermoelectric power factor as a function of chemical potential shows a peak value of 0.0056 W/mK2 (at 900 K) along the x-axis in the p-type doping region. The electronic figure of merit shows amplified peaks approaching 1. The total figure of merit (including lattice transport parameters) shows peak values of 0.378 (0.21) for the p-type and 0.24 (0.198) for the n-type region along the x (y) direction. Notably, the obtained ZT peak values are higher than those of other B-N compositions.
Ajay Kumar, Parbati Senapati, Prakash parida
2023-04-03T10:44:29Z
http://arxiv.org/abs/2304.00868v2
Theoretical insights on structural, electronic and thermoelectric properties of inorganic biphenylene: non-benzenoid Boron nitride ###### Abstract The first-principles calculations predict a stable biphenylene carbon network (BPN) like the Boron-nitride structure named inorganic biphenylene network (I-BPN). A comparison has been done between BPN and I-BPN to examine the stability of the I-BPN monolayer. We calculate the formation energy, phonon dispersion and mechanical parameters: young modulus and Poisson ratio for mechanical stability. It has been found that the stability of I-BPN is comparable with the BPN. The lattice transport properties reveal that the phonon thermal conductivity of I-BPN is 10\({}^{\rm th}\) order low than the BPN. The electronic band structure reveals that I-BPN is a semiconductor with an indirect bandgap of 1.88 eV with valence band maximum (VBM) at Y and conduction band maximum (CBM) at the X high symmetry point. In addition, the thermoelectric parameters, such as the seebeck coefficient, show the highest peak value of 0.00292 V/K at 324K. Electronic transport properties reveal that I-BPN is highly anisotropic along the x and y-axes. Furthermore, the thermoelectric power factor as a function of chemical potential shows a peak value of 0.0056 W/mK\({}^{2}\) (900K) along the x-axis in the p-type doping region. An electronic figure of merit shows an amplified peak approach to 1. The total figure of merit (including lattice transport parameters) shows peak values of 0.378 (0.21) for p-type and 0.24 (0.198) n-type regions along the x(y) direction. It is notice that the obtain ZT peaks values are higher than any B-N compositions. Keywords: - Boron-nitride, Band-gap, Semiconductor ## Introduction Graphene, a monolayer of sp2-hybridized carbon atoms arranged in a honeycomb structure. It is one of the earliest fascinating two-dimensional materials, famous for its linear dispersion relation. It extended a new era to study atomically thin-layered materials[1-4]. Other two-dimensional mono-atomic layers like silicene, germanene, borophene, Phosphorene, etc., and hetero-structures like GaAs, hexagonal Boron-nitride (h-BN), transition metal dichalcogenides (TMDs) are to be investigated after the experimental confirmation of graphene by exfoliation technique in 2005[5-10]. After graphene, h-BN draws attention in theoretical as well as experimental studies. It consists of a structure similar to graphene. Still, instead of two similar carbon atoms in a graphene unit cell, h-BN has a boron (B) atom at one site, and the nitrogen (N) atom occupies the other site of the structure. Moreover, h-BN is iso-electronic with graphene as both accommodate 12 electrons per unit cell. Despite all this uniformity, the electronic band structure of both systems is entirely different. Graphene is a semi-metal Dirac material, whereas h-BN is an insulator with a large bandgap of 5.6 eV. The binary composition of h-BN distinguishes it from carbon allotropes, and the bond between boron and nitrogen atoms is partially ionic due to electronegativity differences. This partial charge transfer from B to N could disclose a significant variation in the band structure [11-13]. Furthermore, h-BN is an intriguing material for two-dimensional systems because of its chemical inertness, good insulators for use as a substrate to frame thin films, and thermal stability mixed with mechanical robustness [14-16]. 
Also, because of its broad bandgap range and atomic ordered thickness, h-BN has recently been described as a complementary metal-oxide-semiconductor (CMOS), the most reliable gate insulator in low-dimensional material-based transistors. In addition, combining stacked graphene layers and h-BN hetero-structure in the energy storage devices can help compensate for the loss in accessible surface area and ion storage capacity[17, 18]. Moreover, graphene, h-BN, and their heterogeneous structure are also reported in magnetic applications and spintronics points of view [19]. Further, the piezoelectric voltage coefficient of a single BN nanoflake (NF) is being studied, and the energy harvesting capabilities of 2D h-BN NFs-based flexible piezoelectric energy devices are reported [20]. There are huge numbers of works that try to explore the BN in different application visions. In this work, we proposed an inorganic 2D biphenylene carbon sheet-like monolayer of B-N named I-BPN. Fan _et al._ successfully synthesised the ultra-flat BPN with regularly spaced four, six, and eight-membered rings of sp\({}^{2}\) carbon atoms via a bottom-up approach [21]. Yunhao _et al._ and Seunghan _et al._ individually reported the modulation of the electronic properties of BPN by varying the concentration of hydrogen and halogens at different sites [22, 23]. Bafekry _et al._ reported the electronic and dielectric properties of BPN using the first principle calculation[24]. The inorganic structure equivalent to graphene is boron and nitrogen (h-BN), which has the same honeycomb lattice arrangement as graphene. Comparing atomic cell dimensions and bond lengths of graphene and h-BN, both are nearly identical and possess almost similar thermal and mechanical stability. As a result, h-BN can produce carbon-like nanostructures such as porous structures, closed cage structures 22, and nanotubes. Although the B-N composition network of carbon allotropes shows different electronic properties, these are stuck in their geometrical and mechanical stability with carbon-based structures. There are number of articles that reported the stable crystal of carbon allotropes and its identical network for boron-nitrogen composition. For illustration, graphenylene: a unique porous network of non-delocalised carbon atoms, was reported [25], and its B-N composed network called inorganic-graphenylene is studied [26]. The B-N composition of other 2D carbon allotropes networks like T-carbon and graphyne were also studied [27, 28]. In light of these factors, we investigated the electronic properties and mechanical stability of I-BPN, a biphenylene-like network of boron and nitrogen atoms composition. A comparative study between BPN and I-BPN has been done regarding geometry, mechanical strength, phonon calculation and lattice thermal conductivity. The electronic properties reveal that I-BPN is a semiconductor. It is also interesting to have insight into its thermoelectric parameters. **Computational details** BPN and I-BPN structures has been studied in the framework of Density Functional Theory (DFT) using Vienna Ab Initio Simulation Package (VASP). It used the Projector Augmented Wave (PAW) method that plays an important role in interactions between the valence electrons with the core electrons along with periodic boundary conditions. The pseudopotentials of B, C and N, are estimated for electronic configurations having valence electrons 2s\({}^{2}\)2p\({}^{1}\), 2s\({}^{2}\)2p\({}^{2}\) and 2s\({}^{2}\)2p\({}^{3}\): respectively. 
We considered Generalised Gradient approximation (GGA) for exchange-correlation potential formulated by Perdew-Burke-Ernzerhof (PBE). The plane wave energy cutoff value and K-mesh grid for optimisation and the self-consistent calculation are 600 eV and 25\(\times\)21\(\times\)1, respectively, chosen by the total energy convergence test. The proposed crystal structures are fully relaxed with the force value 10\({}^{-3}\)eV/A per atom by a conjugate-gradient algorithm. Further, the energy tolerance throughout the calculation is 10\({}^{-8}\) eV. Both structures have rectangular unit cells, which follow the periodic condition in the x-y plane. A vacuum of 20 A is given to avoid the interaction along the Z-axis. The electronic band calculation has been performed for both monolayers along the high symmetry points \(\Gamma\)-X-S-Y-\(\Gamma\). We investigate the phonon dispersion calculation to show the dynamical stability of the BPN and I-BPN monolayer. A supercell of 4\(\times\)3\(\times\)1 has been created to compute the vibrational spectra of atoms for Both BPN and I-BPN monolayers using finite displacement method. We use the phonopy package along with VASP to create the force sets and force constants for phonons calculation. To simulate ab initio molecular dynamic (AIMD) calculations, a Nose thermostat canonical ensemble with a time step of 1 fs over 5000 fs at two different temperature 300K and 600K has been used. In order to study the mechanical properties, the generalised Hooke's law is given as \[\sigma_{ij}=C_{ijkl}\epsilon_{kl}\ ;\ C_{ijkl}=\frac{1}{2}\frac{\partial^{2}U}{ \partial\epsilon_{ij}\partial\epsilon_{kl}}\ (where\ i,j,k,l=1,2,3,4,5,6). \tag{1}\] where \(\sigma\) and \(\epsilon\) is second rank stress and strain tensor and \(C\) is the fourth rank stiffness constants. the elastic strain energy per unit area equation is used which is expressed below; \[U(\epsilon_{11},\epsilon_{22})=\frac{1}{2}C_{11}\epsilon_{11}^{2}+C_{22} \epsilon_{22}^{2}+C_{12}\epsilon_{1}\epsilon_{2}+\cdots \tag{2}\] here, \(\epsilon_{11}\)_and_\(\epsilon_{22}\) is strain along the x and y-direction, and \(C_{11},C_{22}\)_C\({}_{44}\)and_\(C_{12}\) are stiffness constants. Lattice transport properties such as phonon thermal conductivity (\(\kappa_{ph}\)) has been calculated using the ShengBTE package[29]. A supercell of 4\(\times\)3\(\times\)1 and k-mesh of 9\(\times\)11\(\times\)1 has been used in the calculations for both BPN and I-BPN monolayers to find the lattice thermal conductivity. The Phonopy package has been used to obtain second-order force constants with VASP, and the symmetric displacements are used to calculate forces required for dynamical matrices. The same pseudopotentials and plane-wave basis cutoff energy has been used along with a 9\(\times\)11\(\times\)1 k-point grid. The VASP and thirdorder.py packages have been used to perform third-order anharmonic IFCs on the 4\(\times\)3\(\times\)1 supercell of the BPN and I-BPN. The third IFC considers interactions with up to four nearest neighbours. BPN and I-BPN generate 1618 displacement datasets of the 4\(\times\)3\(\times\)1 supercells, each with a displacement of 0.01 A. Further, the second and third IFCs have been used as input to the ShengBTE package for solving the linearised phonon Boltzmann transport equation using an iterative method and a dense 120\(\times\)120\(\times\)1 k-mesh for precise \(\kappa_{L}\) calculation. 
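Before turning to the lattice thermal conductivity expression below, the strain-energy relation of Eq. (2) can be illustrated with a small fitting sketch. The energies here are synthetic placeholders generated from assumed stiffness values, and the standard quadratic form with 1/2 factors on the diagonal terms is assumed; in the actual workflow the energies per unit area would come from the strained VASP calculations.

```python
import numpy as np

strains = np.linspace(-0.02, 0.02, 9)                  # applied in-plane strains (assumed grid)

# Placeholder strain energies per unit area, U = (E_strained - E_0)/A_0 in N/m.
C11_ref, C22_ref, C12_ref = 230.0, 200.0, 70.0         # illustrative values only
U_xx = 0.5 * C11_ref * strains**2                      # uniaxial strain along x
U_yy = 0.5 * C22_ref * strains**2                      # uniaxial strain along y
U_bi = 0.5 * (C11_ref + C22_ref + 2.0 * C12_ref) * strains**2   # equibiaxial strain

def quad_coeff(U):
    """Quadratic coefficient of a polynomial fit U(strain)."""
    return np.polyfit(strains, U, 2)[0]

C11 = 2.0 * quad_coeff(U_xx)                   # U = 1/2 C11 e^2
C22 = 2.0 * quad_coeff(U_yy)                   # U = 1/2 C22 e^2
C12 = quad_coeff(U_bi) - 0.5 * (C11 + C22)     # U = 1/2 (C11 + C22 + 2 C12) e^2
print(C11, C22, C12)                           # recovered stiffness constants [N/m]
```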
The mathematical relation of \(\kappa_{L}\) is obtained by single-mode relaxation time approximation (RTA) within ShengBTE is given as follows: \[\kappa_{ph}^{\alpha\beta}=\frac{1}{K_{B}T^{2}\Omega N}\sum_{\lambda}f_{0}(f_{ 0}+1)\ (\hbar\omega_{\lambda})^{2}\upsilon_{\lambda}^{\alpha}F_{\lambda}^{\beta} \tag{3}\] where \(\Omega\) is the volume of the unit cell, N is the number of q points uniformly distributed over the Brillouin zone, \(\omega_{\lambda}\) is the angular frequency of phonon modes, \(\upsilon_{\lambda}^{\alpha}\) is the phonon group velocity, \(f_{0}\) is the Bose-Einstein distribution function, and \(F_{\lambda}^{\beta}\) is the projection of the mean free displacement along the \(\beta\) direction. Furthermore, non-analytical corrections have been applied to the force constants for phonon dispersion and related calculations, such as born effective charge and dielectric constant. The electronic transport properties have been investigated using semi-classical Boltzmann transport equations with energy-independent relaxation time and rigid band approximations, as implemented in the BoltzTraP program. The following equations can be used to express the thermoelectric-related variables such as electrical conductivity, conductivity related to thermal gradient, and electronic thermal conductivity; \[\sigma_{\alpha\beta}(T;\mu)=\frac{1}{\Omega}\int\sigma_{\alpha\beta}(\varepsilon) \left[-\frac{\partial f_{\mu}(T;\varepsilon)}{\partial\varepsilon}\right]d\varepsilon \tag{4}\] \[v_{\alpha\beta}(T;\mu)=\frac{1}{eT\Omega}\int\sigma_{\alpha\beta}(\varepsilon) (\varepsilon-\mu)\left[-\frac{\partial f_{\mu}(T;\varepsilon)}{\partial \varepsilon}\right]d\varepsilon \tag{5}\] \[\xi^{0}_{\alpha\beta}(T;\mu)=\frac{1}{e^{2}\Omega T}\int\sigma_{\alpha\beta}( \varepsilon)(\varepsilon-\mu)^{2}\left[-\frac{\partial f_{\mu}(T;\varepsilon) }{\partial\varepsilon}\right]d\varepsilon \tag{6}\] The Seebeck coefficient can be easily calculated by using these tensors quantities, \[S_{\alpha\beta}=\sum_{\gamma}(\sigma^{-1})_{\alpha\gamma}v_{\beta\gamma} \tag{7}\] Where \(T\), \(\Omega\), \(\mu\) and \(f\) are the absolute temperature, cell volume, chemical potential, and Fermi-Dirac distribution, respectively. \(\sigma_{\alpha\beta}(\varepsilon)\) represent the density of state energy projected conductivity tensor which is expressed by \[\sigma_{\alpha\beta}(\varepsilon)=\frac{1}{N}\sum_{l,k}\sigma_{\alpha\beta}(i, k)\ \ \delta(\varepsilon-\varepsilon_{i,k}) \tag{8}\] Where \(N\) represents the no. of k-points, \(\varepsilon_{i,k}\) are electron-band energies (band index \(i\)) and \(\sigma_{\alpha\beta}(i,k)\) denotes the conductivity tensor is as follows, \[\sigma_{\alpha\beta}(i,k)=e^{2}\tau_{i,k}\vartheta_{\alpha}(i,k)\vartheta_{ \beta}(i,k) \tag{9}\] Where e is the charge of the electron, \(\tau_{i,k}\) is the relaxation time, \(\vartheta_{\alpha}(i,k)\) and \(\vartheta_{\beta}(i,k)\) are group velocities expressed as, \(\vartheta_{\alpha}(i,k)=\frac{1}{\mathrm{h}}\frac{\partial\varepsilon_{i,k}} {\partial k_{\alpha}},\vartheta_{\beta}(i,k)=\frac{1}{\mathrm{h}}\frac{ \partial\varepsilon_{i,k}}{\partial k_{\beta}}\) and \(\alpha,\beta\) are tensor indices. The value of relaxation time (\(\tau\)) must be computed in order to get the absolute value of these coefficients because BoltzTraP integrates electrical and electronic thermal conductivity in terms of \(\tau\). 
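Before the relaxation time is specified, the transport integrals of Eqs. (4), (5) and (7) can be illustrated numerically for a scalar transport distribution (the Seebeck coefficient itself does not depend on \(\tau\)). The energy grid, the model \(\sigma(\varepsilon)\), and the chemical potentials below are placeholders for illustration; this is not BoltzTraP's internal implementation.

```python
import numpy as np

kB = 8.617333e-5                       # Boltzmann constant [eV/K]

def seebeck(E, sigma_E, mu, T):
    """Seebeck coefficient [V/K] from an energy-projected conductivity sigma(E).

    Evaluates the ratio of Eqs. (5) and (4): S = K1 / (T*K0) with
    K_n = sum_E sigma(E) (E-mu)^n (-df/dE) dE; with E in eV the factor of e cancels.
    """
    x = (E - mu) / (kB * T)
    minus_dfdE = 1.0 / (4.0 * kB * T * np.cosh(0.5 * x) ** 2)   # -df/dE [1/eV]
    dE = E[1] - E[0]
    K0 = np.sum(sigma_E * minus_dfdE) * dE
    K1 = np.sum(sigma_E * (E - mu) * minus_dfdE) * dE
    return K1 / (T * K0)

# Illustrative gapped transport distribution, symmetric about E = 0 (placeholder).
E = np.linspace(-2.0, 2.0, 4001)                               # energy grid [eV]
sigma_E = np.where(np.abs(E) > 0.94, np.abs(E) - 0.94, 0.0)
print(seebeck(E, sigma_E, mu=-0.9, T=300.0),                   # near the valence band edge
      seebeck(E, sigma_E, mu=+0.9, T=300.0))                   # near the conduction band edge
```

With a symmetric model distribution the two values have equal magnitude and opposite sign, consistent with the electron–hole symmetry of S about \(\mu=0\) discussed later in the text.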
We determine \(\tau\) by applying the deformation potential (DP) theory to the effective mass (\(m\)*) and mobility (\(\mu_{2D}\)) of the charge carrier. The carrier mobility has been calculated using the Bardeen and Shockley DP theory. Furthermore, the effective masses of the electron (\(\mathrm{m_{e}}^{*}\)) and hole (\(\mathrm{m_{h}}^{*}\)) has been estimated by using the parabolic curvature of the conduction band minimum (for electrons) and valence band maximum (for hole) band edge close to the Fermi level respectively. The mathematical expression for \(m^{*}\) is \[m^{*}=\frac{\mathrm{h}^{2}}{\frac{\partial^{2}\varepsilon}{\partial k^{2}}} \tag{10}\] Moreover, the carrier mobility and relaxation time can be calculated using these following relations \[\mu_{2D}=\frac{2e\mathrm{h}^{3}C}{3k_{B}T|m^{*}|^{2}E_{1}^{2}}\ \ \ \&\ \ \ \ \tau=\frac{m^{*}}{e}\,\mu_{2D}\] (11a) & (11b) where \(k_{B}\) and T are the Boltzmann constant and temperature, respectively, and \(m^{*}\) is the effective mass of charge carrier. \(C\) is the elastic modulus (\(C=\frac{1}{A_{0}}\frac{\partial^{2}E}{\partial x^{2}}\), where \(E,A_{0}\) and \(\chi\) represents total energy in different deformation states, lattice area at equilibrium and the applied biaxial strain, respectively) which is determined by quadratically fitting the energy-strain data and \(E_{1}\) is the DP constant that reflects the strain-induced shift of the band edges (valence band maximum (VBM) for holes and conduction band minimum (CBM) for electrons). ### Results and Discussion We investigated the structural and electronic properties of monolayers BPN and I-BPN by the first principle. Figure 1(a) and 1(b) depicts the atomic structure of BPN and I-BPN. Both BPN and I-BPN are identical in unit cells and atomic arrangements, with the difference being the presence of biatoms in I-BPN and monoatomic BPN. Both have rectangular geometry (space group \(Pmmm\); group no. 47) with six carbon atoms in BPN and three B and N atoms in I-BPN. The optimised BPN lattice parameters are a = 3.75A and b = 4.52A, which is consistent with earlier theoretical studies. In contrast, the parameters of the I-BPN structure are a =3.94A and b =4.57 A. BPN and I-BPN are non-benzenoid carbon and B-N networks composed of octagonal, tetragonal, and hexagonal rings, resulting in slight variations in carbon-carbon bond lengths. The b/a ratio of 1.20(1.16) for BPN (I-BPN) and the presence of different atomic arrangements along the x Figure 1:- (a) and (b) display the optimised atomic structure of BPN and I-BPN (boron in green colour and nitrogen in white colour) monolayers with dotted box as a unit cell respectively, (c) shows high symmetry K-path in Ist Brillioun zone (d) and (e) phonon dispersion spectra of BPN and I-BPN monolayers respectively. and y-axes indicate an anisotropic structure. This lead to an expectation of anisotropic physical properties. There are three distinct C-C (1.45A, 1.40A and 1.44A) bonds in BPN, whereas I-BPN has five different bonds, three distinct B-N (1.45A, 1.41A and 1.47A) and two each for B-B(1.62 A) and N-N(1.55A) marked as numbers in figure 1(a) and 1(b). In BPN, all of the C atoms are triangulated as expected by sp\({}^{2}\) hybridisation, but the angles have different values of 90\({}^{\circ}\), 110\({}^{\circ}\), 125\({}^{\circ}\), and 145\({}^{\circ}\). 
These angles are distorted by a few degrees, as 90\({}^{\circ}\)(91.38\({}^{\circ}\), 88.6\({}^{\circ}\)), 110\({}^{\circ}\)(108.48\({}^{\circ}\), 110.8\({}^{\circ}\)), 125\({}^{\circ}\)(125.37\({}^{\circ}\), 125.98\({}^{\circ}\)), and 145\({}^{\circ}\)(145.8\({}^{\circ}\), 144.24\({}^{\circ}\)) for BPN (I-BPN) respectively. The cohesive energy (E\({}_{\rm coh}\)) of I-BPN was calculated to assure its stability by using below equation: - \[E_{coh}=\frac{E_{monolayer}-(xE_{A}+yE_{B})}{x+y}\] where \(E_{monolayer}\) is the total energy of unit cell of monolayer and \(E_{A}\) & \(E_{B}\) is the energy of single atom of constituents in the monolayer with \(x\) and \(y\) are no's of atoms respectively. According to our calculation, cohesive energy of the I-BPN monolayer is \(E_{coh}\) = -7.78 eV/atom. This negative cohesive energy shows that the I-BPN monolayer is stable and that it can be synthesized via chemical vapour deposition or molecular beam epitaxy. The E\({}_{\rm coh}\) of several 2D materials was computed and reported in Table 1 for comparison, and it has been found that the magnitude of the cohesive energy of I-BPN is significant to other experimentally synthesised monolayers[30]. Table no. 1 cohesive energy and optimised lattice constant of graphene, h-BN, BPN and I-BPN monolayers. \begin{tabular}{|c|c|c|c|c|} \hline monolayer & graphene & h-BN & BPN & I-BPN \\ \hline Lattice constant (A) & a=b=2.466 & a=b=2.512 & a=3.75, b=4.52 & a=3.94, b=4.57 \\ \hline \(E_{coh}\) (eV/atom) & -9.16 & -8.72 & -8.69 & -7.78 \\ \hline \end{tabular} ### Phonon dispersion calculation Figure 1(d) & (e) displays the phonon spectra of BPN and I-BPN. The phonon spectra analysis shows no negative frequency, indicating that the single layer of the I-BPN structure is stable. The dispersion of phononic bands of BPN is identical to the earlier report[31]. Additionally, we also observed that the lowest optical branch intensely hybridised with the acoustic branches in the case of I-BPN, which means that it has more three-phonon processes and a faster relaxation rate, both of which help produce small \(k_{l}\). Between points X and S, the TA mode and the LA mode jointly degenerate, which can greatly increase phonon scattering while reducing phonon transport, lowering the \(k_{l}\). The flatter phonon dispersion curve and the high degenerate states are important characteristics of the low phonon group velocity and small \(k_{l}\). AIMD simulations has been used to evaluate the temperature-dependent stability of I-BPN monolayer. Figure 2 shows the free energy vs time and snapshots of I-BPN geometries, at 5ps at two different temperatures, 300K and 600K with a time step of 1fs. It is important to note that the AIMD calculations at 600K demonstrate that the I-BPN is thermodynamically stable because no deformation or distortion has occurred, and no evidence of considerable mobility of the B and N atoms. It is worth to concluded that the I-BPN monolayer is highly stable at ambient temperature. 
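Returning to the cohesive-energy evaluation earlier in this section, the formula reduces to a few lines of code once the total energies are available. The input energies below are placeholders chosen for illustration, not the actual VASP outputs, so the printed value is not the reported -7.78 eV/atom.

```python
def cohesive_energy(E_monolayer, n_A, E_A, n_B, E_B):
    """E_coh = (E_monolayer - (x*E_A + y*E_B)) / (x + y), in eV/atom."""
    return (E_monolayer - (n_A * E_A + n_B * E_B)) / (n_A + n_B)

# Placeholder energies (eV); the real values come from VASP runs of the unit
# cell (3 B + 3 N atoms for I-BPN) and of the isolated spin-polarized atoms.
E_cell, E_B_atom, E_N_atom = -52.8, -0.3, -3.1   # assumed numbers for illustration
print(cohesive_energy(E_cell, 3, E_B_atom, 3, E_N_atom))
```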
### Mechanical properties For 2D system, particularly rectangular symmetry, BPN and I-BPN has anisotropic (a\(\neq\)b), which implies \(C_{11}\neq C_{22}\) and in order to meet its elastic constants, should follow the below criteria[35]: \[C_{11}>0,C_{22}>0,C_{44}>0\;\;C_{11}C_{22}>C_{12}^{2}\] We estimated young's modulus \(Y_{10}(Y_{01})\) and Poisson coefficient \(\upsilon_{10}(\upsilon_{01})\) for BPN along the x and y direction; \[Y_{10}=\frac{c_{11}c_{22}-c_{12}^{2}}{c_{22}},Y_{01}=\frac{c_{11}c_{22}-c_{12}^ {2}}{c_{11}},\upsilon_{10}=\frac{c_{12}}{c_{22}}\;and\;\upsilon_{01}=\frac{c_{ 12}}{c_{11}}\,.\] We found that the \(Y_{10}(Y_{01})\) and \(\upsilon_{10}(\upsilon_{01})\) for BPN are 256.3(212.2) N/m and 0.39(0.32), respectively which is in good agreement with earlier theoretically reported[36, 37]. Further, \(Y_{10}(Y_{01})\) and \(\upsilon_{10}(\upsilon_{01})\) of I-BPN are 219.30(191.82) N/m and 0.36(0.32) respectively which comparatively lower than BPN. The Young's modulus of I-BPN is much higher than that of black phosphorene (83 N/m)[38] and MoS\({}_{2}\) (123 N/m)[39], but lower than that of graphene (Y\({}_{\rm armchair}\)= Y\({}_{\rm zigzag}=340\) N/m)[40], and hexagonal BN (Y\({}_{\rm armchair}\)= Y\({}_{\rm zigzag}=275\) N/m[41]. These results show that I-BPN has strong mechanical properties. A comparison of elastic coefficients, Y and \(\upsilon\) values between the BPN and I-BPN are shown in table no. 1 Figure 2: - AIMD simulations of the total energy and snapshots of initial and final configuration of I-BPN (a) and (b) for 300K and 600K respectively. Furthermore, Young's modulus Y(\(\theta\)) and Poisson's ratio \(\upsilon(\theta)\) along any arbitrary in-plane direction (where \(\theta\) is the angle with respect to the x direction) are determined using the formula: \[Y(0)=\frac{c_{11}c_{22}-c_{12}^{2}}{c_{11}s^{4}+c_{22}c^{4}+\left(\frac{c_{11}c_ {22}-c_{12}^{2}}{c_{44}}-2c_{12}\right)c^{2}s^{2}} \tag{12}\] \[\upsilon(0)=-\frac{\left(c_{11}+c_{22}-c_{11}^{2}\right)c_{44}}{c_{11}s^{4}+c_ {22}c^{4}+\left(\frac{c_{11}c_{22}-c_{12}^{2}}{c_{44}}-2c_{12}\right)c^{2}s^{2}} \tag{13}\] where \(c=cos\theta\) and \(s=sin\theta\) based on the calculated elastic constants. To further investigate the anisotropic mechanical properties of the BPN and I-BPN monolayers, we calculate the in-plane Young's modulus Y(\(\theta\)) and Poisson's ratio \(\upsilon(\theta)\) (using eq. 12 and 13) of the known BPN and compare it with I-BPN. In figure 3(a), the 2D polar plot of Young's modulus (as a function of \(\theta\)) first goes from a maximum of 193.50 N/m in the x-direction (\(\theta\)=0\({}^{\circ}\)) to a minimum of 169.25 N/m in the y-axis (\(\theta\)=90\({}^{\circ}\)), and then gradually again rises and achieves a maximum of 193.50N/m at \(\theta\)=180\({}^{\circ}\). The 2D polar plot reveals an oval curve, indicating that Young's modulus, like BPN, is anisotropic across the in-plane of the I-BPN monolayer. Further, a positive Poisson's ratio in figure 4(b) for BPN and I-BPN implies a tendency to expand or contract in the opposite direction of a compressive or tensile strain. ### Electronic properties Figure 4 compares the electronic band structures of BPN and I-BPN in terms of their high symmetrical k-path. It has been noticed that these 2D structures represent two distinct materials: BPN as a metal as few of bands crossing the Fermi energy (Fig. 
2a) and I-BPN as an indirect band gap (\(E_{g}=1.88\) eV) semiconductor with the VBM at Y and the CBM at S along the high-symmetry paths, as seen in figure 2. BPN shows a linear crossing of bands, saliently inclined to the side and forming slightly above the Fermi level. A solid-state system with such a tilted Dirac cone is defined as a system in which the effective space-time is non-Minkowski[32]. Tilted Dirac cones have been seen in various Dirac/Weyl materials [33, 34].

Figure 3:- Polar diagram for Young's modulus Y (left) and Poisson's ratio (right) of the BPN (blue) and I-BPN (red) monolayers, respectively. Figure 4:- Electronic band spectra of the BPN (left) and I-BPN (right) monolayers. Figure 5:- Anisotropic phonon thermal conductivity vs temperature plots of the BPN (left) and I-BPN (right) monolayers.

### Thermoelectric properties #### Lattice thermal conductivity We use the Phonopy and ShengBTE packages to investigate the temperature dependence of the phonon thermal conductivity (K\({}_{\rm ph}\)) of the single-layer BPN (I-BPN). Normally, the lattice thermal conductivity follows a T\({}^{-1}\) trend with temperature. The temperature-dependent lattice constants are not included in the BTE solution in this study, assuming that the thermal expansion of these lattices at high temperatures has no significant effect on the phononic properties. For the BPN and I-BPN monolayers, we observed a nearly identical temperature power law in both the x- and y-directions. Figure 5 depicts the K\({}_{\rm ph}\) of BPN and I-BPN, which shows the typical temperature-dependent behaviour of the semiconductor family[42] in the range of 200-1000 K. The K\({}_{\rm ph}\) of BPN is anisotropic along the x (y) axis due to the _pmmm_ structural symmetry. At 300 K, the K\({}_{\rm ph}\) of BPN is 398 W/mK and 187 W/mK, which agrees with previous studies [43, 44]. Furthermore, at room temperature, the anisotropic K\({}_{\rm ph}\) of I-BPN along the x- and y-axes is 21.4 W/mK and 19.8 W/mK, respectively. The K\({}_{\rm ph}\) of BPN is thus roughly an order of magnitude greater than that of I-BPN, which is expected given that a difference of the same order in K\({}_{\rm ph}\) was reported between the hexagonal graphene and BN monolayers[45-49]. **Electronic transport properties** As I-BPN is a semiconductor with an indirect bandgap of 1.88 eV, it is worth studying its thermoelectric properties. Although conventional (GGA-PBE) calculations underestimate bandgaps, it is believed that our results are within a sufficient range to examine the thermoelectric properties of this material. Meanwhile, its lattice thermal conductivity is also in a range that helps make I-BPN a good thermoelectric material. Under the rigid-band and constant relaxation-time approximations, the BoltzTraP program integrates the electronic Boltzmann transport equations. It gives thermoelectric coefficients such as the Seebeck coefficient (S), which is independent of the relaxation time (\(\tau\)), whereas the electrical conductivity (\(\sigma\)) and electronic thermal conductivity (\(\kappa_{\rm e}\)) are linearly dependent on \(\tau\). In order to find the absolute values of \(\sigma\) and \(\kappa_{\rm e}\), the relaxation time has been approximated using the deformation potential theory, as discussed earlier. From the above electronic band structure, the effective mass (m\(\ast\)) is determined for both the n-type and p-type carriers of I-BPN. 
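A minimal sketch of this step, following Eqs. (10) and (11a,b): the effective mass is taken from a parabolic fit of the band edge and then converted into a deformation-potential mobility and relaxation time. The band-edge samples, deformation potential constant, and elastic modulus below are illustrative placeholders, not the fitted values of this work.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
e    = 1.602176634e-19      # C
kB   = 1.380649e-23         # J/K
m0   = 9.1093837015e-31     # electron rest mass [kg]

# Placeholder band-edge samples near the CBM: k in 1/Angstrom, E in eV.
k_A = np.linspace(-0.2, 0.2, 21)
m_star_ref = 0.5 * m0                                   # effective mass used to fake the data
E_eV = (hbar**2 * (k_A * 1e10)**2 / (2.0 * m_star_ref)) / e

# Eq. (10): m* = hbar^2 / (d^2E/dk^2), from a parabolic fit of the band edge.
curv_eVA2 = 2.0 * np.polyfit(k_A, E_eV, 2)[0]           # d2E/dk2 in eV * Angstrom^2
curvature = curv_eVA2 * e * 1e-20                       # converted to J * m^2
m_star = hbar**2 / curvature

# Eq. (11a,b): 2D deformation-potential mobility and relaxation time at 300 K.
C_2d = 200.0                 # 2D elastic modulus [N/m] (placeholder)
E1   = 5.0 * e               # deformation potential constant [J] (placeholder, 5 eV)
T    = 300.0
mu_2d = (2.0 * e * hbar**3 * C_2d) / (3.0 * kB * T * m_star**2 * E1**2)   # [m^2/(V s)]
tau   = m_star * mu_2d / e                                                # [s]
print(m_star / m0, mu_2d * 1e4, tau)     # m*/m0, mobility in cm^2/(V s), tau in s
```

With these placeholder inputs the resulting relaxation time is of the order of 10^-13 s, the same order as the value adopted in the text.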
The calculated DP constant values for electrons and holes of I-BPN, as reported in Table 1, as well as the elastic constant C, carrier mobility, and relaxation time at room temperature. We assumed a typical value of \(\tau\) (100 fs) at room temperature, because the DP theory consider only carrier scattering with the acoustic phonons. Further, it is better to take relaxation time of the order of 10\({}^{\rm-13}\) s, so that, it can be underestimate the electrical and electronic thermal conductivity. To calculate the \(\tau\) at various temperature values, the relation \(\tau_{T}=\frac{300\ast\tau_{300}}{T}\) has been utilised, where \(\tau_{T}\) is the relaxation time at temperature T[50, 51]. To obtain more insight, Figure 5(a)-(d) displays the Seebeck coefficient (S), electrical conductivity(\(\sigma\)), electronic thermal conductivity (\(\kappa_{\rm c}\)) and electronic figure of merit (ZT\({}_{\rm e}\)) as a function of the chemical potential (\(\mu\)) at three different temperatures, 300 K, 600K and 900K, for both transport (x-axis and y-axis) directions. It is worth noting that \(\mu\) is positive for n-type doping and negative for p-type doping. S is symmetric about \(\mu=0\) for the I-BPN monolayer. It signifies that electron-hole symmetry in the electronic structure is preserved[52]. At 300 K, the I-BPN monolayer has the highest Seebeck coefficient, 0.00285 V/K, which declines with increasing temperature. The temperature increases the bipolar effect due to an increase in carrier concentration, which further reduces the Seebeck coefficient[53]. Further, the maximum value of S is slightly higher than the semiconducting allotropes of carbon like graphdiyne (0.000248 V/K) and \(\gamma\)-graphyne (0.000260 V/K)[54] and other thermoelectric monolayers SnS (0.00145 V/K), arsensene (0.00118 V/K), phosphorene (0.0013V/K) [52, 55]. The calculated electrical and electronic thermal conductivity, both scaled by \(\tau\) that has been estimated by DP theory, are shown in Figures 6(b) and 6(c). \(\sigma\) exhibits anisotropic behaviour along the x- and y-axes, yielding a typical response with \(\mu\) and temperature. Generally, the materials with narrower gaps have higher conductivities, and I-BPN shows low \(\sigma\) because of the wider band gap. Similarly, \(\kappa_{e}\) is highly influenced by temperature and chemical potential. The electronic contribution of thermal conductivity, at 300K, is lower than other thermoelectric material. In wider band gap semiconductors, the electronic contribution in thermal and electrical conductivity demands intense doping or higher temperatures. At 300 K, 600 K, and 900 K, we plot the electronic thermal conductivity \(\kappa_{e}\) as a function of chemical potential. We can observe that the overall Figure 6: Anisotropic thermoelectric parameters; (a) Seebeck coefficient, (b) Thermal conductivity (c) Electrical conductivity, and (d) electronic figure of merit as a function of the chemical potential for I-PBN monolayer, respectively. topology of the \(\kappa_{\rm e}\) is nearly identical to that of the \(\sigma\). At room temperature, \(\kappa_{\rm e}\) along x-axis is higher than the y-axis shows its strong anisotropic behaviour. Furthermore, the thermoelectric performance of a material is examined by a figure of merit (ZT=S\({}^{2}\sigma\)T/\(\kappa_{\rm e}\)+ \(\kappa_{\rm ph}\)). Firstly, we consider only the electronic part, ZT\({}_{\rm e}\)=S\({}^{2}\sigma\)T/\(\kappa_{\rm e}\). 
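As a small worked example of these definitions, the sketch below combines transport coefficients into \(ZT_{e}=S^{2}\sigma T/\kappa_{e}\) and the total \(ZT=S^{2}\sigma T/(\kappa_{e}+\kappa_{ph})\). Apart from the room-temperature lattice thermal conductivity along x (21.4 W/mK), the inputs are assumed values for illustration and do not reproduce the coefficients of this work.

```python
def figure_of_merit(S, sigma, T, kappa_e, kappa_ph=0.0):
    """ZT = S^2 * sigma * T / (kappa_e + kappa_ph); kappa_ph = 0 gives ZT_e."""
    return S**2 * sigma * T / (kappa_e + kappa_ph)

# Placeholder transport coefficients at 300 K along x (illustrative only).
S        = 2.5e-4      # Seebeck coefficient [V/K] (assumed)
sigma    = 5.0e4       # electrical conductivity [S/m] (assumed)
kappa_e  = 1.0         # electronic thermal conductivity [W/m K] (assumed)
kappa_ph = 21.4        # lattice thermal conductivity [W/m K] (from the text, x-axis)
T        = 300.0

ZT_e = figure_of_merit(S, sigma, T, kappa_e)             # electronic-only figure of merit
ZT   = figure_of_merit(S, sigma, T, kappa_e, kappa_ph)   # total figure of merit
print(ZT_e, ZT)
```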
ZT\({}_{\rm e}\) exhibits two amplified peaks appearing in both the excessive hole region and excessive electron region, indicating its favour equally for p-type and n-type doping in thermoelectric applications. It is observed that the max. peaks of ZT\({}_{\rm e}\) appear in that value of \(\mu\) where the seebeck coefficient is higher, and \(\kappa_{\rm e}\) is low, which can be seen in figure 6(d). Further, the temperature effect on ZT\({}_{\rm e}\) slightly decreases from 0.997(0.998) to 0.984(0.962) along the x(y) direction as T rises from 300K to 900K. In addition to considering these factors, another thermoelectric quantity, the power factor (PF), must be defined. PF of thermoelectric power generation indicates how much energy is produced at a given temperature. It conveys the role of fermions as energy carriers in thermoelectric power generation. The PF values grow with temperature and reach their maximum at 900K, as seen in Fig. 7(a). The maximum PF value for p-type doping is 0.0056 W/mK\({}^{2}\) at 900K along the x-axis, whereas the peak value shifts toward higher chemical potential along the y-axis and a relatively low value of 0.0021 W/mK\({}^{2}\) show strong anisotropy in PF. The peaks value of PF is relatively low than graphyne, arsenene and SnS monolayers[52]. Figure 7(b) depicts the total ZT included \(\kappa_{\rm ph}\), and it is interesting to note that the total contribution in \(\kappa\) is dominant by the lattice part at low temperatures; meanwhile, at higher temperatures, \(\kappa_{\rm e}\) suppressed the \(\kappa_{\rm ph}\). Further, electronic transport coefficients (S, \(\sigma\), \(\kappa_{\rm e}\)) comparisons reveal that the anisotropy of electrical conductivity predominates in the ZT. We can see that in the I-BPN monolayer, ZT has strong peaks in the vicinity of \(\mu\)=0 within the region where \(\kappa_{\rm e}\)\(<\)\(\kappa_{\rm ph}\). Although, including \(\kappa_{\rm ph}\), reduced the ZT value, still its peak value of 0.37. \(\kappa_{\rm ph}\) is relatively high due the weak anharmonicity in the covalent bonds and vibrations in 2D monolayers. The peak value of ZT for p-type carriers is higher than that of n-type carriers. To explore the I-BPN monolayer as a better thermoelectric material, ZT suggested that doping of the p-type is more effective than that of the n-type. It is also reported that in the majority of nanomaterials, when temperature increases, the ZT peaks broaden and charge carriers experience zero band gap at roughly \(\mu\)=0, which support to enhances thermoelectric performance[56-58]. Figure 7: - (a) Thermoelectric power factor and (b) total figure of merit as a function of chemical potential in the I-PBN monolayer. ## Conclusions To summarise our results, we used first principles computations to investigate the structural, elastic, electrical, and lattice transport features of BPN and I-BPN. The theory of Boltzmann transport. It has been discovered that the BPN and I-BPN are comparable stable in terms of formation energy and phonon dispersion spectra. We also noticed that the Young modulus of I-BPN is slightly lower than that of experimentally synthesised BPN while the poisson's ratio is the same for both monolayers, indicating that both monolayers are mechanically stable. The lattice thermal conductivity of I-BPN is 10\({}^{\mathrm{th}}\) orders of magnitude lower than that BPN monolayer which as expected because of same order difference for graphene and h-BN monolayer. 
Furthermore, the electronic properties demonstrate that BPN is metallic while I-BPN is semiconducting in nature. We calculate the thermoelectric properties of I-BPN using BoltzTraP, since semiconductors are ideal for thermoelectric studies. The Seebeck coefficient has significant values (0.00289 V/K at 300 K) in both the positive and negative chemical-potential regions, which suggests that both carrier types may be present. The power factor reveals that p-type doping shows a higher peak value of 0.0056 W/mK\({}^{2}\) at 900 K along the x-axis than the n-type region, which reaches 0.0021 W/mK\({}^{2}\). Moreover, ZT\({}_{\mathrm{e}}\) shows peak values of 0.97 and 0.95 in the p-type and n-type doping regions, respectively. The p-type region also shows an additional peak of 0.67 at 900 K along the x-axis, indicating that electron-deficient (p-type) doping is more suitable for I-BPN as a thermoelectric material.
2306.08495
Single-board Device Individual Authentication based on Hardware Performance and Autoencoder Transformer Models
The proliferation of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where a multitude of interconnected devices collaboratively collect and analyze data. Ensuring the authenticity and integrity of the data collected by these devices is crucial for reliable decision-making and maintaining trust in the system. Traditional authentication methods are often vulnerable to attacks or can be easily duplicated, posing challenges to securing crowdsensing applications. Besides, current solutions leveraging device behavior are mostly focused on device identification, which is a simpler task than authentication. To address these issues, an individual IoT device authentication framework based on hardware behavior fingerprinting and Transformer autoencoders is proposed in this work. This solution leverages the inherent imperfections and variations in IoT device hardware to differentiate between devices with identical specifications. By monitoring and analyzing the behavior of key hardware components, such as the CPU, GPU, RAM, and Storage on devices, unique fingerprints for each device are created. The performance samples are considered as time series data and used to train outlier detection transformer models, one per device and aiming to model its normal data distribution. Then, the framework is validated within a spectrum crowdsensing system leveraging Raspberry Pi devices. After a pool of experiments, the model from each device is able to individually authenticate it between the 45 devices employed for validation. An average True Positive Rate (TPR) of 0.74+-0.13 and an average maximum False Positive Rate (FPR) of 0.06+-0.09 demonstrate the effectiveness of this approach in enhancing authentication, security, and trust in crowdsensing applications.
Pedro Miguel Sánchez Sánchez, Alberto Huertas Celdrán, Gérôme Bovet, Gregorio Martínez Pérez
2023-06-14T13:21:57Z
http://arxiv.org/abs/2306.08495v3
Single-board Device Individual Authentication based on Hardware Performance and Autoencoder Transformer Models ###### Abstract The proliferation of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where a multitude of interconnected devices collaboratively collect and analyze data. Ensuring the authenticity and integrity of the data collected by these devices is crucial for reliable decision-making and maintaining trust in the system. Traditional authentication methods are often vulnerable to attacks or can be easily duplicated, posing challenges to securing crowdsensing applications. Besides, current solutions leveraging device behavior are mostly focused on device identification, which is a simpler task than authentication. To address these issues, an individual IoT device authentication framework based on hardware behavior fingerprinting and Transformer autoencoders is proposed in this work. This solution leverages the inherent imperfections and variations in IoT device hardware to differentiate between devices with identical specifications. By monitoring and analyzing the behavior of key hardware components, such as the CPU, GPU, RAM, and Storage on devices, unique fingerprints for each device are created. The performance samples are considered as time series data and used to train outlier detection transformer models, one per device and aiming to model its normal data distribution. Then, the framework is validated within a spectrum crowdsensing system leveraging Raspberry Pi devices. After a pool of experiments, the model from each device is able to individually authenticate it between the 45 devices employed for validation. An average True Positive Rate (TPR) of 0.74\(\pm\)0.13 and an average maximum False Positive Rate (FPR) of 0.06\(\pm\)0.09 demonstrate the effectiveness of this approach in enhancing authentication, security, and trust in crowdsensing applications. keywords: Device Behavior Fingerprinting, Device Authentication, Transformer, Behavioral Data, Hardware Fingerprinting, Autoencoder + Footnote †: journal: ## 1 Introduction The widespread adoption of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where many IoT devices collaboratively gather and analyze data from the environment [10]. Many of these applications rely on single-board computers due to their reduced price and relatively good performance. These applications offer tremendous potential in diverse domains, such as environmental monitoring, urban planning, healthcare, and transportation. However, ensuring the authenticity and integrity of the data collected by these devices is critical for reliable decision-making and maintaining trust in the system [5]. The openness and distributed nature of crowdsensing systems make them susceptible to Sybil attacks and collusion among malicious entities [23]. Sybil attacks involve adversaries creating multiple fake identities to gain control over the system or manipulate the collected data. Collusion among malicious entities can also lead to coordinated attacks or data manipulation. Implementing identity verification mechanisms, reputation systems, and distributed consensus algorithms is required in order to prevent and detect such attacks [25]. Traditional authentication methods for IoT devices, such as cryptographic protocols or unique identifiers, are often susceptible to various attacks and vulnerabilities [22]. 
Moreover, devices with identical specifications can be easily duplicated or impersonated, posing a significant challenge to maintaining trust and security in crowdsensing applications. To address these limitations, novel approaches are required that leverage the unique characteristics of IoT devices to establish their authenticity. One of the directions proposed in the literature to solve these issues is leveraging hardware manufacturing imperfections in order to uniquely identify each device in the environment [13]. What elevates the efficiency of this approach is the integration of Machine Learning (ML) and Deep Learning (DL) techniques for the processing of collected hardware behavior data. These cutting-edge computational methodologies facilitate the analysis, classification, and prediction of the enormous amounts of complex, high-dimensional data generated by IoT devices [2]. Particularly, they can adeptly capture patterns and dependencies in this data, enabling effective anomaly detection and thereby facilitating the identification of devices or activities that deviate from established norms. The combination of hardware manufacturing imperfections and ML/DL techniques has been evidenced to provide remarkable results in the context of device identification [16; 15]. However, authentication poses a more complex issue: discerning whether a device is authentic or not, but without taking into account the data distributions of other devices. Therefore, there are still many challenges present related to hardware-based individual authentication leveraging ML/DL techniques: (_i_) most of the solutions available in the literature cover device identification and not in authentication [19], trying to differentiate a device between a set of known devices instead of uniquely verify its identity; (_ii_) novel DL methods such as attention Transformers have not been applied yet in this field [12], but could improve current results as it is happening in other fields; (_iii_) solutions are usually implemented in simulated or isolated environments, and not integrated into real-world applications [24]; (_iv_) most of the solutions relying on ML/DL follow a classification-based approach as they focus on identification, which is not practical in dynamic scenarios or when the number of devices is high [3]. To solve the previous challenges, the main contributions of the present work are: * A framework that leverages Transformer-based autoencoder models and hardware performance fingerprinting for the individual authentication of single-board computer devices. This framework leverages CPU, GPU, RAM and Storage components to measure their performance and find manufacturing variations that enable the differentiation between devices based on their performance. In this sense, the data from the legitimate device are taken as normal samples modeling its performance distribution, while samples from other devices should be detected as outliers or anomalies. * The deployment of the framework in a real world spectrum crowdsensing platform based on Raspberry Pi devices, namely ElectroSense. In total, 45 devices are utilized in the scenario: 15 Raspberry Pi 4, 10 Raspberry Pi 3, 10 Raspberry Pi 1, and 10 Raspberry Pi Zero. * The validation of the framework authentication performance in the deployed scenario. 
After data collection, an average True Positive Rate (TPR) of 0.74\(\pm\)0.13 and an average maximum False Positive Rate (FPR) of 0.06\(\pm\)0.09 are achieved, improving other state-of-the-art models such as LSTM and 1D-CNN networks. The remainder of this article is structured as follows. Section 2 gives an overview of hardware-based individual authentication and background on transformer usage for anomaly detection. Section 3 describes the Transformer and hardware-based device fingerprinting solution for individual authentication of single-board devices. Section 4 gives an overview of the crowdsensing platform employed for validation, the data collection process, and the experimental results when performing the authentication. Finally, Section 5 gives an overview of the conclusions extracted from the present work and future research directions. ## 2 Related Work This section reviews the key literature relevant on individual device authentication through hardware performance fingerprinting and transformer-based anomaly detection. ### Individual device authentication and identification The present work focuses on hardware-based single-board device authentication using the performance behavior of the components self-contained in the device and anomaly detection DL algorithms. Arafin and Qu [4] discussed several examples of hardware-based authentication that use memory access latency, instruction execution latency, and clock skew to authenticate devices, users, and broadcast signals used for navigation. In [15], the authors compared the deviation between the CPU and GPU cycle counters in Raspberry Pi devices to perform individual identification of 25 devices. The identification was performed using XGBoost, achieving a 91.92% True Positive Rate (TPR). In continuing work [12], the same authors improved the results to an average F1-Score of +0.96 and a minimum TPR of 0.8 using a time series classification approach based on LSTM and 1D-CNN combination. Similarly, [9] performed identical device identification using GPU performance behavior and ML/DL classification algorithms. Accuracy between 95.8% and 32.7% was achieved in nine sets of identical devices, including computers and mobile devices. Sanchez-Rola et al. [16] identified +260 identical computers by measuring the differences in code execution performance. They employed the Real-Time Clock (RTC), which includes its own physical oscillator, to find slight variations in the performance of each CPU. In [11], the author compared the drift between the CPU time counter, the RTC chip, and the sound card Digital Signal Processor (DSP) to identify identical computers. Other works have also explored hardware-based authentication applications using physical properties of computing hardware such as main memory, computing units, and clocks. Shrivastava et al. [18] proposed a high-performance Field Programmable Gate Arrays (FPGA) based secured hardware model for IoT devices using the Advanced Encryption Standard (AES) algorithm. They compared the performance of two FPGAs and found that the Spartan-6 FPGA provides better throughput and less time delay for IoT devices. Other works have explored the usage of Physical Unclonable Functions (PUFs) for IoT device identification [17]. However, PUFs are out of the scope of this work, as it is centered on hardware behavior fingerprinting based on device performance, avoiding the usage of new hardware elements or the modification of the device specifications. 
Table 1 compares the closest works in the literature with the present one. Although several works have worked in the combination of ML/DL techniques and hardware fingerprinting for device identification, a notable gap persists in the literature with respect to addressing the unique challenges of device authentication via an anomaly detection approach. Contemporary studies have primarily employed classification models, which serve to identify devices from a set pool of labels. However, these models are inadequate for the authentication problem. The task of authentication involves more than simple device recognition - it requires a system capable of detecting deviations from an expected hardware behavior, a task for which anomaly detection models, rather than traditional classification models, are better suited. Consequently, there is a significant need to investigate the potential of DL-based anomaly detection models, such as Transformer models, in the realm of device authentication. ### Transformer-based anomaly detection in IoT security The application of Transformer models in anomaly detection has recently gained momentum, recognizing their ability to extract meaningful features from sequential data effectively. Anomaly detection in time-series data, in particular, has seen significant advancements through the adoption of Transformer models [7]. Their proficiency in capturing temporal dynamics makes them an excellent choice for tasks that involve detecting irregularities in time-bound sequences [20]. In the field of IoT security, Transformer-based autoencoders have been employed to address high-dimensional and complex dependencies issues by leveraging the self-attention mechanism and the encoder-decoder architecture. Chen et al. [6] proposed a framework called GTA that learns a graph structure among sensors and applies graph convolution and Transformer-based modeling to detect anomalies in multivariate time series. Kozik et al. [8] proposed a hybrid time window embedding method with a Transformer-based classifier to identify compromised devices in IoT-networked environment. Tuli et al. [20] proposed TranAD, a deep Transformer network that uses attention-based sequence encoders to perform anomaly detection and diagnosis for IoT data streams. These works demonstrate the effectiveness and efficiency of Transformer-based models for anomaly detection in IoT security. However, the performance of Transformer-based anomaly detection in individual device authentication has not been explored yet, remaining as a practical field where the performance of these novel models can improve the state-of-the-art approaches. ## 3 Individual Device Authentication Framework This section elucidates the DL framework implemented for the purpose of hardware performance fingerprinting. The framework performs device fingerprinting based on performance deviations that show hardware manufacturing imperfections. An autoencoder Transformer model, a state-of-the-art approach in DL-based time series processing, is leveraged for the authentication of individual devices. The framework is designed in a modular manner, where different components are combined in a stacked layout, from the hardware behavior monitoring to the DL-based evaluation and authentication. 
Due to the reduced processing capabilities of single-board computers, the framework follows a client-server architecture, where the components related to data collection and device configuration are deployed locally in the device, and the server processes the data and performs the model training and evaluation. Figure 1 illustrates the different modules composing the framework and the pipeline followed by the data until an authentication decision is made. Five modules compose the framework: _(i)_ Monitoring, _(ii)_ Preprocessing, _(iii)_ Anomaly Detection, _(iv)_ Authentication, and _(v)_ Device Security. ### Monitoring Module The _Monitoring Module_ is in charge of the interaction with the hardware components and the monitoring of their performance. Besides, it sends the collected data to the server for its processing and evaluation. It contains two components: _Component Isolation and Stability_ and _Data Gathering_. #### 3.1.1 Component Isolation and Stability One of the key conditions to perform fingerprinting based on hardware performance is to ensure that the components selected for monitoring are running under stable conditions that enable the characterization of the small performance variations in the components due to manufacturing imperfections [15]. Therefore, this component is in charge of configuring the CPU, GPU, RAM and SD Card, the selected hardware components. It sets fixed running frequency for the components, isolate the components to avoid kernel interruptions, and disables some component optimizations that might affect the stability of the performance, such as memory address randomization. #### 3.1.2 Data Gathering This component is in charge of collecting the performance measurements by executing different tasks in the selected hardware components. In the case of single-board computers, the available hardware elements are the CPU, GPU, RAM and storage (typically SD card). As proposed in the literature [15], the hardware monitoring is done by using the in-device elements as a reference for the performance measurements. For example, GPU \begin{table} \begin{tabular}{c c c c c c} **Work** & **Scenario** & **Approach** & **Algorithm/Model** & **N Devices** & \\ \hline \hline [11] (2007) & Computer identification & \begin{tabular}{c} Statistical \\ correlation \\ \end{tabular} & \begin{tabular}{c} Pair-based \\ identification \\ \end{tabular} & 38 & \begin{tabular}{c} Computer identification based on the comparison of three \\ physical oscillators using t-test statistic \\ \end{tabular} \\ \hline [16] (2018) & Computer identification & \begin{tabular}{c} Statistical \\ correlation \\ \end{tabular} & \begin{tabular}{c} Mode-based \\ statistics \\ \end{tabular} & 265 & \begin{tabular}{c} All computers uniquely identified. No effect from CPU load \\ and temperature \\ \end{tabular} \\ \hline [9] (2022) & \begin{tabular}{c} Computer and mobile \\ identification \\ \end{tabular} & Classification & CNN & 9 & 95.8\% and 32.7\% accuracy in nine sets of identical devices. \\ \hline [15] (2023) & IoT device identification & Classification & XGBoost & 25 & 91.92\% average TPR. No effects from temperature changes and device rheooting \\ \hline [12] (2023) & IoT device identification & Classification & LSTM + 1D-CNN & 45 & 0.9\% average F1-Score. Resilience to temperature and ML/OL-evasion attacks. \\ \hline **This work** & **(2023)** & IoT device authentication & Anomaly Detection & Transformer & 45 & \begin{tabular}{c} All devices authenticated. 
0.74 average TPR and 0.06 average \\ maximum FPR \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the closest works on ML/DL-focused hardware-based device identification and authentication performance is measured in CPU cycles, and CPU performance when executing a code is measured using the elapsed GPU cycles. The reasoning for this approach is that the component itself is not able to measure the deviations in its performance specification without an external cycle or time counter. ### Preprocessing Module The _Preprocessing Module_ plays the pivotal role of a bridge between the raw data gathered by the _Monitoring Module_ and the _Anomaly Detection Module_, where the data is employed to train the DL models and evaluate the device. The main tasks of this module encompass data cleaning and feature generation. #### 3.2.1 Data Cleaning This component is responsible for filtering and cleaning the raw performance metrics. Any missing, inconsistent, or erroneous data are identified and filtered, thus preparing the dataset for further processing. #### 3.2.2 Feature Generation This component focuses on feature extraction and engineering based on the cleaned data. First, it performs normalization of each one of the metrics gathered. Afterward, it is in charge of transforming the raw data into a format suitable for the Transformer model. A key aspect of this process is the concatenation of samples into groups of vectors, which facilitates time series-based analysis. ### Anomaly Detection Module The _Anomaly Detection Module_ is the heart of the authentication framework, tasked with training and evaluating the Transformer-based autoencoder model. The Transformer-based autoencoder is a variant of the Transformer model, which was originally proposed for natural language processing tasks. The key component of the Transformer architecture is the self-attention mechanism, which models the interactions between the elements in the input sequence [21]. More in detail, the self-attention mechanism computes a weighted sum of the input elements for each position in the sequence. The weight assigned to each input element is determined by its relevance to the position being considered. Formally, the self-attention can be computed as follows: \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V \tag{1}\] where \(Q\), \(K\), and \(V\) are matrices representing the queries, keys, and values, respectively, and \(d_{k}\) is the dimensionality of the keys. In multi-head attention, this operation is done \(h\) times with different learned linear projections of the original \(Q\), \(K\), and \(V\) matrices. In the autoencoder variant of the Transformer model, the same sequence is provided as both the input and the target output of the model. The Transformer-based autoencoder learns to reconstruct the input sequence, which allows it to capture the underlying structure of the sequence data. The encoder and decoder are both composed of several identical layers. Each layer contains two sub-layers: a multi-head self-attention mechanism and a position-wise fully connected feed-forward network, using ReLU as activation function. The output of each sub-layer is then passed through a residual connection and layer normalization. In the context of device authentication, the Transformer-based autoencoder is trained to reconstruct the normal behavior of each device. 
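To make the mechanism concrete, the following is a minimal NumPy sketch of the scaled dot-product attention in Eq. (1). It is illustrative only: the sequence length and feature dimensionality are arbitrary, and it omits the learned projections and multi-head splitting used in the actual model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. (1): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise relevance of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of the values

# Self-attention over a toy window of 100 feature vectors: Q, K and V all come
# from the same input sequence.
seq = np.random.rand(100, 64)
out = scaled_dot_product_attention(seq, seq, seq)
print(out.shape)  # (100, 64)
```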
Once the model is trained, it can be used to detect anomalies by comparing the reconstruction error of a new sequence with a predefined threshold. A high reconstruction error indicates that the new sequence is significantly different from the normal behavior, which could suggest a possible intrusion. The two components forming this module, in charge of the Transformer-based autoencoder training for each device, are described next. Figure 1: Individual device authentication framework. #### 3.3.1 Transformer Training and Optimization This component takes the processed data and trains a Transformer model for each device. This model, adept at reconstructing input data, establishes a profile of standard device behavior, thereby becoming proficient at detecting anomalies or deviations from the norm. This phase also involves the optimization of model parameters for each device independently to ensure the best performance. Then, the best model for each device is stored to be used later. #### 3.3.2 Transformer Evaluation Upon completion of the training phase, the model is deployed for live data evaluation. The model's predictive capability is tested against the values collected from the device after deployment. Then, the output of the Transformer is employed in the _Authentication Module_ to determine whether a device is the legitimate one and allow it to remain deployed in the network. ### Authentication Module The _Authentication Module_ makes the final decision regarding device authentication based on the evaluation results coming from the previous module. #### 3.4.1 Device Authentication This component is charged with the essential task of making the final authentication decision based on the anomaly detection results. Anomalies, interpreted as potential indications of device tampering or misuse, inform the authentication decision. A device may be authenticated and granted network access, or it may be rejected, depending on the analysis of these anomalies. ### Device Security Module The _Device Security Module_ serves as an additional layer of security, overseeing the enforcement of security measures. #### 3.5.1 Security Enforcement This component ensures the enforcement of the necessary security rules or protocols based on the _Authentication Module_ decision. If a device is authenticated, it is granted access to the network. If a device is deemed unauthenticated, this component ensures the device is isolated from the network, safeguarding the integrity of the IoT system. This module also reports any security issues, such as repeated authentication failures, to a central authority for further investigation. Moving target defense (MTD) techniques are a suitable approach for this module, as they focus on changing the device configuration according to the mitigation actions required. Examples of these techniques are the removal of files and dynamic network connection filtering, among others. ## 4 Framework Validation This section lays out the overall validation methodology, from leveraging the ElectroSense spectrum crowdsensing platform to the data collection and preprocessing required for the analysis. The specifics of data gathering and the processes of cleaning, normalization, and transformation are explained. Finally, the Transformer-based Anomaly Detection approach is validated in this real-world scenario, measuring its effectiveness. Note that the validation focuses on the data collection, monitoring, and DL parts of the framework.
The development of advanced authentication rules and security measures is out of the scope of this work. ### ElectroSense spectrum crowdsensing platform The IoT spectrum sensors utilized in this research are a part of the ElectroSense network [10], an open-source, crowdsensing platform that collects radio frequency spectrum data with the aid of low-cost sensors. The platform, which capitalizes on a collaborative crowdsensing approach, enables the monitoring and collection of spectrum data. The core of this platform is the Raspberry Pi, a compact and cost-effective single-board computer, that when attached to software-defined radio kits and antennas can function as a versatile spectrum sensor. Such assembly of spectrum sensors by individual users contributes to the broad reach and comprehensive data collection capability of the ElectroSense platform. Once the sensors have collected the data, it is then sent to the ElectroSense backend platform, which is responsible for its storage, processing, and analysis. This meticulous processing and analysis facilitate the provision of a suite of services. These services extend beyond mere spectrum occupancy monitoring, delving into areas such as transmission optimization and decoding. This range of services provided by ElectroSense not only bolsters the understanding of spectrum utilization but also opens up avenues for innovative optimization and enhancement strategies in the field of IoT. Figure 2 depicts a diagram of the ElectroSense platform. For validation, numerous Raspberry Pi devices from different models are deployed in the crowdsensing platform in order to validate the proposed authentication framework. More in detail, the devices deployed are 15 Raspberry Pi 4 Model B, 10 Figure 2: ElectroSense crowdsensing platform diagram. Raspberry Pi 3 Model B+, 10 Raspberry Pi Model +, and 10 Raspberry Pi Zero. ### Data Gathering and Preprocessing The first step in the validation process is to obtain the hardware performance data from each device and preprocess it in order to be fed into the Transformer models. #### 4.2.1 Data Gathering The assembly of individual device authentication premised on hardware behavior hinges on the ability to monitor imperfections inherent in the device chips for subsequent evaluation. As outlined in Section 2, previous studies have primarily tackled this task by contrasting components featuring different base frequencies or crystal oscillators since deviations in these components performance can be discerned directly from the device. To construct the framework for individual device authentication, it was necessary to compile a dataset that utilizes metrics pertinent to the hardware components inherent in certain devices. This dataset has been christened LwHbench, and additional details can be found in [14]. In this context, the dataset gathered performance metrics from the CPU, GPU, Memory, and Storage of 45 Raspberry Pi devices of diverse models over a span of 100 days. Various functions were executed in these components, employing other hardware elements (operating at differing frequencies) to measure performance. Table 2 provides a summary of the functions that were monitored. These functions embody a set of common operations carried out in every component, aiming to gauge their performance. It is worth mentioning that additional analogous operations could be utilized during the data gathering process. In total, 215 features formed each one of the collected data vectors. 
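The cross-component measurement idea can be illustrated with a small sketch. This is not the framework's monitoring code: the function names are hypothetical, and the OS monotonic clock stands in for the external GPU/CPU cycle counters described above.

```python
import hashlib
import time

def timed_with_external_counter(task, counter=time.monotonic_ns):
    """Time a CPU-bound task with a counter external to the component under test."""
    start = counter()
    task()
    return counter() - start

# Hypothetical versions of two of the monitored functions
sample = {
    "cpu_1s_sleep": timed_with_external_counter(lambda: time.sleep(1)),
    "cpu_string_hash": timed_with_external_counter(
        lambda: hashlib.sha256(b"fixed string" * 10_000).hexdigest()),
}
print(sample)
```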
The final dataset contains the following samples per device model: 505584 samples collected from 10 RPi 1B+ devices, 784095 samples from 15 RPi4 devices, 547800 samples from 10 RPi3 devices, and 548647 samples from 10 RPiZero devices. To collect the data, an array of countermeasures were implemented to mitigate the effect of noise introduced by other processes operating in the devices: Component frequency was kept constant, kernel level priority was enforced, the code was executed in an isolated CPU core (in multi-core devices), and memory address randomization was disabled. Moreover, the dataset was compiled under a variety of temperature conditions, facilitating the analysis of the influence this environmental feature has on component performance. #### 4.2.2 Preprocessing In the preprocessing stage, the time series were generated by applying a time window over the collected samples, combining them into groups of 10 to 100 vectors. This method of grouping facilitates the implementation of time series Deep Learning (DL) approaches and is adjusted to other literature works [12]. These models possess the ability to uncover intricate trends within the data, potentially leading to superior results compared to the standalone processing and evaluation of individual samples. Moreover, it also permits the utilization of attention models such as Transformers, which currently represent the pinnacle of performance in this field. For data normalization, _QuantileTransformer_[1] was utilized, given the variable data distributions originating from the differing hardware capabilities of each device model. The division of the data for model training and validation purposes consisted of 70% and 10% of the total, leaving the remaining 20% for testing. In order to minimize the potential impact of vector order correlations on the results, the splitting of training, validation, and test sets was performed without shuffling the samples. ### Transformer-based Anomaly Detection Validation As detailed in Section 3, the proposed Transformer approach performs hyperparameter tuning personalized for each device. Besides, other state-of-the-art DL architectures for anomaly detection in time series are tested to compare their performance to the Transformer. The tested networks are LSTM, 1D-CNN, and a combination of both of these layouts. Table 3 provides a comprehensive overview of the examined algorithms along with their corresponding hyperparameters. For validation, a server equipped with AMD EPYC 7742 CPU, NVIDIA A100 GPU, and 180 GB of RAM is employed, and the models are implemented using _Keras_ library. In the case of the LSTM and 1D-CNN models, the time series concatenation only achieved good results when using groups of 10 vectors or smaller due to their limited memory capabilities. In contrast, the Transformer achieved good results with all the sliding window lengths from 5 to 100, with the best results obtained with 100 vectors per sliding window. 
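For concreteness, a minimal sketch of the preprocessing described in Section 4.2.2 could look as follows, assuming the cleaned 215-feature vectors are already in chronological order. The placeholder data and window handling are illustrative, and in a deployment the normalizer would typically be fit on the training portion only.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

def make_windows(X, window=100):
    """Group consecutive samples into non-overlapping windows of `window` vectors."""
    n = (len(X) // window) * window
    return X[:n].reshape(-1, window, X.shape[1])

# X: (n_samples, 215) performance vectors from one device, in chronological order
X = np.random.rand(10_000, 215)                      # placeholder for the real data

qt = QuantileTransformer(output_distribution="uniform")
X_norm = qt.fit_transform(X)                         # per-feature quantile normalization

seqs = make_windows(X_norm, window=100)              # (n_windows, 100, 215)

# Chronological 70/10/20 split (no shuffling, to avoid leaking ordered correlations)
n = len(seqs)
train, val, test = np.split(seqs, [int(0.7 * n), int(0.8 * n)])
```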
To set the anomaly detection threshold in the reconstruction of the samples fed to the autoencoder models, the 10% of the reconstruction error in the training samples is chosen as the boundary \begin{table} \begin{tabular}{l l l} **Component** & **Function** & **Feature Under Observation** \\ \hline \hline \multicolumn{3}{l}{*} & timestamp \\ & temperature & Core temperature of the device \\ \hline **CPU** & 1 s sleep & Elapsed GPU cycles during 1s of CPU sleep \\ & 2 s sleep & Elapsed GPU cycles during 2s of CPU sleep \\ & 5 s sleep & Elapsed GPU cycles during 5s of CPU sleep \\ & 10 s sleep & Elapsed GPU cycles during 10s of CPU sleep \\ & 120 s sleep & Elapsed GPU cycles during 120s of CPU sleep \\ & string hash & Elapsed GPU cycles during computation of a fixed string hash \\ & pseudo random & Elapsed GPU cycles while generating a software pseudo-random number \\ & urandom & Elapsed GPU cycles while generating 100 MB \\ & urandom & using _development_norm_ interface \\ & fib & Elapsed GPU cycles while calculating the 20th \\ & fib & Fibonacci number using the CPU \\ \hline **GPU** & matrix mul & Time taken by CPU to execute a GPU-based matrix multiplication \\ & matrix sum & Time taken by CPU to execute a GPU-based matrix summation \\ & sccepy & Time taken by CPU to execute a GPU-based graph shadow processing \\ \hline **Memory** & list creation & Time taken by CPU to generate a list with \\ & mem reserve & 1000 elements \\ & esv read & Time taken by CPU to fill 100 MB in memory \\ \hline **Storage** & read x100 & 100 measurements of CPU time for 100 kB \\ & write x100 & 100 measurements of CPU time for 100 kB \\ & & storage write operations \\ \hline \end{tabular} \end{table} Table 2: LwHBench dataset features [14]. between anomaly and normal sample. Then, the validation set is employed for the hyperparameter selection by choosing the model with the higher TPR. #### 4.3.1 Authentication Performance For the authentication capabilities evaluation, the strategy followed is one-vs-all, where the trained transformer model evaluates the test set of the source device (normal samples) but also the test sets of the rest of the devices (anomalies or outliers). Then, the True Positive Rate (TPR) of the legitimate device is compared with all the False Positive Rates (FPRs) of the rest of the devices, checking that the TPR value is greater than all the FPRs. Note that for this approach, different data normalizations should be performed in the test sets depending on which device is employed for training as the training data distribution changes. Table 4 shows the results of the one-vs-all authentication tests. It can be seen how only the Transformer-based approach is able to authenticate all the devices successfully. Although their average TPR is higher, LSTM and 1D-CNN networks only can identify some of the devices, offering a much lower difference between the average TPR and maximum FPR. This occurs because the FPR is much more variable in these models, and many models have a high FPR when evaluating data from other devices, while the FPR variability is smaller in the Transformer models. Figure 3 gives a closer look into the distributions of the TPRs and maximum FPRs of the 45 devices evaluated. It can be seen that both distributions are greatly separated, having only three cases where the maximum FPR goes over 0.20 and remains under 0.45. The TPR always stays over that value and reaches values close to 1 in some cases, having most of its values between 0.6 and 0.8. 
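A schematic version of the threshold selection and one-vs-all evaluation described above is sketched below. It assumes a Keras-style autoencoder and interprets the 10% training-error boundary as the 90th percentile of the training reconstruction errors, which is an assumption of this sketch rather than a detail confirmed by the text.

```python
import numpy as np

def reconstruction_errors(model, seqs):
    """Mean squared reconstruction error per window for an autoencoder-style model."""
    recon = model.predict(seqs)
    return np.mean((seqs - recon) ** 2, axis=(1, 2))

def one_vs_all_rates(train_err, legit_err, other_err, quantile=0.90):
    """Threshold from the training reconstruction errors; TPR is the fraction of the
    legitimate device's windows accepted, FPR the fraction of another device's windows
    wrongly accepted. Authentication succeeds when the TPR exceeds every FPR."""
    thr = np.quantile(train_err, quantile)
    tpr = np.mean(legit_err <= thr)
    fpr = np.mean(other_err <= thr)
    return tpr, fpr
```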
Besides, Figure 4 shows the exact TPR and maximum FPR values for each one of the devices evaluated, having its MAC address as an identifier. In this graph can be observed that in the cases where the maximum FPR has a relatively high value (0.2 to 0.4), the TPR is way higher, guaranteeing that the authentication can be made reliably. According to these results, a threshold-based authentication approach could be employed by the _Authentication Module_ to determine the result of the authentication process. An example can be a threshold for each device with a value 0.1 lower than the TPR achieved in the validation, as it is enough to differentiate all the devices present in the deployment. The results achieved by the anomaly detection validation have demonstrated the feasibility of the proposed framework, as it was able to uniquely authenticate 45 single-board devices with identical hardware and software specifications. These findings point towards a promising direction for individual device authentication premised on hardware behavior, demonstrating the potential of Transformer models in this sphere. #### 4.3.2 Resource Usage Although performance is the key characteristic to decide which model to use in the validation setup, resource usage during training and evaluation is also a critical point that should be taken into account when developing ML/DL-based solutions. Table 5 shows the time and memory employed by the model. The training time statistics were collected using 10 epochs as the number of iterations over the training dataset. Besides, the evaluation time was obtained while evaluating the entire test dataset of the device. Finally, memory usage represents the size of the model after it has been completely trained. Each model demonstrates distinct computational characteristics in terms of training time, evaluation time, and memory usage. The 1D-CNN model stands out as the most efficient, boasting the fastest training time of approximately 47.79 seconds and the quickest evaluation time of around 1.44 seconds. Additionally, it consumes the least amount of memory, using \begin{table} \begin{tabular}{l c c c c} **Model** & **Best window** & **Devices** & **Au-** & **Avg. TPR** & **Avg. Max.** \\ & **size** & **thenticated** & & **FPR** & **FPR** \\ \hline \hline 1D-CNN & 10 & 32 & 0.88\(\pm\)0.06 & 0.67\(\pm\)0.29 \\ \hline LSTM & 10 & 38 & 0.85\(\pm\)0.09 & 0.53\(\pm\)0.19 \\ \hline LSTM\_1D-CNN & 10 & 35 & 0.88\(\pm\)0.08 & 0.59\(\pm\)0.22 \\ \hline \hline Transformer & 100 & 45 & 0.74\(\pm\)0.13 & 0.06\(\pm\)0.09 \\ \hline \hline \end{tabular} \end{table} Table 4: Anomaly detection time series models results. Figure 3: TPR and maximum FPR distributions of the Transformer autoencoder. \begin{table} \begin{tabular}{l l} **Model** & **Hyperparameters** \\ \hline \hline General & \(epochs=[10,20,50]\), _batch_,_size_ = \(\{32,128,256,512\}\) \\ \hline 1D-CNN & \(filters=[16,32,64,128]\), \(kernel\_size_ = \{35,7\}\), \\ & \(n\_layers=[1,2,3]\) \\ \hline LSTM & \(neurons=[10,100]\), \(n_layers=[1,2,3]\), \\ \hline LSTM\_, & \(input\_layers=[2,3]\), \(conn\_filters=[16,32,64,128]\), \\ & \(cnn\_kernel\_size=\{3,5,7\}\), \(lst\_neurons=[10,100]\) \\ & \(n\_layers=[1,2,3]\) \\ \hline Transformer & \(diff=[32,64,128,256,1024]\), \(mm\_layers=[1,2,3]\) \\ \hline \end{tabular} \end{table} Table 3: Anomaly detection time series models and hyperparameters tested. only about 0.86 MB. This combination of speed and efficiency makes it an appealing choice for resource-limited applications. 
However, the LSTM model presents a significant increase in training time, taking approximately 283.68 seconds, and a slightly longer evaluation time of roughly 2.11 seconds. Coupled with a higher memory footprint of 1.33 MB, this model may demand greater computational resources than the 1D-CNN. Interestingly, the hybrid LSTM+1D-CNN model exhibits the highest training time among the models, approximately 306.92 seconds, and has a considerable evaluation time of about 2.45 seconds. Its memory usage is also higher, at 1.83 MB, reflecting the complexity inherent to the combination of LSTM and 1D-CNN architectures. Lastly, the Transformer model demonstrates a more moderate training time of approximately 157.68 seconds, albeit with the longest evaluation time of all models, around 8.93 seconds. More notably, it has a significantly higher memory usage, at a substantial 7.77 MB. While this may limit its applicability in memory-constrained environments, the Transformer model may excel in terms of capturing complex data patterns or delivering superior model accuracy, which are aspects not directly portrayed in the provided table. In conclusion, while the 1D-CNN model is undeniably efficient regarding speed and memory usage, the Transformer models might offer better performance under certain circumstances. These trade-offs between time, memory usage, and potential model accuracy ought to be taken into account when deciding on the most suitable model for a particular scenario. ## 5 Conclusions and Future Work This paper proposes a framework for individual device authentication based on hardware behavior and outlier detection, which fundamentally relies on identifying inherent imperfections in the device chips. The framework, which leverages hardware behavior fingerprinting and Transformer autoencoders, establishes a unique 'fingerprint' for each device based on manufacturing imperfections in CPU, GPU, RAM, and Storage, even in those with identical specifications. These imperfections are modeled by generating a model trained with the "normal" data distribution of the hardware performance of each device. This provides a robust mechanism for device authentication, distinguishing between genuine and potentially harmful devices. The framework follows a modular design where device monitoring and security enforcement modules are deployed in the device and the data processing modules are hosted in a server with enhanced processing capabilities. The practical implementation of this authentication framework in the ElectroSense platform demonstrates its effectiveness and real-world applicability. After 100 days of data collection using 45 Raspberry Pi devices, the Transformer-based autoencoder approach was implemented and compared with other state-of-the-art Deep Learning architectures such as LSTM and 1D-CNN for anomaly detection in time series. Despite the competitive performance of LSTM and 1D-CNN, the Transformer model emerged as the superior method, successfully authenticating all the devices. An average True Positive Rate (TPR) of 0.74\(\pm\)0.13 and an average maximum False Positive Rate (FPR) of 0.06\(\pm\)0.09 are achieved when performing one-versus-all authentication, a more complex task than the classification-based identification performed by other solutions in the literature. 
From these results, it can be concluded that the proposed approach not only prevents unauthorized device intrusions but also significantly contributes to the reliability of data analysis and the overall trustworthiness of the platform. Figure 4: Transformer autoencoder TPR and maximum FPR comparison per device. Moving forward, this research line has room for future work and improvements. While the current study has focused on Raspberry Pi devices, further research should involve testing the proposed model with other IoT devices, expanding its scope, and ensuring its applicability across a broad range of hardware. In addition, the study has examined the model's effectiveness primarily in the context of a spectrum crowdsensing platform, ElectroSense. Future investigations could explore its implementation in different types of crowdsensing applications, thereby contributing to a comprehensive understanding of the framework's versatility. ## Acknowledgment This work has been partially supported by _(a)_ the Swiss Federal Office for Defense Procurement (armasuisse) with the DEFENDIS and CyberForce (CYD-C-2020003) projects and _(b)_ the University of Zurich UZH.
2302.02329
Probing the jet transport coefficient of cold nuclear matter in electron-ion collisions
We present a study of the nuclear-medium induced transverse momentum broadening of particle production in future electron-ion-collision~(EIC) experiments. By considering the multiple scattering between hard partons and cold nuclear medium within the higher-twist factorization framework in perturbative QCD, we calculate the transverse momentum broadening of single hadron production in semi-inclusive measurements, as well as the nuclear enhancement of the transverse momentum imbalance for di-hadron and heavy-meson pair productions. In particular, a kinematics dependent non-perturbative jet transport coefficient $\hat q=\hat q(x,Q^2)$ extracted in a global analysis of the current data, together with its uncertainty determined with a Hessian method, are input into our calculations and are available for the community. Significant kinematic and color-state dependences of the nuclear induced broadening/imbalance are predicted. Our results indicate that the future EIC measurements are able to provide powerful constraints on the kinematic dependence of the transport coefficient $\hat q$ and thus greatly facilitate the jet tomography of cold nuclear medium.
Peng Ru, Zhong-Bo Kang, Enke Wang, Hongxi Xing, Ben-Wei Zhang
2023-02-05T07:49:05Z
http://arxiv.org/abs/2302.02329v1
# Probing the jet transport coefficient of cold nuclear matter in electron-ion collisions ###### Abstract We present a study of the nuclear-medium induced transverse momentum broadening of particle production in future electron-ion-collision (EIC) experiments. By considering the multiple scattering between hard partons and cold nuclear medium within the higher-twist factorization framework in perturbative QCD, we calculate the transverse momentum broadening of single hadron production in semi-inclusive measurements, as well as the nuclear enhancement of the transverse momentum imbalance for di-hadron and heavy-meson pair productions. In particular, a kinematics dependent non-perturbative jet transport coefficient \(\hat{q}=\hat{q}(x,Q^{2})\) extracted in a global analysis of the current data, together with its uncertainty determined with a Hessian method, are input into our calculations and are available for the community. Significant kinematic and color-state dependences of the nuclear induced broadening/imbalance are predicted. Our results indicate that the future EIC measurements are able to provide powerful constraints on the kinematic dependence of the transport coefficient \(\hat{q}\) and thus greatly facilitate the jet tomography of cold nuclear medium. ## I Introduction Exploring at a femtometer scale the properties of the nuclear media in different matter phases of quantum chromodynamics (QCD), such as cold nucleus, hadron gas and hot/dense quark-gluon plasma, is one of the main goals of various high-energy nuclear collisions [1; 2], including lepton-, hadron- and nucleus-nucleus collisions. Benefited by the factorization in perturbative QCD [3], the particle(s) produced with a large momentum transfer, such as a parton jet, can serve as a well-controlled hard probe of the non-perturbative property of the nuclear medium. The multiple scattering that a hard probe undergoes in nuclear medium can lead to the transverse momentum broadening and energy loss of the probe in general [4; 5], which are reflected in the observed nuclear modifications on the spectra and substructures of jets (or hadrons) [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. An important medium property commonly embodied in these effects is the jet transport property [17; 18; 19; 20], quantified as the coefficient \(\hat{q}\), which characterizes the transverse momentum broadening of a (quark) jet per unit propagation length in the medium and thus measures the strength of the interaction between the probe and nuclear medium. Transport coefficient \(\hat{q}\) has been an iconic quantity to represent the medium property seen by jets for a long time, especially in the study of heavy-ion collisions [21; 22; 23; 24; 25; 26; 27; 28]. Recently, the study of its dependence on kinematic variables like jet energy and probing scale became active [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], along with some related theoretical progress [40; 41; 42; 43; 44; 45; 46]. On this aspect, the electron-nucleus (\(e\)A) and proton-nucleus (\(p\)A) collisions is of particular importance [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50], since they provide a relative clean environment to delicately study the kinematic dependence of the transport property of cold nuclear matter and to test the theoretical framework of the jet-medium interaction, which in turn can be instructive for the study of nucleus-nucleus (AA) collisions. 
In our previous work [32], we performed the first global extraction of the \(\hat{q}\) in cold nuclear matter from the current data in \(e\)A and \(p\)A collisions, mainly on various types of nuclear-induced transverse momentum broadening. We found that, with a \(\hat{q}\) depending on the parton momentum fraction \(x\) and probing scale \(Q^{2}\), the theoretical calculations within the higher-twist expansion formalism can give an overall good agreement with the world data. The extracted optimal \(\hat{q}(x,Q^{2})\) shows significant enhancements in small- and large-\(x\) regions and a mild \(Q^{2}\) dependence. However, the uncertainties of \(\hat{q}(x,Q^{2})\) were not yet worked out in Ref. [32]. Besides, since most of the current data are gathered in the intermediate \(x\) and \(Q^{2}\) region, the suggested universality and kinematic dependence of \(\hat{q}(x,Q^{2})\) should be examined in future experiments with broader \(x\) and \(Q^{2}\) coverage. These issues motivate our followup study presented in this paper. In the first part of this work, we will upgrade our global analysis of \(\hat{q}\) by determining the uncertainties of the extracted \(\hat{q}(x,Q^{2})\) with the Hessian matrix method [51; 52], which will generate an uncertainty set of \(\hat{q}(x,Q^{2})\). The Hessian analysis is further upgraded by including a new data set on \(J/\psi\) production published recently [53] to strengthen the experimental constraints in small \(x\) region. Through the Hessian analysis in this work, we not only learn how the uncertainty of \(\hat{q}(x,Q^{2})\) varies with \(x\) and \(Q^{2}\), but also give a complete theoretical prediction with the uncertainty for a related observable, which is important for the future experimental examination. The future experiments of electron-ion collisions (EIC) will be an important place to examine our results [54; 1; 55]. Several future EIC facilities have been proposed or under construction, e.g., the Electro-Ion Collider in US (US-EIC) [1], Electron-ion collider in China (EicC) [56] and Jefferson Lab 12 GeV program (JLab) [57], etc. Since these experiments provide a wide kinematic (\(x\) and \(Q^{2}\)) coverage and high-precision measurements, they are expected to significantly improve our understanding of the transport property of cold nuclear matter. In EIC experiments, the nuclear-induced transverse momentum broadening is still the type of observable most directly relevant to the transport coefficient \(\hat{q}\). Therefore, in the second part of this work, we study the transverse momentum broadening of single particle and back-to-back particle pair productions in future electron-ion collisions. The latter case is also equivalent to the nuclear enhancement of the transverse momentum imbalance of the particle pair. Our calculations employ the formalism of higher-twist expansion, i.e., the generalized factorization in perturbative QCD, which has been well established [58; 59; 60; 61; 62] and developed [44; 45; 46; 47; 48; 49], and has shown good applicability on describing the nuclear modification in both cold and hot/dense nuclear matter [63; 64; 65; 20; 32; 47]. In the calculations, we use the \(\hat{q}\) together with its uncertainties extracted from our global analysis as an input. For comparison, we also provide the predictions with a kinematics independent constant \(\hat{q}\). 
By preliminarily estimating the experimental uncertainties, we show the potential of the future EIC measurements to provide powerful constraints on the \(\hat{q}\) in a wide kinematic range. The rest of this paper is organized as follows. In Sec. II, we briefly review the framework of our global analysis of \(\hat{q}\) and the main results in our previous analysis (Sec. II.1), and test the extracted \(\hat{q}\) with a new data set from the ALICE experiment at the Large Hadron Collider (Sec. II.2). We then present our work on Hessian analysis of the uncertainty of \(\hat{q}\) (Sec. II.3). The jet energy dependence of \(\hat{q}\) is also discussed at the end of this section. In Sec. III, we present the study of three types of nuclear-induced transverse momentum broadening/imbalance (Sec. III.1-III.3) for three EIC facilities: US-EIC, EicC, and JLab, and discuss the advantage of future EICs in understanding the kinematic dependence of \(\hat{q}\) (Sec. III.4). We give a summary and discussion in Sec. IV. ## II Global analysis of \(\hat{q}\) for cold nuclear matter In this section we perform an updated global analysis of the jet transport coefficient \(\hat{q}\) for cold nuclear matter (CNM) with the current world data from electron-nucleus and proton-nucleus collisions. Through the analysis, a kinematics dependent \(\hat{q}\) is extracted and its uncertainties are estimated with Hessian matrix, which will be used for making predictions for the EIC observables in the next section. ### Framework and previous results of the analysis First we briefly review the framework of our analysis as well as what have been done in our previous work [32]. In high-energy \(e\)A and \(p\)A collisions, \(\hat{q}\) is a key non-perturbative input in theoretical descriptions of the multiple scattering between the hard probe and the partons inside the nuclear target. Generally, in absence of the multiple scattering effects, the cross section of a hard scattering process in \(p\)A collisions can be written schematically as follows using leading twist collinear factorization formalism \[d\sigma^{S}=f_{q(g)/p}\otimes f_{q(g)/A}\otimes d\hat{\sigma}^{\rm S}\otimes D _{h}\,, \tag{1}\] where the superscript "\(S\)" denotes the single-scattering process, \(d\hat{\sigma}^{\rm S}\) represents the perturbatively calculable partonic cross section, \(f\) and \(D_{h}\) represent the involved parton distribution functions (PDFs) in initial state and fragmentation functions (FFs) in final state, respectively, and the subscript '\(q(g)/A\)' indicates an incoming quark (gluon) from the nucleus. The leading twist collinear factorization formalism underlies the successful global analyses for the PDFs and FFs [66; 67; 68; 52; 69; 70]. In the presence of a large nucleus, the parton multiple scattering effects become important and can be formulated by generalizing the collinear factorization in the higher-twist expansion approach [58; 59; 60; 61; 62]. Specifically, let us consider the transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\), usually defined as the difference of the averaged transverse momentum square of the produced particle between \(e\)A (\(p\)A) and \(ep\) (\(pp\)) collisions, \[\Delta\langle p_{T}^{2}\rangle_{eA/pA}=\langle p_{T}^{2}\rangle_{eA/pA}- \langle p_{T}^{2}\rangle_{ep/pp}\,. \tag{2}\] The leading contribution to the broadening comes from the double scattering effects enhanced by the nuclear size. 
For example, in semi-inclusive deep inelastic scattering (SIDIS), the struck quark that is kicked off by the virtual photon may experience additional interactions with the partons inside the nuclear target, resulting in the \(\Delta\langle p_{T}^{2}\rangle\) of the final-state hadrons. In higher-twist expansion approach, the leading contribution of transverse momentum broadening can be writ ten generically in the form of a ratio as [71] \[\Delta\langle p_{T}^{2}\rangle\approx\frac{d\langle p_{T}^{2}\sigma^{D}\rangle}{d \mathcal{PS}}\bigg{/}\frac{d\sigma^{S}}{d\mathcal{PS}}\,, \tag{3}\] where the denominator \(d\sigma^{S}/d\mathcal{PS}\) is the leading-twist single scattering cross section in the phase space volume \(d\mathcal{PS}\) in \(e\)A or \(p\)A collisions, and the numerator \(d\langle p_{T}^{2}\sigma^{D}\rangle/d\mathcal{PS}\) is the \(p_{T}^{2}\)-weighted double scattering cross section [44; 45; 46; 47; 48] \[\frac{d\langle p_{T}^{2}\sigma^{D}\rangle}{d\mathcal{PS}}\equiv\int\!dp_{T}^{2 }p_{T}^{2}\frac{d\sigma^{D}}{d\mathcal{PS}dp_{T}^{2}}\,. \tag{4}\] which can be written as follows [74; 75] \[d\langle p_{T}^{2}\sigma^{D}\rangle=f_{q(g)/p}\otimes T_{ij}\otimes d\hat{ \sigma}^{\rm D}\otimes D_{h}\,, \tag{5}\] where \(T_{ij}\) represents the nuclear twsit-4 (T4) parton-parton correlation functions, which are universal non-perturbative functions encoding the medium properties characterized by the jet transport coefficient \(\hat{q}\). The \(p_{T}^{2}\)-weighted cross section, thus the broadening, also depends on the color representation of the hard probe, i.e., the quark and gluon jets correspond to the color factors \(C_{F}=(N_{c}^{2}-1)/2N_{c}\) and \(C_{A}=N_{c}\), respectively [44; 45; 46; 47; 48]. With the assumption of a loosely bound large nucleus, one can neglect the momentum and spatial correlations among the nucleons in the nuclear target. Therefore, the twist-4 matrix element can be effectively factorized in terms of leading twist PDFs and \(\hat{q}\). For example, one can approximate the twist-4 quark-gluon correlation function \(T_{qg}\) as [45] \[T_{qg}(x,0,0,\mu^{2})\approx\frac{9R_{A}}{8\pi^{2}\alpha_{s}}f_{q/A}(x,\mu^{2} )\hat{q}(x,\mu^{2}), \tag{6}\] where \(x\) is the momentum fraction carried by the quark that enters into the hard interaction, \(\mu\) is the factorization scale in perturbative QCD, \(R_{A}\) is the radius of the nucleus with mass number \(A\), \(f_{q/A}\) is the parton distribution function of quark \(q\) in the nucleus, and \(\hat{q}(x,\mu^{2})\) is the nuclear geometry averaged jet transport coefficient that we hope to extract. Similarly, for gluon-gluon correlation function \(T_{gg}\), which accesses the process initiated with a gluon in the nucleus, we assume a same form as Eq. (6) in our study, with \(f_{q/A}\) replaced by the gluon distribution \(f_{g/A}\). Through our this paper, the \(\hat{q}\) represents the transport coefficient of a quark jet. With the theoretical framework introduced above, one can perform a global analysis for \(\hat{q}\) similar to what have been done for PDFs and FFs [66; 67; 68; 69; 70; 52]. In particular, since both the twist-4 correlation functions and the leading-twist PDFs are universal non-perturbative quantities depend on \(x\) and \(\mu^{2}\), the \(\hat{q}\) from Eq. (6) naturally involves possible kinematic dependence on momentum fraction and probing scale. Note that in phenomenological studies [72; 73; 62]\(\hat{q}\) was usually assumed to be a constant value due to the unknown kinematic dependence. 
Particle transverse momentum broadening is the type of observable most directly relevant to \(\hat{q}\) for CNM [74; 75]. Our previous global analysis [32] takes into account the current world data on the transverse momentum broadening of single hadron production in SIDIS [6] of \(e\)A collisions, of Drell-Yan di-lepton production in \(p\)A collisions [76; 77], and of heavy quarkonium (\(J/\psi\) and \(\Upsilon\)) production in \(p\)A collisions [78; 79; 8; 80; 7; 7]. These observables involve multiple scatterings undergone by quark or gluon jets in the initial or (and) final states of the hard processes, providing multi-dimensional insight into the transport coefficient as well as a place to examine the theoretical framework. Besides the data on transverse momentum broadening, a set of data on the nuclear modification factor (shadowing effect) of the DIS structure function [81; 82] is also included in our analysis. In the higher-twist framework, such a nuclear suppression can be related to the coherent multiple scattering, which has been calculated by resumming the higher-twist contributions [72], thus, is also sensitive to the transport coefficient. In total, there are 215 data points in the analysis [32] from the experiments at DESY, FNAL, SPS, RHIC and LHC. In particular, different observables or measurements involve different kinematic regions identified with the momentum fraction \(x\) of the nuclear parton and the probing scale \(Q^{2}\), providing possibility to explore the kinematic dependence of \(\hat{q}(x,Q^{2})\). To address the kinematic dependence of \(\hat{q}\) in the global analysis, we adopt the parametrization form [32] \[\hat{q}(x,\mu^{2})=\hat{q}_{0}\,\alpha_{s}(\mu^{2})\,x^{\alpha}(1-x)^{\beta} \left[\ln(\mu^{2}/\mu_{0}^{2})\right]^{\gamma}\,, \tag{7}\] which involves four free parameters, \(\hat{q}_{0},\ \alpha,\ \beta\), and \(\gamma\) to be determined by the experimental data. Such a functional form is primarily motivated with several physical considerations. For example, in the small-\(x\) region, we expect that \(\hat{q}\) depends on the gluon saturation scale, which exhibits a power-law behavior as \(Q_{s}^{2}\propto x^{-1/3}\)[83]. This feature is related to the factor \(x^{\alpha}\) in Eq. (7). At large \(x\), the QCD power corrections could be different [84; 85; 86; 87], and may result in different behavior of \(\hat{q}\), which is considered by including the factor \((1-x)^{\beta}\). Moreover, a logarithmic scale dependence of \(\hat{q}\) is suggested from the radiative corrections [88; 41; 42], thus we use a factor \([\ln(\mu^{2}/\mu_{0}^{2})]^{\gamma}\) in Eq. (7), where the exponent \(\gamma\) is included to account for potential modification at the higher-order in perturbative corrections and/or non-perturbative contributions. Although the QCD scale evolution equation of the twist-4 quark-gluon correlation function has been derived in a series of previous work [44; 45; 46], it is coupled with the gluon-gluon correlator whose evolution is not determined. In Eq. (7), the \(\alpha_{s}(\mu^{2})\) is introduced to offset the \(\alpha_{s}\) in the denominator of the correlation function in Eq. (6), and \(\mu_{0}=1\) GeV is introduced to make the argument of the logarithm dimensionless [32]. On the whole, the parametrization form in Eq. (7) has some generality and is similar to what is usually used in the extraction of other non-perturbative quantities, such as the parton distribution functions [66]. 
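For reference, Eq. (7) is straightforward to evaluate numerically. The sketch below uses a one-loop running coupling with three flavors and \(\Lambda_{\rm QCD}\simeq 0.25\) GeV, which is an assumption of this illustration (the analysis itself uses the CT14 LO setup), and takes the central parameter values quoted later in Eq. (12).

```python
import numpy as np

def alpha_s(mu2, nf=3, lam2=0.25**2):
    """One-loop running coupling; nf and Lambda_QCD are assumptions of this sketch."""
    return 12.0 * np.pi / ((33.0 - 2.0 * nf) * np.log(mu2 / lam2))

def qhat(x, mu2, q0=0.0191, alpha=-0.182, beta=-2.85, gamma=0.264, mu02=1.0):
    """Eq. (7) with the central parameter values quoted later in Eq. (12);
    q0 in GeV^2/fm, mu2 and mu0^2 in GeV^2."""
    return q0 * alpha_s(mu2) * x**alpha * (1.0 - x)**beta * np.log(mu2 / mu02)**gamma

# Example: qhat at x = 0.01 and Q^2 = 10 GeV^2
print(qhat(0.01, 10.0))
```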
We note that such a parametrization form and several similar forms of \(\hat{q}\) have been recently applied by other groups, and are shown to work well in their studies [35; 37]. In the analysis [32], the theoretical calculation of the transverse momentum broadening is performed at twist-4 level in QCD power expansion and at leading order (LO) in perturbative expansion with \(\alpha_{s}\). A complete NLO calculation is not yet available. In the calculation, we use the CT14 LO parton distribution functions with 3 active quark flavors [89], and the DSS fragmentation functions [67]. The heavy quarkonium production is calculated with the color evaporation model [47]. Since the transverse momentum broadening is expressed as a ratio in Eq. (3), it is to some extent insensitive to the non-perturbative inputs like the PDFs, FFs, and quarkonium production model. More details of the calculation can be seen in Ref. [32]. In addition, the possible hadronization of the in-medium jet may weaken the multiple scattering effects [90], which has not been considered in the current framework. The global analysis of \(\hat{q}\) starts with finding the optimal \(\hat{q}\) by minimizing the \(\chi^{2}\) as a function of the free parameters \(\{a_{j}\}\) defined as [51; 52] \[\chi^{2}(\{a_{j}\})=\sum_{i}\frac{\left[\mathcal{D}_{i}-\mathcal{T}_{i}(\{a_{ j}\})\right]^{2}}{\sigma_{i}^{2}}, \tag{8}\] where \(\mathcal{D}_{i}\) is the value of the \(i\)-th experimental data point, \(\mathcal{T}_{i}(\{a_{j}\})\) is the corresponding theoretical prediction depending on the values of the free parameters \(\{a_{j}\}=\{\hat{q}_{0},\ \alpha,\ \beta,\ \gamma\}\) in the \(\hat{q}\) parametrization in Eq. (7), and \(\sigma_{i}^{2}\) is the statistical and systematic experimental uncertainties summed in quadrature. The influence of the possible correlated experimental uncertainties has not been taken into account in our analysis [52; 91]. The procedure of minimizing the \(\chi^{2}\) in the \(\{a_{j}\}\) space is performed by utilizing the MINUIT package [92]. An optimal \(\hat{q}(x,Q^{2})\) within the parametrization form in Eq. (7) is found at a minimum total \(\chi^{2}=260\) (\(\chi^{2}/\)NDP\(=1.21\), where NDP\(=215\) is the number of data points [32]). The optimal \(\hat{q}(x,Q^{2})\) exhibits an obvious \(x\) dependence, especially in small- and large-\(x\) regions, as well as a mild \(Q^{2}\) dependence [32] (also seen in Sec. II.3 of this manuscript). The theoretical results with this \(\hat{q}(x,Q^{2})\) show an overall good agreement with the experimental data, indicating a universal kinematic dependence of \(\hat{q}(x,Q^{2})\) in cold nuclear matter. To further clarify this kinematic dependence, we also performed the fitting by assuming \(\hat{q}\) is a constant quantity as \(\hat{q}=\hat{q}_{0}\), and we found a minimum total \(\chi^{2}=388\) (\(\chi^{2}/\)NDP\(=1.8\)), which is apparently larger than that with the kinematic dependence. Especially, the calculations with the constant \(\hat{q}\) can not give a good description of the data in small- and large-\(x\) regions. For example, the \(\chi^{2}\) for the \(J/\psi\) broadening at the LHC is 87.3 (\(\chi^{2}/\)NDP\(=7.3\) with NDP\(=12\)), which is far from reasonable. In contrast, the result for this part is significantly improved by using the kinematics dependent \(\hat{q}\), reflected by the \(\chi^{2}=4.8\) (\(\chi^{2}/\)NDP\(=0.4\)) [32]. 
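Schematically, the minimization of Eq. (8) can be set up as below; here a generic SciPy minimizer and a toy one-parameter theory stand in for the MINUIT package and the full higher-twist calculation used in the analysis.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, data, sigma, theory):
    """Eq. (8): sum_i [D_i - T_i(params)]^2 / sigma_i^2."""
    return np.sum(((data - theory(params)) / sigma) ** 2)

# Toy stand-in: in the real analysis `theory` wraps the broadening calculation for
# all data points and the parameters are (qhat_0, alpha, beta, gamma) of Eq. (7).
toy_theory = lambda p: p[0] * np.array([1.0, 2.0, 3.0])
data, sigma = np.array([0.011, 0.021, 0.029]), np.array([0.002, 0.002, 0.002])
fit = minimize(chi2, x0=[0.02], args=(data, sigma, toy_theory), method="Nelder-Mead")
print(fit.x, chi2(fit.x, data, sigma, toy_theory))
```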
The results of the analysis should be examined in the future, when more experimental data with a wider kinematic coverage and a higher precision, e.g., from EIC, are available. To this end, one needs the uncertainty of the \(\hat{q}(x,Q^{2})\) under the constraints of the current data, which allows one to make a complete theoretical prediction. In the previous work [32], a Lagrange multiplier method [51; 91] was employed to evaluate the uncertainties of part of the calculated observables. However, the uncertainties of \(\hat{q}(x,Q^{2})\) (varying with \(x\) and \(Q^{2}\)) can not be easily obtained with that method, which makes the analysis less predictive and thus motivates our reanalysis with the Hessian matrix method (Sec. II.3). Before presenting that, we show a test for the extracted \(\hat{q}\) with a new data set in the following subsection. ### A test for \(\hat{q}(x,Q^{2})\) with new \(J/\psi\) data We note that the ALICE collaboration published in 2021 a new data set on the transverse momentum broadening of \(J/\psi\) production in \(p\)-Pb collisions at \(\sqrt{s_{NN}}=8.16\) TeV at the LHC [53], which provides an opportunity to test the extracted \(\hat{q}\) in [32]. In Fig. 1, we show both the theoretical results with the kinematics dependent \(\hat{q}(x,Q^{2})\) and the constant \(\hat{q}\) extracted from the previous analysis, confronted with the new data. We find the calculations with the \(\hat{q}(x,Q^{2})\) give a visible rapidity dependence of the broadening and a reasonable description of the data in both backward and forward rapidity regions. However, the results with the constant \(\hat{q}\) obviously underestimate the broadening in the forward (small \(x\)) region. A similar rapidity dependence can also be observed in the earlier ALICE \(J/\psi\) data measured at \(\sqrt{s_{NN}}=5.02\) TeV (2015) [8], which have been taken into account in our previous analysis and provided important information on the \(x\) dependence of \(\hat{q}\). Figure 1: Transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\) in \(J/\psi\) production in backward [panel (a)] and forward [panel (b)] regions in pA collisions at the LHC as functions of centrality. Solid curve represents results with \(\hat{q}(x,Q^{2})\) from global analysis [32] and shaded area corresponds to uncertainty. Dashed line and dotted boundaries represent results with constant \(\hat{q}\) with uncertainty. Circles are new ALICE data [53] that are not included in previous analysis of \(\hat{q}\) [32]. In this work, we will also include the new data in the Hessian analysis to provide more constraints on \(\hat{q}\). ### Uncertainty of \(\hat{q}(x,Q^{2})\) from Hessian Matrix In order to estimate the uncertainties of the kinematics dependent \(\hat{q}(x,Q^{2})\) under the constraints of the current data, we perform an analysis with a Hessian matrix method.
The basic assumption of the Hessian analysis is that the \(\chi^{2}\) can be approximately expressed in a quadratic form of the free parameters \(\{a_{i}\}\) in the neighborhood of the minimum [51; 52] as \[\chi^{2}=\chi_{0}^{2}+\sum_{i,j}H_{ij}y_{i}y_{j}\,, \tag{9}\] where \(\chi_{0}^{2}\equiv\chi^{2}(\{a_{i}^{0}\})\) is the global minimum of the \(\chi^{2}\) at the optimal parameter values \(\{a_{i}\}=\{a_{i}^{0}\}\), \(y_{i}=a_{i}-a_{i}^{0}\) is the displacement of \(a_{i}\) from its optimal value \(a_{i}^{0}\), and \(H_{ij}\) are the elements of Hessian matrix defined as \[H_{ij}=\frac{1}{2}\left(\frac{\partial^{2}\chi^{2}}{\partial y_{i}\partial y _{j}}\right)_{a_{i}=a_{i}^{0}}. \tag{10}\] Usually there are interplays among different variables \(y_{i}\) (or \(a_{i}\)) in the \(\chi^{2}\), and the off-diagonal Hessian matrix elements could be non-zero. This makes the uncertainty estimation, corresponding to a certain tolerance \(\Delta\chi^{2}\equiv\chi^{2}-\chi_{0}^{2}\), not that straightforward. However, one can disentangle the parameters by defining a new basis \(\{z_{i}\}\) of the parameter space, in whose representation the Hessian matrix is diagonal. With the new set of parameters \(\{z_{i}\}\), the \(\Delta\chi^{2}\) can be written in a simple form as \[\Delta\chi^{2}=\sum_{i}z_{i}^{2}, \tag{11}\] which means that the contours of the \(\chi^{2}\) are spheres in the new basis. Using this new basis, one can generate the corresponding uncertainty sets of \(\hat{q}\), which can be used to estimate both the uncertainties of \(\hat{q}\) and the related theoretical predictions. More details of the Hessian analysis can be found in Appendix A. The Hessian analysis in this work is performed by using the MINUIT package combined with the ITERATE program [51]. The theoretical framework for calculating various observables is the same as in our previous analysis introduced in Sec II.1. After the new ALICE data on \(J/\psi\) production (Sec. II.2) included (now 227 data points in total), a global minimum \(\chi_{0}^{2}=275\) is reached through the analysis. We first show in Fig. 2 the global \(\chi^{2}-\chi_{0}^{2}\) as functions of each individual original parameters \(\{a_{i}\}\) in the vicinity of the minimum. We can see that the experimental data indeed have sensitivities to all the parameters. The non-zero optimal values of the parameters \(\alpha\), \(\beta\) and \(\gamma\) suggest the kinematic dependence of \(\hat{q}\) on \(x\) and \(Q^{2}\), to be further consolidated with the determined uncertainty. Fig. 3 shows the values of \(\chi^{2}-\chi_{0}^{2}\) as functions of the new parameters \(\{z_{i}\}\) defined in Eq. (10). They are found to have a very good agreement with the quadratic form \(\Delta\chi^{2}=z_{i}^{2}\), indicating the good performance of Hessian analysis. With this Hessian analysis, we obtain the optimal values of the parameters \(\{a_{i}\}\) together with their uncertainties corresponding to 90% confidence level (C.L.): \[\hat{q_{0}}=0.0191\pm 0.0061\text{ GeV}^{2}/\text{fm},\ \ \alpha=-0.182\pm 0.050,\] \[\beta=-2.85\pm 1.87,\quad\gamma=0.264\pm 0.169\,. \tag{12}\] We find that with the uncertainties, the global data favor negative \(\alpha\) and \(\beta\), and a positive \(\gamma\) in the parametrization of \(\hat{q}\). The optimal values of the parameters are in good agreement with the results of our previous analysis [32]. In Fig. 
4, we show the extracted \(\hat{q}(x,Q^{2})\) versus the momentum fraction \(x\) for \(Q^{2}=\)1.2, 10, and 100 GeV\({}^{2}\), including the optimal values (\(S_{0}\)), uncertainty bands, and uncertainty sets \(S_{\pm k}\) of \(\hat{q}(x,Q^{2})\) [defined by Eq. (10)], which can be used to estimate the uncertainty for theoretical predictions [see Eq. (11)]. For the optimal values, we can see that the \(x\) dependence is noticeable in small- and large-\(x\) regions, and the \(Q^{2}\) dependence is relatively mild. The enhancements of \(\hat{q}(x,Q^{2})\) in small- and large-\(x\) regions are related to the negative parameters \(\alpha\) and \(\beta\), respectively. The \(\alpha\) value in Eq. (12) is qualitatively consistent with the growth rate of the gluon density expected in saturation physics [93], and the negative \(\beta\) may indicate an enhancement of the nuclear power correction at large \(x\)[84; 85; 86; 87]. Since most of the current data are located in the intermediate \(x\) and \(Q^{2}\) regions, the uncertainties of \(\hat{q}(x,Q^{2})\) become larger at small and large values of \(x\) or \(Q^{2}\), due to the less experimental constraints. Especially, the uncertainties are dramatically large at \(x\gtrsim 0.4\), where no data exist for this kinematic region. For comparison, we also show in Fig. 4 the \(\hat{q}\) extracted by assuming it is a kinematic independent constant quantity (\(\hat{q}=\hat{q}_{0}=0.0150^{+0.0023}_{-0.0025}\) GeV\({}^{2}\)/fm). To illustrate the impact of the added new \(J/\psi\) data on the extracted \(\hat{q}(x,Q^{2})\), we compare in Fig. 5 the \(\hat{q}(x,Q^{2})\) for \(Q^{2}=10\) GeV\({}^{2}\) extracted with and without the new \(J/\psi\) data. A good agreement between their optimal values is found. What is notable is that, with the additional constraints from the new data, the uncertainties at small values of \(x\) are reduced to some extent and a more evident \(x\) dependence is observed. More detailed illustrations for the impact of the new \(J/\psi\) data on \(\hat{q}(x,Q^{2})\) at different \(Q^{2}\) and on the predictions for different observables can be seen in Appendix B. In addition, for theoretical predictions, we have compared the uncertainties from the Hessian analysis with those from our previous analysis via the Lagrange multiplier method, and found a good agreement between them [94]. Figure 4: \(\hat{q}(x,Q^{2})\) extracted from global analysis, shown as a function of momentum fraction \(x\) of initial-state nuclear parton at \(Q^{2}=\)1.2, 10, and 100 GeV\({}^{2}\). Green solid curve represents optimal values (\(S_{0}\)), light-green band represents corresponding uncertainty, and dotted curves show uncertainty set \(\{S_{\pm k}\}\) of \(\hat{q}(x,Q^{2})\). For reference, kinematics independent constant \(\hat{q}\) and uncertainty extracted from data are shown as red dashed line and pink band, respectively. Both uncertainties of \(\hat{q}(x,Q^{2})\) and constant \(\hat{q}\) correspond to 90% C.L.. Figure 5: \(\hat{q}(x,Q^{2})\) at \(Q^{2}=10\) GeV\({}^{2}\) and uncertainties extracted from global analysis with and without ALICE data (2021) on transverse momentum broadening of \(J/\psi\) production. Results of new analysis are the same as in Fig. 4. Results without \(J/\psi\) (21’) data are shown with a dashed curve with dotted boundaries. 
Figure 6: Transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\), in Drell-Yan process in \(p\)A collisions versus nuclear mass number \(A\) [panel (a)], in SIDIS versus Bjorken \(x_{B}\) [panel (b)], and in \(J/\psi\) production in backward [panel (c)] and forward [panel (d)] regions in \(p\)A collisions at the LHC (5TeV, 2015) as functions of \(N_{\rm coll}\). Green solid curve represents results with optimal \(\hat{q}(x,Q^{2})\) and light-green shaded area corresponds to uncertainty. Red dashed line and dotted band represent results with constant \(\hat{q}\) with uncertainty. Averaged momentum fraction \(\langle x\rangle\) and probing scale \(\langle Q^{2}\rangle\) are shown for reference. ### Kinematic dependence of \(\hat{q}\) The kinematic dependence of \(\hat{q}\) is of particular interest since it is related to the detailed partonic structures of the nuclear matter, similar to the \(x\) and \(Q^{2}\) dependence of a parton distribution function. Global analysis offers an indispensable data-driven understanding of such issues. To demonstrate how the experimental data determine the kinematic dependence of the \(\hat{q}\) in our analysis, we show in Fig. 6 the results calculated with the extracted \(\hat{q}(x,Q^{2})\) and constant \(\hat{q}=\hat{q}_{0}\), for four representative observables in the analysis: the transverse momentum broadening in Drell-Yan process (nuclear mass number dependence), in SIDIS (Bjorken \(x\) dependence), and in \(J/\psi\) production [dependence on the collision-centrality related \(N_{\rm coll}\) in backward and forward rapidity at the LHC]. For the Drell-Yan [panel (a)] and the backward \(J/\psi\) production [panel (c)], since the involved momentum fraction \(x\) of the nuclear parton is in the intermediate region, the results with the \(\hat{q}(x,Q^{2})\) and the constant \(\hat{q}\) are close to each other, while a slightly better agreement with the Drell-Yan data is given by that with the \(\hat{q}(x,Q^{2})\). However, the theoretical predictions are significantly improved with the kinematics dependent \(\hat{q}(x,Q^{2})\) in the SIDIS [panel (b)] and forward \(J/\psi\) production [panel (d)], which correspond to the regions of large and small \(x\), respectively. In particular, the calculation with the constant \(\hat{q}\) from the global analysis completely fails to describe the data on the forward \(J/\psi\) production at the LHC. Actually, the enhancements of the \(\hat{q}(x,Q^{2})\) in small and large \(x\) regions observed in Fig. 4 stem largely from the \(J/\psi\) production in forward region at the LHC and the Bjorken-\(x\) dependence of SIDIS in the analysis, respectively, which should be examined through future experiments that involve small and large \(x\) regions. 
On the other hand, it is also noteworthy that, for the observables that are taken into account in our analysis, i.e., the transverse momentum broadening in SIDIS, Drell-Yan process and heavy-quarkonium production, the energy of the hard probe \(E_{\rm jet}\) in the nucleus rest frame can be commonly expressed with the Lorentz invariant variables \(x\) and \(Q^{2}\) through the relationship: \[E_{\rm jet}=\frac{Q^{2}}{2m_{p}x}\,, \tag{13}\] where the hard scale \(Q^{2}\) is the virtuality (or squared invariant mass) of the virtual photon in SIDIS and Drell-Yan process, and is the squared invariant mass of the heavy-quark pair that form the quarkonium in the color evaporation model, \(x\) is the momentum fraction of the initial-state nuclear parton in these processes, and \(m_{p}\) is the nucleon mass. With Eq. (13), we can convert the extracted \(\hat{q}(x,Q^{2})\) into the form \(\hat{q}(E_{\rm jet},Q^{2})\), which can be regarded as the jet energy dependence of \(\hat{q}\) in cold nuclear matter. Since the jet energy dependence is usually discussed in the study of jet quenching in quark-gluon plasma in relativistic heavy-ion collisions [21; 27; 40], to extract the \(\hat{q}(E_{\rm jet},Q^{2})\) in cold nuclear matter will provide a reference for future comparative study. In Fig. 7 we plot the \(\hat{q}(E_{\rm jet},Q^{2})\) with uncertainties. We find that for \(Q^{2}=2-10\) GeV\({}^{2}\) the \(\hat{q}\) increases with \(E_{\rm jet}\) in a wide range of jet energy corresponding to the small \(x\) region. However, for \(Q^{2}=100\) GeV\({}^{2}\), the plotted region corresponds to large \(x\) values, and the \(\hat{q}\) decreases with \(E_{\rm jet}\) with a large uncertainty. Some similar results in the study of jet quenching can be found in Refs. [21; 27; 40]. Besides, the \(Q^{2}\) dependence in \(\hat{q}(E_{\rm jet},Q^{2})\) is more pronounced than that in \(\hat{q}(x,Q^{2})\), because the corresponding \(x\) will vary with \(Q^{2}\) for a certain \(E_{\rm jet}\). ## III Nuclear induced transverse momentum broadening/imbalance in electron-ion collisions The future EIC facilities [56; 1; 57] provide great opportunities to deepen our understanding of the jet transport property of the cold nuclear medium. In EIC experiments, the nuclear-medium induced transverse momentum broadening will continue to be the observable most directly related to the transport coefficient \(\hat{q}\). Since the initial-state projectile is an electron in EIC, the broadening is induced by the final-state multiple scattering between the outgoing hard probe and the nucleus. In this section, we will study the transverse momentum broadening in both single- and pairwise-particle productions at the EIC. For the latter case, the nuclear broadening of the particle-pair is equivalent to the nuclear enhancement of the particle-pair transverse momentum imbalance. Concretely, using the \(\hat{q}\) extracted in the previous section, we will calculate the transverse momentum broadening of single-hadron production and the enhancement of transverse momentum imbalance of di-hadron and heavy-meson pair (\(D\bar{D}\)) productions. These observables will be studied in the kinematic regions of three proposed EIC facilities: US-EIC, EicC, and JLab (12GeV) [56; 57; 1]. Since we focus on the EIC, it will be useful to give the typical Lorentz-invariant DIS kinematic variables \[x_{B}=\frac{Q^{2}}{2p_{N}\cdot q_{\gamma}}\,,\ \ y=\frac{q_{\gamma}\cdot p_{N}}{k_{ e}\cdot p_{N}}\,,\ \ Q^{2}=-q_{\gamma}^{2}\,. 
\tag{14}\] Here \(x_{B}\) is the Bjorken variable, \(y\) is the inelasticity of the scattering, and \(Q^{2}\) is the virtuality of the exchanged photon \(\gamma^{*}\). In their definitions, \(k_{e}\), \(p_{N}\) and \(q_{\gamma}\) are the 4-momenta of the incoming electron, the nucleon and the virtual photon, respectively. For the final-sate fragmentation process, the hadron momentum fraction \(z_{h}\) is introduced as \[z_{h}=\frac{p_{N}\cdot p_{h}}{p_{N}\cdot q_{\gamma}}, \tag{15}\] where \(p_{h}\) is the 4-momentum of the final-state hadron. In addition, the squared invariant mass of the photon-nucleon (\(\gamma^{*}\)-N) system is \(W^{2}=(q_{\gamma}+p_{N})^{2}\). In our calculations, the center-of-mass energies (\(\sqrt{s}\)) of the electron-nucleon system for the three EIC facilities are taken to be 90 GeV (US-EIC), 10.6 GeV (EicC), and 4.8 GeV (JLab), respectively [56; 57; 1; 95]. The kinematic range considered in our calculations is: \(Q^{2}>1\) GeV\({}^{2}\), \(0.01<y<0.95\), \(W^{2}>10\) GeV\({}^{2}\) (\(>4\) GeV\({}^{2}\) for JLab), and \(0.3<z_{h}<0.8\)[95; 96]. With this kinematic restriction, we plot in Fig. 8 the ranges of Bjorken \(x\) and \(Q^{2}\) covered by the three facilities. For comparison, some sampled values of the \(x\) and \(Q^{2}\) involved in our global analysis of the current data are also shown. We can see that the future EIC facilities have the potential to allow a high-coverage scan on the kinematic dependence of \(\hat{q}\), especially for small- and large-\(x\) regions where the current measurements rarely access. Besides the wide kinematic coverage, the high precision measurements at future EIC are expected to provide more powerful constraints on the \(\hat{q}\). In this study, to preliminarily estimate the uncertainty of the measurement in future EIC, we consider an integrated luminosity as \(\mathcal{L}=5\) fb\({}^{-1}\)[56; 57; 95], and evaluate the relative statistical uncertainty as \(\delta_{st}=1/\sqrt{\sigma\mathcal{L}}\), where \(\sigma\) is the cross section of the considered process. Due to the lack of the information on the systematic uncertainty, we simply assume it is on the same order of the statistical uncertainty, and include an additional factor \(\sqrt{2}\) to estimate the total uncertainty. Now we present the study for three types of nuclear induced broadening/imbalance as follows. ### Single hadron \(p_{T}\) broadening in SIDIS The transverse momentum broadening of the single hadron production in SIDIS is a type of observable that has already played an important role in our current analysis of \(\hat{q}\), and will be still important in future EIC. At lowest order in QCD, the single hadron comes from the fragmentation of the nuclear struck quark, which can rescatter with the nuclear medium when traversing it. The leading-twist cross section for the single scattering process can be written as \[\frac{d\sigma^{S}}{dx_{B}dQ^{2}dz_{h}}\!\!= \frac{2\pi\alpha_{\rm em}^{2}}{Q^{4}}\left[1\!+\!(1-y)^{2}\right]\] \[\times\sum_{q}e_{q}^{2}f_{q/A}(x_{B},\mu^{2})D_{h/q}(z_{h},\mu^{2 })\,. \tag{16}\] Figure 8: Kinematic (\(x_{B}\) and \(Q^{2}\)) regions covered by three EIC facilities shown as shaded area. Boundaries for US-EIC, EicC, and JLab are plotted with dashed, dotted-dashed, and dotted curves, respectively. Discrete circles are sampled momentum fraction \(x\) and probing scale \(Q^{2}\) from numerical calculations in current analysis of \(\hat{q}\) (Density of the circles doesn’t represent the density of data points). 
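As an aside, the experimental-uncertainty projection described above is a simple counting estimate; a minimal sketch, assuming the per-bin cross section is supplied externally:

```python
import numpy as np

def projected_relative_uncertainty(sigma_fb, lumi_fb=5.0):
    """Rough projection of the total relative uncertainty per bin.

    sigma_fb : cross section of the process in the bin [fb]
    lumi_fb  : integrated luminosity [fb^-1]; 5 fb^-1 is assumed in the text

    Statistical part is 1/sqrt(N) with N = sigma * L expected events;
    systematics are assumed to be of comparable size, hence the sqrt(2).
    """
    delta_stat = 1.0 / np.sqrt(sigma_fb * lumi_fb)
    return np.sqrt(2.0) * delta_stat
```

For instance, a bin with a 10 pb (\(10^{4}\) fb) cross section would carry a projected total relative uncertainty of about 0.6%.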
The \(p_{T}\) broadening in Eq. (3), at given values of \(x_{B}\), \(Q^{2}\) and \(z_{h}\), can be expressed as [44; 45] \[\Delta\langle p_{T}^{2}\rangle = \left(\!\frac{8\pi^{2}\alpha_{s}z_{h}^{2}C_{F}}{N_{c}^{2}-1} \right)\!\frac{\sum_{q}e_{q}^{2}T_{qg}(x_{B},\!0,\!0,\!\mu^{2})D_{h\hat{q}}(z_{ h},\mu^{2})}{\sum_{q}e_{q}^{2}f_{q/A}(x_{B},\mu^{2})D_{h\hat{q}}(z_{h},\mu^{2})}\,,\] where the color factor \(C_{F}\) corresponds to the transport of a quark jet. In our calculation, the factorization scale in Eq. (17) is taken to be \(\mu^{2}=Q^{2}\), and \(\Delta\langle p_{T}^{2}\rangle\) is obtained by averaging over a studied kinematic bin. The theoretical inputs, i.e., the PDFs and FFs, are the same as in our global analysis. In Fig. 9, we plot the results for the \(p_{T}\) broadening of pion production in SIDIS in electron-lead (\(e\)-Pb) scattering at three EIC facilities, at \(Q^{2}\!=\!2\) GeV\({}^{2}\) (left panel) and \(10\) GeV\({}^{2}\) (right panel), respectively. The solid curve with shaded band represents the predictions with the kinematic dependent \(\hat{q}(x,Q^{2})\) along with the uncertainties extracted in section II.3. We find that the \(\Delta\langle p_{T}^{2}\rangle\) as a function of \(x_{B}\) clearly reflects the \(x\) dependence and the uncertainties of \(\hat{q}(x,Q^{2})\), e.g., the growths of both the broadening and the uncertainties in small- and large-\(x\) regions. For comparison, we also show the predictions with the kinematics independent constant \(\hat{q}\) extracted from the current data as the dashed curve and dotted uncertainty band. As expected, the differences between the two theoretical predictions appear mainly in small- and large-\(x\) regions. In the bottom of each panel, we show the estimated experimental uncertainties as a reference, which are rather small compared to the theoretical uncertainties. Therefore, we expect the future EIC experiments should be able to distinguish the two theoretical predictions and provide powerful constraints on the \(\hat{q}\). Fig. 10 shows the \(\Delta\langle p_{T}^{2}\rangle\) as a function of the probing scale \(Q^{2}\) in an intermediate-\(x_{B}\) region where the difference between the extracted \(\hat{q}(x,Q^{2})\) and constant \(\hat{q}\) is small. Since the scale dependence of \(\hat{q}(x,Q^{2})\) is mild, we only see small differences between two theoretical predictions in the studied \(Q^{2}\) regions. The broadening with the constant \(\hat{q}\) slightly decreases with increasing \(Q^{2}\), which is due to the decreasing averaged \(z_{h}^{2}\) [a factor in Eq. (17)] as a result of the scale evolution of the fragmentation function. Similar as in Fig. 9, the estimated experimental uncertainties are small for the plotted \(Q^{2}\) region. ### Nuclear enhancement of di-hadron transverse momentum imbalance Now we focus on the nuclear-medium enhanced transverse momentum imbalance of the back-to-back particles production in future EIC. The transverse momentum imbalance of the particle pair given by \(\vec{p}_{T}=\vec{p}_{1T}+\vec{p}_{2T}\) is equal to the total transverse momentum of the two particles [49]. Thus, the enhancement of the imbalance is also Figure 10: Similar as Fig. 9, but for transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\) of single pion production in SIDIS as a function of \(Q^{2}\) within \(0.07<x_{B}<0.09\). Results are calculated for \(1<Q^{2}<100\) GeV\({}^{2}\). 
Figure 9: Transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\) of single pion production in SIDIS as a function of Bjorken \(x_{B}\) at \(Q^{2}=\)2 and 10 GeV\({}^{2}\), for three EIC facilities. Solid curve with shaded area represents results with \(\hat{q}(x,Q^{2})\) with uncertainties. Dashed curve with dotted band shows results with constant \(\hat{q}\) with uncertainties. Circles with vertical bars represent estimated experimental uncertainties. In calculations, we have taken \(Q^{2}\in[1.5,2.5]\) GeV\({}^{2}\) (left panel) and \(Q^{2}\in[9,11]\) GeV\({}^{2}\) (right panel), and set \(x_{B}<0.6\). On top of each panel, we mark the kinematic (\(x_{B}\)) ranges covered by three facilities, and overlaps among them can be seen. The theoretical results for two facilities in their overlap region is generally similar, since they depend on \(x_{B}\) at \(Q^{2}\) to a large extent. the broadening of the total transverse momentum. In this subsection, we study the imbalance of di-hadron (pion pair) production. The back-to-back hadron pair comes from the fragmentation of di-jet, which can be produced through the processes \(\gamma^{*}q\to qg\) and \(\gamma^{*}g\to q\bar{q}\) at LO in \(\alpha_{s}\). Following an earlier work [49], the differential cross section of di-hadron production can be written in the center of mass frame of the virtual-photon-nucleon (\(\gamma^{*}\)-N) system as \[\frac{d\sigma^{S}}{dy_{1}dy_{2}dp_{1T}dp_{2T}} = \frac{2\pi\alpha_{s}\alpha_{\rm em}}{(W^{2}+Q^{2})^{2}}{\sum_{b,c,d}}D_{h_{1}/c}(z_{1})D_{h_{2}/d}(z_{2}) \tag{18}\] \[\times\frac{f_{b/A}(x)}{x}H^{U}_{\gamma^{*}b\to cd}\,,\] where we have suppressed the scale \(\mu^{2}\)-dependence in the PDFs and FFs, \(y_{1(2)}\), \(p_{1T(2T)}\) and \(z_{1(2)}\) are the rapidity, transverse momentum and momentum fraction of the hadron \(h_{1(2)}\), and \(H^{U}_{\gamma^{*}b\to cd}\) represents the perturbatively calculable hard function of the partonic subprocess \(\gamma^{*}b\to cd\)[49]. The nuclear enhancement of the transverse momentum imbalance of the di-hadron, induced by the multiple scattering undergone by the two outgoing partons \(c\) and \(d\), is only sensitive to the total color of the two-parton composite state, which is equal to the color of the initial-state nuclear parton \(b\). Accordingly, the nuclear enhancement at given values of \(y_{1(2)}\) and \(p_{1T(2T)}\) is expressed as [49] \[\Delta\langle p_{T}^{2}\rangle=\left(\frac{8\pi^{2}\alpha_{s}}{N_{c}^{2}-1} \right)\frac{\sum_{b,c,d}D_{h_{1}/c}(z_{1},\mu^{2})D_{h_{2}/d}(z_{2},\mu^{2}) \frac{1}{x}T_{bg}(x,0,0,\mu^{2})H^{F}_{\gamma^{*}b\to cd}}{\sum_{b,c,d}D_{h_ {1}/c}(z_{1},\mu^{2})D_{h_{2}/d}(z_{2},\mu^{2})\frac{1}{x}f_{b/A}(x,\mu^{2})H ^{U}_{\gamma^{*}b\to cd}}\,, \tag{19}\] with the hard function \(H^{F}_{\gamma^{*}b\to cd}\) written as \[H^{F}_{\gamma^{*}b\to cd}=\begin{cases}C_{F}H^{U}_{\gamma^{*}b \to cd}&b={\rm quark}\\ C_{A}H^{U}_{\gamma^{*}b\to cd}&b={\rm gluon}.\end{cases} \tag{20}\] Here the color factor \(C_{F}\) (\(C_{A}\)) corresponds to the process initiated with a nuclear quark (gluon). This is more complicated than the case in SIDIS, where only the transport of a quark jet is involved at LO in \(\alpha_{s}\). In our calculation, the renormalization and factorization scales are taken to be the averaged transverse momentum of di-hadron, i.e., \(\mu=(p_{1T}+p_{2T})/2\). We also employ kinematic cuts as \(1<p_{1T(2T)}<4\) GeV and \(0.1<y_{1(2)}<2.0\). In Fig. 
11, we plot the results for the nuclear enhancement of the di-hadron (pion pair) imbalance versus Bjorken \(x_{B}\), at \(Q^{2}=2\) GeV\({}^{2}\) (left panel) and 10 GeV\({}^{2}\) (right panel), respectively. The predictions with both the kinematics dependent \(\hat{q}(x,Q^{2})\) and constant \(\hat{q}\), as well as the estimated experimental uncertainties are shown. Note that the momentum fraction \(x\) carried by the initial-state nuclear parton \(b\) can be expressed as \[x=\frac{Q^{2}+M_{JJ}^{2}}{2p_{N}\cdot q_{\gamma}}=x_{B}\left(1+\frac{M_{JJ}^{ 2}}{Q^{2}}\right), \tag{21}\] where \(M_{JJ}\) is the invariant mass of the outgoing two partons (di-jet) \(c\) and \(d\). Therefore, the \(x\) that enters \(\hat{q}(x,Q^{2})\) is larger than the Bjorken variable \(x_{B}\). To show this difference, we mark in Fig. 11 the averaged momentum fraction \(\langle x\rangle\) for the corresponding \(x_{B}\) as green square. For a same region of Bjorken \(x_{B}\), the di-hadron production probes the \(\hat{q}(x,Q^{2})\) at larger values of \(x\) compared to the single-hadron production. Since there is a mixture of the processes with color factors \(C_{A}\) and \(C_{F}\), the di-hadron production is expected to suffer stronger multiple scattering in nuclear medium than the single-hadron production. At the same time, we can observe the broadening with a constant \(\hat{q}\) become weaker for a greater value of \(x_{B}\) (or \(x\)) due to the decreasing contributions from the gluon initiated processes. Since our calculation is made in the \(\gamma^{*}\)-N system, we have simply estimated the experimental uncertainties with a same luminosity as in \(e\)-A system to provide a reference. The measurement of di-hadron imbalance in future EIC is expected to provide valuable constraints on the behavior of \(\hat{q}(x,Q^{2})\) in large-\(x\) region, where the large uncertainties from the current analysis make the predictions less reliable. ### Nuclear enhancement of transverse momentum imbalance of heavy meson pair Similar to the di-hadron imbalance, there should be nuclear enhancement of the transverse momentum imbalance in heavy meson pair production. In this work, we study that for the \(D\bar{D}\) pair from the fragmentation of the produced back-to-back heavy quark pair \(Q\bar{Q}\) (\(c\bar{c}\)). At lowest order in \(\alpha_{s}\), the \(c\bar{c}\) di-jet is produced from the photon-gluon scattering \(\gamma^{*}g\to c\bar{c}\). The differential cross section for \(\gamma^{*}+A\to D(p_{1})+\bar{D}(p_{2})+X\) in the center of mass frame of the \(\gamma^{*}\)-N system can be expressed in a similar form of the di-hadron production in Eq. (18) as [49] \[\frac{d\sigma^{S}}{dy_{1}dy_{2}dp_{1T}dp_{2T}} = \frac{2\pi\alpha_{s}\alpha_{\rm em}}{(W^{2}+Q^{2})^{2}}D_{D\bar{ Q}}(z_{1})D_{\bar{D}\bar{Q}}(z_{2}) \tag{22}\] \[\times\frac{f_{g/A}(x)}{x}H^{U}_{\gamma^{*}g\to Q\bar{Q}}\,.\] The corresponding nuclear enhancement at given values of \(y_{1(2)}\) and \(p_{1T(2T)}\) is \[\Delta\langle p_{T}^{2}\rangle=\left(\frac{8\pi^{2}\alpha_{s}C_{A}}{N_{c}^{2}-1} \right)\frac{T_{gg}(x,0,0,\mu^{2})}{f_{g/A}(x,\mu^{2})}\,, \tag{23}\] This simple form is related to the fact that the \(c\bar{c}\) di-jet is only initiated with a nuclear gluon, corresponding to the color factor \(C_{A}\). 
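Before turning to the numerical inputs, it is useful to make the kinematic shift of Eq. (21) explicit, since it applies to both the di-hadron and the heavy-meson pair; a minimal sketch, with the pair invariant mass supplied as an input:

```python
def nuclear_parton_x(x_B, Q2, M_JJ2):
    """Momentum fraction of the initial-state nuclear parton, Eq. (21).

    x_B   : Bjorken variable
    Q2    : photon virtuality [GeV^2]
    M_JJ2 : squared invariant mass of the outgoing di-jet [GeV^2]
    """
    return x_B * (1.0 + M_JJ2 / Q2)
```

At \(Q^{2}=2\) GeV\({}^{2}\), for example, a pair with \(M_{JJ}^{2}=8\) GeV\({}^{2}\) probes \(x=5x_{B}\), well above the measured Bjorken variable.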
In our calculation, the fragmentation from \(c\) (\(\bar{c}\)) quark to \(D\) (\(\bar{D}\)) meson is described with KKKS08 fragmentation functions [68], and the renormalization and factorization scales are taken to be the averaged transverse mass of the \(D\bar{D}\) pair, i.e., \(\mu=(m_{1T}+m_{2T})/2\). We employ the same kinematic cuts as in the calculation for di-hadron production. The momentum fraction \(x\) carried by the initial-state nuclear gluon can be similarly given by Eq. (21). In heavy meson pair production, the averaged momentum fraction \(x\) for a certain value of Bjorken \(x_{B}\) can be even larger than that in di-hadron production, due to the mass of heavy quark. In Fig. 12, we plot the results for the nuclear enhancement of the imbalance of \(D\bar{D}\) pair production versus the Bjorken \(x_{B}\), at \(Q^{2}\!=\!2\) GeV\({}^{2}\) (left panel) and \(10\) GeV\({}^{2}\) (right panel), respectively. The averaged \(\langle x\rangle\) for each kinematic bin is marked in the plot. It can be observed that, for a same \(x\) value, the \(\Delta\langle p_{T}^{2}\rangle\) in \(D\bar{D}\) production is stronger than that in di-hadron production, which is expected as a result of the greater color factor, to be examined in future EIC experiments. The three types of hard probes in future EIC studied in this section are expected to provide a multi-dimensional understanding of the jet transport property of cold nuclear matter in a wide kinematic range. In addition, although the results of \(\Delta\langle p_{T}^{2}\rangle\) in this section are obtained for the electron-lead collisions, the results for other colliding nuclei can be easily obtained by rescaling with the nuclear radius (\(\times R_{A}/R_{Pb}\)), according to Eq. (6). ### Advantage of future EIC measurement for understanding kinematic dependence of \(\hat{q}\) From the above study, we can see that the three future EIC facilities jointly provide a nearly full-kinematics scan of the jet transport property with high precisions, which will be valuable for determining the kinematic dependence of \(\hat{q}\). To illustrate this, we can note that, in the sense of the parametrization of \(\hat{q}(x,Q^{2})\) in Eq. (7), the kinematic dependence is determined by the parameters Figure 11: Similar as Fig. 9, but for nuclear enhancement of transverse momentum imbalance \(\Delta\langle p_{T}^{2}\rangle\) of di-hadron (\(\pi\pi\)) production as a function of \(x_{B}\). Result for each bin is calculated in \(\gamma^{*}\)-N system with fixed \(Q^{2}\) and \(x_{B}\) (central value). Green square marks averaged momentum fraction \(x=x_{B}(1+M_{JJ}^{2}/Q^{2})\) for corresponding \(x_{B}\), with values shown on the right-hand-side vertical axis. In right panel (\(Q^{2}=10\) GeV), dihadron production in JLab is kinematically prevented/suppressed, thus is not shown. Figure 12: Similar as Fig. 11, but for nuclear enhancement of transverse momentum imbalance \(\Delta\langle p_{T}^{2}\rangle\) of heavy-meson pair (\(D\bar{D}\)) production as a function of \(x_{B}\). \(\alpha\), \(\beta\), and \(\gamma\), and we have the ratio for two values of \(x\) \[r(x_{1},x_{2})\equiv\frac{\hat{q}(x_{1},Q^{2})}{\hat{q}(x_{2},Q^{2})}=\left(\frac {x_{1}}{x_{2}}\right)^{\alpha}\left(\frac{1-x_{1}}{1-x_{2}}\right)^{\beta}. 
\tag{24}\] Assuming that both \(\alpha\) and \(\beta\) are negative as suggested by our analysis, we can find that, for small \(x\) values (\(x_{1,2}\to 0\) and \(x_{1}<x_{2}\)), we have \(r(x_{1},x_{2})\rightarrow(x_{1}/x_{2})^{\alpha}\), and for large \(x\) values (\(x_{1,2}\to 1\) and \(x_{1}>x_{2}\)), we have \(r(x_{1},x_{2})\rightarrow[(1-x_{1})/(1-x_{2})]^{\beta}\). Thus, the behaviors of \(\hat{q}\) in small and large \(x\) regions are dominated by \(\alpha\) and \(\beta\), respectively. Similarly, the ratio for two values of \(Q^{2}\) \[r(Q_{1}^{2},Q_{2}^{2})\equiv\frac{\hat{q}(x,Q_{1}^{2})}{\hat{q}(x,Q_{2}^{2})}= \frac{\alpha_{s}(Q_{1}^{2})}{\alpha_{s}(Q_{2}^{2})}\left[\frac{\ln(Q_{1}^{2}/Q _{0}^{2})}{\ln(Q_{2}^{2}/Q_{0}^{2})}\right]^{\gamma} \tag{25}\] is only sensitive to the parameter \(\gamma\), which dominates the scale dependence. Accordingly, we can define a secondary observable as a ratio of \(\Delta\langle p_{T}^{2}\rangle\) in EIC, such as \[R_{\rm EIC}(x_{1},x_{2})=\frac{\Delta\langle p_{T}^{2}\rangle(x_ {1},Q^{2})}{\Delta\langle p_{T}^{2}\rangle(x_{2},Q^{2})}\,,\] \[R_{\rm EIC}(Q_{1}^{2},Q_{2}^{2})=\frac{\Delta\langle p_{T}^{2} \rangle(x,Q_{1}^{2})}{\Delta\langle p_{T}^{2}\rangle(x,Q_{2}^{2})}\,, \tag{26}\] to measure the \(x\) and \(Q^{2}\) dependence of \(\hat{q}(x,Q^{2})\), respectively. Figure 13 shows, for several observables in SIDIS, the dependence of theoretical prediction on each parameters \(a_{i}\) (\(\hat{q}_{0}\), \(\alpha\), \(\beta\), and \(\gamma\)), as a function of the relative parameter displacement \((a_{i}-a_{i}^{0})/\delta a_{i}\), with \(\delta a_{i}\) being the uncertainty of \(a_{i}\). The three panels in the top row show the dependence for \(\Delta\langle p_{T}^{2}\rangle\) at three facilities. We can see a single measurement of \(\Delta\langle p_{T}^{2}\rangle\) is usually sensitive to more than one parameters. However, as shown in the bottom row, the three jointly measured \(R_{\rm EIC}\) are separately sensitive to the parameters \(\alpha\), \(\beta\), and \(\gamma\). The future EIC facilities will allow a precise understanding of the detail of the kinematic dependence of \(\hat{q}\), thanks to the wide kinematic coverage and high-precision measurement. ## IV Summary and discussion To gain foreknowledge on how the future electron-ion-collision experiments can deepen our understanding of the jet transport property of cold nuclear matter, in this work, we study the nuclear-medium induced transverse momentum broadening/imbalance of single-/pairwise-particle production in electron-ion collisions, for the kinematic regions of three proposed future facilities. Our theoretical calculations take into account the multiple scattering undergone by the colored hard probe that traverses the nucleus, within the framework of the higher-twist expansion, i.e., the generalized factorization in perturbative QCD. Particularly, a kinematic dependent jet transport coefficient \(\hat{q}=\hat{q}(x,Q^{2})\), extracted from our global analysis of the current experimental data, is used in our calculations for EIC. This globally extracted \(\hat{q}(x,Q^{2})\), together with its uncertainty evaluated with a Hessian matrix method, are available for the community to make theoretical predictions. Moreover, by adding a new data set on \(J/\psi\) production from the LHC, the Hessian analysis results in reduced uncertainties of \(\hat{q}(x,Q^{2})\) in small-\(x\) region. 
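As a practical aside to the ratio observables advocated above, Eqs. (24)-(26) are simple enough to tabulate directly from the fit parameters; a minimal sketch, with placeholder parameter values (including the reference scale \(Q_{0}^{2}\)) and a one-loop running coupling used purely for illustration:

```python
import numpy as np

def r_x(x1, x2, alpha, beta):
    """x-dependence ratio of qhat, Eq. (24); alpha dominates at small x,
    beta at large x."""
    return (x1 / x2) ** alpha * ((1.0 - x1) / (1.0 - x2)) ** beta

def alpha_s_one_loop(Q2, nf=4, Lambda2=0.09):
    """One-loop running coupling, for illustration only."""
    return 12.0 * np.pi / ((33.0 - 2.0 * nf) * np.log(Q2 / Lambda2))

def r_Q2(Q2_1, Q2_2, gamma, Q02=1.0):
    """Q^2-dependence ratio of qhat, Eq. (25); sensitive only to gamma."""
    return (alpha_s_one_loop(Q2_1) / alpha_s_one_loop(Q2_2)) \
        * (np.log(Q2_1 / Q02) / np.log(Q2_2 / Q02)) ** gamma
```

The measured ratios \(R_{\rm EIC}\) of Eq. (26) inherit these dependences, which is what makes them clean probes of \(\alpha\), \(\beta\), and \(\gamma\) separately.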
At the same time, we show that this \(\hat{q}(x,Q^{2})\) can be converted into the function of jet energy, i.e., \(\hat{q}(E_{\rm jet},Q^{2})\), which may be instructive for the study of the medium modification of jet production in heavy-ion collisions. The current analysis suggests enhancements of \(\hat{q}\) in both small- and large-\(x\) regions, however, the uncertainties in these two regions are still considerable due to the limited data points therein, which is expected to be better constrained in future EIC experiments. With the extracted \(\hat{q}(x,Q^{2})\), we study three types of observable in EIC, including the transverse momentum broadening of single hadron production and the enhancement of two-particle transverse momentum imbalance in di-hadron and heavy-meson pair productions. These nuclear induced broadening/imbalance are found to be sensitive to the color state of the hard probe, and to exhibit a clear kinematic dependence stemming from \(\hat{q}(x,Q^{2})\). Besides, the results with a constant \(\hat{q}\) are also given for comparison. We find that the future EIC experiments have great potential to provide precise understanding of \(\hat{q}\) in a wide kinematic range and to facilitate the jet tomography of Figure 13: Dependence of results on individual parameters, \(\hat{q}_{0}\), \(\alpha\), \(\beta\), and \(\gamma\), represented with circle, diamond, triangle, and square symbols, respectively. Top row: transverse momentum broadening \(\Delta\langle p_{T}^{2}\rangle\) of single pion production in SIDIS in certain kinematic regions of three EIC facilities, where \(Q^{2}\) =10 GeV\({}^{2}\) and \(x=x_{1}\), \(x_{3}\) and \(x_{4}\) for US-EIC, EicC and JLab, respectively. Bottom row: ratios of \(\Delta\langle p_{T}^{2}\rangle\) in different kinematic regions defined as Eq. (26), where \(Q^{2}\) =10 GeV\({}^{2}\) for left and middle panels, and \(x\in[0.08,0.1]\) for right panel. Blue solid line with grey band represents theoretical prediction with uncertainty. Abscissa axis shows rescaled parameter values as \((a_{i}-a_{i}^{0})/\delta a_{i}\), where \(\delta a_{i}\) is uncertainty of \(a_{i}\). Yellow circle with vertical bar represents estimated experimental uncertainty. cold nuclear medium. _Note_: The transport coefficient \(\hat{q}(x,Q^{2})\) extracted from our global analysis and the uncertainty set are available for user and can be requested by email from [email protected] and [email protected]. _Acknowledgments._ This research was supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 12022512 and No. 12035007, by Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, by Guangdong Basic and Applied Basic Research Foundation (Project No. 2022A1515110392, 2022A1515010683), by the National Science Foundation in US under Grant No. PHY-1945471 (Z.K.), by the China Postdoctoral Science Foundation under Project No. 2019M652929 (P.R.), and by the MOE Key Laboratory of Quark and Lepton Physics (CCNU) under Project No. QLPL201802 (P.R.). ## Appendix A Details of Hessian analysis We briefly review the Hessian analysis employed in this work. Hessian matrix analysis is a well-known technique for the uncertainty estimation in a global analysis [97]. The general idea is to optimize the representation of the parameter space in the neighborhood of the global minimum \(\chi^{2}\), and to provide an uncertainty set of the parameterized quantity [e.g. 
\(\hat{q}(x,Q^{2})\) in this work], from which uncertainties of all related physical quantities can be evaluated. A Hessian analysis begins with finding in parameter space \(\{a_{i}\}\) the coordinate corresponding to the minimum of the global \(\chi^{2}\). The basic assumption of the Hessian analysis is that the \(\chi^{2}\) can be approximated with a quadratic form in the neighborhood of the minimum [51; 52] as \[\chi^{2}=\chi_{0}^{2}+\sum_{i,j}H_{ij}y_{i}y_{j}\,, \tag{10}\] where \(\chi_{0}^{2}\equiv\chi^{2}(\{a_{i}^{0}\})\) is the global minimum of the \(\chi^{2}\) at the optimal parameter values \(\{a_{i}\}=\{a_{i}^{0}\}\), \(y_{i}=a_{i}-a_{i}^{0}\) is the displacement of \(a_{i}\) from its optimal value \(a_{i}^{0}\), and \(H_{ij}\) is the element of Hessian matrix defined as the second-order partial derivatives of \(\chi^{2}\) at the minimum \[H_{ij}=\frac{1}{2}\left(\frac{\partial^{2}\chi^{2}}{\partial y_{i}\partial y _{j}}\right)_{a_{i}=a_{i}^{0}} \tag{11}\] Usually there are interplays among different variables \(y_{i}\) (or \(a_{i}\)) in the \(\chi^{2}\), and the off-diagonal Hessian matrix elements could be non-zero. This makes the uncertainty estimation, corresponding to a certain tolerance \(\Delta\chi^{2}\equiv\chi^{2}-\chi_{0}^{2}\), not that straightforward. To disentangle these interplays, one can define a new set of parameters \(\{z_{i}\}\), in whose representation the Hessian matrix is diagonal. This can be achieved by using the complete set of \(n\) orthonormal eigenvectors \(V_{i}^{(k)}\) of the symmetric \(n\times n\) Hessian matrix (\(n\) is the number of parameters), which satisfy the characteristic equations \[\sum_{j}H_{ij}V_{j}^{(k)}=\lambda_{k}V_{i}^{(k)}, \tag{12}\] with \(\lambda_{k}\) being the positive eigenvalues. The new parameters \(\{z_{i}\}\) can be expressed as the linear combinations of the original parameters \(\{y_{i}\}\)[51; 52] \[z_{i}=\sqrt{\lambda_{i}}\sum_{j}y_{j}V_{j}^{(i)}. \tag{13}\] With the new set of parameters \(\{z_{i}\}\), the \(\Delta\chi^{2}\) can be written in a simple form as \[\Delta\chi^{2}=\sum_{i}z_{i}^{2}, \tag{14}\] which means that the contours of the \(\chi^{2}\) are spheres in the new basis. With Eq. (14) and the inverse transformation of Eq. (13), one can define the uncertainty set \(\{a_{i}^{(\pm k)}\}\) of the original parameters \(\{a_{i}\}\) corresponding to a tolerance \(\Delta\chi^{2}\) as [52] \[a_{i}^{(\pm k)}=a_{i}^{0}\pm\sqrt{\frac{\Delta\chi^{2}}{\lambda_{k}}}V_{i}^{(k )}\,,\ \ \text{for}\ \ k=1,2,\ldots,n. \tag{15}\] For a quantity \(\mathcal{Q}\) as a function of \(\{a_{i}\}\), whose value corresponding to \(\{a_{i}^{(\pm k)}\}\) is \(\mathcal{Q}_{\pm k}\equiv\mathcal{Q}(\{a_{i}^{(\pm k)}\})\), its uncertainty can be evaluated with \[\Delta\mathcal{Q}=\frac{1}{2}\sqrt{\sum_{k=1}^{n}(\mathcal{Q}_{+k}-\mathcal{Q} _{-k})^{2}}. \tag{16}\] As an example, one can give the uncertainty set of \(\hat{q}\) \[S_{\pm k}\equiv\hat{q}_{\pm k}=\hat{q}(\{a_{i}^{(\pm k)}\}), \tag{17}\] and express the uncertainty set for any quantity as a function of \(\hat{q}\) as \(\mathcal{Q}_{\pm k}=\mathcal{Q}(S_{\pm k})\). In our analysis, the tolerance of the global \(\chi^{2}\) is given at a \(p\%\) confidence level (C.L.) 
by simply re-scaling the minimum as [52; 91] \[\Delta\chi^{2}=\chi_{0}^{2}\left(\frac{\xi_{p}}{\xi_{50}}-1\right)\,, \tag{18}\] where the rescaling parameter \(\xi_{p}\) corresponds to the \(p\)-th percentile of the \(\chi^{2}\) distribution \(P(\chi^{2},N)\) and is determined by \[\int_{0}^{\xi_{p}}P(\chi^{2},N)\,d\chi^{2}=p\,\%\,, \tag{19}\] with \(N\) the number of data points and \(P(\chi^{2},N)\) defined as \[P(\chi^{2},N)=\frac{(\chi^{2})^{N/2-1}e^{-\chi^{2}/2}}{2^{N/2}\Gamma(N/2)}. \tag{20}\] A more thorough and complicated scheme to estimate the tolerance can be found in Refs. [52; 91]. In our results, we have estimated the uncertainty at 90% C.L., corresponding to the tolerance \(\Delta\chi^{2}=35\). To get the uncertainty at any other C.L., one can simply rescale the \(\Delta\chi^{2}\) to get the corresponding uncertainty set \(\{a_{i}\}\) with Eq. (10). The Hessian analysis in this work is performed by using the MINUIT package combined with the ITERATE program [51], which will first search for a global minimum of the \(\chi^{2}\) and then calculate the eigenvectors \(V_{i}^{(k)}\) and eigenvalues \(\lambda_{k}\) of the Hessian matrix with an iterative method [51]. ## Appendix B Impact of new ALICE \(J/\psi\) data on \(\hat{q}\) In comparison with our previous analysis [32], we have added a new data set, i.e., the transverse momentum broadening of \(J/\psi\) in \(p\)-Pb collisions at \(\sqrt{s_{NN}}=8.16\) TeV at the LHC (ALICE 21'), into the Hessian analysis in this work. To quantitatively show the impact of the new data, we compare the results with and without including this data set. In Fig. 14, we compare the \(\hat{q}(x,Q^{2})\) extracted with and without the new \(J/\psi\) data. We also compare theoretical predictions calculated by using the \(\hat{q}(x,Q^{2})\) extracted in the two analyses in Fig. 15. In general, the new \(J/\psi\) data has small impact on the optimal/central values of the \(\hat{q}(x,Q^{2})\) or theoretical predictions, indicating the new analysis is consistent with our previous work [32]. With the new \(J/\psi\) data, the uncertainties of \(\hat{q}(x,Q^{2})\) in small \(x\) region are reduced, especially for \(Q^{2}=10-100\) GeV\({}^{2}\) (squared mass of \(J/\psi\sim 10\) GeV\({}^{2}\)), as shown in Fig. 14. As a result, we can see in Fig. 15 that, with the \(\hat{q}(x,Q^{2})\) extracted in the new analysis, the uncertainties of the theoretical predictions for \(J/\psi\) production in the forward region at the LHC are obviously reduced (2nd panel in 3rd row and last panel in 4th row). The theoretical uncertainties for other observables, related to mid- and large-\(x\) regions, have negligible changes.
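Returning briefly to the tolerance criterion of Appendix A, Eqs. (18)-(19) can be evaluated with standard library routines; a minimal sketch, where the fitted \(\chi_{0}^{2}\) and the number of data points are inputs of the analysis and appear here only as arguments:

```python
from scipy.stats import chi2

def tolerance(chi2_min, n_data, cl=0.90):
    """Delta chi^2 tolerance at a given confidence level, Eqs. (18)-(19).

    chi2_min : global minimum of the chi^2 (chi^2_0)
    n_data   : number of data points N in the fit
    cl       : confidence level, e.g. 0.90 for 90% C.L.
    """
    xi_p = chi2.ppf(cl, df=n_data)     # p-th percentile of P(chi^2, N)
    xi_50 = chi2.ppf(0.50, df=n_data)  # median, xi_50
    return chi2_min * (xi_p / xi_50 - 1.0)
```

Rescaling the released uncertainty sets to a different confidence level then amounts to recomputing this number and rebuilding the parameter sets along the eigenvector directions.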
2307.06785
Fermionic Sign Problem Minimization by Constant Path Integral Contour Shifts
The path integral formulation of quantum mechanical problems including fermions is often affected by a severe numerical sign problem. We show how such a sign problem can be alleviated by a judiciously chosen constant imaginary offset to the path integral. Such integration contour deformations introduce no additional computational cost to the Hybrid Monte Carlo algorithm, while its effective sample size is greatly increased. This makes otherwise unviable simulations efficient for a wide range of parameters. Applying our method to the Hubbard model, we find that the sign problem is significantly reduced. Furthermore, we prove that it vanishes completely for large chemical potentials, a regime where the sign problem is expected to be particularly severe without imaginary offsets. In addition to a numerical analysis of such optimized contour shifts, we analytically compute the shifts corresponding to the leading and next-to-leading order corrections to the action. We find that such simple approximations, free of significant computational cost, suffice in many cases.
Christoph Gäntgen, Evan Berkowitz, Thomas Luu, Johann Ostmeyer, Marcel Rodekamp
2023-07-13T14:53:57Z
http://arxiv.org/abs/2307.06785v1
# Fermionic Sign Problem Minimization by Constant Path Integral Contour Shifts ###### Abstract The path integral formulation of quantum mechanical problems including fermions is often affected by a severe numerical sign problem. We show how such a sign problem can be alleviated by a judiciously chosen constant imaginary offset to the path integral. Such integration contour deformations introduce no additional computational cost to the Hybrid Monte Carlo algorithm, while its effective sample size is greatly increased. This makes otherwise unviable simulations efficient for a wide range of parameters. Applying our method to the Hubbard model, we find that the sign problem is significantly reduced. Furthermore, we prove that it vanishes completely for large chemical potentials, a regime where the sign problem is expected to be particularly severe without imaginary offsets. In addition to a numerical analysis of such optimized contour shifts, we analytically compute the shifts corresponding to the leading and next-to-leading order corrections to the action. We find that such simple approximations, free of significant computational cost, suffice in many cases. ## I Introduction The _numerical sign problem_ is a major hindrance for the application of stochastic methods to certain physical systems, such as QCD at finite baryon density or electronically doped systems in strongly correlated condensed matter. The problem refers to the extreme cost of numerically approximating integrals arising with a highly oscillatory integrand, such as path integrals with complex-valued actions. Because partition functions are exponential in the action, the numerical costs typically scale exponentially in the spacetime volume [1], pushing many physically interesting systems beyond the reach of numerical investigation. Methods that reduce the sign problem allow us to use our limited resources more efficiently and thus extend the range of systems we can investigate. In cases when the offending term in the Hamiltonian that induces the complex phase is small, one can rely on simple reweighting. For small systems or ground-state properties one can forego stochastic simulations and instead use direct methods, such as tensor networks [2]. Complex Langevin is another popular method to fight the sign problem, but a lot of technology is required to guarantee it converges to the right distribution [3; 4]. Each of these methods have their own limitations, by no means fully solving the sign problem. Here we focus on _contour deformation_ to alleviate the sign problem. One transforms the integration domain of the path integral to a more favorable manifold in the high-dimensional complex space where the sign oscillations are reduced [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Such deformations are formally allowed as long as one does not cross any singularities of the integrand and one preserves the homology class of the integral. There exist manifolds, so-called _Lefschetz thimbles_, where the complex phase remains fixed. In theories where one thimble dominates, the sign problem is solved since the constant complex phase on the thimble can be factored outside of the path integral. Even when multiple thimbles contribute, each with a different but constant phase, the sign problem is not eliminated, but is expected to be improved. While the locations of these thimbles are not known _a priori_ they can be found by integrating holomorphic flow equations. 
Unfortunately the numerical determination of the complete set of contributing thimbles is quite costly: mapping out their full constellation is just as difficult as the original sign problem. As the goal in our work is to _alleviate_ the sign problem (as opposed to eliminating it), a natural question arises: given finite computational resources, which contour deformations are most efficient for the problem at hand at alleviating the sign problem _sufficiently_, meaning that observables can be extracted in a statistically meaningful and reliable manner. In previous studies of the Hubbard model [13; 14] we trained neural networks (NNs) on flowed configurations to parametrize an integration manifold called a _learnifold_[11; 15]. This deformation worked very well at alleviating the sign problem for various doped Hubbard systems. In particular, we provided physical results of a doped Hubbard model for carbon nano-systems up to 18 ion sites [13]. However this method still comes at the cost of generating flowed training data and training a neural network. Here we study the simplest imaginable deformation: shifting the integration manifold by a global imaginary constant offset. We find that optimizing the offset substantially reduces the computational demands and often yields a contour deformation of equal potency. A constant offset induces no Jacobian; keeping the method simple. Further, a constant offset does not require modification of the Monte Carlo algorithm, nor does it require generation of training data and training of NNs. In ref. [15], for example, it was shown for the Thirring model that a calculation on the _tangent plane_, a constant offset that intersects the classical saddle point of the main Lefschetz thimble, is sufficient at alleviating the sign problem. We have also used this deformation as a comparison to our neural networks in previous publications [13; 14]. For the Hubbard model, however, we find that for certain values of the chemical potential, the tangent plane does not meaningfully alleviate the sign problem. We will show how to incorporate quantum corrections to the saddle point, thereby obtaining a better constant shift that corresponds to the effective action obtained by inclusion of 1-particle irreducible terms. Even then there are cases where we resort to numerical optimization of the imaginary offset. We study the fermionic Hubbard model. As opposed to the systems investigated in our earlier work [13], here we also consider systems that are non-bipartite, such as the fullerenes \(C_{20}\) and \(C_{60}\). In the following section we provide the formal aspects of our method, providing derivations for the location of the classical and quantum-corrected saddle points that we use to determine our constant offsets. In section III we continue with the description of our method for numerically determining the optimized plane. We then demonstrate the efficacy of our methods by providing numerical results of various Hubbard systems in section IV. We recapitulate in section V. To keep the presentation reasonable, we place formal (and tedious) derivations in the appendices. ## II Formalism ### The Hubbard Model The Hubbard model describes the interacting behavior of particles on a lattice. In our case these are electrons on a lattice of ions. It consists of a tight binding term and an onsite interaction representing electron-electron repulsion. 
It takes into account external influences on the overall particle number, like doping or an applied voltage, via a chemical potential \(\mu\)[16; 17; 18; 19; 20; 21; 22]. We formulate our theory in the _particle-hole basis_[23] \[H=-\kappa\sum_{\left\langle x,y\right\rangle}\left(a_{x}^{\dagger}a_{y}-b_{x} ^{\dagger}b_{y}\right)+\frac{U}{2}\sum_{x}q_{x}^{2}-\mu\sum_{x}q_{x} q_{x} =a_{x}^{\dagger}a_{x}-b_{x}^{\dagger}b_{x} \tag{1}\] where \(\kappa\) is the hopping parameter of neighboring lattice sites, \(U\) provides the strength of interaction of two electrons sharing one lattice site, and \(q\) is the local charge operator relative to half filling. Alternatively the sum over neighboring sites can be represented with the hopping matrix \(K\) which is \(\kappa\) times the adjacency matrix of the lattice. This option also allows for individual hopping parameters. The sum in \(x\) is over all \(N_{x}\) sites. The \(a\) (\(a^{\dagger}\)) operator implements particle destruction (creation) and \(a^{\dagger}a\) counts particles. Similarly the \(b\) (\(b^{\dagger}\)) operators destroy (create) holes and \(b^{\dagger}b\) counts them. ### Finite lattices considered in this work As we explain below, stochastic simulations of the Hubbard model suffer from the sign problem when the geometry of the system is non-bipartite and/or a non-zero chemical potential is present. A lattice is _bipartite_ when its sites can be divided into two groups, such that each site has only neighbors of the other group. Another way to think of it is that each closed path must traverse an even number of links. In this paper we will investigate both cases, with the 8- and 18-site honeycomb lattices as bipartite examples and the \(C_{20}\) and \(C_{60}\) fullerenes as non-bipartite examples. The 8- and 18-site honeycomb lattices consist of \(2\times 2\) and \(3\times 3\) unit cells respectively and in this work are assumed to have periodic boundary conditions. \(C_{20}\) is a dodecahedron with 12 equal pentagons. \(C_{60}\) is a truncated icosahedron with 12 pentagons and 20 hexagons. The four lattice structures are visualised in fig. 1. All of the lattices we consider are _site transitive_, meaning that symmetries of the lattice can map any site to any other site, an analog to translation invariance. ### The path-integral formulation of the Hubbard model The expectation value of any quantum mechanical operator \(\hat{O}\) can be calculated within the path integral formalism, \[\left\langle\hat{O}\right\rangle=\frac{1}{\mathcal{Z}}\int\mathcal{D}\phi\, \hat{O}\left[\phi\right]e^{-S[\phi]} \tag{2}\] where the partition function is \[\mathcal{Z}=\int\mathcal{D}\phi\,e^{-S[\phi]} \tag{3}\] and \(\mathcal{D}\phi=\lim_{N_{t}\to\infty}\prod_{x}^{N_{x}}\prod_{t}^{N_{t}}d\phi_{x,t}\) and \(S[\phi]\) is the action that defines the system. It is common practice to estimate expectation values (2) with importance sampling \[\int\mathcal{D}\phi\,\hat{O}\left[\phi\right]\mathbb{P}\left[\phi\right]=\lim_ {N\to\infty}\frac{1}{N}\sum_{i}^{N}\hat{O}\left[\phi_{i}\right],\quad\phi_{i} \sim\mathbb{P}\left[\phi_{i}\right], \tag{4}\] drawing configurations \(\phi\) from the probability distribution \(\mathbb{P}\) by Markov Chain Monte Carlo (MCMC) methods such as Hybrid (or Hamiltonian) Monte Carlo (HMC) [24]. For purely real actions it is straight forward to choose \(\mathbb{P}\left[\phi\right]=e^{-S[\phi]}/\mathcal{Z}\). 
For generally complex actions \(S[\phi]=S_{R}[\phi]+iS_{I}[\phi]\), however, one typically separates the complex phase and absorbs it into the definition of the observable, \[\left\langle\hat{O}\right\rangle=\frac{\left\langle\hat{O}\exp(-iS_{I}) \right\rangle_{R}}{\left\langle\exp(-iS_{I})\right\rangle_{R}} \tag{5}\] where the subscript \(R\) indicates sampling according to the probability defined by the real part of the action, \(\mathbb{P}[\phi]=e^{-S_{R}[\phi]}/\mathcal{Z}_{R}\). This process is called _reweighting_ and it exactly produces correct expectation values (2) in the limit of infinite statistics. However, for finite statistics, the oscillating phase in the denominator of the reweighting (5), known as the _average phase_, \[\left\langle\exp(-iS_{I})\right\rangle_{R}=\frac{\int\mathcal{D}\phi\,e^{-iS_ {I}[\phi]}e^{-S_{R}[\phi]}}{\int\mathcal{D}\phi\,e^{-S_{R}[\phi]}}=\frac{ \mathcal{Z}}{\mathcal{Z}_{R}} \tag{6}\] can be hard to numerically estimate. Stronger oscillation and its attendant cancellations become more severe for larger system sizes and some parameters. We call the average phase's absolute value \[\Sigma=\left|\left\langle\exp(-iS_{I})\right\rangle_{R}\right|=\left|\frac{ \mathcal{Z}}{\mathcal{Z}_{R}}\right| \tag{7}\] the _statistical power_ and we use it to quantify the sign problem. When the statistical power is 1, the path integral is sign-problem free; a value of 0 indicates the worst possible sign problem. While in a problem-free case the stochastic uncertainties of expectation values from an ensemble of \(N_{\text{cfg}}\) generated configurations scale with \(N_{\text{cfg}}^{-1/2}\), with a sign problem the statistical power effectively reduces the contribution of each configuration: the error scales with the square root of the effective number of samples [4], \[N_{\text{eff}}=\Sigma^{2}\times N_{\text{cfg}}. \tag{8}\] Trotterizing the thermal partition function \(\text{tr}\!\left\{e^{-\beta H}\right\}\), linearizing the interaction with a Hubbard-Stratonovich transformation with an integral over auxiliary fields \(\phi\), and inserting resolutions of the identity in terms of Grassmann coherent states, yields an action \[S\left[\phi,\tilde{K},\tilde{\mu}\right]=\frac{1}{2\tilde{U}}\sum_{t,x}\phi_{ x,t}^{2}-\log\det\left(M\left[+\phi,+\tilde{K},+\tilde{\mu}\right]\right)-\log \det\left(M\left[-\phi,-\tilde{K},-\tilde{\mu}\right]\right) \tag{9}\] Figure 1: Spatial lattices considered in this paper. where the dimensionless parameters \(\tilde{U}=U\times\delta\), \(\tilde{\kappa}=\kappa\delta\) etc. are rescaled by the temporal lattice spacing \(\delta=\beta/N_{t}\). The fermion matrices \(M\) encode the particle and hole fermion loops exactly. There are a variety discretizations of the fermion matrix that become exact and equal in the continuum limit [23; 25; 9]. We use the 'exponential' discretization in the language of ref. [9] \[M_{x^{\prime}t^{\prime},xt}[\pm\phi,\pm K,\pm\mu]\equiv M_{x^{\prime}t^{ \prime},xt}[\pm;\phi]=\delta_{x^{\prime},x}\delta_{t^{\prime},t}-[\exp\Bigl{(} \pm\tilde{K}\Bigr{)}]_{x^{\prime},x}e^{\pm(-i\phi_{x},+\tilde{\mu})}B_{t^{ \prime}}\delta_{t^{\prime},t+1} \tag{10}\] where \(B_{N_{t}-1}\) carries an extra \(-1\) encoding the fermionic temporal antiperiodic boundary conditions. We do not consider other discretizations in this work. We note, however, that when \(\mu\neq 0\), this discretization does not suffer from ergodicity issues described in refs. [9; 25]. 
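Estimating the severity of the sign problem from a given ensemble requires nothing beyond the sampled imaginary parts of the action; a minimal sketch, where the array `S_I` is assumed to hold \(S_{I}[\phi_{i}]\) for configurations drawn with weight \(e^{-S_{R}}\):

```python
import numpy as np

def statistical_power(S_I):
    """Statistical power Sigma = |< exp(-i S_I) >_R| of Eq. (7)."""
    return np.abs(np.mean(np.exp(-1j * np.asarray(S_I))))

def effective_samples(S_I):
    """Effective number of samples N_eff = Sigma^2 * N_cfg of Eq. (8)."""
    return statistical_power(S_I) ** 2 * len(S_I)
```

The same phases enter the reweighting formula (5), so observables and the statistical power can be accumulated in a single pass over the ensemble.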
We now consider deforming our original integral by complexifying the auxiliary field \(\phi\) and deforming the integration manifold. Cauchy's theorem guarantees that this deformation leaves all holomorphic observables the same, as long as the deformation does not cross any singularities and the deformation preserves the homology class. The statistical power depends on the imaginary part of the action weighted by its real part and thus is not holomorphic, so it is manifold-dependent. Some manifolds may tame the oscillations, especially when they resemble Lefschetz thimbles, high-dimensional analogs of contours of steepest descent [11; 26]. We have previously trained neural networks to learn the results of the holomorphic flow in a computationally tractable way [13; 14]. An even simpler deformation, that of a constant imaginary shift in all components of \(\phi\), can lead to significant alleviation of the sign problem while incurring no additional costs to the HMC algorithm. In particular, ref. [11] showed that a constant shift that intersected the saddle point of the main thimble, producing the so called 'tangent plane', sufficiently reduced the sign problem in simulations of the Thirring model. We now consider the same constant shift to the tangent plane of the Hubbard model. ### The tangent plane of the Hubbard model The holomorphic flow of a configuration \(\phi\) is its image under evolution in a fictitous time \(t\) by \[\frac{d\phi}{dt}=(\partial_{\phi}S)^{*}. \tag{11}\] The saddle point that fixes the tangent plane is found by flowing the \(\phi=0\) translationally-invariant configuration to its fixed point. Because it is a fixed point the time derivative vanishes and the saddle point \(\phi_{c}\) satisfies \[\partial_{\phi}S[\phi]|_{\phi=\phi_{c}}=0. \tag{12}\] In the graphene case this saddle point has the greatest weight [27] on the semimetal side of the quantum critical point at \(U\lesssim 3.8\)[28; 29; 30; 31; 32; 33; 34]. The saddle point \(\phi_{c}\) has zero real part and, because the lattices we consider are site-transitive, constant imaginary part which is non-zero when \(\mu\neq 0\) on bipartite lattices and generically on non-bipartite lattices. Leveraging the simplicity of \(\phi_{c}=i\phi_{0}\) independent of space and time we can calculate the action \[S[\phi_{c}=i\phi_{0}]=\frac{1}{2\tilde{U}}N_{x}N_{t}\phi_{0}^{2}-\log\det \left(\mathbb{1}+e^{N_{t}\phi_{0}+\beta\mu}e^{+\beta K}\right)-\log\det\left( \mathbb{1}+e^{-N_{t}\phi_{0}-\beta\mu}e^{-\beta K}\right) \tag{13}\] where we used the Schur complement to simplify the fermion determinants and used the spatial independence of \(\phi_{0}\) to commute the auxiliary field terms past the hopping terms. Using \(\log\det=\mathrm{tr}\log\), transforming to the basis where the hopping matrix is diagonalized, and requiring \(\phi_{c}\) to be a fixed point (12) leads to \[\phi_{0}/\delta=-\frac{U}{N_{x}}\sum_{k}\tanh\left(\frac{\beta}{2}\left[\epsilon _{k}+\mu+\phi_{0}/\delta\right]\right) \tag{14}\] where the sum is over the \(N_{x}\) modes of the hopping matrix \(K\) with noninteracting energy eigenvalue \(\epsilon_{k}\); we provide a detailed derivation in Appendix B. Writing \(\delta=\beta/N_{t}\) shows that this transcendental equation contains only temporal continuum quantities, except for the combination \(\phi_{0}N_{t}\), which will stay fixed as we go towards the time continuum limit \(N_{t}\to\infty\). 
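The fixed-point condition (14) involves only the single-particle spectrum, so the tangent-plane offset reduces to a one-dimensional root find; a minimal sketch, where the eigenvalues `eps` would be obtained by diagonalizing the hopping matrix of the lattice at hand (e.g. with `numpy.linalg.eigvalsh`):

```python
import numpy as np
from scipy.optimize import brentq

def tangent_plane_offset(eps, U, beta, mu, n_t):
    """Solve the fixed-point relation (14) for the constant offset phi_0.

    eps  : noninteracting energies, i.e. the eigenvalues of K
    U    : onsite interaction
    beta : inverse temperature
    mu   : chemical potential
    n_t  : number of time slices, so delta = beta / n_t
    """
    eps = np.asarray(eps)
    delta = beta / n_t

    def residual(y):  # y = phi_0 / delta
        return y + U / len(eps) * np.sum(np.tanh(0.5 * beta * (eps + mu + y)))

    # |sum of tanh| <= N_x bounds the solution to |phi_0/delta| <= U
    return delta * brentq(residual, -U, U)
```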
We see that \(\phi_{0}/\tilde{U}\) is bounded between \(-1\) and \(+1\) and can cheaply determine the imaginary offset of the tangent plane \(\phi_{0}\) solving this equation numerically. Figure 2 shows the behavior of the tangent plane for the 18-site honeycomb problem as a function of \(\mu\) for select values of \(U\) and \(\beta\). For large \(\beta\) where the \(\tanh\) becomes a sign function, the tangent plane has plateaus that are connected by constant slopes. The location of those depends on the spectrum of the hopping matrix \(K\); an example is shown in fig. 2. Properties of the tangent plane in the \(\mu=0\), \(\beta\to\infty\) limit The imaginary offset vanishes \(\phi_{0}=0\) so that the tangent and real planes coincide if the noninteracting energy eigenvalues \(\epsilon_{k}\) are symmetric about zero. This symmetry naturally occurs for bipartite lattices, such as the honeycomb lattice, in the absence of chemical potential \(\mu=0\). For these cases \(\phi_{0}=0\) for any inverse temperature \(\beta\). Moreover, in the \(\beta\to\infty\) limit the \(\tanh\) functions become the sign function, and if there are equal numbers of positive and negative noninteracting eigenvalues (not necessarily symmetric about zero), the sum also vanishes and the tangent plane again corresponds to the real plane. Both \(C_{20}\) and \(C_{60}\) have non-symmetric spectra but \(C_{60}\) enjoys equally many positive and negative noninteracting energies. In Figure 2 we show how finite temperature smooths the piecewise-linear \(\beta=\infty\) tangent plane for the 18-site honeycomb lattice. In fig. 3 we compare the behavior of the tangent plane for varying \(U\), \(\mu\), \(\beta\), and different lattices, both bipartite and non-bipartite. The results at \(\mu=0\) shown in this figure confirm our statements above. For the fullerene results, which are non-bipartite, the choice of \(\beta=10\) is large enough that the resulting tangent plane at \(\mu=0\) is nearly identical to the real plane. #### iii.1.2 Properties of the tangent plane in the \(\mu\beta\to\infty\) limit Another interesting scenario is to consider the behavior of the tangent plane in the \(\mu\beta\to\infty\) limit. To understand the behavior in this limit we start again with the action (9). With repeated application of Schur's complement, we can express (see ref. [9] for an explicit derivation) \[\log\det M[\pm]=\log\det\left(\mathbb{1}+\mathbb{F}[\pm]\right) \mathbb{F}[\pm]=\prod_{t=0}^{N_{t}-1}[e^{\pm\tilde{K}}][e^{\mp i \phi_{t}}]e^{\pm\tilde{\mu}}. \tag{15}\] where the \(0\)th timeslice is rightmost and each term in [square brackets] in the product represents a space\(\times\)space matrix. Since the chemical potential term is proportional to the identity, we can bring it out of the product, so the action becomes \[S[\mathbf{\phi}]=\frac{\mathbf{\phi}^{2}}{2U}-\log\det\left(\mathbb{1}+e^{-\mu\beta} \prod_{t=0}^{N_{t}-1}[e^{-\tilde{K}}][e^{i\phi_{t}}]\right)-\log\det\left( \mathbb{1}+e^{+\mu\beta}\prod_{t=0}^{N_{t}-1}[e^{\tilde{K}}][e^{-i\phi_{t}}] \right). \tag{16}\] Figure 2: Tangent plane of 18-site honeycomb lattice. The vertical lines mark transitions where the argument of a \(\tanh\) that determines the tangent plane (14) switches its sign. At the beginning of a downwards slope in \(\beta\to\infty\) the sign switches from \(-1\) to \(0\) and at the end from \(0\) to \(+1\). 
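As an aside, the reduced form (15) is also the cheapest way to evaluate the fermion determinants numerically, since only \(N_{x}\times N_{x}\) matrices appear; a minimal sketch, with the hopping matrix and field configuration as illustrative inputs:

```python
import numpy as np
from scipy.linalg import expm

def log_det_M(phi, K_tilde, mu_tilde, sign=+1):
    """log det M[+/-] from the reduced representation (15).

    phi      : (N_x, N_t) array of auxiliary fields phi_{x,t}
    K_tilde  : (N_x, N_x) dimensionless hopping matrix
    mu_tilde : dimensionless chemical potential
    sign     : +1 for particles, -1 for holes
    """
    n_x, n_t = phi.shape
    exp_K = expm(sign * K_tilde)
    F = np.eye(n_x, dtype=complex)
    for t in range(n_t):  # 0th timeslice is rightmost, so it acts first
        B_t = exp_K @ np.diag(np.exp(-sign * 1j * phi[:, t])) * np.exp(sign * mu_tilde)
        F = B_t @ F
    # principal branch suffices for a sketch; production code would
    # accumulate phases or use a stabilized log-determinant
    return np.log(np.linalg.det(np.eye(n_x) + F))
```

We now return to the large-\(\mu\beta\) behaviour of these determinants.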
In the limit of asymptotically large \(\beta\mu\) the determinants simplify \[\lim_{\mu\beta\to\infty}S[\phi]= \frac{\phi^{2}}{2\tilde{U}}-\log\det\left(\mathbb{1}\right)-\log \det\left(e^{+\mu\beta}\prod_{t=0}^{N_{t}-1}[e^{\tilde{K}}][e^{-i\phi_{t}}] \right)+\mathcal{O}(e^{-\mu\beta})\] \[= \frac{\phi^{2}}{2\tilde{U}}-N_{x}\mu\beta-\log\det\left(\prod_{t=0 }^{N_{t}-1}[e^{+\tilde{K}}][e^{-i\phi_{t}}]\right)\] \[= \frac{\phi^{2}}{2\tilde{U}}-N_{x}\mu\beta-\log\left[e^{-i\Phi} \det e^{+K\beta}\right]\] \[= \frac{\phi^{2}}{2\tilde{U}}-N_{x}\mu\beta+i\Phi-\beta\operatorname {tr}\{K\} \tag{17}\] where we define \(\Phi=\sum_{x,t}\phi_{x,t}\). Since our hopping matrices have no self hopping \(\operatorname{tr}\{K\}=0\). Now solving for critical points \(\partial S/\partial\phi=0\), we find the main critical point at constant field with \(\phi_{c}=-i\tilde{U}\). This demonstrates that the tangent plane approaches \(-\tilde{U}\) in the large \(\mu\beta\) limit. Similarly, in the \(\mu\beta\to-\infty\) limit one finds \[S[\mathbf{\phi}]=\frac{\mathbf{\phi}^{2}}{2\tilde{U}}+N_{x}\mu\beta-i\Phi\, \tag{18}\] and the tangent plane approaches \(+\tilde{U}\). The resulting tangent plane shift in this limit has important implications for our stochastic calculations. Simulating on the tangent plane means using components of the field, \(\phi_{j}\), that are offset by \(-i\tilde{U}\), i.e. \(\phi_{j}\to\phi_{j}-i\tilde{U}\ \forall\ j\). The Figure 3: Tangent plane offsets (normalized to \(\tilde{U}\)) depending on the chemical potential \(\mu\), for various systems and parameters. resulting action under this deformation becomes \[S[\mathbf{\phi}-i\tilde{U}]=\frac{\mathbf{\phi}^{2}}{2\tilde{U}}-\frac{N_{x}U\beta}{2}+N_{ x}U\beta-N_{x}\mu\beta=\left(\frac{U}{2}-\mu\right)\beta N_{x}+\frac{\mathbf{\phi}^{2}}{2 \tilde{U}}. \tag{19}\] This means that the action on this flat contour is purely real in this limit. That is, the tangent plane _completely solves the sign problem_ in this limit. It is equivalent, up to some overall shift in the energy, to a _quenched_ calculation, with no fermion matrix. Later we show numerical results that confirm these findings. ### Quantum corrections to the saddle point The saddle point that defines the tangent plane corresponds to the critical point of the classical action. In quantum field theory the location of this point shifts due to the presence of quantum fluctuations which, in our case, corresponds to thermal fluctuations. We can estimate this shift by calculating the quantum effective action and determining the extremum of this action, as is done in standard textbooks on QFT. This correction to the saddle point corresponds to the inclusion of all one-particle irreducible (1PI) diagrams. Thus it represents a quantum (thermal) correction to the classical saddle point, and the ensuing constant manifold that intercepts this point is expected to reduce the sign problem. We assume that the maximum we find, when including higher order correction terms, will return an offset that reduces the sign-problem even more than the basic tangent plane. To start, we assume that \(\phi_{c}\) is the saddle point in the presence of quantum fluctuations and apply the saddle point approximation about this point. That is, we expand the action in powers of a small perturbation \(\eta\) about this point, \(\phi=\phi_{c}+\eta\). 
This gives \[S[\phi_{c}+\mathbf{\eta}]= S[\phi_{c}]+(\mathbf{\eta}\cdot\mathbf{\nabla})\,S[\phi_{c}]+\frac{1}{2} \left(\mathbf{\eta}\cdot\mathbf{\nabla}\right)^{2}S[\phi_{c}]+\mathcal{O}\big{(}\eta^ {3}\big{)}\] \[= S[\phi_{c}]+(\mathbf{\eta}\cdot\mathbf{\nabla})\,S[\phi_{c}]+\frac{1}{2} \mathbf{\eta}\cdot\mathsf{H}_{S[\phi_{c}]}\cdot\mathbf{\eta}+\mathcal{O}\big{(}\eta^{3 }\big{)}. \tag{20}\] Here we have made use of the Hessian, \[\big{(}\mathsf{H}_{S[\phi_{c}]}\big{)}_{x^{\prime}t^{\prime},xt}=(\partial_{x^ {\prime}t^{\prime}}\partial_{xt}S[\phi])\big{|}_{\phi=\phi_{c}}. \tag{21}\] Since \(\eta\) is assumed small, we will omit the \(\mathcal{O}\big{(}\eta^{3}\big{)}\) terms. Furthermore because the critical point satisfies \(\left.\nabla S[\phi]\right|_{\phi_{c}}=0\) the linear terms also vanish. Hence the path integral simplifies to a Gaussian integral which we can do, \[\int\mathcal{D}\phi\,e^{-S[\phi]}\approx e^{-S[\phi_{c}]}\int\mathcal{D}\mathbf{ \eta}\,e^{-\frac{1}{2}\mathbf{\eta}\cdot\mathsf{H}_{S[\phi_{c}]}\cdot\mathbf{\eta}}=e ^{-S[\phi_{c}]}\left(\det\mathbb{H}_{S[\phi_{c}]}\right)^{-1/2}\equiv e^{-S_{ \text{eff}}[\phi_{c}]}. \tag{22}\] This allows us to formulate an effective action \[S_{\text{eff}}[\phi_{c}]=S[\phi_{c}]+\frac{1}{2}\log\det\mathsf{H}_{S[\phi_{c }]}. \tag{23}\] The extremum of this action defines our 1PI-corrected spacetime-constant saddle point \(\phi_{c}=i\phi_{1}\). Note that without the Hessian term we recover our original action and the extremum in this case is the saddle point of our leading order classical action that defines the tangent plane (14). In comparison, our 1PI-corrected effective action (23) includes the quantum effects at next to leading order (NLO). In Appendix C we show how to evaluate the Hessian (21) when \(\phi_{c}=i\phi_{1}\) a spacetime constant. ### Excursion to infinite lattices In this section we demonstrate on select lattices how to determine the tangent plane in the infinite-volume limit. We provide two well known examples: the 2-dimensional square and honeycomb lattices. In the infinite volume limit we can access every mode in the first Brillouin zone (B.Z.) and we can replace the sum over noninteracting energies in eq. (14) with a momentum integral. For a 2-dimensional square lattice one has \[\frac{1}{N_{x}}\sum_{k}\to\int_{\mathbf{k}\in B.Z.}\frac{d\mathbf{k}}{(2\pi)^{2}},\] where \(\mathbf{k}\equiv(k_{x},k_{y})\) with \(-\pi\leq k_{i}<\pi\) (square B.Z.). The non-interacting energies are given by \[\epsilon(\mathbf{k})=2\left(\cos(k_{x})+\cos(k_{y})\right). \tag{24}\] Making these substitutions to determine the tangent plane (14) leads to \[\phi_{0}/\delta=-U\int_{\mathbf{k}\in B.Z.}\frac{d\mathbf{k}}{(2\pi)^{2}}\tanh\left(\frac{ 1}{2}\beta\left[\epsilon(\mathbf{k})+\mu+\phi_{0}/\delta\right]\right). \tag{25}\] For the infinite honeycomb lattice the non-orthogonal lattice translation vectors and the two-band structure means we must substitute \[\frac{1}{N_{x}}\sum_{k}\to\frac{3\sqrt{3}}{2}\int_{\mathbf{k}\in B.Z.}\frac{d\mathbf{k }}{(2\pi)^{2}}\frac{1}{2}\sum_{\sigma=\pm 1}\,\] where the factor \(3\sqrt{3}/2\) comes from the hexagonal geometry of the B.Z. and \(\sigma\) runs over the two bands. The non-interacting energies are [35; 36] \[\epsilon_{\pm}(\mathbf{k})=\pm|f(\mathbf{k})| f(\mathbf{k})=1+2e^{-\frac{3i\phi_{\pm}}{2}}\cos\left(\frac{\sqrt{3}k_{y }}{2}\right). 
\tag{26}\] So we find \[\phi_{0}/\delta=-U\frac{3\sqrt{3}}{2}\int_{\mathbf{k}\in B.Z.}\frac{d\mathbf{k}}{(2\pi )^{2}}\frac{1}{2}\sum_{\sigma=\pm 1}\tanh\left(\frac{1}{2}\beta\left[ \epsilon_{\sigma}(\mathbf{k})+\mu+\phi_{0}/\delta\right]\right). \tag{27}\] We solve the square-lattice (25) and honeycomb (27) relations numerically. In fig. 4 we show the solutions of \(\phi_{0}\) for both the square and honeycomb system for select values of \(U\). We see that \(\phi_{0}\) remains smooth as a function of chemical potential \(\mu\), even in the limit of zero temperature (\(\beta\gg 1\)). Also, in both cases, in the limit of asymptotically large \(\mu\) we have \(\phi_{0}\to-\tilde{U}\). ## III Numerical optimization method In many cases both tangent plane and NLO offsets lead to an improvement in statistical power, with NLO typically providing modest improvement over the tangent plane (but not in all cases). However, a simple numerical investigation shows that one can further improve the statistical power, in most cases by shifting beyond the NLO result. For example, in fig. 5 we show the statistical power for the eight-site honeycomb system coming from a scan of various offsets that include the real plane, tangent plane, and the NLO offset. The scan shows a singular peak in the statistical power. However, this peak does not occur at either the tangent plane or the NLO offset. In all our investigations of different systems to date we find similar behavior; namely, there is a singular peak in statistical power due to constant offset. We refer to this offset that maximizes the statistical power as the _optimized shift_. Because the greater statistical power means smaller required samples (8), the potential savings in computational resources when simulating at the optimized shift in comparison to either the NLO or tangent plane can be orders of Figure 4: The zero-temperature tangent plane \(\phi_{0}\) (normalized to \(\tilde{U}\)) for the infinite 2-D square lattice (left) and honeycomb lattice (right) as a function of chemical potential \(\mu\) for various onsite interactions \(U\), as labelled in the figure. magnitude. However, determining the location of the peak from a simple raster scan in offsets is timely and inefficient, since each point requires an HMC simulation with sufficient statistics to resolve the statistical power. Instead, we formulate a search algorithm like Newton-Raphson, relying on the calculation of derivatives of the statistical power using the _current_ HMC ensemble to make a prediction for the location of the offset that corresponds to the peak of the statistical power. We then iterate this procedure to converge to the peak. 
Our algorithm requires the first two derivatives of the statistical power with respect to the imaginary offset \(\phi_{0}\) from the existing Markov Chain configurations, \[\frac{\mathrm{d}}{\mathrm{d}\phi_{0}}\left\langle e^{-iS_{I,\phi_ {0}}}\right\rangle_{R,\phi_{0}}= \frac{\mathrm{d}}{\mathrm{d}\phi_{0}}\frac{\int\mathcal{D}\phi e ^{-S_{\phi_{0}}}}{\int\mathcal{D}\phi e^{-S_{R,\phi_{0}}}} \tag{28}\] \[= \left\langle e^{-iS_{I,\phi_{0}}}\right\rangle_{R,\phi_{0}}\left \langle\frac{\mathrm{d}S_{R,\phi_{0}}}{\mathrm{d}\phi_{0}}\right\rangle_{R, \phi_{0}}-\left\langle e^{-S_{I,\phi_{0}}}\frac{\mathrm{d}S_{\phi_{0}}}{ \mathrm{d}\phi_{0}}\right\rangle_{R,\phi_{0}}\] \[= \left\langle e^{-iS_{I,\phi_{0}}}\right\rangle_{R,\phi_{0}}\left( 2\left\langle\frac{\mathrm{d}S_{R,\phi_{0}}}{\mathrm{d}\phi_{0}}\right\rangle _{R,\phi_{0}}^{2}+\left\langle\frac{\mathrm{d}^{2}S_{R,\phi_{0}}}{\mathrm{d} \phi_{0}}^{2}-\frac{\mathrm{d}S_{R,\phi_{0}}}{\mathrm{d}\phi_{0}}^{2}\right\rangle _{R,\phi_{0}}\right)\] \[-2\left\langle e^{-S_{I,\phi_{0}}}\frac{\mathrm{d}S_{\phi_{0}}}{ \mathrm{d}\phi_{0}}\right\rangle_{R,\phi_{0}}\left\langle\frac{\mathrm{d}S_{R, \phi_{0}}}{\mathrm{d}\phi_{0}}\right\rangle_{R,\phi_{0}}-\left\langle e^{-S_{I,\phi_{0}}}\left(\frac{\mathrm{d}^{2}S_{\phi_{0}}}{\mathrm{d}\phi_{0}}^{2}- \frac{\mathrm{d}S_{\phi_{0}}}{\mathrm{d}\phi_{0}}^{2}\right)\right\rangle_{R, \phi_{0}} \tag{29}\] We stress that the calculations of these derivatives rely only on a single ensemble. Ref. [37] points out that these derivatives may be simplified and estimated with reliability even in cases with a sign problem. Practically, they enable iterative procedures for a predicting the optimized shift. When the second derivative is negative we can get a good prediction via Newton-Raphson \[\phi_{0,i+1}=\phi_{0,i}-\frac{\frac{\mathrm{d}}{\mathrm{d}\phi_{0}}\left\langle e ^{-iS_{I,\phi_{0}}}\right\rangle_{R,\phi_{0},i}}{\frac{\mathrm{d}^{2}}{ \mathrm{d}\phi_{0}^{2}}\left\langle e^{-iS_{I,\phi_{0}}}\right\rangle_{R,\phi_ {0},i}}. \tag{30}\] When the second derivative is positive we work with the first derivative to approach the peak. As soon as there are points with opposite first derivatives, their central value usually gets us into the region where the second derivative is Figure 5: Visualization of effect of an imaginary offset on the sign problem. This example shows the eight site honeycomb lattice with \(N_{t}\) = 16, \(\beta\) = 8, \(U\) = 2 and \(\mu\) = 1. The errorbands show the bootstrap errors of the first and second derivative at the datapoint. negative or at least fairly close to it. Additionally we limit the searching region to the interval \([-\tilde{U},+\tilde{U}]\). In principle, higher order derivatives can also be calculated and used to predict the location of the optimized shift though the statistical errors in these terms grow. In practice we find that the procedure converges quickly when starting from a region where the sign problem is light enough to calculate at least the first derivative. However it can fail when this is not the case and it gets stuck when the statistical power is of the same order of magnitude as its uncertainty. A more quantitative demonstration of this method will be given in section III.1. A more advanced method might fit all known measurements and derivatives to estimate the location of the peak. When we are only interested in separate sets of parameters we have to rely on an analytic approximation as the starting point. 
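A compact sketch of this search follows (ours; the functions `d_sigma` and `d2_sigma` are placeholders for ensemble estimates of the derivatives (28) and (29), not existing library routines). It applies the Newton-Raphson step (30) where the second derivative is negative, walks uphill along the first derivative otherwise, and restricts the search to \([-\tilde{U},+\tilde{U}]\) as described above.

```python
import numpy as np

def update_offset(phi0, d_sigma, d2_sigma, U_tilde, fallback_step=0.05):
    """One step towards the offset that maximizes the statistical power."""
    g = d_sigma(phi0)    # d<exp(-i S_I)>_R / d phi0, estimated on the current ensemble
    h = d2_sigma(phi0)   # second derivative, estimated on the same ensemble
    if h < 0:            # concave region: Newton-Raphson step (30)
        phi0_new = phi0 - g / h
    else:                # otherwise walk uphill along the first derivative
        phi0_new = phi0 + np.sign(g) * fallback_step * U_tilde
    # restrict the searching region to [-U_tilde, +U_tilde]
    return float(np.clip(phi0_new, -U_tilde, +U_tilde))

# toy check with a synthetic, exactly quadratic "statistical power" peaked at -0.3
peak = -0.3
d_sigma = lambda x: -2.0 * (x - peak)
d2_sigma = lambda x: -2.0
print(update_offset(0.0, d_sigma, d2_sigma, U_tilde=0.5))   # -> -0.3 in a single step
```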
When we want to scan over one parameter sufficiently finely, we can do so iteratively starting from the previous offset or a rescaled version of it. In this paper we rescaled it with the fraction of new and previous tangent plane offset. Figure 6 gives a cartoon showing the different offsets in comparison to the Lefschetz thimbles and conveys a geometrically intuitive understanding as to why some planes do better than others. In fig. 7 we show the statistical power for an interesting range of chemical potentials and imaginary offsets. We trace contours for the tangent plane, NLO offset, and the best offsets. The best case scenario would be a cheaply determined estimate of the optimized offset for a given \(\mu\). The statistical powers of the real-, tangent-, NLO, and optimized planes are compared as a function of \(\mu\) in fig. 8. The key takeaways are that the tangent plane consistently and drastically outperforms the standard algorithm at practically the same cost, and that the sign problem vanishes when the system becomes saturated. Furthermore we see that we can reduce the sign problem for parameters where the tangent plane is insufficient. For system sizes of interest these differences determine whether a system can be calculated (with reasonable resources) or not. Note that the sign problem also gets worse with increasing \(\beta\) and \(U\) such that even the optimized plane will eventually fail at finite \(\mu\). This simple method is not suitable to fix the sign problem across the board, it just leads to an efficient expansion of the calculable parameter space, which might or might not include interesting physical phenomena, but definitely enables us to do better zero temperature extrapolations. For stronger sign problems we have to rely on manifolds with more parameters, either with simple parametrizations or with neural networks [13; 14; 15]. ### Benchmarks In this section we demonstrate the convergence of our numerical optimization algorithm and present the resulting increase of the statistical power. The examples refer to the eight site honeycomb lattice with \(N_{t}=16\), \(\beta=8\) and \(U=2\). Figure 9 showcases the convergence with iterations from different starting points. While they would all converge to the same offsets eventually, we observe that starting in a region with unclear first derivative, that is dominated by statistical noise, turns the algorithm into a random walk, which can be observed in the real plane example. This further highlights the importance of having good analytic starting points. Figure 10 shows the Figure 6: A cartoon of the different manifolds referred to throughout this work. The Lefschetz thimbles are drawn to resemble contours of holomorphic flow applied to constant fields. The real plane, tangent plane, NLO estimate (next to leading order correction) and optimized plane show the planar manifolds that we use as our integration regions. The red dot marks the main critical point. significant improvements in statistical power that can be achieved by just a few iterations. We observe in both figures that most runs starting from a reasonable guess converge and roughly agree with each other after just 3 iterations. Here the improvements from tangent plane to leading order correction to iterative starting points can be best observed by comparing the second iterations with each other and and in comparison to the converged result. ## IV Results In this section we provide physical observables determined by HMC with the introduced modifications. 
The observables of our choice are the single particle correlation functions \[C_{k}(\tau)=\left\langle a_{k}(\tau)a_{k}^{\dagger}(0)\right\rangle \tag{31}\] where \(a\) and \(a^{\dagger}\) are particle ladder operators and \(k\) labels operators in definite irreducible representations of the lattice automorphism group. In the honeycomb case \(k\) labels operators with definite momentum; for the fullerenes \(k\) labels representations of the icosahedral symmetry group. We also sum the local charges (1) to measure the global charge \[\langle Q\rangle=\left\langle\sum_{x}q_{x}\right\rangle=N_{x}-2\sum_{k}C_{k}(\tau=0). \tag{32}\] Figure 7: The datapoints in this heatmap show the statistical power of the system, as indicated by the colorbar, at different chemical potential \(\mu\), evaluated with HMC on a plane with an imaginary offset given by the y-axis. The red and purple curves show our analytically determined offsets. The blue curve connects the offsets with the greatest statistical power at each \(\mu\). The inset plot shows a slice of the heatmap to visualize the connection to fig. 5. The system is the eight site honeycomb lattice with \(N_{t}=16\), \(\beta=8\), \(U=2\). To establish trust in the physical correctness of the algorithm we compare with the eight site honeycomb lattice, for which we have exact results from direct diagonalization. We find that the uncertainties of the standard real plane HMC are much greater than the uncertainties of our more advanced methods. Especially for the eight site lattice, the real plane results do not agree with the exact solution, while the calculations with an imaginary offset match it very well. Above all, the optimized offset resembles the exact solution with great precision. This, plus the agreement with the NLO, and often with the tangent plane as well, gives us confidence in the results of the larger systems. We present the correlation functions resulting from our methods in fig. 11. Figure 12 shows that for the same number of configurations the quality of the measured observables seems to match the expected outcome from comparing the statistical powers, which can be found in fig. 8 and fig. 13. We see larger statistical fluctuations with worse statistical power; the optimized method consistently performs best. Figure 12a shows that most of the numerical results estimate the charge (32) correctly according to the exact results. The real plane HMC underestimates its error systematically for large ranges of \(\mu\). Still, the optimized offset resembles the exact result best; the tangent plane and NLO arguably offer comparable uncertainties for many parameter choices. Figure 12d and fig. 13c show that our method has limits and that not every sign problem can be conquered with a simple constant offset. Also in certain areas of the other charge plots we see that the optimization routine can fail when the sign problem is very strong, causing a worse result than the NLO. An interesting observation is that the saturation of the charge flattens the peak in the statistical power of the tangent and NLO planes. Figure 8: Comparing statistical power of the real plane, tangent plane, NLO correction and optimized shift as a function of \(\mu\). The system is an eight site honeycomb lattice with \(N_{t}=16\), \(\beta=8\) and \(U=2\).

## V Conclusion

Our results clearly show that the simple introduction of an imaginary shift of the integration contour can greatly impact the severity of the sign problem in quantum field theory, with practically no additional computation cost or human effort required.
We provide two analytic expressions for such offsets for the Hubbard model. Furthermore, a careful tuning of these offsets, while coming at a small cost, can lead to even better outcomes, especially when a range of parameters is to be explored. The reduction of the sign problem depends on the system and the physical parameters, but it can make a difference of orders of magnitude, which can be seen either as an increase in measurement precision at a fixed sample size or as a saving of computational resources on the way to a desired precision. Even though this does not eliminate the sign problem entirely, it extends the parameter space that is explorable within our naturally limited resources and allows us to perform higher quality extrapolations. In addition to the analysis of the method itself, we provided observables for condensed matter systems which could not be calculated before with lattice stochastic methods, going as far as the \(C_{60}\) buckyball, a stable, synthesizable carbon nanosystem. Furthermore, we found that in the limit of infinite chemical potential, which in the case of the Hubbard model refers to a completely filled or empty lattice, the sign problem vanishes at a certain offset, raising the question of whether this behavior could be seen in other theories as well. Our numerical optimization could lend itself to unsupervised learning, driving towards maximizing the statistical power and minimizing its derivative (28). Figure 9: Numerically optimized offset after a given number of iterations. Iteration 0 marks the starting offsets. This plot shows the convergence to the optimal offset and the value of a good starting guess. The iterative method started from \(\mu=0\) and \(\mu=6\), meeting in the middle. There are some open questions remaining regarding the combination of optimized offsets with neural networks that we will address in the future. Is the optimized offset also the best starting point for generating training data? Does the uplift of neural networks and of the optimized shift over the tangent plane correlate? Furthermore, we plan to investigate the \(C_{60}\) lattice in more detail, as our methods have opened the door to high quality measurements on these large systems that are out of reach for exact diagonalization as well as for HMC without sign problem optimization. We are developing a library for these nanosystem calculations with the intention of making it publicly available, so that everyone can try out our methods on the theory or model they are interested in.

###### Acknowledgements.

We thank Neill Warrington for many helpful discussions related to this work as well as Timo Lähde for his valuable comments. This work was funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG Project-ID 196253076 - TRR110) as well as the STFC Consolidated Grant ST/T000988/1. We gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at Forschungszentrum Jülich. Figure 10: Statistical power at the numerically determined optimal offset, after a given number of optimization steps. For each iteration we used \(10\,000\) HMC steps (+tuning).

## Appendix A Statistical Power

In fig. 13 we show the behavior of the statistical power over the interesting range of \(\mu\) for the remaining lattices, where we again compare the different offset methods.
We also see in the plot that for non-bipartite lattices introducing a small chemical potential can relieve the sign problem. ## Appendix B Detailed derivation of tangent plane Starting from the action (9) we provide a detailed derivation the equation that determines the tangent plane (14). As in the main text (15) let \[\mathbb{F}_{\pm}=\mathbb{F}[\pm\phi,\pm K,\pm\mu]=\prod_{\tau=0}^{N_{t}-1}f_{ \tau}^{\pm} f_{\tau}^{\pm}=\left[e^{\pm\hat{K}}\right]\left[e^{\pm(-i\phi_{x,\tau }+\hat{\mu})}\right] \tag{16}\] where each term in [square brackets] is a space-by-space matrix and the product is from right (\(\tau=0\)) to left (\(\tau=N_{t}-1\)). Then, using the Schur complement \[\det M[\phi,K,\mu]=\det\left(\mathbb{1}+\mathbb{F}[\phi,K,\mu]\right) \tag{17}\] and likewise for \(M[-\phi,-K,-\mu]\). Since the inverses exist for any configuration with finite action \[\frac{\partial S}{\partial\phi_{x,t}}=\frac{1}{\hat{U}}\phi_{x,t}-\mathrm{tr} \big{\{}(\mathbb{1}+\mathbb{F}_{+})^{-1}\partial_{\phi_{x,t}}\mathbb{F}_{+} \big{\}}-\mathrm{tr}\big{\{}(\mathbb{1}+\mathbb{F}_{-})^{-1}\partial_{\phi_{x,t}}\mathbb{F}_{-}\big{\}} \tag{18}\] Figure 11: Single particle correlation function for different lattices and calculated with different methods. All systems were evaluated with \(N_{t}=32\) at \(U=2\) and \(\mu=1\). Each data point was calculated from a markov chain with \(50,000\) HMC configurations, where we measured on each \(10^{\mathrm{th}}\) to reduce autocorrelation, furthermore we averaged over correlators guaranteed to be equal by symmetry. The exact solution in fig. 10(a) was determined by exact diagonalization in the temporal continuum limit. where the two traces correspond to the particle and hole fermion matrices (15). Each auxiliary field only appears once in any given \(\mathbb{F}\), inside one entry of a diagonal matrix. So, differentiating \(\mathbb{F}_{\pm}\) inserts \(\mp i\mathbb{P}_{x}\) where \(\mathbb{P}_{x}\) projects to site \(x\), \[\partial_{\phi_{x,t}}\mathbb{F}_{\pm}=\left[\prod_{\tau=t}^{N_{t}-1}f_{\tau}^{ \pm}\right]\left[\mp i\mathbb{P}_{x}\right]\left[\prod_{\tau=0}^{t-1}f_{\tau}^ {\pm}\right] \tag{14}\] where the products go from right to left. Cycling the traces so that the projector is rightmost gives \[\frac{\partial S}{\partial\phi_{x,t}}=\frac{1}{U}\phi_{x,t}+i\sum_{s}s\, \mathrm{tr}\!\left\{\left[\prod_{\tau=0}^{t-1}f_{\tau}^{\pm}\right](\mathbb{1} +\mathbb{F}_{s})^{-1}\left[\prod_{\tau=t}^{N_{t}-1}f_{\tau}^{\pm}\right] \mathbb{P}_{x}\right\} \tag{15}\] where the sum over \(s\) runs over \(\pm 1\). Plugging in the imaginary spacetime constant \(\phi=i\phi_{0}\) means the auxiliary field factors are proportional to the identity matrix, \[f_{\tau}^{\pm}=\exp\left(\pm(\tilde{K}+\phi_{0}+\tilde{\mu})\right) \tag{16}\] independent of timeslice \(\tau\). Since \(\phi_{0}\) is space-independent we can sum on \(x\) and use the completeness \(\sum_{x}\mathbb{P}_{x}=1\). Since \(N_{t}\tilde{K}=\beta K\) and \(N_{t}\tilde{\mu}=\beta\mu\), \[0=\frac{1}{\tilde{U}}N_{x}\phi_{0}+\sum_{s\in\pm 1}s\,\mathrm{tr}\!\left\{ \frac{e^{s(\beta K+N_{t}\phi_{0}+\beta\mu)}}{1+e^{s(\beta K+N_{t}\phi_{0}+ \beta\mu)}}\right\} \tag{17}\] and evaluating this relationship in the eigenbasis of \(K\) yields the tangent plane relation (14). Figure 12: Charge expectation value at varying \(\mu\) for different lattices and calculated with different methods. All systems have \(N_{t}\,=\,16\) and \(U\,=\,2\). 
Each data point was calculated from a markov chain with \(50\,000\) HMC configurations, where we measured on each \(10^{\text{th}}\) to reduce autocorrelation. The exact solution in fig. 11(a) was determined by exact diagonalization with \(N_{t}=16\) discretization. ## Appendix C Detailed derivation of NLO To find the NLO constant imaginary offset we need to minimize the effective action (23) which requires computing the Hessian \[\mathbb{H}_{x^{\prime}t^{\prime},xt}=(\partial_{x^{\prime}t^{\prime}}\partial_{xt }S[\phi])\mathbb{J}_{\phi=\phi_{1}}\;. \tag{104}\] We start from the general single derivative (110) and differentiate again. Without loss of generality we assume \(t^{\prime}\geq t\), \[\partial_{x^{\prime}t^{\prime}}\partial_{xt}S[\phi]=\frac{1}{U}\delta_{x^{ \prime}x}\delta_{t^{\prime}t}-\sum_{s}\mathrm{tr}\Big{\{}(\mathbb{1}+\mathbb{ F}_{s})^{-1}\partial_{\phi_{x^{\prime},t^{\prime}}}\partial_{\phi_{x,s}} \mathbb{F}_{s}-(\mathbb{1}+\mathbb{F}_{s})^{-1}\left(\partial_{\phi_{x^{ \prime},t^{\prime}}}\mathbb{F}_{s}\right)(\mathbb{1}+\mathbb{F}_{s})^{-1} \left(\partial_{\phi_{x,t}}\mathbb{F}_{s}\right)\Big{\}} \tag{105}\] The second derivative of \(\mathbb{F}\) is much like the first (101) but with a second projector \(\mathbb{P}_{x^{\prime}}\) inserted at time \(t^{\prime}\); the \((\mp i)^{2}=-1\) regardless of sign choice. \[\partial_{x^{\prime}t^{\prime}}\partial_{xt}\mathbb{F}_{s}=-\left[\prod_{\tau =t^{\prime}}^{N_{t}-1}f_{\tau}^{s}\right]\mathbb{P}_{x^{\prime}}\left[\prod_{ \tau=t}^{t^{\prime}-1}f_{\tau}^{s}\right]\mathbb{P}_{x}\left[\prod_{\tau=0}^{t -1}f_{\tau}^{s}\right]. \tag{106}\] In fact, this looks much like the two-inverse term, though that term has an inverse between the projectors, \[\left(\partial_{\phi_{x^{\prime},t^{\prime}}}\mathbb{F}_{s}\right)(\mathbb{1} +\mathbb{F}_{s})^{-1}\left(\partial_{\phi_{x,t}}\mathbb{F}_{s}\right)=-\left[ \prod_{\tau=t^{\prime}}^{N_{t}-1}f_{\tau}^{s}\right]\mathbb{P}_{x^{\prime}} \left[\prod_{\tau=0}^{t^{\prime}-1}f_{\tau}^{s}\right]\left[\mathbb{1}+ \mathbb{F}_{s}\right]^{-1}\left[\prod_{\tau=t}^{N_{t}-1}f_{\tau}^{s}\right] \mathbb{P}_{x}\left[\prod_{\tau=0}^{t-1}f_{\tau}^{s}\right]. \tag{107}\] Figure 13: Statistical power for different lattices over a range of \(\mu\). For all systems \(U=2\) and \(N_{t}=16\). Cycling the trace so that \(\mathbb{P}_{x}\) is rightmost and consolidating like factors gives \[\partial_{x^{\prime}t^{\prime}}\partial_{xt}S[\phi]=\frac{1}{U}\delta_{x^{\prime} x}\delta_{t^{\prime}t}+\sum_{s}\mathrm{tr}\Bigg{\{}\left[\prod_{\tau=t-1}^{t-1}f_{ \tau}^{s}\right]\left(\mathbb{1}+\mathbb{F}_{s}\right)^{-1}\left[\prod_{\tau=t ^{\prime}}^{N_{t}-1}f_{\tau}^{s}\right]\mathbb{P}_{x^{\prime}}\left[\prod_{ \tau=t}^{t^{\prime}-1}f_{\tau}^{s}\right]\left[\mathbb{1}+\mathbb{F}_{s} \right]^{-1}\left[\prod_{\tau=t}^{N_{t}-1}f_{\tau}^{s}\right]\right]\mathbb{P }_{x}\Bigg{\}}. 
\tag{100}\] Making repeated use of \(C^{-1}B^{-1}A^{-1}=(ABC)^{-1}\) we can re-express the term in the sum \[\left[\prod_{\tau=0}^{t-1}f_{\tau}^{s}\right]\left[\mathbb{1}+ \mathbb{F}_{s}\right]^{-1}\left[\prod_{\tau=t}^{N_{t}-1}f_{\tau}^{s}\right] =\left[\prod_{\tau=t-1}^{0}(f_{\tau}^{s})^{-1}\right]^{-1}\left[ \mathbb{1}+\prod_{\tau=0}^{N_{t}-1}f_{\tau}^{s}\right]^{-1}\left[\prod_{\tau= N_{t}-1}^{t}(f_{\tau}^{s})^{-1}\right]^{-1}\] \[=\left[\prod_{\tau=N_{t}-1}^{t}(f_{\tau}^{s})^{-1}\right]\left[ \mathbb{1}+\prod_{\tau=0}^{N_{t}-1}f_{\tau}^{s}\right]\left[\prod_{\tau=t-1}^ {0}(f_{\tau}^{s})^{-1}\right]\right]^{-1}\] \[=\left[\prod_{\tau=t-1}^{t}(f_{\tau}^{s})^{-1}\right]+\mathbb{1} \Bigg{]}^{-1} \tag{101}\] where the products of \(f^{-1}\)s count down from right to left and wrap from \(0\) to \(N_{t}-1\). Inserting convenient products equal to the identity to use the result (101) twice gives \[\partial_{x^{\prime}t^{\prime}}\partial_{xt}S[\phi]= \frac{1}{U}\delta_{x^{\prime}x}\delta_{t^{\prime}t} \tag{102}\] \[+\sum_{s}\mathrm{tr}\Bigg{\{}\left[\mathbb{1}+\prod_{\tau=t-1}^{ t}(f_{\tau}^{s})^{-1}\right]^{-1}\left[\prod_{\tau=t^{\prime}-1}^{t}(f_{\tau}^{s})^ {-1}\right]\mathbb{P}_{x^{\prime}}\left[\prod_{\tau=t}^{t^{\prime}-1}f_{\tau}^ {s}\right]\left[\prod_{\tau=t-1}^{t}(f_{\tau}^{s})^{-1}\right]\left[\mathbb{1 }+\prod_{\tau=t-1}^{t}(f_{\tau}^{s})^{-1}\right]^{-1}\mathbb{P}_{x}\Bigg{\}}.\] Casting the inverse factors after \(\mathbb{P}_{x^{\prime}}\) into the denominator we arrive at \[\partial_{x^{\prime}t^{\prime}}\partial_{xt}S[\phi]=\frac{1}{U}\delta_{x^{ \prime}x}\delta_{t^{\prime}t}+\sum_{s}\mathrm{tr}\Bigg{\{}\left[\mathbb{1}+ \prod_{\tau=t-1}^{t}(f_{\tau}^{s})^{-1}\right]^{-1}\left[\prod_{\tau=t^{\prime }-1}^{t}(f_{\tau}^{s})^{-1}\right]\mathbb{P}_{x^{\prime}}\left[\prod_{\tau=t}^{ t^{\prime}-1}f_{\tau}^{s}\right]\left[\mathbb{1}+\prod_{\tau=t}^{t-1}f_{\tau}^{s} \right]^{-1}\mathbb{P}_{x}\Bigg{\}}, \tag{103}\] a convenient general form for arbitrary \(\phi\). To evaluate the effective action (23) and find the NLO imaginary offset we set \(\phi=i\phi_{1}\) a spacetime constant. Like before (101), the auxiliary field terms become proportional to the identity matrix and we can group terms into powers of \[f^{\pm}=f_{\tau}^{\pm}=\exp\pm(\delta K+\delta\mu+\phi_{1}) \tag{104}\] which has the nice property \((f^{\pm})^{-1}=f^{\mp}\) so that we may treat the sign label as a true exponent. Defining \(\Delta t=t^{\prime}-t\) we simplify to \[\mathbb{H}_{x^{\prime}t^{\prime},xt}=\left.\partial_{x^{\prime}t^{\prime}} \partial_{xt}S[\phi]\right|_{\phi=i\phi_{1}}=\frac{1}{U}\delta_{x^{\prime}x} \delta_{t^{\prime}t}+\sum_{s}\mathrm{tr}\Big{\{}\big{[}\mathbb{1}+f^{-sN_{t}} \big{]}^{-1}\,f^{-s\Delta t}\mathbb{P}_{x^{\prime}}f^{+s\Delta t}\left[\mathbb{ 1}+f^{+sN_{t}}\right]^{-1}\mathbb{P}_{x}\Big{\}}. \tag{105}\] Since the hopping amplitudes are symmetric \(K=K^{\top}\), the matrices \(f\) (104) are too \(f=f^{\top}\). Moreover, the projectors are symmetric \(\mathbb{P}=\mathbb{P}^{\top}\). Because the sum is over \(s\in\{\pm 1\}\), the signs and inverses conspire so that the two traces are over a matrix and its transpose, and are therefore equal. 
So, we can consolidate the traces and use the projectors to isolate needed matrix elements \[\mathbb{H}_{x^{\prime}t^{\prime},xt} =\frac{1}{U}\delta_{x^{\prime}x}\delta_{t^{\prime}t}+2\,\mathrm{tr }\big{\{}(\mathbb{1}+f^{-N_{t}})^{-1}f^{-\Delta t}\mathbb{P}_{x^{\prime}}f^{+ \Delta t}(1+f^{+N_{t}})^{-1}\mathbb{P}_{x}\big{\}} \tag{106}\] \[=\frac{1}{U}\delta_{x^{\prime}x}\delta_{t^{\prime}t}+2\left[( \mathbb{1}+f^{-N_{t}})^{-1}f^{-\Delta t}\right]_{xx^{\prime}}\left[f^{+\Delta t}( \mathbb{1}+f^{+N_{t}})^{-1}\right]_{x^{\prime}x}. \tag{107}\] We may quickly evaluate the matrix elements by a unitary transformation from the eigenbasis of \(K\), \[\big{[}f^{\pm\Delta t}(\mathbb{1}+f^{\pm N_{t}})^{-1}\big{]}_{x^{\prime}x}=\sum_ {k}\mathbb{U}_{x^{\prime}k}^{\dagger}\frac{e^{\pm\Delta t}(\delta_{k^{\prime} +\phi_{1}+\delta\mu})}{1+e^{\pm\beta(\epsilon_{k}+\phi_{1}/\delta+\mu)}}\mathbb{U}_ {kx}\hskip 56.905512pt\mathbb{U}^{\dagger}K\mathbb{U}=\epsilon_{k}. \tag{108}\] We emphasize that these results rely on many simplifications offered by a constant spacetime offset \(\phi=i\phi_{1}\). However, nearly identical simplifications provide a similar evaluation for configurations with a different constant field on each temporal slice. Rather than compute derivatives of the action expressed with \(\log\det(\mathbb{1}+\mathsf{F}_{\pm})\) (26) one may directly study \(\log\det M_{\pm}\) instead. In the case of constant field we may diagonalize \(M\) with a straightforward unitary Matsubara decomposition \[\mathbb{A}_{kn,xt} =\mathsf{U}_{kx}e^{i\tilde{\omega}_{n}t}/\sqrt{N_{t}} \tilde{\omega}_{n} =(2n+1)\pi/N_{t}. \tag{28}\] One finds a structurally similar and numerically equal expression for the Hessian in terms of matrix elements \(T\) \[\mathbb{H}_{x^{\prime}t^{\prime},xt} =\left(\frac{1}{U}-1\right)\delta_{x^{\prime},x}\delta_{t^{ \prime},t}-T_{+;xt,x^{\prime}t^{\prime}}T_{+;x^{\prime}t^{\prime},xt}-T_{-;xt, x^{\prime}t^{\prime}}T_{-;r^{\prime}t^{\prime},xt} \tag{29}\] \[T_{\pm;x^{\prime}t^{\prime},xt} =\sum_{kn}\mathbb{A}_{x^{\prime}t^{\prime},kn}^{\dagger}\frac{e^ {\pm(\delta\epsilon_{k}+\delta\mu+\phi_{1}+i\tilde{\omega}_{n})}}{1-e^{\pm( \delta\epsilon_{k}+\delta\mu+\phi_{1}+i\tilde{\omega}_{n})}}\mathbb{A}_{kn,xt}. \tag{30}\] We have numerically verified that these two formulations yield the same Hessian.
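As a rough numerical sketch of the NLO procedure (ours, not the authors' implementation): assemble the Hessian for a spacetime-constant offset from the matrix elements above, form \(S_{\text{eff}}(\phi_{1})=S[i\phi_{1}]+\frac{1}{2}\log\det\mathsf{H}\), and locate its stationary point in \(\phi_{1}\). The normalization of the diagonal term (we use \(1/\tilde{U}\)) and the classical action evaluated at a constant \(\phi=i\phi_{1}\) follow our reading of the text and should be checked against its conventions.

```python
# Sketch (not the authors' code): conventions assumed here are delta = beta/Nt and U~ = delta*U.
import numpy as np
from scipy.optimize import brentq

def nlo_offset(K, U, beta, mu, n_t, bracket):
    eps, V = np.linalg.eigh(K)                 # K = V diag(eps) V^T, K symmetric
    n_x = len(eps)
    delta = beta / n_t
    U_t = delta * U

    def s_classical(phi1):                     # action evaluated at phi = i*phi1
        gauss = -n_x * n_t * phi1 ** 2 / (2.0 * U_t)
        ferm = sum(np.sum(np.log1p(np.exp(s * beta * (eps + mu + phi1 / delta))))
                   for s in (+1, -1))
        return gauss - ferm

    def block(sign, dt, phi1):                 # [f^{sign*dt} (1 + f^{sign*Nt})^{-1}] in the site basis
        d = np.exp(sign * dt * (delta * eps + delta * mu + phi1)) \
            / (1.0 + np.exp(sign * beta * (eps + mu + phi1 / delta)))
        return V @ np.diag(d) @ V.T

    def s_eff(phi1):
        H = np.eye(n_x * n_t) / U_t            # diagonal term, normalization assumed 1/U~
        for tp in range(n_t):
            for t in range(tp + 1):            # dt = tp - t >= 0
                dt = tp - t
                A = block(-1, dt, phi1)        # (1 + f^{-Nt})^{-1} f^{-dt}
                B = block(+1, dt, phi1)        # f^{+dt} (1 + f^{+Nt})^{-1}
                sub = 2.0 * (A.T * B)          # entry [x', x] = 2 A_{x,x'} B_{x',x}
                H[tp*n_x:(tp+1)*n_x, t*n_x:(t+1)*n_x] += sub
                if dt > 0:                     # mirrored block from the symmetry of H
                    H[t*n_x:(t+1)*n_x, tp*n_x:(tp+1)*n_x] += sub.T
        return s_classical(phi1) + 0.5 * np.linalg.slogdet(H)[1]

    def ds_eff(phi1, h=1e-4):                  # numerical derivative of S_eff
        return (s_eff(phi1 + h) - s_eff(phi1 - h)) / (2.0 * h)

    return brentq(ds_eff, *bracket)            # bracket must enclose the stationary point
```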
2302.03023
V1T: large-scale mouse V1 response prediction using a Vision Transformer
Accurate predictive models of the visual cortex neural response to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the self-attention weights learned by the Transformer correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and can be used jointly with behavioral and neural recordings to reveal meaningful characteristic features of the visual cortex.
Bryan M. Li, Isabel M. Cornacchia, Nathalie L. Rochefort, Arno Onken
2023-02-06T18:58:38Z
http://arxiv.org/abs/2302.03023v4
# V1T: large-scale mouse V1 response prediction ###### Abstract Accurate predictive models of the visual cortex neural response to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the attention weights learned by the Transformer correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and captures characteristic features of the visual cortex. Code available at github.com/bryanlimy/V1T. ## 1 Introduction Understanding how the visual system processes information is a fundamental challenge in neuroscience. Predictive models of neural responses to naturally occurring stimuli have shown to be a successful approach toward this goal, serving the dual purpose of generating new hypotheses about biological vision [6; 48; 66] and bridging the gap between biological and computer vision [41; 54; 56]. This approach relies on the idea that high performing predictive models, which explain a large chunk of the stimulus-driven variability, have to account for the nonlinear response properties of the neural activity, thus allowing to identify the underlying computations of the visual system [12]. An extensive amount of work on primary visual cortex (V1) has been dedicated to build quantitative models that accurately describe neural responses to visual stimuli, starting from simple linear-nonlinear models [27; 29], energy models [2] and multi-layer neural network models [34; 37; 49]. These models, based on neurophysiological data, provide a powerful framework to test hypotheses about neural functions and investigate the principles of visual processing. With the increase of popularity of deep neural networks (DNNs) in computational neuroscience in recent years [30; 38; 39; 53], DNNs have set new standards of prediction performance [3; 19; 31; 65; 75], allowing for a more extensive exploration of the underlying computations in sensory processing [6; 8; 48; 63; 66]. DNN-based models are characterized by two main approaches. On the one hand, task-driven models rely on pre-trained networks optimized on standard vision tasks, such as object recognition, in combination with a readout mechanism to predict neural responses [9; 11; 72] and they have proven to be successful for predicting visual responses in primates by obtaining a shared generalized representation of the visual input across animals [9; 71]. However, task-trained models do not yield the same generalization and prediction results for mouse visual cortex [10]. On the other hand, data-driven models share a common representation by being trained end-to-end directly on data from thousands of neurons, without any assumption on the functional properties of the network, and they have been shown to be successful as predictive models for the mouse visual cortex [43]. Data-driven models for prediction of visual responses across multiple animals typically employ the core-readout framework [8; 9; 20; 32; 43]. Namely, a core module which learns a shared latent representation of the visual stimuli across the animals, followed by animal-specific linear readout modules to predict neural responses given the latent features. 
This architecture enforces the nonlinear computations to be performed by the shared core, which can in principle capture general characteristic features of the visual cortex [43]. The readout models then learn the animal-specific mapping from the shared representation of the input to the individual neural responses. With the advent of large-scale neural recordings, datasets that consist of thousands or even hundreds of thousands of neurons are becoming readily available [58; 59]. This has led to an increase of the parameters needed in the readout network to account for the large number of neurons, hence significant effort in neural predictive modeling has been dedicated to develop more efficient readout networks. On the other hand, due to their effectiveness and computation efficiency [24], convolutional neural networks (CNNs) are usually chosen as the shared representation model. Recently, Vision Transformer (ViT) [18] has achieved excellent results in a broad range of computer vision tasks [25] and Transformer-based [64] models have become increasingly popular in the computational neuroscience community [55; 62; 68]. Ye and Pandarinath [73] proposed a Neural Data Transformer to model spike trains, which was extended by Le and Shlizerman [35] using a Spatial Transformer and achieved state-of-the-art performance in 4 neural datasets. Berrios and Deza [7] introduced a data augmentation and adversarial training procedure to train a dual-stream Transformer which showed strong performance in predicting monkey V4 responses. In modeling the mouse visual cortex, Conwell et al. [15] experimented with a wide range of out-of-the-box DNNs, including CNNs and ViTs, to compare their representational similarity when pre-trained versus randomly initialized. Here, we explore the benefits of the ViT convolution-free approach and self-attention mechanism as the core representation learner in a data-driven neural predictive model. Since neural variability shows a significant correlation with the internal brain state [46; 47; 60], information about behavior can greatly improve visual system models in the prediction of neural responses [5; 20]. To exploit this relationship, we also investigate a principled mechanism in the model architecture to integrate behavioral states with visual information. Altogether, we propose V1T, a novel ViT-based architecture that can capture visual and behavioral representations of the mouse visual cortex. This core architecture, in combination with an efficient per-animal readout [43], outperforms the previous state-of-the-art model by 12.7% and 19.1% on two large-scale mouse V1 datasets [20; 69], which consist of neural recordings of thousands of neurons across over a dozen behaving rodents in response to thousands of natural images. Moreover, we show that the attention weights learned by the core module correlate with behavioral variables, thus drawing useful parallels between the model and the visual cortex. ## 2 Neural data We considered two large-scale neural datasets for this work, Dataset S (Sensorium dataset) by Willeke et al. [69] and Dataset F by Franke et al. [20]. These two datasets consist of V1 recordings from behaving rodents in response to thousands of natural images, providing an excellent platform to evaluate our proposed method and compare it against previous visual predictive models. We first briefly describe the animal experiment in Dataset S. 
A head-fixed mouse was placed on a cylindrical treadmill with a \(25\,\mathrm{inch}\) monitor placed \(15\,\mathrm{cm}\) away from the animal's left eye, and more than 7,000 neurons from layer L2/3 in V1 were recorded via two-photon calcium imaging. Note that the position of the monitor was selected such that the stimuli were shown to the center of the recorded population receptive field. Gray-scale images \(x_{\text{image}}\in\mathbb{R}^{c=1\times h=144\times w=256}\) from ImageNet [16] were presented to the animal for \(500\,\mathrm{ms}\) with a blank screen period of \(300\) to \(500\,\mathrm{ms}\) between each presentation. Neural activities were accumulated between \(50\) and \(500\,\mathrm{ms}\) after each stimulus onset. In other words, for a given neuron \(i\) in trial (stimulus) \(t\), the neural response is represented by a single value \(r_{i,t}\). In addition, the anatomical coordinates of each neuron as well as five behavioral variables \(x_{\text{behaviors}}\in\mathbb{R}^{5}\) were recorded alongside the calcium responses. These variables include pupil dilation, the derivative of the pupil dilation, pupil center (2d-coordinates) and running speed of the animal. Each recording session consists of up to 6,000 image presentations (i.e. trials), where 5,000 unique images are combined with 10 repetitions of 100 additional unique images, randomly intermixed. The 1,000 trials with repeated images are used as the test set and the rest are divided into train and validation sets with a split ratio of 90% and 10% respectively. In total, data from 5 rodents, Mouse A to E, were recorded in this dataset. Footnote 1: The test set labels from 2 additional mice are not publicly available [69]; we hence excluded these animals from our analysis. Dataset F follows largely the same experimental setup with the following distinctions: colored images (UV-colored and green-colored, i.e. \(x_{\text{image}}\in\mathbb{R}^{c=2\times h\times w}\)) from ImageNet were presented on a screen placed \(12\,\mathrm{cm}\) away from the animal; 4,500 unique colored and 750 monochromatic images were used as the training set and an additional 100 unique colored and 50 monochromatic images were repeated 10 times throughout the recording; in total, 10 rodents, Mouse F to O, were used in the experiment with \(1,000\) V1 neurons recorded from each animal. Table A.1 summarizes the experimental information from both datasets.

## 3 Previous work

A substantial body of work has recently focused on predictive models of cortical activity that learn a shared representation across neurons [8, 9, 20, 32, 43], which stems from the idea in systems neuroscience that cortical computations share common features across animals [45]. In DNN models, these generalizing features are learned in a nonlinear core module, then a subsequent neuron-specific readout module linearly combines the relevant features in this representation to predict the neural responses. Recently, Lurz et al. [43] introduced a shared CNN core and animal-specific Gaussian readout combination that achieved excellent performance in mouse V1 neural response prediction, and this is the current state-of-the-art model on large-scale benchmarks including Dataset S and Dataset F. Here, we provide a brief description for each of the modules in their proposed architecture, which our work is built upon. **CNN core** Typically, the core module learns the shared visual representation via a series of convolutional blocks [9, 20, 43]. In Lurz et al.
[43], given an input image \(x_{\text{image}}\in\mathbb{R}^{c\times h\times w}\), the CNN core module outputs a latent representation vector \(z\in\mathbb{R}^{d\times h^{\prime}\times w^{\prime}}\) where \(h^{\prime}=h-k+1\) and \(w^{\prime}=w-k+1\). Previous works have shown correlation between behaviors and neural variability, and that the behavioral variables can significantly improve neural predictivity [5, 44, 51, 60]. Therefore, the authors proposed to integrate the behavioral variables \(x_{\text{behaviors}}\in\mathbb{R}^{5}\) with the visual stimulus by duplicating each variable to a \(h\times w\) matrix and concatenating them with \(x_{\text{image}}\) in the channel dimension, resulting in an input vector of size \(\mathbb{R}^{c+5\times h\times w}\). **Readout** To compute the neural response of neuron \(i\) from mouse \(m\) with \(n_{m}\) neurons, the readout module \(\mathbb{R}_{m}:\mathbb{R}^{d\times h^{\prime}\times w^{\prime}}\to\mathbb{R}^{ n_{m}}\) by Lurz et al. [43] computes a linear regression of the core representation \(z\) with weights \(w_{i}\in\mathbb{R}^{w^{\prime}\times h^{\prime}\times c}\), followed by an ELU activation with an offset of 1, i.e. \(o=\text{ELU}(\mathbb{R}_{m}(z))+1\), which keeps the response positive. The regression is performed by a Gaussian readout, which learns the parameters of a 2d Gaussian distribution whose mean \(\mu_{i}\) represents the center of the receptive field of the neuron in the image space and whose variance quantifies the uncertainty of the receptive field position, which decreases over training. The response is thus obtained as a linear combination of the feature vector of the core at a single spatial position, which allows the model to greatly reduce the number of parameters per neuron in the readout. Notably, to learn the position \(\mu_{i}\), the model also exploits the retinotopic organization of V1 by coupling the recorded cortical 2d coordinates of each neuron with the estimated center of the receptive field from the readout. Moreover, the authors introduced a shifter module to adjust (or shift) the \(\mu_{i}\) receptive field center of neuron \(i\) to account for the trial-to-trial variability due to eye movement. The shifter network \(\mathbb{R}^{2}\to\mathbb{R}^{2}\) consists of 3 dense layers with hidden size of 5 and \(\tanh\) activation; it takes as input the 2d pupil center coordinates and learns the vertical and horizontal adjustments needed to shift \(\mu_{i}\). ## 4 Methods The aim of this work is to design a neural predictive model \(F(x_{\text{image}},x_{\text{behaviors}})\) that can effectively incorporate both visual stimuli and behavioral variables to predict responses \(o\) that are faithful to real recordings \(r\) from mouse V1. With that goal, we first detail the core architectures proposed in this work, followed by the training procedure and evaluation metrics. ### ViT core Vision Transformers [18], or ViTs, have achieved competitive performance in many computer vision tasks, including object detection and semantic segmentation, to name a few [13; 14; 61]. Here, we propose a data-driven ViT core capable of learning a shared representation of the visual stimuli that is relevant for the prediction of neural responses in the visual cortex. Moreover, we introduce an alternative approach in ViT to encode behavioral variables in a more principled way when compared to previous methods and further improve the neural predictive performance of the overall model. 
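Both the CNN baseline above and the cores introduced in this section are paired with this per-animal Gaussian readout, which we keep unchanged. For reference, here is a simplified PyTorch sketch of such a readout (ours, not the reference implementation; the positional uncertainty, the coupling to cortical coordinates and the shifter are all omitted, and bilinear sampling stands in for reading out the core features at the learned position \(\mu_{i}\)).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianReadout(nn.Module):
    """Simplified sketch: one learned position per neuron, features sampled there,
    combined with per-neuron weights, then ELU + 1 to keep responses positive."""
    def __init__(self, num_neurons, feature_dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_neurons, 2))            # receptive-field centers
        self.weight = nn.Parameter(torch.randn(num_neurons, feature_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(num_neurons))

    def forward(self, features):                                       # (batch, d, h', w')
        b = features.shape[0]
        grid = torch.tanh(self.mu).view(1, -1, 1, 2).expand(b, -1, -1, -1)
        sampled = F.grid_sample(features, grid, align_corners=False)   # (batch, d, n, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)                  # (batch, n, d)
        out = (sampled * self.weight).sum(-1) + self.bias              # (batch, n)
        return F.elu(out) + 1
```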
The original ViT [18] classifier is comprised of 3 main components: (1) a tokenizer first encodes the 3d image (including channel dimension) into 2d patch embeddings, (2) the embeddings are then passed through a series of Transformer [64] encoder blocks, each consisting of a Multi-Head Attention (MHA) and a Multi-Layer Perceptron (MLP) module which requires 2d inputs, and finally (3) a classification layer outputs the class prediction. The following sections detail the modifications made to convert the vanilla ViT to a shared visual representation learner for the downstream readout modules. Moreover, we experiment with a number of recently proposed efficient ViTs that have been emphasized for learning from small to medium size datasets. **Tokenizer** The tokenizer, or patch encoder, extracts non-overlapping square patches of size \(p\times p\) from the 2d image and projects each patch to embeddings \(z_{0}\) of size \(d\), i.e. \(\mathbb{R}^{c\times h\times w}\rightarrow\mathbb{R}^{l\times(cp^{2})}\rightarrow\mathbb{R}^{l\times d}\) where \(l=hw/p^{2}\) is the number of patches. Dosovitskiy et al. [18] proposed two tokenization methods in the original ViT, where patches can be extracted either via a \(p\times p\) sliding window over the height and width dimensions of the image, followed by a linear layer with \(d\) hidden units, or via a 2d convolutional layer with kernel size \(p\) and \(d\) filters. Typically, Transformers and ViTs are required to pre-train on large scale datasets and then fine-tune on the downstream target dataset in order to obtain optimal performance [25]. Although the datasets used in this work are already some of the largest publicly available V1 recordings, they are still small by modern deep learning standards. To avoid the tedious step of pre-training, we considered two recently introduced efficient ViT methods that are highly competitive in scarce data settings. Lee et al. [36] proposed Shifted Patch Tokenization (SPT) to combat the low inductive bias in ViTs and enable better learning from limited data. Conceptually, SPT allows additional (adjacent) pixel values to be included in each patch, thus improving the locality, or receptive field, of the model. The input image \(x_{\text{image}}\in\mathbb{R}^{1\times h\times w}\) is shifted spatially by \(p/2\) in one of the four diagonal directions (top-left, top-right, bottom-left, or bottom-right) with zero padding, and the four shifted images (i.e. each shifted in one diagonal direction) are then concatenated with the original image, resulting in a vector which can be processed by the two patch extraction approaches mentioned above. Figure 1: Illustration of the V1T block architecture. With a similar goal in mind, the Compact Convolutional Transformer (CCT, Hassani et al. 26) was proposed with a convolutional tokenizer to learn the patch embeddings that can take advantage of the translation equivariance and locality inherent in CNNs. The proposed mini-CNN is fairly simple: it consists of a 2d convolution layer with a \(p\times p\) kernel and filter size \(d\), followed by ReLU activation and a max pool layer. In this work, we experimented with and compared all four tokenization methods: sliding window, a single 2d convolutional layer, SPT and CCT. As ViTs are agnostic to the spatial structure of the data, a positional embedding is added to each patch to encode the relative position of the patches with respect to each other [18, 25], and this positional embedding can either be learned or sinusoidal.
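A minimal sketch (ours, with placeholder sizes) of a convolutional patch tokenizer whose stride can be smaller than the patch size, so that overlapping patches are possible; the actual patch size, stride and embedding dimension are selected by the hyperparameter search described later.

```python
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    def __init__(self, in_channels=1, emb_dim=64, patch_size=8, stride=1):
        super().__init__()
        # one 2d convolution plays the role of patch extraction plus linear projection
        self.proj = nn.Conv2d(in_channels, emb_dim, kernel_size=patch_size, stride=stride)

    def forward(self, images):                    # images: (batch, c, h, w)
        z = self.proj(images)                     # (batch, d, h', w')
        return z.flatten(2).transpose(1, 2)       # (batch, l = h' * w', d)

tok = ConvTokenizer()
patches = tok(torch.rand(4, 1, 36, 64))           # resized stimuli, as in the paper
print(patches.shape)                              # torch.Size([4, 1653, 64])
```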
Finally, a learnable BERT [17] [cls] token is typically added to the patch embeddings (i.e. \(z_{0}\in\mathbb{R}^{(l+1)\times d}\)) to represent the class of the image. **Transformer encoder** The encoder consists of a series of ViT blocks, where each block comprises two sub-modules: Multi-Head Attention (MHA) and Multi-Layer Perceptron (MLP). In each MHA module, we applied the standard self-attention formulation [18, 64]: Attention\((Q,K,V)=\text{softmax}(QK^{T}/\sqrt{d})V\), where query \(Q\), key \(K\) and value \(V\) are linear projections of the input \(z_{b}\) at block \(b\). Conceptually, the self-attention layer assigns a pairwise attention value among all the patches (or tokens). In addition to the standard formulation, we also experimented with the Locality Self Attention (LSA, Lee et al. 36), where a diagonal mask is applied to \(QK^{T}\) to prevent strong connections in self-tokens (i.e. diagonal values in \(QK^{T}\)), thus improving the locality inductive bias. Each sub-module is preceded by Layer Normalization (LayerNorm, Ba et al. 4), and followed by a residual connection to the next module. **Reshape representation** To make the dimensions compatible with the Gaussian readout module (see Section 3 for an overview), we reshape the 2d core output \(z\in\mathbb{R}^{l\times d}\) to \(\mathbb{R}^{d\times h^{\prime}\times w^{\prime}}\), where \(l=h^{\prime}\times w^{\prime}\) and \(h^{\prime}\leq w^{\prime}\). Note that if the number of patches \(l\) is not sufficiently large, it is possible for the same position in \(z\) to be mapped to multiple neurons, which could lead to adverse effects. For instance, in the extreme case of \(l=1\), all neurons would be mapped to a single \(p\times p\) region in the visual stimulus (i.e. they would have the same visual receptive field), which is not biologically plausible given the size of the recorded cortical area [21]. We therefore set the stride size of the patch encoder as a hyperparameter and allow for overlapping patches, thus letting the hyperparameter optimization algorithm select the optimal number of patches. #### 4.1.1 Incorporating behaviors Previous studies have shown that visual responses can be influenced by behavioral variables and brain states; for example, changes in arousal, which can be monitored by tracking pupil dilation, lead to stronger (or weaker) neural responses [33, 52]. As a consequence, the visual representation learned by the core module should also be adjusted according to the brain state. Here, instead of concatenating the behavioral variables as additional channels in the image (see Section 3), we propose an alternative method to integrate behavioral variables with visual stimuli using a novel ViT architecture - V1T, illustrated in Figure 1. We introduced a behavior MLP module (\(\text{B-MLP}:\mathbb{R}^{5}\rightarrow\mathbb{R}^{d}\)) at the beginning of the encoder block which learns to adjust the visual latent vector \(z\) based on the observed behavioral states \(x_{\text{behaviors}}\). Each B-MLP module comprises two fully-connected layers with \(d\) hidden units and a dropout layer in between; \(\tanh\) activation is used so that the adjustments to \(z\) can be both positive and negative. Importantly, as layers in DNNs learn different features of the input, usually increasingly abstract and complex with deeper layers [50, 74], we hypothesize that the influence of the internal brain state should therefore change from layer to layer. 
To that end, we learned a separate B-MLP\({}_{b}\) at each block \(b\) in the V1T core, thus allowing level-wise adjustments to the visual latent variable. Formally, B-MLP\({}_{b}\) projects \(x_{\text{behaviors}}\) to the same dimension of the embeddings \(z_{b-1}\), followed by an element-wise summation between latent behavioral and visual representations, and then the rest of the operations in the encoder block: \[z_{b} \gets z_{b-1}+\text{B-MLP}_{b}(x_{\text{behaviors}}) \tag{1}\] \[z_{b} \leftarrow\text{MHA}_{b}(\text{LayerNorm}(z_{b}))+z_{b}\] (2) \[z_{b} \leftarrow\text{MLP}_{b}(\text{LayerNorm}(z_{b}))+z_{b} \tag{3}\] where \(z_{0}\) denotes the original patch embeddings. ### Training and evaluation In order to isolate the change in prediction performance that is solely due to the proposed core architectures, we employed the same readout architectures by Lurz et al. [43], as well as a similar data preprocessing and model training procedure. We used the same train, validation and test split provided by the two datasets (see Section 2). Natural images, recorded responses, and behavioral variables (i.e. pupil dilation, dilation derivative, pupil center, running speed) were standardized using the mean and standard deviation measured from the training set and the images were then resized to \(36\times 64\) pixels from \(144\times 256\) pixels. The shared core and per-animal readout modules were trained jointly using the AdamW optimizer [42] to minimize the Poisson loss \[\mathcal{L}_{m}^{\text{Poisson}}(r,o)=\sum_{t=1}^{n_{t}}\sum_{i=1}^{n_{m}} \left(o_{i,t}-r_{i,t}\log(o_{i,t})\right) \tag{4}\] between the recorded responses \(r\) and predicted responses \(o\), where \(n_{t}\) is the number of trials in one batch and \(n_{m}\) the number of neurons for mouse \(m\). A small value \(\varepsilon=1e-8\) was added to both \(r\) and \(o\) prior to the loss calculation to improve numeric stability. Gradients from each mouse were accumulated before a single gradient update to all modules. We tried to separate the gradient update for each animal, i.e. one gradient update per core-readout combination, but this led to a significant drop in performance. We suspect this is because the core module failed to learn a generalized representation among all animals when each update step only accounted for gradient signals from one animal. Although learning rate warm-up and pre-training on large datasets are considered the standard approach to train Transformers [25, 70], all our models were trained from scratch to be consistent with previous work in this task. We used a learning rate scheduler in conjunction with early stopping: if the validation loss did not improve over 10 consecutive epochs, we reduced the learning rate by a factor of \(0.3\); if the model still had not improved after 2 learning rate reductions, we then terminated the training process. Dropout [57], stochastic depths [28], and L1 weight regularization were added to prevent overfitting. The weight in dense layers were initialized by sampling from a truncated normal distribution (\(\mu=0.0,\sigma=0.02\)) with the bias values were set to 0.0; whereas the weight and bias in LayerNorm were set to 1.0 and 0.0. Each model was trained on a single Nvidia RTX 2080Ti GPU and all models converged within 200 epochs. Finally, we employed Hyperband Bayesian optimization [40] to find the hyperparameters that achieved the best validation loss, this included finding the optimal tokenization method and self-attention mechanism. 
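A schematic of the resulting training step (our own sketch, not the released code; `model` stands for the shared core, `readouts` for the dictionary of per-animal Gaussian readouts, and the readout output is assumed to be positive thanks to the ELU plus one).

```python
import torch

def poisson_loss(responses, outputs, eps=1e-8):
    # Poisson loss (4); eps is added to both terms for numeric stability
    outputs = outputs + eps
    responses = responses + eps
    return torch.sum(outputs - responses * torch.log(outputs))

def train_step(model, readouts, batches, optimizer):
    """batches: dict mouse_id -> (images, behaviors, responses)."""
    optimizer.zero_grad()
    total = 0.0
    for mouse_id, (images, behaviors, responses) in batches.items():
        features = model(images, behaviors)              # shared core
        predictions = readouts[mouse_id](features)       # per-animal readout
        loss = poisson_loss(responses, predictions)
        loss.backward()                                   # accumulate gradients across mice
        total += loss.item()
    optimizer.step()                                      # single update for core and readouts
    return total
```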
The initial search space and final hyperparameter settings are detailed in Table A.2. The prediction performance of our models was measured by the single trial correlation metric, used by Willeke et al. [69] and Franke et al. [20], which can also account for the trial-to-trial variability in the test set where the same visual stimuli were shown multiple times. We computed the correlation between recorded \(r\) and predicted \(o\) responses: \[\text{corr}(r,o)=\frac{\sum_{i,j}(r_{i,j}-\bar{r})(o_{i,j}-\bar{o})}{\sqrt{\sum_{i,j}(r_{i,j}-\bar{r})^{2}\sum_{i,j}(o_{i,j}-\bar{o})^{2}}} \tag{5}\] where \(\bar{r}\) and \(\bar{o}\) are the average recorded and predicted responses across all trials in the test set. ## 5 Results Our goal is to improve V1 response prediction with an architecture that generalizes across animals. Here, we first discuss the final core architecture chosen after the Bayesian hyperparameter optimization, followed by a comparison of our proposed core against a baseline linear-nonlinear model and the previous state-of-the-art CNN model [43] on two large-scale mouse V1 datasets. Finally, we analyze the trained core module and present the insights that can be gained from it. **Tuning** We first looked at how the hyperparameters of ViT and V1T affect model performance. We observed the predictive performance to be quite sensitive to the number of patches, the patch size and the patch stride. The most performant models used a patch size of 8 and a stride size of 1, thus extracting the maximum number of patches. We note that this allows the readout to learn a mapping from the shared core representation of the stimulus to the cortical position of each neuron that spans across the whole image, and not just a part of the image. Since the visual receptive fields of neurons are distributed across a large area of the monitor given the size of the recorded cortical area, this leads to more accurate response predictions from the model. Furthermore, we found that the two efficient tokenizers, SPT and CCT, whose aim is to reduce the number of patches, both failed to improve the model performance, reiterating that a finer tiling of the image is crucial for accurate predictions of cortical activity. In line with recent studies on efficient Transformers [22, 26], we observed no empirical difference between the choice of learnable and sinusoidal positional embedding, and that the inclusion of the [cls] token had little to no effect in the representation learning task. Moreover, we found that the LSA attention mechanism, which encourages the model to learn from inter-tokens by masking out the diagonal self-token, led to worse performance, suggesting that information from adjacent patches is not as influential in this task as it is in image classification. **Comparison** Next, we compared the models trained with the (vanilla) ViT and V1T core modules against a baseline linear-nonlinear model and the CNN model on the two large-scale mouse V1 datasets (see Section 2); their results are summarized in Table 1 and Table 2. By simply replacing the CNN core module with the tuned vanilla ViT architecture, we observed a considerable improvement in response predictions across all animals, with an average increase of \(9.5\%\) and \(11.3\%\) in single trial correlation over the CNN model in Dataset S and Dataset F respectively. Thus far, the core module encoded the brain state of the animals by concatenating behavioral variables as additional channels in the natural image.
We propose V1T, an architecture that encodes the brain state via a nonlinear transformation by B-MLP, followed by element-wise summation with the latent representation of the image at every layer (block) in the core module (see Section 4.1.1 for further discussion). The model trained with the V1T core further improved the average prediction performance by \(2.9\%\) and \(7.0\%\) in the two datasets, or \(12.7\%\) and \(19.1\%\) over the CNN model. Hence, our proposed core architecture achieved state-of-the-art results in both gray-scale and colored natural visual stimuli, as well as varying sizes of neuron populations. A common practice to improve performance of machine learning models is ensemble learning, including for neural response prediction [20, 69]. Following the procedure in Franke et al. [20], we trained 10 models with different random seed initializations and selected the 5 best models based on their validation performance. The average of the selected models constituted the output of the \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & & \multicolumn{6}{c}{Mouse} \\ Method & A & B & C & D & E & avg (gain) \\ \hline Linear & 0.262 & 0.306 & 0.281 & 0.263 & 0.262 & 0.275 (-27.2\%) \\ CNN & 0.350 & 0.424 & 0.385 & 0.371 & 0.360 & 0.378 (0\%) \\ ViT (v) & 0.375 & 0.455 & 0.415 & 0.433 & 0.392 & 0.414 (+9.5\%) \\ ViT & 0.401 & 0.464 & 0.430 & 0.436 & 0.401 & 0.426 (+12.7\%) \\ ViT (center crop \(\alpha=0.8\)) & **0.403** & **0.468** & **0.433** & **0.442** & **0.403** & **0.430** (+13.8\%) \\ \hline \multicolumn{8}{l}{Ensemble of 5 models} \\ \hline CNN & 0.379 & 0.443 & 0.409 & 0.406 & 0.385 & 0.404 (+6.9\%) \\ ViT & **0.414** & **0.475** & **0.443** & **0.452** & **0.413** & **0.439** (+16.1\%) \\ \hline \hline \end{tabular} \end{table} Table 1: Single trial correlation between predicted and recorded responses in Dataset S [69] test set. The average correlation and its relative improvement (in brackets) to the CNN model [43] are shown in the last column. To demonstrate that the extracted attention maps can inform us about the (ir)relevant regions in the visual stimulus, we trained an additional ViT core with images center cropped to \(\alpha h\times\alpha w\) pixels. ViT (v) denote the tuned vanilla ViT core. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & & & \multicolumn{6}{c}{Mouse} \\ Method & F & G & H & I & J & K & L & M & N & O & avg (gain) \\ \hline Linear & 0.194 & 0.254 & 0.214 & 0.279 & 0.255 & 0.233 & 0.148 & 0.231 & 0.174 & 0.243 & 0.223 (-27.8\%) \\ CNN & 0.253 & 0.371 & 0.184 & 0.377 & 0.329 & 0.319 & 0.207 & 0.331 & 0.341 & 0.376 & 0.309 (0\%) \\ ViT (v) & 0.310 & 0.375 & 0.352 & 0.379 & 0.385 & 0.262 & 0.294 & 0.360 & 0.358 & 0.368 & 0.344 (+11.3\%) \\ ViT & **0.326** & **0.386** & **0.387** & **0.394** & **0.398** & **0.373** & **0.298** & **0.377** & **0.363** & **0.379** & **0.368** (+19.1\%) \\ \hline \hline \end{tabular} \end{table} Table 2: Single trial correlation between predicted and recorded responses in Dataset F [20] test set. The average correlation and its relative improvement (in brackets) to the CNN model are shown in the last column. ViT (v) denote the tuned vanilla ViT core. ensemble model. The CNN ensemble model achieved an average improvement of \(6.9\%\) in Dataset S as compared to its non-ensemble variant. Nevertheless, the individual V1T model still outperformed the CNN ensemble by \(5.4\%\). 
For completeness, the V1T ensemble trained with the same procedure achieved an average single trial correlation of 0.439, which corresponds to an \(8.7\%\) improvement over the CNN ensemble model. **B-MLP activation** Next, we investigated different variations of the B-MLP module. The motivation of the proposed behavior module is to enable the core to learn a shared representation of the visual and behavioral variables across the animals. Moreover, the level-wise connections allow the self-attention module in each V1T block to encode different behavioral features with the latent visual representation. We experimented with a per-animal B-MLP module (while the rest of the core was still shared across animals) which did not perform any better than the shared counterpart, suggesting that the behavior module can indeed learn a shared internal brain state representation. We also tested having the module in the first block only, as well as using the same module across all blocks (i.e. all B-MLP\({}_{b}\) shared the same weights). Both variants, however, led to worse results, with a \(2-4\%\) reduction in predictive performance on average. To further examine the proposed formulation, we analyzed the activation patterns of the shared behavior module at each level in V1T, shown in Figure 2(a). We observed a noticeable distinction in B-MLP outputs in earlier versus deeper layers, with a higher spread in deeper layers, which corroborates our hypothesis that the influence of the behavioral variables differs at each level of the visual representation process. **Attention visualization** In addition to the performance gain in the proposed core modules, the self-attention mechanism inherent in Transformers can be used to visualize areas in the input image that the model learns to focus on. In our case, it allows us to detect the regions in the visual stimulus that drive the neural responses. To that end, we extracted the per-stimulus attention map learned by the V1T core module via Attention Rollout [1, 18]. Briefly, we averaged the attention weights (i.e. \(\text{Softmax}(QK^{T}/\sqrt{d})\)) across all attention heads in MHA, and then multiplied the weights over all layers (blocks), recursively. Figure 2(b) shows the normalized average attention weights superimposed on the input images from Mouse A in Dataset S, with more examples available in Appendix A.1. Given that the position of the computer monitor was chosen in order to center the population receptive field, V1 responses from the recorded region should be mostly influenced by the center of the image [69]. Here, we can see a clear trend where the core module is focusing on the central regions of the images to predict the neural response, which aligns with our expectation from the experiment conditions. Interestingly, when the core module is given the same image but with varying behaviors (i.e. test set, second row in Figure 2(b)), we noticed variations in the attention patterns. This suggests that the V1T core is able to take behavioral variables into consideration and adjust its attention solely based on the brain state. Figure 2: (a) \(\tanh\) activation distributions of B-MLP at each level (block) in the V1T core. The spread of the activation distributions indicates the varying influence of behavioral variables at each block in the core module. (b) V1T attention visualization on Mouse A validation and test samples via Attention Rollout [1].
Each attention map was normalized to \([0,1]\), and the standardized behavioral variables of the corresponding trial are shown below the image in the format of [pupil dilation, dilation derivative, pupil center (\(x\), \(y\)), speed]. More examples are available in Figure A.2. These attention maps can inform us of the area of (dis)interest in the visual input which, in turn, allow us to build more sophisticated predictive models. For instance, the core module consistently assigned higher weights to patches in the center of the image, suggesting information at the edges of the image are less (or not at all) relevant for the recorded group of neurons. As a practical example, we eliminated irrelevant information in the stimuli by center cropping the image to \(\alpha 144\times\alpha 256\) pixels where \(0<\alpha\leq 1\), prior to downsampling the input to \(36\times 64\) pixels. After a grid search on \(\alpha\), we found that \(\alpha=0.8\) (i.e. removing \(36\%\) of the total number of pixels) further improved the average predictive performance of V1T by \(1\%\) (see Table 1). Note that we also obtained similar improvement with the CNN model. To further explore the relationship between the attention weights learned by the core module and the behavioral information, we measured the correlation between the center of mass of the attention maps and the pupil centers in the vertical and horizontal axes. The correlation coefficient of each animal in Dataset S is summarized in Table 3. Overall, we found a moderate correlation between the attention maps and the pupil center of the animal, with an average correlation of \(0.525\pm 0.079\) and \(0.409\pm 0.105\) in the horizontal and vertical directions across animals with p-values \(\ll 0.0001\). These correlations indicate that attention maps can reveal the impact of behavioral variables on the neural responses. Therefore, this framework can be particularly useful for studies investigating the coding of visual information across visual cortical areas (V1 and higher visual areas), as the model could determine what part(s) of the visual stimulus is processed along the "hierarchy" of visual cortical areas. Since higher visual areas are known to have larger receptive fields [23; 67], we would expect a larger part of the image to be relevant for the core module. Further investigation of the attention map could also be used to determine which part of a visual scene was relevant when performing more specific tasks, such as object recognition, decision-making, or spatial navigation. ## 6 Discussion In this work, we presented a novel core architecture V1T to model the visual and behavioral representations of mouse primary visual cortex activity in response to natural visual stimuli. The proposed representation learner integrates behavioral variables with the latent visual variables via a nonlinear transformation in a layer-wise fashion. We evaluated our proposed method on two large-scale mouse V1 datasets, which comprised of neural activities of thousands of neurons responding to gray-scale and colored natural images of over a dozen behaving rodents, and achieved considerable improvements of 12.7% and 19.1% in prediction performance over the previous state-of-the-art method. This further emphasizes the effect of behavioral states on visual cortical responses, and how efficiently combining visual stimulus and internal brain state can substantially improve neural response predictions. 
With a strong neural predictive performance, this model also provides a framework to investigate _in silico_ the computations in the visual system. For instance, the center of the attention maps learned by our model show a correlation with the pupil center of the animals, highlighting how features of this architecture can be phenomenologically linked to properties of the visual cortex. This provides an excellent platform to explore how visual information is encoded in cortical areas. In future work, we plan to further investigate the relationship between behavioral variables and neural responses. The attention visualization technique, for instance, enables ablation studies on the effect of each specific variable on the neural activity. Moreover, we plan to extend the method to recordings of the visual cortex in response to natural videos, to track how this relationship may evolve over time, as well as experiments in naturalistic settings, to know which part of a visual scene is relevant for certain behaviors. \begin{table} \begin{tabular}{c c c} \hline \hline Mouse & x-axis (p-value) & y-axis (p-value) \\ \hline A & 0.682 (9.642e-138) & 0.568 (2.036e-86) \\ B & 0.489 (4.265e-61) & 0.493 (2.279e-62) \\ C & 0.505 (5.127e-65) & 0.370 (2.215e-33) \\ D & 0.484 (1.943e-59) & 0.310 (1.519e-23) \\ E & 0.464 (4.318e-54) & 0.302 (2.314e-22) \\ \hline \hline \end{tabular} \end{table} Table 3: Correlations between the center of mass of the attention maps and pupil centers in the (x-axis) horizontal and (y-axis) vertical direction in Dataset S test set. #### Acknowledgments We sincerely thank Willeke et al. and Franke et al. for making their high-quality large-scale mouse recordings publicly available which makes this work possible. We would also like to thank Antonio Vergari for his insightful comments and suggestions on improving the manuscript. This work was supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. For the purpose of open access, the author has applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising.
2308.16313
Euler's First Proof of Stirling's Formula
We present a proof given by Euler in his paper {\it ``De serierum determinatione seu nova methodus inveniendi terminos generales serierum"} \cite{E189} (E189:``On the determination of series or a new method of finding the general terms of series") for Stirling's formula. Euler's proof uses his theory of difference equations with constant coefficients. This theory outgrew from his earlier considerations on inhomogeneous differential equations with constant coefficients of finite order that he tried to extend to the case of infinite order.
Alexander Aycock
2023-08-30T20:46:04Z
http://arxiv.org/abs/2308.16313v1
# Euler's First Proof of Stirling's Formula ###### Abstract We present a proof given by Euler in his paper _"De serierum determinatione seu nova methodus inveniendi terminos generales serierum"_[4] (E189: "On the determination of series or a new method of finding the general terms of series") for Stirling's formula. Euler's proof uses his theory of difference equations with constant coefficients. This theory outgrew from his earlier considerations on inhomogeneous differential equations with constant coefficients of finite order that he tried to extend to the case of infinite order. ## 1 Introduction Stirling's formula \[n!\sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\quad\text{for}\quad n\to\infty \tag{1}\] was first proven by Stirling. It can be proven by application of the Euler-Maclaurin summation formula or the saddle point approximation. But in his paper _"De serierum determinatione seu nova methodus inveniendi terminos generales serierum"_[4] (E189: "On the determination of series or a new method of finding the general terms of series") Euler gave another proof based on his theory of inhomogeneous linear difference equations with constant coefficients. His theory will be described in section 2. Finally, we will present and discuss Euler's proof in section 3. ## 2 Euler's Theory of Inhomogeneous Difference Equations with Constant Coefficients In this section we will discuss Euler's application of his theory of inhomogeneous difference equations with constant coefficients to the derivation of Stirling's formula (1). Euler reduced such difference equations to a differential equation of infinite order. Having treated the finite order case in _"Methodus aequationes differentiales altiorum graduum integrandi ulterius promota"_[3] (E188: "The method to integrate differential equations of higher degrees expanded further") before, in [4] he then tried to transfer the results from the before-mentioned paper to the case of infinite order. Unfortunately, this is not possible in the way Euler intended and hence led Euler to a wrong result when he applied his theory to the case of the logarithm of the factorial. We will explain this in more detail in section 3. But first we will briefly state what we need to discuss Euler's solution of inhomogeneous linear differential equations of finite (see section 2.1) and infinite order (see section 2.2). ### Inhomogeneous Linear Differential Equations of Finite Order In his paper [3], Euler considered equations of the form: \[\left(a_{0}+a_{1}\frac{d}{dx}+a_{2}\frac{d^{2}}{dx^{2}}+\cdots+a_{n}\frac{d^{n}}{dx^{n}}\right)f(x)=g(x), \tag{2}\] with complex coefficients \(a_{0},a_{1},a_{2},\cdots,a_{n}\). Euler did not state any conditions on the function \(g(x)\)\({}^{1}\). In §22, Euler described the following procedure: First, find the zeros, with their multiplicities, of the expression: Footnote 1: The conditions on \(g(x)\) can be inferred from Euler's solution. But since we will not need this in this paper, we will not elaborate on this subject. \[P(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots+a_{n}z^{n}.\] Assume \(z=k\) is a solution of \(P(z)=0\). Then, if \(k\) is a simple zero\({}^{2}\) of \(P(z)\), a solution of (2) is given by: Footnote 2: In this note, we will only need the case of simple zeros and hence will only state the corresponding formula. In [3], Euler stated all cases from order 1 to 4 explicitly. \[f(x)=\frac{e^{kx}}{P^{\prime}(k)}\int e^{-kx}g(x)dx. \tag{3}\] Note that the indefinite integral introduces a constant of integration.
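As a quick symbolic sanity check of formula (3) (our illustration, not contained in Euler's paper), one can take a concrete second-order operator, sum the contribution of each simple zero of \(P\), and verify that the result solves (2); the operator and right-hand side below are arbitrary choices, and the integration constants are set to zero.

```python
import sympy as sp

x, z = sp.symbols('x z')
g = x                        # right-hand side g(x); an arbitrary choice
P = z**2 - 3*z + 2           # P(z) = (z - 1)(z - 2), i.e. the equation f'' - 3 f' + 2 f = g
roots = sp.solve(P, z)       # the simple zeros k = 1, 2

# Formula (3), summed over the simple zeros (integration constants set to zero):
f = sum(sp.exp(k*x) / sp.diff(P, z).subs(z, k) * sp.integrate(sp.exp(-k*x) * g, x)
        for k in roots)

residual = sp.diff(f, x, 2) - 3*sp.diff(f, x) + 2*f - g
print(sp.simplify(f))         # x/2 + 3/4
print(sp.simplify(residual))  # 0, so f indeed solves the equation
```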
### Reduction of the Difference Equation to a Differential Equation #### 2.2.1 General Idea As we mentioned in section 1, Euler's paper [4] is actually devoted to inhomogeneous difference equations with constant coefficients, i.e., equations of the form: \[a_{0}f(x)+a_{1}f(x+1)+\cdots+a_{n}f(x+n)=g(x), \tag{4}\] with complex coefficients \(a_{0},a_{1},\cdots,a_{n}\). Euler's idea to solve (4) is as follows: First, rewrite \(f(x+1),f(x+2),\cdots,f(x+n)\) in terms of \(f(x)\) and its derivatives by applying Taylor's theorem. Next, substitute the corresponding terms into equation (4). After some rearrangement, one arrives at an inhomogeneous differential equation of infinite order with constant coefficients, i.e., an equation of the form: \[\left(A_{0}+A_{1}\frac{d}{dx}+A_{2}\frac{d^{2}}{dx^{2}}+\cdots+A_{n}\frac{d^{n}}{dx^{n}}+\cdots\right)f(x)=g(x), \tag{5}\] where \(A_{0},A_{1},A_{2},\cdots\) are complex coefficients. Having transformed the initial equation (4) into this form, Euler argued that the same procedure outlined in section 2.1 also applies here. More precisely, one has to find all zeros of the expression: \[A_{0}+A_{1}z+A_{2}z^{2}+\cdots+A_{n}z^{n}+\cdots \tag{6}\] and has to construct the solution to (5) from those zeros. In his paper [4] Euler considered various examples, but in this note we are interested in his solution of the simple difference equation. #### 2.2.2 Example: The Simple Difference Equation For the sake of explanation and since we will need the result in section 3, let us consider the simple difference equation, i.e., the equation \[f(x+1)-f(x)=g(x) \tag{7}\] and let us describe Euler's solution. First, Euler\({}^{c}\) expanded \(f(x+1)\) by using Taylor's theorem: Footnote c: In his paper [4] §55, Euler considered the equation \(y(x)-y(x-1)=X(x)\) instead of equation (7). But this does not change the final result substantially, of course. \[f(x+1)=f(x)+\frac{d}{dx}f(x)+\frac{1}{2!}\frac{d^{2}}{dx^{2}}f(x)+\frac{1}{3!}\frac{d^{3}}{dx^{3}}f(x)+\cdots.\] Substituting this into equation (7), Euler arrived at the equation \[\left(\frac{d}{dx}+\frac{1}{2!}\frac{d^{2}}{dx^{2}}+\frac{1}{3!}\frac{d^{3}}{dx^{3}}+\cdots\right)f(x)=g(x).\] Thus, according to his theory, Euler needed to find the zeros (and their multiplicities) of the expression \[P(z)=\frac{z}{1!}+\frac{z^{2}}{2!}+\frac{z^{3}}{3!}+\cdots=e^{z}-1.\] The general zero of this expression is \(z=\log(1)\). But in his paper _"De la controverse entre Mrs. Leibnitz et Bernoulli sur les logarithmes des nombres negatifs et imaginaires"_[2] (E168: "On the controversy of Leibniz and Bernoulli on the logarithms of negative and imaginary numbers") Euler had demonstrated that the logarithm of a number is a multivalued expression and hence concluded that there are infinitely many zeros, namely: \[z=0,\pm 2\pi i,\pm 4\pi i,\pm 6\pi i,\pm 8\pi i,\cdots.\] Furthermore, all those zeros are simple, since: \[\lim_{z\to 2k\pi i}\frac{e^{z}-1}{z-2k\pi i}=\lim_{z\to 2k\pi i}\frac{e^{z}}{1}=e^{2k\pi i}=1,\] where L'Hospital's rule was used in the first step. Therefore, Euler used the general solution formula (3). This gave him: \[f(x)=\int g(x)dx+e^{2\pi ix}\int g(x)e^{-2\pi ix}dx+e^{-2\pi ix}\int g(x)e^{+2\pi ix}dx \tag{8}\] \[+e^{4\pi ix}\int g(x)e^{-4\pi ix}dx+e^{-4\pi ix}\int g(x)e^{+4\pi ix}dx+\cdots\] In [4] §55, Euler expressed the solutions using sines and cosines instead of the exponentials that we used here. Thus, we arrived at Euler's general solution of the simple difference equation (7).
Unfortunately, as we will see below in section 3.3, there is a mistake in Euler's solution (8). ## 3 Application to the Factorial In [4] §56 - §60, Euler applied his general formula (8) to the factorial\({}^{4}\), i.e., the function \(y(x)\) satisfying: Footnote 4: More precisely, Euler actually considered the difference equation satisfied by the \(\Gamma\)-function. \[y(x+1)=xy(x). \tag{9}\] This equation can be transformed into a simple difference equation by taking logarithms. We have: \[\log y(x+1)-\log y(x)=\log(x).\] ### Application of the General Formula Applying (8) with \(f(x)=\log y(x)\) and \(g(x)=\log(x)\) we get: \[f(x)=x\log x-x+C+e^{2\pi ix}\int\log(x)e^{-2\pi ix}dx+e^{-2\pi ix}\int\log(x)e^{+2\pi ix}dx \tag{10}\] \[+e^{4\pi ix}\int\log(x)e^{-4\pi ix}dx+e^{-4\pi ix}\int\log(x)e^{+4\pi ix}dx+\cdots\] where \(\int\log(x)dx\) was already evaluated and \(C\) is a constant of integration\({}^{e}\). Footnote e: This is the solution Euler gave in [4] §59. But he represented his solution using sines and cosines. ### Derivation of Stirling's Formula §59 - §60 of [4] contain the derivation of Stirling's formula (1) from (10). Euler first evaluated the general expression: \[e^{2k\pi ix}\int e^{-2k\pi ix}\log(x)dx.\] He did so by integrating by parts infinitely many times with \(e^{-2k\pi ix}\) as the function to be integrated. In modern and compact notation the result is\({}^{f}\): Footnote f: Since Euler used \(\sin(2k\pi x)\) and \(\cos(2k\pi x)\) instead of \(e^{-2k\pi ix}\), his result differs from the one we will find. But the derivation is the same in both cases, of course. \[e^{2k\pi ix}\int e^{-2k\pi ix}\log(x)dx=-\frac{\log(x)}{2k\pi i}+\sum_{n=1}^{\infty}\frac{(-1)^{n}(n-1)!}{(2k\pi i)^{n+1}x^{n}}+C_{k}e^{2k\pi ix}.\] \(C_{k}\) is a constant of integration. Proceeding in the same way for all other integrals, we have the formal identity: \[\log y(x)=x\log x-x+C+\sum_{k\in\mathbb{Z}\backslash\{0\}}\left(C_{k}e^{2k\pi ix}-\frac{\log(x)}{2k\pi i}+\sum_{n=1}^{\infty}\frac{(-1)^{n}(n-1)!}{(2k\pi i)^{n+1}x^{n}}\right)\] Let us simplify the sum. First, we note that \[C+\sum_{k\in\mathbb{Z}\backslash\{0\}}C_{k}e^{2k\pi ix}=:h(x)\] is a general periodic function, i.e., it satisfies \(h(x+1)=h(x)\) for all \(x\). Next, \[\sum_{k\in\mathbb{Z}\backslash\{0\}}\frac{\log(x)}{2k\pi i}=0,\] since the terms cancel each other. Therefore, we just need to evaluate the double sum. By a formal calculation we have: \[\sum_{k\in\mathbb{Z}\backslash\{0\}}\sum_{n=1}^{\infty}\frac{(-1)^{n}(n-1)!}{(2k\pi i)^{n+1}x^{n}}=\sum_{n=0}^{\infty}\sum_{k=1}^{\infty}\frac{2}{k^{2n+2}}\cdot\frac{(-1)^{n}(2n)!}{(2\pi)^{2n+2}\cdot x^{2n+1}}. \tag{11}\] The sum over \(k\) had been evaluated by Euler. The general formula can be found, e.g., in [1] and in modern notation reads: \[\sum_{k=1}^{\infty}\frac{1}{k^{2n}}=\frac{(-1)^{n-1}(2\pi)^{2n}B_{2n}}{2(2n)!}, \tag{12}\] where \(B_{n}\) is the \(n\)-th Bernoulli number.
Inserting this into (11), we find: \[\sum_{k\in\mathbb{Z}\backslash\{0\}}\sum_{n=1}^{\infty}\frac{(-1)^{n}(n-1)!}{ (2k\pi i)^{n+1}x^{n}}=\sum_{n=0}^{\infty}2\cdot\frac{(-1)^{n}(2\pi)^{2n+2}B_{2 n+2}}{2(2n+2)!}\cdot\frac{(-1)^{n}(2n)!}{(2\pi)^{2n+2}\cdot x^{2n+1}}.\] Many terms cancel such that: \[\sum_{k\in\mathbb{Z}\backslash\{0\}}\sum_{n=1}^{\infty}\frac{(-1)^{n}(n-1)!}{ (2k\pi i)^{n+1}x^{n}}=\sum_{n=1}^{\infty}\frac{B_{2n}}{(2n-1)2nx^{2n-1}}.\] Therefore, inserting everything we found into (10) we get: \[\log y(x)=x\log x-x+h(x)+\sum_{n=1}^{\infty}\frac{B_{2n}}{(2n-1)2nx^{2n-1}}, \tag{13}\] where \(h(x)\) satisfies \(h(x+1)=h(x)\). This equation is to be understood as an asymptotic series of course and is the formula Euler arrived at in [4] SS60, Euler just substituted the explicit numbers for the Bernoulli numbers. Comparing (13) to (1), the term \(\log(\sqrt{2\pi})\) is still missing. In [4] Euler argued that it follows from considering a special case, e.g., \(x=1\)8 and the initial condition \(y(1)=1\) to (9) such that one arrives at the final formula: Footnote 8: More precisely, Euler argued that \(h(x)\) is to be considered as constant in this case and the value of this constant is equal to the sum \(1-\sum_{n=1}^{\infty}\frac{B_{2n}}{(2n-1)2n}\) which Euler claims to be \(\frac{1}{2}\log(2\pi)\) without a proof in this paper, although the series does not converge due to the rapid growth of the Bernoulli numbers. But Euler knew that one can ascribe the beforementioned value to the sum, since it corresponds to the constant \(\sqrt{2\pi}\) in Stirling’s formula (1). \[\log y(x)=x\log x-x+\log(\sqrt{2\pi})+\sum_{n=1}^{\infty}\frac{B_{2n}}{(2n-1) 2nx^{2n-1}}, \tag{14}\] if \(x\) is infinitely large. In [4] SS60, Euler stated the formula as follows: \[y(x)=\frac{x^{x}}{e^{x}}\left(1+\frac{1}{12x}+\frac{1}{288x^{2}}-\frac{139}{5 1840x^{3}}+\cdots\right)\sqrt{2\pi}, \tag{15}\] which follows by inserting the explicit values for the Bernoulli numbers in (14), taking the exponential and expanding the exponential of the sum. ### Discussion of the Result As it was remarked by G. Faber in a footnote in the Opera Omnia version of [4], equation (14) and hence (15) is incorrect. The correct formula reads: \[\log y(x)=x\log x-x+\log(\sqrt{\frac{2\pi}{x}})+\sum_{n=1}^{\infty}\frac{B_{2n }}{(2n-1)2nx^{2n-1}}, \tag{16}\] i.e., Euler's formula is off by the term \(\log(\sqrt{x})\). Furthermore, the term is not missing due to a calculational error, but due to a conceptional one. More precisely, Euler's idea to construct the solution from the zeros of (6) does not work in general. We can see how the missing term enters by a formal argumenth. We are still interested in (7). Writing \(D\) for \(\frac{d}{dx}\), this equation can also be represented as: \[\left(e^{D}-1\right)f(x)=g(x).\] Thus, formally the solution is given as: \[f(x)=\left(e^{D}-1\right)^{-1}g(x),\] such that we have to find out how to express \(\left(e^{D}-1\right)^{-1}\). We only know how to calculate \(D^{n}f(x)\) for \(n\in\mathbb{Z}\). Thus, the idea is to expand \(\left(e^{D}-1\right)^{-1}\) into a Laurent series in \(D\) around \(D=0\) apply it to \(g(x)\). There are many possibility to perform this expansion, but for purposes we will only need the direct expansion. This expansion had also been given by Euler, e.g., in _"De seriebus quibusdam considerations"_[1] (E130:"Considerations on certain series") SS271. 
The expansion reads: Footnote 1: Euler considered the function \(\frac{z}{1-e^{-x}}\) and did not state the general formula for the coefficients, but explained their origin. \[\left(e^{D}-1\right)^{-1}=\sum_{n=0}^{\infty}B_{n}\frac{D^{n-1}}{n!}=D^{-1}- \frac{1}{2}+\frac{D}{12}-\frac{D^{3}}{720}+\cdots, \tag{17}\] where \(B_{n}\) are the Bernoulli numbers again. Interpreting \(D^{-1}\) as an integration, we can write: \[f(x)=\left(e^{D}-1\right)^{-1}g(x)=\int g(x)dx-\frac{1}{2}g(x)+\frac{1}{12} \frac{d}{dx}g(x)-\cdots, \tag{18}\] which is nothing but a modern representation of the Euler-Maclaurin summation formula. Thus, Euler's approach, i.e., constructing the solution from the zeros of \(e^{D}-1\), misses the term \(-\frac{1}{2}g(x)\). If we apply (18) to the factorial, i.e., take \(g(x)=\log(x)\) we arrive at (16). ## 4 Conclusion In this note we briefly mentioned Euler's theory how to solve inhomogeneous ordinary differential equations of infinite order with constant coefficients and Euler's application of his theory to the derivation of Stirling's formula (1). We pointed out the conceptual error in Euler's approach and provided an explanation how to correct it (section 3.3). Nevertheless, there are many intriguing ideas in [4], aside from Euler's derivation of Stirling's formula on which we focused, such that we intend to cover more content from the before-mentioned paper in the future.
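To make the difference between Euler's formula (14)/(15) and the corrected expansion (16) tangible, the following small numerical check (our addition, not part of Euler's or Faber's argument) compares both truncated series with \(\log\Gamma(x)=\log y(x)\) at \(x=10\); the corrected series matches to high accuracy, while Euler's version is off by \(\log\sqrt{x}\).

```python
import math

# First even-indexed Bernoulli numbers B_2, B_4, B_6, B_8.
B = [1/6, -1/30, 1/42, -1/30]

def tail(x, terms=4):
    # sum_{n >= 1} B_{2n} / ((2n - 1) * 2n * x^(2n - 1)), truncated after `terms` terms.
    return sum(B[n - 1] / ((2*n - 1) * 2*n * x**(2*n - 1)) for n in range(1, terms + 1))

x = 10.0
corrected_16 = x*math.log(x) - x + 0.5*math.log(2*math.pi/x) + tail(x)  # formula (16)
euler_14     = x*math.log(x) - x + 0.5*math.log(2*math.pi)   + tail(x)  # Euler's formula (14)
exact        = math.lgamma(x)                                            # log y(10) = log 9!

print(exact, corrected_16)   # both ~ 12.8018274..., agreeing to better than 1e-10
print(euler_14 - exact)      # ~ 1.1513 = log(sqrt(10)), the missing term
print(0.5 * math.log(x))
```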
2304.11971
Switchover phenomenon for general graphs
We study SIR type epidemics on graphs in two scenarios: (i) when the initial infections start from a well connected central region, (ii) when initial infections are distributed uniformly. Previously, \'Odor et al. demonstrated on a few random graph models that the expectation of the total number of infections undergoes a switchover phenomenon; the central region is more dangerous for small infection rates, while for large rates, the uniform seeding is expected to infect more nodes. We rigorously prove this claim under mild, deterministic assumptions on the underlying graph. If we further assume that the central region has a large enough expansion, the second moment of the degree distribution is bounded and the number of initial infections is comparable to the number of vertices, the difference between the two scenarios is shown to be macroscopic.
Dániel Keliger, László Lovász, Tamás Móri, Gergely Ódor
2023-04-24T10:09:34Z
http://arxiv.org/abs/2304.11971v1
# Switchover phenomenon for general graphs ###### Abstract We study SIR type epidemics on graphs in two scenarios: (i) when the initial infections start from a well connected central region, (ii) when initial infections are distributed uniformly. Previously, Odor et al. demonstrated on a few random graph models that the expectation of the total number of infections undergoes a switchover phenomenon; the central region is more dangerous for small infection rates, while for large rates, the uniform seeding is expected to infect more nodes. We rigorously prove this claim under mild, deterministic assumptions on the underlying graph. If we further assume that the central region has a large enough expansion, the second moment of the degree distribution is bounded and the number of initial infections is comparable to the number of vertices, the difference between the two scenarios is shown to be macroscopic. ## 1 Introduction We study the propagation of a disease on a network, and in particular the "switchover" phenomenon established in [6, 7]. Informally, the phenomenon means the following. We have a network (describing the network of interactions of people in a country), which has a denser "central region" and a sparser "periphery". We compare the total number of nodes that get infected if a given number of seeds (initial infections) are distributed uniformly and randomly in the central region and in the whole graph, respectively. The switchover phenomenon means that for a low infection rate, an epidemic starting in the central region is worse (results in a larger epidemic), but this switches over so that the epidemic starting uniformly over the whole country is worse. In [6], the authors have shown by simulation that this phenomenon occurs in many networks (not all), and established it rigorously for some very simple networks. In [7], some mathematical conditions were formulated (without proof), under which the switchover phenomenon occurs. The goal of this paper is to generalize those results and prove them mathematically. Our model for the spread of infection is the SIR(1) model (which is one of the simplest). In this model, we have a finite graph \(G\). A node can be in one of three states: susceptible (S), infected (I) or resistant (R). At each step, if a susceptible node has an infected neighbor, then it gets infected by this neighbor with probability \(\beta\). If it has several infected neighbors, then the events that these infect the node are independent. The node becomes infected if at least one of its infected neighbors infects it. An infected node recovers deterministically after one step, and will be resistant from then on, which means that it does not infect and cannot be infected. If you think of a time scale where one step is a week, then this may be a reasonable assumption; every event (getting infected and then passing it on) is recorded on a weekly scale. The main advantage of the SIR(1) model for us is that it is equivalent to a percolation problem. A proof of this simple observation was given in [7]. Briefly, it is not hard to see that we can decide about each edge in advance, independently and with probability \(\beta\), whether it is going to pass on the infection, at any time when one of its endpoints is infected and the other one is susceptible. Our model guarantees that every edge has at most one chance to be in this situation.
In other words, we keep every edge with probability \(\beta\) and delete the remaining edges; this way we get an edge-percolated graph \(G^{\beta}\). For a seed set \(S\), we denote by \(G^{\beta}(S)\) the union of those components of \(G^{\beta}\) that contain at least one node of \(S\). Then \(|G^{\beta}(S)|\) nodes will be infected at one point during the epidemic in total. Our goal is to compare the expectations of \(|G^{\beta}(\mathbf{S}_{1})|\) and \(|G^{\beta}(\mathbf{S}_{2})|\), where \(\mathbf{S}_{1}\) is a random subset of the central region and \(\mathbf{S}_{2}\) is a random subset of the whole node set. In Section 2.2, we show that (under quite general conditions) for very small \(\beta\), seeding the central region is worse, but for \(\beta\) very near to \(1\), seeding the whole graph uniformly is worse (Theorem 2.3). However, such values of \(\beta\) are unlikely to occur in real life, and also the differences in epidemic sizes are minuscule. We call this "weak switchover", and we give its formal definition in Section 2.1. In Section 2.3, we formulate conditions on the graph under which we can work with values of \(\beta\) in a more reasonable range, and we can establish that the difference between the sizes of the epidemics starting from \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) is of the same order of magnitude as the whole graph (we call this "strong switchover"). These conditions on the graph (Theorem 2.14) are tighter than for weak switchover, but they are still reasonable, and can be satisfied by real networks. As an application of our results in Section 2.3, we prove that weak switchover occurs on Chung-Lu random graphs with power-law degree distribution [3] in Section 4. This result was stated in [6], along with a non-rigorous proof. **Relationship with distribution-free graph models.** Epidemics are often studied either theoretically or by simulation on random graph models [5]. In this paper, our goal is different: we aim to find deterministic conditions on the graph, which give rise to the switchover phenomenon (in expectation, where the randomness only comes from the epidemic or percolation process). Such combinatorial results, which are studied with a network science application in mind, are called _distribution-free_ in the literature [4]. The main advantage of the distribution-free approach is that deterministic conditions can be verified on real networks, as opposed to the results on random graph distributions, where we can only hope that the results also apply to real networks. Moreover, one can go from results with deterministic conditions to results on random graphs relatively easily (as we do in Section 4), whereas going in the opposite direction seems much more difficult. Proving facts that hold with high probability for random graphs for deterministic graphs with appropriate properties goes back (at least) to the study of quasirandom graphs [1989]. In the network science setting, the study of distribution-free graph models was started by Fox et al. [4], and several papers followed. We refer to [8] for a review. The deterministic constraints studied in this topic include conditions on the triadic closure [4], on heterogenous degree distributions [2] and on the expansion properties [1] of the graphs. While one of our main conditions is also a deterministic expansion property (a stronger one than in [1]), our conditions and proof techniques are different from all previous papers that we are aware of in this topic. 
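The percolation description above translates directly into a short Monte Carlo experiment. The sketch below (an illustration added here, not taken from [6] or [7]) builds an arbitrary toy graph with a dense random central block and a pendant periphery, estimates the expected values of \(|G^{\beta}(\mathbf{S}_{1})|\) and \(|G^{\beta}(\mathbf{S}_{2})|\) for seeds drawn from the central region and from the whole node set, and sweeps \(\beta\) so that the crossover between the two seedings can be observed on this toy example.

```python
import random
from collections import deque

def percolate(edges, beta):
    """Keep each edge independently with probability beta (the SIR(1) <-> percolation step)."""
    return [e for e in edges if random.random() < beta]

def infected_size(n, kept_edges, seeds):
    """|G^beta(S)|: size of the union of percolated components that meet the seed set."""
    adj = [[] for _ in range(n)]
    for u, v in kept_edges:
        adj[u].append(v); adj[v].append(u)
    seen, queue = set(seeds), deque(seeds)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); queue.append(w)
    return len(seen)

# Toy graph: a dense random "central region" C on 150 nodes, plus 450 periphery nodes of degree 1.
random.seed(0)
n, C = 600, list(range(150))
edges = [(i, j) for i in C for j in C if i < j and random.random() < 0.3]
edges += [(v, random.choice(C)) for v in range(150, n)]

k, trials = 50, 200
for beta in (0.03, 0.1, 0.5, 0.9):
    est_C = sum(infected_size(n, percolate(edges, beta), random.sample(C, k)) for _ in range(trials)) / trials
    est_V = sum(infected_size(n, percolate(edges, beta), random.sample(range(n), k)) for _ in range(trials)) / trials
    print(f"beta={beta:4}: central seeding -> {est_C:6.1f}   uniform seeding -> {est_V:6.1f}")
```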
## 2 Results ### Notation and setup Let \(G=(V,E)\) be a simple graph on \(n\) nodes. We use the notation \(|G|=|V|=n\). For a subset \(S\subseteq V\), \(G(S)\) denotes the union of connected components meeting \(S\). As usual, we denote by \(G[S]\) the subgraph induced by \(S\). \(e(K,L)\) stands for the number of edges between \(K,L\subseteq V.\) The average degree and the second moment of the set \(K\subseteq V\) is denoted by \[\overline{\deg}(K):= \frac{1}{|K|}\sum_{v\in K}\deg(v),\] \[\overline{\deg^{2}}(K):= \frac{1}{|K|}\sum_{v\in K}\deg^{2}(v).\] The path visiting vertices \(v_{1},v_{2},\ldots,v_{k}\in V\) is denoted by \(v_{1}v_{2}\ldots v_{k}\). For neighboring vertices \(u,v\in V\) we write \(u\sim v\) and for \(K\subseteq V\), \(\mathcal{N}(K)\) stands for \(\{v\in V\setminus K\mid\exists u\in K:v\sim u\}\), i.e. the neighborhood of \(K\). We consider graphs with a specified subset \(C\subseteq V\) (modeling the _central region_) of size \(|C|=r=cn\). Here, \(0<c<1\) is considered to be "macroscopic", and \(C\) will be denser than average in a sense to be defined later. Throughout, we use the notation \(G_{1}=G[C]\) and \(G_{2}=G\setminus E(G_{1})\). For \(0\leq\beta\leq 1\), \(G^{\beta}\) denotes the percolation of \(G\) with edge retention probability \(\beta\); in other words, the graph obtained by selecting each edge of \(G\) independently with probability \(\beta\), and deleting the unselected edges. Usually, the set \(S\subseteq V\) represents a _deterministic_ seed of initial infections. We will be interested in _random_ seeds \(\mathbf{S}\sim\mathrm{Uni}(L,k)\) sampled uniformly from the \(k\)-subsets of a set \(L\subseteq V\) for some \(k=sn\) (\(0<s<c\)). We think of \(L\) as a macroscopic subset; typical choices are \(L=V\) and \(L=C\). The corresponding random subsets for \(L=C\) and \(L=V\) are \(\mathbf{S}_{C}\sim\mathrm{Uni}(C,k)\) and \(\mathbf{S}_{V}\sim\mathrm{Uni}(V,k)\). In our considerations, we generate the random graph \(G^{\beta}\) and the seed set \(\mathbf{S}\) independently. We let \(\mathbb{P}_{\mathbf{S}}\) and \(\mathbb{E}_{\mathbf{S}}\) denote the probability and expectation if only the seed \(\mathbf{S}\) is randomized, and define \(\mathbb{P}_{\beta}\) and \(\mathbb{E}_{\beta}\) analogously when only the graph \(G^{\beta}\) is randomized. We use no subscript if probability and expectation are taken over both random choices. Now we come to our two main definitions. **Definition 2.1**: We say the graph \(G\) exhibits a _weak switchover phenomenon_ with seed size \(k\) (\(1\leq k\leq|C|\)), if there are \(\beta_{1},\beta_{2}\in(0,1)\) such that for \(\mathbf{S}_{C}\sim\mathrm{Uni}(C,k)\), and \(\mathbf{S}_{V}\sim\mathrm{Uni}(V,k)\) we have \[\mathbb{E}\big{(}|G^{\beta_{1}}(\mathbf{S}_{C})\big{)}>\mathbb{E}\big{(}|G^{ \beta_{1}}(\mathbf{S}_{V})|\big{)},\] but \[\mathbb{E}\big{(}|G^{\beta_{2}}(\mathbf{S}_{C})\big{)}<\mathbb{E}\big{(}|G^{ \beta_{2}}(\mathbf{S}_{V})|\big{)}.\] Note that Definition 2.1 only requires that there is some difference between \(\mathbb{E}(|G^{\beta}(\mathbf{S}_{C})|)\) and \(\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)\), where this difference could be small, even vanishing as \(n\rightarrow\infty\). In a more robust version, we require these differences to constitute a positive fraction of the whole population. 
To make an exact definition, we need to consider a sequence of graphs whose size tends to infinity: **Definition 2.2**: We say the sequence of graphs \((G_{n},C_{n})\) exhibits a _strong switchover phenomenon_ with seed sizes \(k_{n}\), if there are real numbers \(\delta>0,0<\beta_{1}(n),\beta_{2}(n)<1\) such that for \(\mathbf{S}_{n,V}\sim\mathrm{Uni}(V(G_{n}),k_{n})\), and \(\mathbf{S}_{n,C}\sim\mathrm{Uni}(C_{n},k_{n})\) we have \[\mathbb{E}\big{(}|G^{\beta_{1}}(\mathbf{S}_{n,V})\big{)}\geq\mathbb{E}\big{(} |G^{\beta_{1}}(\mathbf{S}_{n,C})|\big{)}+\delta|V(G_{n})|,\] but \[\mathbb{E}\big{(}|G^{\beta_{2}}(\mathbf{S}_{n,V})\big{)}\leq\mathbb{E}\big{(} |G^{\beta_{2}}(\mathbf{S}_{n,C})|\big{)}-\delta|V(G_{n})|.\] for large enough \(n\). ### Weak switchover We start by elementary remarks concerning the cases when \(\beta\to 0\) and \(\beta\rightarrow\ 1\). It is clear that if \(\beta\to 0\), then \(\mathbb{E}(|G^{\beta}(S)|)\rightarrow|S|\), while if \(\beta\to 1\), then \(\mathbb{E}(|G^{\beta}(S)|)\rightarrow\ n\) for every nonempty set \(S\) and connected \(G\). The case of small \(\beta\) is straightforward, since the seeds and those nodes reached in one step will dominate. The probability that a particular path of length \(2\) is retained in \(G^{\beta}\) is at most \(\beta^{2}\), so with probability \(1-O(\beta^{2})\), only neighbors of \(S\) get infected, and each such neighbor is infected by only one seed (here \(O\) refers to \(\beta\to 0\)). Hence for any subset \(S\subseteq V\), \[\mathbb{E}\left(|G^{\beta}(S)\right)=|S|+\beta e(S,V\setminus S)+O\left(\beta ^{2}\right). \tag{1}\] The asymptotics at \(\beta\to 1\) is more complicated. Assume that \(G\) has the (mild) property that \((*)\)\(G\) has minimum degree \(d\), it is not \(d\)-regular and the only edge-cuts in \(G\) with at most \(d\) edges are the stars of minimum degree nodes. Let \(Y\subseteq V\) be the set of nodes with degree \(d\). Set \(\gamma=1-\beta\). With probability at least \(1-O(\gamma^{d+1})\), at most \(d\) edges of \(G\) are missing in \(G^{\beta}\). By \((*)\), in this case \(G^{\beta}\) is either a connected spanning subgraph of \(G\), or it has a single isolated node in \(Y\). The probability of the latter event is \(\gamma^{d}\) for any given node in \(Y\). This implies that with probability at least \(1-O(\gamma^{d+1})\), for every set \(S\subseteq V\), \(|S|\geq 2\), the infected graph \(G^{\beta}(S)\) will miss at most one node in \(Y\setminus S\). Hence \[\mathbb{E}_{\beta}(|G^{\beta}(S)|)=n-|Y\setminus S|\gamma^{d}+O(\gamma^{d+1}). \tag{2}\] Formulas (1) and (2) imply: **Theorem 2.3**: _Let \(G\) be a connected graph, and \(S_{1},S_{2}\subseteq V\), \(|S_{1}|=|S_{2}|\)._ (a) _If \(e(S_{1},V\setminus S_{1})>e(S_{2},V\setminus S_{2})\) and \(\beta\) is sufficiently close to \(0\), then \(\mathbb{E}(|G^{\beta}(S_{1})|)>\mathbb{E}(|G^{\beta}(S_{2})|)\)._ (b) _If \(G\) has property \((*)\), \(|S_{1}\cap Y|>|S_{2}\cap Y|\), and \(\beta\) is sufficiently close to \(1\), then \(\mathbb{E}(|G^{\beta}(S_{1})|)<\mathbb{E}(|G^{\beta}(S_{2})|)\)._ Coming to random seed sets, it will be easy to derive from (1) and (2) the following. 
**Theorem 2.4**: _Let \(G\) be a connected graph and \(2\leq k<r\)._ (a) _If_ \[\frac{r-k}{r-1}\overline{\deg}(C)>\frac{n-k}{n-1}\overline{\deg}(V),\] _then \(\mathbb{E}(|G^{\beta}(\mathbf{S}_{C})|)>\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)\) if \(\beta\) is sufficiently close to \(0\)._ (b) _If \(G\) has property \((*)\) and_ \[\frac{|Y\cap C|}{r}<\frac{|Y|}{n},\] _then \(\mathbb{E}(|G^{\beta}(\mathbf{S}_{C})|)<\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)\) if \(\beta\) is sufficiently close to \(1\)._ **Remark 2.5**: For fixed \(c\) and small enough \(s\) it is enough to assume \(\overline{\deg}(C)>\overline{\deg}(V)\) for part (a) of Theorem 2.4 as \[1\leq\frac{r-1}{r-k}\frac{n-k}{n-1}=1+O(s).\] **Corollary 2.6**: _If both conditions_ (a) _and_ (b) _above are satisfied, then \(G\) exhibits the weak switchover phenomenon for seed sets of size \(k\)._ Note that the both conditions say that \(C\) has larger degrees than average. In conclusion, a weak switchover phenomenon occurs for all graphs under very mild hypotheses, but for unrealistically extreme values of \(\beta\), and leading only to minuscule differences. Our goal in the next section is to exhibit a strong switchover with much more reasonable values of \(\beta\). ### Strong switchover To establish the case of small \(\beta\) for strong switchover is similar to the analogous case for weak switchover: again seeds and their neighbors will play the main role. We have to do more careful estimates, involving the spectrum of \(G\). Our main tool is the following refined version of (2). **Lemma 2.7**: _Let \(L\subseteq V\), \(m=|L|\), and let \(\mathbf{S}\) be a random \(k\)-subset of \(L\). Then_ \[\mathbb{E}\big{(}|G^{\beta}(\mathbf{S})|\big{)}=k+k\Big{(}\overline{\deg}(L)- \frac{k-1}{m-1}\frac{1}{m}e(L,L)\Big{)}\beta+R,\] _where_ \[|R|\leq\overline{\deg^{2}}(V)\beta^{2}n.\] Applying this lemma with \(L=V\) and \(L=C\), we will get that for an appropriate \(\beta\), seeding the central region is substantially more dangerous than seeding the whole node set. More exactly: **Corollary 2.8**: _Assume that_ \[\frac{r-k}{r-1}\overline{\deg}(C)-\frac{n-k}{n-1}\overline{\deg}(V)\geq c_{1}>0. \tag{3}\] _Let_ \[0<\beta\leq\frac{1}{4}\frac{c_{1}}{\deg^{2}(V)}s. \tag{4}\] _Then_ \[\mathbb{E}\left(|G^{\beta}(\mathbf{S}_{C})|\right)-\mathbb{E}\left(|G^{\beta} (\mathbf{S}_{V})|\right)\geq\frac{1}{2}c_{1}\beta sn.\] **Remark 2.9**: Similarly to part (a) of Theorem 2.4 it is enough to ensure that \(\overline{\deg}(C)>\overline{\deg}(V)\) uniformly for (3) when \(s\) is small enough. Ensuring the large \(\beta\) case for strong switchover is more involved and requires further assumptions regarding the graph \(G\). More precisely, we assume edge expansion of the central region instead of large average degree. **Definition 2.10**: We say that a graph \(G=(V,E)\) has _edge-expansion_\((a,q)\) with some \(a>0\) and \(0<q<\frac{1}{2}\), if for every set \(X\subset V\), \(qn<|X|\leq n/2\), the number of edges between \(X\) and \(V\setminus X\) is at least \(a|X|\). **Remark 2.11**: Note that the parameters \(a,q\) might not be optimal. If \(a_{1}\leq a_{2},q_{1}\leq q_{2}\) and \(G\) has edge-expansion \((a_{2},q_{1})\), than it is also true that \(G\) has edge-expansion \((a_{1},q_{2})\). The following lemma shows that if the central region has a large enough expansion, the epidemic will produce more infections from a uniform seeding when \(\beta\) is close to \(1\). 
**Lemma 2.12**: _Let \(b\) be the average degree of nodes of \(V\setminus C\) in \(G\), and assume \(G_{1}\) has edge expansion \((a,q)\) with \(q<1/3\). Then_ \[\begin{split}&\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E}(|G^{\beta}(\mathbf{S}_{C})|)\geq\\ & s(1-c)(1-\beta)^{b}n-\frac{c}{c-s}qn-\left(1+\frac{c}{c-s}\right)n\rho^{r}-ne^{-2ck/3},\end{split} \tag{5}\] _where \(\rho:=\left(\frac{e(1-\beta)^{a}}{q}\right)^{q}.\)_ **Remark 2.13**: When \(G_{1}\) has edge expansion \((a,q)\) with \(a>b\) and \(q=(1+\varepsilon)e(1-\beta)^{a}\) for some \(\varepsilon>0\) we end up with \(0\leq\rho<1\), resulting in \[\left(1+\frac{c}{c-s}\right)n\rho^{r}+ne^{-2ck/3}=o(n)\] for all fixed \(0<\beta<1\). Furthermore, as \(0\leq b<a\) it is possible to set \(0<\beta<1\) to a value for which \(q<\frac{1}{3}\) and \[s(1-c)(1-\beta)^{b}>\frac{c}{c-s}(1+\varepsilon)e(1-\beta)^{a}=\frac{c}{c-s}q,\] thus, there is a \(\delta>0\) such that \(\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)>\mathbb{E}(|G^{\beta}(\mathbf{S}_{C})|)+\delta n\) for large enough \(n\). Our main result concerning strong switchover will easily follow by a combination of Lemmas 2.7 and 2.12. **Theorem 2.14**: _Let \((G_{m}:\ m=1,2,\dots)\) be a sequence of graphs such that \(n_{m}=|V(G_{m})|\to\infty\). Let \(C_{m}\subseteq V(G_{m})\) so that \(|C_{m}|=c_{m}n_{m}\) and let \(b_{m}\) denote the average degree in \(G_{m}\) of nodes in \(V(G_{m})\setminus C_{m}\) with some uniform bound \(b_{m}\leq b_{max}\)._ _Assume there is an \(\varepsilon>0\) such that:_ * \(c_{m}\leq 1-\varepsilon\)_,_ * \(\varepsilon\leq s_{m}\leq(1-c_{m})c_{m}/2\)_,_ * _for any_ \(0<q<\frac{1}{3}\)_, \(G_{m}[C_{m}]\) _has edge-expansion_ \((b_{max}+\varepsilon,q)\) _when_ \(m\) _is large enough._ _Also, assume the second moments of the degrees are uniformly bounded._ _Then the graph sequence \(((G_{m},C_{m}):\ m=1,2,\dots)\) exhibits the strong switchover phenomenon for seed sizes \(s_{m}n_{m}\)._ ## 3 Proofs We start with a simple identity, which will imply that when choosing a "small" random seed \(\mathbf{S}\) compared to \(V\), it becomes unlikely that two vertices in \(\mathbf{S}\) are adjacent to each other, and hence \[\mathbb{E}\left(e(\mathbf{S},V\setminus\mathbf{S})\right)\approx\mathbb{E}\left(e(\mathbf{S},V)\right)=k\overline{\deg}(L).\] **Lemma 3.1**: _Let \(\mathbf{S}\) be a random \(k\)-element subset of \(L\). Then_ \[\mathbb{E}_{\mathbf{S}}\left(e(\mathbf{S},V\setminus\mathbf{S})\right)=k\Big{(}\overline{\deg}(L)-\frac{k-1}{m-1}\frac{1}{m}e(L,L)\Big{)}.\] The last (error) term can be estimated as \[\frac{k-1}{m-1}\frac{1}{m}e(L,L)<\frac{k}{m}\overline{\deg}(L).\] **Proof.** \[\mathbb{E}\left(e(\mathbf{S},V\setminus\mathbf{S})\right)=\mathbb{E}\left(e(\mathbf{S},V)\right)-\mathbb{E}\left(e(\mathbf{S},\mathbf{S})\right)=\] \[\frac{k}{m}e(L,V)-\frac{k(k-1)}{m(m-1)}e(L,L)=k\Big{(}\overline{\deg}(L)-\frac{k-1}{m-1}\frac{1}{m}e(L,L)\Big{)}\] This proves the lemma. \(\square\) ### Weak switchover **Proof of Corollary 2.6.** We start with the small \(\beta\) case.
\[\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{C})\right|\right)-\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{V})\right|\right)=\] \[\left(\mathbb{E}\left(e\left(\mathbf{S}_{C},V\setminus\mathbf{S}_{C}\right)\right)-\mathbb{E}\left(e\left(\mathbf{S}_{V},V\setminus\mathbf{S}_{V}\right)\right)\right)\beta+O\left(\beta^{2}\right)\] Due to Lemma 3.1 the leading term can be bounded as \[\mathbb{E}\left(e\left(\mathbf{S}_{C},V\setminus\mathbf{S}_{C}\right)\right)-\mathbb{E}\left(e\left(\mathbf{S}_{V},V\setminus\mathbf{S}_{V}\right)\right)=\] \[k\left(\overline{\deg}(C)-\frac{k-1}{r-1}\frac{1}{r}e(C,C)-\overline{\deg}(V)+\frac{k-1}{n-1}\frac{1}{n}e(V,V)\right)\geq\] \[k\left(\frac{r-k}{r-1}\overline{\deg}(C)-\frac{n-k}{n-1}\overline{\deg}(V)\right)>0,\] making \(\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{C})\right|\right)>\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{V})\right|\right)\) for sufficiently small \(\beta\). As for small \(\gamma=1-\beta\), \[\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{V})\right|\right)-\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{C})\right|\right)=\] \[\left(\mathbb{E}\left(\left|Y\setminus\mathbf{S}_{C}\right|\right)-\mathbb{E}\left(\left|Y\setminus\mathbf{S}_{V}\right|\right)\right)\gamma^{d}+O\left(\gamma^{d+1}\right)=\] \[k\left(\underbrace{\frac{\left|Y\right|}{\left|V\right|}-\frac{\left|Y\cap C\right|}{\left|C\right|}}_{>0}\right)\gamma^{d}+O\left(\gamma^{d+1}\right),\] implying \(\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{V})\right|\right)>\mathbb{E}\left(\left|G^{\beta}(\mathbf{S}_{C})\right|\right)\) when \(\beta\) is close to \(1\). \(\square\) ### Strong switchover #### 3.2.1 Lemmas for small \(\beta\) We are going to prove Lemma 2.7 in the following slightly stronger form: **Lemma 3.2**: _Let \(L\subseteq V\), \(m=|L|\), and let \(\mathbf{S}\) be a random \(k\)-subset of \(L\). Then_ \[\mathbb{E}\big{(}|G^{\beta}(\mathbf{S})|\big{)}=k+k\Big{(}\overline{\deg}(L)-\frac{k-1}{m-1}\frac{1}{m}e(L,L)\Big{)}\beta+R,\] _where_ \[-\frac{1}{2}\left(\overline{\deg}^{2}(V)-\overline{\deg}(V)\right)\beta^{2}n\leq R\leq\left(\overline{\deg}^{2}(V)-\overline{\deg}(V)\right)\beta^{2}n.\] **Proof.** For ease of notation introduce \(\deg_{\mathbf{S}}(v):=e\left(\{v\},\mathbf{S}\right)\), representing the number of neighbors of vertex \(v\in V\) from \(\mathbf{S}\sim\mathrm{Uni}(L,k).\) For the lower bound on \(R\), it suffices to count nodes in \(\mathbf{S}\) and their neighbors: \[\mathbb{E}_{\mathbf{S}}(|G^{\beta}(\mathbf{S})|) \geq k+\sum_{v\in\mathcal{N}(\mathbf{S})}\left(1-(1-\beta)^{\deg_{\mathbf{S}}(v)}\right)\] \[\geq k+\sum_{v\in\mathcal{N}(\mathbf{S})}\left(\beta\deg_{\mathbf{S}}(v)-\beta^{2}\binom{\deg_{\mathbf{S}}(v)}{2}\right)\] \[=k+\beta e(\mathbf{S},V\setminus\mathbf{S})-\beta^{2}\sum_{v\in V}\binom{\deg_{\mathbf{S}}(v)}{2}. \tag{6}\] Note that, by definition, \(\binom{\deg_{\mathbf{S}}(v)}{2}=0\) when \(\deg_{\mathbf{S}}(v)\leq 1\).
The probability that the random set \(\mathbf{S}\) contains two given nodes in \(\mathrm{L}\) is \(\frac{k(k-1)}{m(m-1)}\), hence \[\beta^{2}\sum_{v\in V}\mathbb{E}\left[\binom{\deg_{\mathbf{S}}(v) }{2}\right]=\frac{k(k-1)}{m(m-1)}\beta^{2}\sum_{v\in V}\binom{\deg_{L}(v)}{2}\leq\] \[\frac{1}{2}\beta^{2}\sum_{v\in V}\deg(v)\left(\deg(v)-1\right)= \frac{1}{2}\left(\overline{\deg}^{2}(V)-\overline{\deg}(V)\right)\beta^{2}n.\] For the upper bound notice \[\mathbb{E}_{\mathbf{S}}\left(\big{|}G^{\beta}(\mathbf{S})\big{|}\right)=k+\sum _{v\in\mathcal{N}(\mathbf{S})}\mathbb{P}_{\mathbf{S}}(v\in G^{\beta}(\mathbf{ S}))+\sum_{v\in V\setminus(\mathbf{S}\cup\mathcal{N}(\mathbf{S}))}\mathbb{P}_{ \mathbf{S}}(v\in G^{\beta}(\mathbf{S})).\] Let \(\deg_{\mathbf{S}}^{\beta}(v)\) denote number of neighbors of \(v\in V\) from \(\mathbf{S}\) in the percolated graph \(G^{\beta}\). Clearly, for \(v\in\mathcal{N}(\mathbf{S})\) \[\mathbb{P}_{\mathbf{S}}\left(v\in G^{\beta}(\mathbf{S})\right) \leq\mathbb{P}_{\mathbf{S}}\left(\deg_{\mathbf{S}}^{\beta}(v)>0\right)+ \mathbb{P}_{\mathbf{S}}\left(\left.v\in G^{\beta}(\mathbf{S})\right|\deg_{ \mathbf{S}}^{\beta}(v)=0\right)=\] \[\underbrace{1-(1-\beta)^{\deg_{\mathbf{S}}(v)}}_{\leq\beta\deg_{ \mathbf{S}}(v)}+\mathbb{P}_{\mathbf{S}}\left(\left.v\in G^{\beta}(\mathbf{S}) \right|\deg_{\mathbf{S}}^{\beta}(v)=0\right)\] Let \(H\) denote the graph where the edges between \(v\) and \(\mathbf{S}\) are deleted. Since edge retention happens independently \[\mathbb{P}_{\mathbf{S}}\left(\left.v\in G^{\beta}(\mathbf{S})\right|\deg_{ \mathbf{S}}^{\beta}(v)=0\right)=\mathbb{P}_{\mathbf{S}}(v\in H^{\beta}(\mathbf{ S})).\] We will call a length \(2\) path _vuw proper_ if \(v\) and \(w\) has distance two, or in other words, \(v,u,w\) does not form a triangle. \(\mathcal{A}_{v}(G)\) denotes the event that there is no proper path starting from \(v\) in the percolated graph \(G^{\beta}.\) Since \(H\) is a subgraph of \(G\)\(\mathcal{A}_{v}(G)\subseteq\mathcal{A}_{v}(H).\) Observe that vertices \(v\in V\setminus(\mathbf{S}\cup\mathcal{N}(\mathbf{S}))\) in graph \(G\) and \(v\in\mathcal{N}(\mathbf{S})\) in \(H\) are at least \(2\) steps away from the set \(\mathbf{S}\). This implies \[v\in V\setminus(\mathbf{S}\cup\mathcal{N}(\mathbf{S})) \mathbb{P}_{\mathbf{S}}\left(v\not\in G^{\beta}(\mathbf{S})\right)\geq \mathbb{P}\left(\mathcal{A}_{v}(G)\right),\] \[v\in\mathcal{N}(\mathbf{S}) \mathbb{P}_{\mathbf{S}}(v\not\in H^{\beta}(\mathbf{S}))\geq \mathbb{P}_{\mathbf{S}}(\mathcal{A}_{v}(H))\geq\mathbb{P}\left( \mathcal{A}_{v}(G)\right).\] Together, they make bound \[\mathbb{E}_{\mathbf{S}}\left(\left|G^{\beta}(\mathbf{S})\right|\right)\leq k+ \beta e\left(\mathbf{S},V\setminus\mathbf{S}\right)+\sum_{v\in V\setminus \mathbf{S}}\left(1-\mathbb{P}\left(\mathcal{A}_{v}(G)\right)\right).\] Let \(\delta(u)\) denote the number of \(vuw\) proper paths for some \(w\). Clearly, \(\delta(u)\leq\deg(u)-1\). Note that two proper paths \(vuw,\;vu^{\prime}w^{\prime}\) can only share an edge in their \(vu,\,vu^{\prime}\) segment when \(u=u^{\prime}\), the second segment is always disjoint. (\(w^{\prime}=u,\,u^{\prime}=w\) would make \(u,v,w\) a triangle.) This means any two proper paths are independent when \(u\neq u^{\prime}\). 
\[\mathbb{P}\left(\bigcup_{w\sim u}\left\{vuw\text{ is a proper path in }G^{\beta}\right\}\right)=\beta\left(1-(1-\beta)^{\delta(u)}\right)\] \[\mathbb{P}\left(\mathcal{A}_{v}(G)\right)= \prod_{u\sim v}\left[1-\beta\left(1-(1-\beta)^{\delta(u)}\right) \right]\stackrel{{*}}{{\geq}}1-\sum_{u\sim v}\beta\left(1-(1- \beta)^{\delta(u)}\right)\] \[\geq 1-\beta^{2}\sum_{u\sim v}\delta(u)\geq 1-\beta^{2}\sum_{u \sim v}\left(\deg(u)-1\right)\] \[\sum_{v\in V\setminus\mathbf{S}}\left(1-\mathbb{P}\left(\mathcal{A}_{v}(G) \right)\right)\leq \beta^{2}\sum_{v\in V}\sum_{u\sim v}(\deg(u)-1)=\beta^{2}\sum_{u }\deg(u)\left(\deg(u)-1\right)\] \[= \left(\overline{\deg^{2}}(V)-\overline{\deg}(V)\right)\beta^{2}n\] Note that at step \(*\) we used the union bound for independent events. \(\Box\) **Proof.** (Corollary 2.8) Let \(R_{C},R_{V}\) be the remainder terms in Lemma 2.7 when \(L=C,V\). Since \(\frac{n-k}{n-1}\geq\frac{r-k}{r-1}\),(3) implies \(\overline{\deg}(C)\geq\overline{\deg}(V).\) Thus, \[|R_{C}|,|R_{V}|\leq\overline{\deg^{2}}(V)\beta^{2}n\] This results in the bound \[\mathbb{E}\left(|G^{\beta}(\mathbf{S}_{C})|\right)-\mathbb{E} \left(|G^{\beta}(\mathbf{S}_{V})|\right)\] \[\geq\Big{(}\overline{\deg}(C)-\frac{k-1}{r-1}\frac{1}{r}e(C,C)- \overline{\deg}(V)+\frac{k-1}{n-1}\frac{1}{n}e(V,V)\Big{)}\beta k+R_{C}-R_{V}\] \[\geq\frac{1}{2}c_{1}\beta k=\frac{1}{2}c_{1}\beta sn.\] #### 3.2.2 Lemmas for large \(\beta\) **Lemma 3.3**: _Let \(G\) be a graph with \(n\) nodes and edge-expansion \((a,q)\), where \(a>1\) and \(q<1/3\). Let \(0<\beta<1\), and let \(H\) be a largest connected component of \(G^{\beta}\). Then_ \[\mathbb{P}\big{(}|H|\leq(1-q)n\big{)}\leq\rho^{n},\] _where_ \[\rho=\Big{(}\frac{e(1-\beta)^{a}}{q}\Big{)}^{q}. \tag{7}\] For the bound to be nontrivial, we need that \(q>e(1-\beta)^{a}\). We start with an elementary observation. **Claim 1**: _If the largest connected component of \(G^{\beta}\) has at most \(n-t\) nodes, where \(t\leq n/3\), then there is a set \(X\subseteq V\) such that \(t\leq|X|\leq n/2\) and no edge of \(G^{\beta}\) connects \(X\) and \(V\setminus X\)._ **Proof.** (Claim 1) Indeed, let \(H_{1}\) be the nodeset of the largest connected component of \(G^{\beta}\). Then \(|H|\leq n-t\) by hypothesis. If \(|H|\geq n/2\), then \(X=V\setminus H\) satisfies the conditions in the claim. So suppose that \(|H|<n/2\). If \(t\leq|H|\), then \(H\) satisfies the conditions in the claim. So suppose that \(|H|<t\). Let us add further connected components to \(H\) as long as it remains at most \(n/2\) in cardinality, to get a set \(X\). If \(|X|\geq t\) then we are done, so suppose that \(|X|<t\). Adding any other connected component, we get a set \(X^{\prime}\) with \(|X^{\prime}|>n/2\) and \(|X^{\prime}|<|X|+t\). If \(|X^{\prime}|\leq n-t\), then \(V\setminus X^{\prime}\) satisfies the conditions in the claim. So suppose that \(|X^{\prime}|>n-t\). But then \(n-t<|X^{\prime}|\leq|X|+t\leq 2t\), and so \(t>n/3\), contrary to the hypothesis. **Proof.** (Lemma 3.3) Let \(z=(1-\beta)^{a}\). For a fixed \(k\)-subset \(X\) (\(qn\leq k\leq n/2\)), the graph \(G\) has at least \(ak\) edges between \(X\) and \(V\setminus X\), and the probability that none of them is selected is at most \((1-\beta)^{ak}=z^{k}\). 
So the probability that there is a set \(X\subseteq V\) with \(qn\leq|X|\leq n/2\) and having no edges between \(X\) and \(V\setminus X\) is at most \[\sum_{k=\lceil qn\rceil}^{\lfloor n/2\rfloor}\binom{n}{k}z^{k}.\] Let \(p=z/(1+z)\) and let \(\xi\) be a \(\mathrm{Binom}(n,p)\) distributed random variable. Then, by the well-known Chernoff-Hoeffding bound, \[\sum_{k=\lceil qn\rceil}^{\lfloor n/2\rfloor}\binom{n}{k}z^{k}= (1+z)^{n}\sum_{k=\lceil qn\rceil}^{\lfloor n/2\rfloor}\binom{n}{k}p^{k}(1-p)^ {n-k}\] \[\leq(1+z)^{n}\,\mathbb{P}(\xi\geq qn)\leq(1+z)^{n}\left[\left( \frac{p}{q}\right)^{q}\left(\frac{1-p}{1-q}\right)^{1-q}\right]^{n}\] \[=\left[\left(\frac{z}{q}\right)^{q}\frac{1}{(1-q)^{1-q}}\right] ^{n}. \tag{8}\] Here \[\frac{1}{(1-q)^{1-q}}=\left(1+\frac{q}{1-q}\right)^{1-q}<e^{q},\] hence, by (3) and Claim 1 \[\mathbb{P}\big{(}|H|\leq(1-q)n\big{)}<\left(\frac{ez}{q}\right)^{qn},\] proving the lemma. \(\square\) **Proof.** (Lemma 2.12) Let \(H_{1}\) denote the component of \(G^{\beta}\) with largest number of nodes in \(C\), and let \(H_{2},\ldots,H_{m}\) be the other components. Note that \(H^{C}\subseteq H_{1}\) where \(H^{C}\) is the largest component in \(G^{\beta}(C)\). Let \(h_{j}=|H_{j}|\) and \(c_{j}=|C\cap H_{j}|\). Let \(p_{j}\) and \(q_{j}\) denote the probability that \(H_{j}\) is not infected by \(\mathbf{S}_{V}\) and \(\mathbf{S}_{C}\), respectively. Then \[p_{j}=\prod_{i=0}^{k-1}\left(1-\frac{h_{j}}{n-i}\right),\] where the last inequality holds whenever \(h_{j}\leq n-k+1\); else, \(p_{j}=0\). Similarly, \[q_{j}=\prod_{i=0}^{k-1}\left(1-\frac{c_{j}}{r-i}\right),\] where again the last inequality holds if \(c_{j}\leq r-k+1\) and \(0\) otherwise. Then \[\mathbb{E}_{\beta}(|G^{\beta}(\mathbf{S}_{V})|) -\mathbb{E}_{\beta}(|G^{\beta}(\mathbf{S}_{C})|)=\sum_{j=1}^{m}h_{ j}(1-p_{j})-\sum_{j=1}^{m}h_{j}(1-q_{j})\] \[=\sum_{j=1}^{m}h_{j}(q_{j}-p_{j}). \tag{9}\] The main idea of the proof is that we partition the index set \(K=\{1,\ldots,m\}\) into four sets: \[K_{1} :=\{1\},\] \[K_{2} :=\{j\in K:\ h_{j}=1,c_{j}=0\},\] \[K_{3} :=\{j\in K\setminus K_{1}:\ h_{j}\leq c_{j}/(c-s)\},\] \[K_{4} :=K\setminus K_{1}\setminus K_{2}\setminus K_{3}.\] Let \(V_{i}=\cup_{j\in K_{i}}H_{j}\). We lower bound the sum in equation (9) using a different estimate over each set \(K_{j}\). There are two sets where uniform seeding is more dangerous (\(K_{2}\) and \(K_{4}\)), one set where the two seedings are essentially equally dangerous (\(K_{1}\), the giant component) and one set where the central seeding is more dangerous (\(K_{3}\)), but this set \(K_{3}\) only contains components which have a relatively large part in \(C\) compared to \(V\setminus C\), and since the giant component is quite large in \(G_{1}\), the components in \(K_{3}\) cannot have too much weight We make this intuition precise in the computation below. First, we fix the percolation \(G^{\beta}\), and estimate the expectations over the choice of seed sets. We start with \(K_{1}\), which only contains the index of the component with the largest number of nodes in \(C\). This component will have a non-empty intersection with both \(\mathbf{S}_{V}\) and \(\mathbf{S}_{C}\) with high probability. 
More exactly, \[\mathbb{E}_{\beta}(|V_{1}\cap G^{\beta}(\mathbf{S}_{V})| -\mathbb{E}_{\beta}(|V_{1}\cap G^{\beta}(\mathbf{S}_{C})|=h_{1}(q_ {1}-p_{1})\geq-h_{1}p_{1}\] \[\geq-h_{1}\left(1-\frac{h_{1}}{n}\right)^{k}\geq-n\left(1-\frac{ h_{1}}{n}\right)^{k}\geq-ne^{-h_{1}k/n}.\] Next we consider \(K_{2}\), the index set of those components of \(G^{\beta}\) that are isolated nodes of \(V\setminus C\). Clearly \(q_{j}=1\) and \(p_{j}=\prod_{i=0}^{k-1}\left(1-\frac{1}{n-j}\right)=\frac{n-k}{n}\) for \(j\in K_{2}\). So \[\mathbb{E}_{\beta}(|V_{2}\cap G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E}_{\beta}( |V_{2}\cap G^{\beta}(\mathbf{S}_{C})|)=\sum_{j\in K_{2}}h_{j}(q_{j}-p_{j})= \frac{k}{n}\left|V_{2}\right|. \tag{11}\] For \(K_{3}\) we use the lower bound \[\mathbb{E}_{\beta}(|V_{3}\cap G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E} _{\beta}(|V_{3}\cap G^{\beta}(\mathbf{S}_{C})|)=\sum_{j\in K_{3}}h_{j}(q_{j}-p_{ j})>-\sum_{j\in K_{3}}h_{j}\] \[\geq-\sum_{j\in K_{3}}\frac{c_{j}}{c-s}\geq-\frac{1}{c-s}|C\setminus V _{1}|. \tag{12}\] Finally, if \(j\in K_{4}\), then it must satisfy \[1-\frac{h_{j}}{n}\leq 1-\frac{c_{j}}{r-k},\] which implies that for these components \(q_{i}\geq p_{i}\) and so \[\mathbb{E}_{\beta}(|V_{3}\cap G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E}_{\beta}(| V_{3}\cap G^{\beta}(\mathbf{S}_{C})|)\geq 0. \tag{13}\] Summing (10)-(13), we get \[\mathbb{E}_{\beta}(|G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E}_{\beta }(|G^{\beta}(\mathbf{S}_{C})|)\] \[\geq-ne^{-|V_{1}|k/n}+\frac{k}{n}|V_{2}|-\frac{1}{c-s}|C\setminus V _{1}|. \tag{14}\] To compute the expectation of this over the percolation, let us denote the degree of the \(i^{th}\) node in \(V\setminus C\) by \(b_{i}\). Then by Jensen's inequality (since \((1-\beta)^{x}\) is convex), \[\mathbb{E}_{\beta}(|V_{2}|)=\sum_{i=1}^{n-r}(1-\beta)^{b_{i}}\geq(n-r)(1-\beta )^{b}. \tag{15}\] Recall \(H^{C}\). By applying Lemma 3.3 to \(G_{1}\), we have \(|V_{1}|=|H_{1}|\geq|H_{1}^{C}|>(1-q)r\) with probability at least \(1-\rho^{r}\), where \(\rho\) is defined by (7). Hence, \[\mathbb{E}\big{(}e^{-|V_{1}|k/n}\big{)}\leq e^{-(1-q)ck}+\rho^{r}\leq e^{-2ck/ 3}+\rho^{r},\] and \[\mathbb{E}\left(\big{|}H^{C}\big{|}\right)\geq(1-q)r\mathbb{P}\left(\big{|}H^ {C}\big{|}>(1-q)r\right)\geq(1-q)(1-\rho^{r})r\] resulting in \[\mathbb{E}\left(|C\setminus V_{1}|\right)\leq\mathbb{E}\left( \big{|}C\setminus H^{C}\big{|}\right)=r-\mathbb{E}\left(\big{|}H^{C}\big{|}\right)\leq\] \[r\left[1-(1-q)(1-\rho^{r})\right]\leq qr+r\rho^{r}. \tag{16}\] To sum up, \[\mathbb{E}(|G^{\beta}(\mathbf{S}_{V})|)-\mathbb{E}(|G^{\beta}( \mathbf{S}_{C})|)\] \[\geq s(1-c)(1-\beta)^{b}n-\frac{c}{c-s}qn-\left(1+\frac{c}{c-s} \right)n\rho^{r}-ne^{-2ck/3}.\] This proves the lemma. \(\Box\) **Proof of Theorem 2.14.** We start with the small \(\beta\) case. We need the following fact: **Claim 2**: _Let \(H\) be a graph with \(N\) nodes and edge-expansion \((a,q)\) (\(a\geq 1,0<q<1/2\)). Then the average degree in \(H\) is at least \(2a\frac{N-1}{N+1}\)._ **Proof.** (Claim 2) We check this for the case when \(|V(H)|=2m+1\) is odd (the even case is similar). For every \(m\)-subset \(S\subseteq V\), there are at least \(am\) edges between \(S\) and \(V\setminus S\). This gives \(am{2m+1\choose m}\) edges. Each edge is counted \(2{2m-1\choose m-1}\) times, hence \[|E(H)|\geq am{2m+1\choose m}{2{2m-1\choose m-1}}=a\frac{(2m+1)m}{m+1}.\] Thus the average degree is \[\overline{\deg}(H)=\frac{2|E(H)|}{2m+1}=2a\frac{m}{m+1}.\] This proves the Claim. \(\Box\) Consider a graph \(G_{m}\) from the given sequence. 
In the rest of this proof, we omit the indices \(m\), to make the arguments more readable. The Claim above implies that \[\overline{deg}(C)\geq 2a\frac{r-1}{r+1}\sim 2a,\] therefore \[\frac{r-k}{r-1}\overline{\deg}(C)-\frac{n-k}{n-1}\overline{\deg} (V)\sim\left(1-\frac{s}{c}\right)\overline{\deg}(C)-(1-s)\overline{\deg}(V)\geq\] \[\left(1-\frac{s}{c}\right)\overline{\deg}(C)-\left(c\overline{ \deg}(C)+(1-c)b\right)=\] \[\left(1-c-\frac{s}{c}\right)\overline{\deg}(C)-(1-c)b\stackrel{{ s\leq\frac{1}{2}c(1-c)}}{{\geq}}\] \[(1-c)\left(\frac{1}{2}\overline{\deg}(C)-b\right)\gtrsim(1-c)(a- b)\geq\varepsilon^{2}\Rightarrow\] \[\frac{r-k}{r-1}\overline{\deg}(C)-\frac{n-k}{n-1}\overline{\deg} (V)\geq\varepsilon^{2}-o(1)\geq\frac{1}{2}\varepsilon^{2}=:c_{1}>0\] when \(n\) is large enough. This means the conditions of Corollary 2.8 are satisfied when \(\beta\) is small enough. The large beta case is an easy consequence of Remark 2.13. \(\Box\) Application: Chung-Lu model with power law degree distribution In this section, we apply Lemma 2.12 to rigorously prove the previous claim of [6], that the uniform seeding can be more dangerous in random graphs with power-law degree distribution with exponent \(\tau\in(2,3)\) if \[\frac{1}{n}\beta^{-\frac{1}{\lceil\tau-3\rceil}}\ll s\ll\beta^{\frac{\tau-1}{3 -\tau}}. \tag{17}\] In [6], this claim appears as an if and only if statement, however, in this section we only address the "if" part. Our proof strategy has already been outlined in a previous paper [7], but this is the first time when we give a fully rigorous proof. We start by defining the random graph distribution in the focus of this section, which is one of the most standard models of networks with a power-law degree distribution. **Definition 4.1**: Let us denote by \(\mathcal{CL}(\tau)\) the _distribution of Chung-Lu random graphs with exponent \(\tau\in(2,3)\)_, where the nodes \(v_{i}\in V\) are indexed from \(1\) to \(n\), and \(v_{i}\) and \(v_{j}\) are connected by an edge independently with probability \(p_{ij}=\min\left\{\frac{d_{i}d_{j}}{D},1\right\}\), where \(d_{i}=\left(\frac{n}{i}\right)^{\frac{1}{\tau-1}}\) and \(D=\sum_{k=1}^{n}d_{k}\). Notice that the expected degree of a node with index \(i\) in a graph sampled from \(\mathcal{CL}(\tau)\) is \(d_{i}+o(1)\). Moreover, for \(d_{i}\) to be less than some integer degree \(d\), we need to have \(\frac{i}{n}\leq d^{1-\tau}\), which hints that the exponent of the cumulative degree distribution is expected to be around \(1-\tau\), and therefore the degree distribution is expected to follow a power-law with exponent \(\tau\in(2,3)\). The average degree of the distribution is expected to be a constant, because \[\sum_{k=1}^{n}d_{k}\sim\int_{1}^{n}\left(\frac{n}{x}\right)^{\frac{1}{\tau-1}} dx=\frac{\tau-1}{\tau-2}\left(n-n^{\frac{1}{\tau-1}}\right)=\Theta(n). \tag{18}\] Chung-Lu random graphs were introduced in [3], we refer to this paper and follow-up works for more precise statements on interpreting Definition 4.1. Here, we continue by stating an elementary result about the edge expansion of Chung-Lu random graphs, which may have already appeared in the literature in a similar form, but we are not aware of it. 
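Before stating the edge-expansion lemma, it may help to see Definition 4.1 and the two seeding strategies in executable form. The sketch below is ours and not part of [6] or of the proofs in this paper: the graph size, \(\tau\), \(\beta\) and the seed fractions are arbitrary illustrative values, and all helper names are invented. It samples a Chung-Lu graph, percolates it with retention probability \(\beta\), and compares the mean epidemic size \(|G^{\beta}(\mathbf{S})|\) for central versus uniform seeds.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_chung_lu(n, tau):
    """Chung-Lu graph (Definition 4.1): weight d_i = (n/i)^(1/(tau-1)) for i = 1..n,
    edge {i, j} present independently with probability min(d_i * d_j / D, 1)."""
    d = (n / np.arange(1, n + 1)) ** (1.0 / (tau - 1.0))
    p = np.minimum(np.outer(d, d) / d.sum(), 1.0)
    upper = np.triu(rng.random((n, n)) < p, k=1)  # one independent coin flip per pair
    return upper | upper.T                        # symmetric boolean adjacency, no self-loops

def epidemic_size(adj, seeds, beta):
    """|G^beta(S)|: retain each edge with probability beta, then count every node
    reachable from the seed set S in the percolated graph."""
    n = adj.shape[0]
    keep = np.triu(rng.random((n, n)) < beta, k=1)
    g = adj & (keep | keep.T)
    infected = np.zeros(n, dtype=bool)
    infected[list(seeds)] = True
    stack = list(seeds)
    while stack:
        v = stack.pop()
        for u in np.flatnonzero(g[v] & ~infected):
            infected[u] = True
            stack.append(u)
    return int(infected.sum())

# Illustrative parameters only; the central region C is the r highest-weight nodes.
n, tau, beta, c, s = 1000, 2.5, 0.05, 0.05, 0.01
adj = sample_chung_lu(n, tau)
r, k, trials = int(c * n), int(s * n), 100
central = np.mean([epidemic_size(adj, rng.choice(r, k, replace=False), beta) for _ in range(trials)])
uniform = np.mean([epidemic_size(adj, rng.choice(n, k, replace=False), beta) for _ in range(trials)])
print(f"E|G^beta(S_C)| ~ {central:.1f}   E|G^beta(S_V)| ~ {uniform:.1f}")
```

Averaging over both the seed choice and the percolation mirrors the expectations compared in Corollary 4.3, although at these toy sizes the outcome naturally depends on the chosen parameter values.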
**Lemma 4.2**: _If \(G\) is sampled from \(\mathcal{CL}(\tau)\) with \(\tau\in(2,3)\), and if \(C\) is the set of vertices of \(G\) with index \(i\leq cn\), with \(\frac{1}{\sqrt{n}}\ll c\ll(\log(n))^{-\frac{\tau-1}{3-\tau}}\), then \(G[C]\) has edge expansion \(\left(\frac{n}{4D}c^{-\frac{3-\tau}{\tau-1}},0\right)\) asymptotically almost surely._ Before proving Lemma 4.2, let us show how we can apply it to rigorously prove the claim in equation (17), at least partially. **Corollary 4.3**: _Let us consider a sequence of graphs \(G_{n}\) sampled from \(\mathcal{CL}(\tau)\) with \(\tau\in(2,3)\), and let us define the central region \(C\) as the \(\lfloor cn\rfloor\) nodes with the largest expected degree (i.e., index) in \(G_{n}\). Let us assume that the size of the seed set satisfies \(s=\Theta(c)\) and \(c-s=\Theta(c)\). Under the mild assumption \(1-\beta=\Theta(1)\), if_ \[\sqrt{\frac{\log(n)}{n}}\ll s\ll\left(\frac{\beta}{\log(n)}\right)^{\frac{\tau -1}{3-\tau}} \tag{19}\] _hold then the uniform seeding creates a larger epidemic than the central seeding (\(G_{n}\) has the weak switchover property) with probability tending to 1 as \(n\to\infty\)._ Notice that with this choice of parameters, the number of seeds is linear only the size of the central region, but sub-linear in the size of the graph. As shown in Figure 1, the parameter ranges set by equation (19) form a subset of the parameters in equation (17), therefore, Corollary 4.3 is weaker than the claim in [6]. To generalize Corollary 4.3 for the remaining parameter ranges, different proof methods are necessary. **Proof of Corollary 4.3.** We will use Lemma 2.12 for Chung-Lu random graphs with \[q=\frac{s(c-s)(1-c)(1-\beta)^{b}}{2c}. \tag{20}\] For Lemma 2.12 to be applicable, we need to make sure that for our choice \[q=\frac{s(c-s)(1-c)(1-\beta)^{b}}{2c}>(1+\varepsilon)e(1-\beta)^{a}, \tag{21}\] and to satisfy equation (5), we need that \[\frac{1}{2}s(1-c)(1-\beta)^{b}>\left(1+\frac{c}{c-s}\right)\rho^{r}+e^{-2ck/3}, \tag{22}\] Figure 1: Phase diagrams for the switchover phenomenon on Chung-Lu random graphs with power-law degree distribution (\(\tau\in(2,3)\)). (a) With grey we show the region where the central area is more dangerous, as claimed in [6], and with the dotted pattern we show the region where the central area is more dangerous by equation (17), as claimed in [6]. (b) With grey we show the region where the central area is more dangerous by Corollary 4.3. keeping in mind that \(c\) and \(s\) are not constants anymore. Condition \(q<\frac{1}{3}\) is trivially satisfied for large enough \(n\) as \(q=\Theta(s)\). Recall, that we chose \(s=\Theta(c)\) and \(c-s=\Theta(c)\). Notice that we can apply Lemma 4.2, because the condition \(\frac{1}{\sqrt{n}}\ll c\ll(\log(n))^{-\frac{r-1}{3-r}}\) holds by (19). 
Then, since we also know \(b=\Theta(1)\) by equation (18), we can show that equation (21) holds if \[\log(c)\gg\log(1-\beta)c^{-\frac{3-\tau}{\tau-1}},\] which is implied by equation (19) as \[\log(1-\beta)c^{-\frac{3-\tau}{\tau-1}}\leq-\beta c^{-\frac{3-\tau}{\tau-1}}=-\left(\frac{c}{\beta^{\frac{\tau-1}{3-\tau}}}\right)^{-\frac{3-\tau}{\tau-1}}\stackrel{(19)}{\ll}-\log(n)=2\log\left(\frac{1}{\sqrt{n}}\right)\ll\log(c).\] Similarly, equation (22) holds if \[\rho^{r}+e^{-2ck/3}\ll s.\] By equation (18) and substituting the definition of \(\rho\) from equation (7), we get \[(1+\varepsilon)^{-qr}+e^{-2ck/3}\ll s,\] which must hold because equation (19) implies \[\Theta(qr)=\Theta(ck)=\Theta\left(s^{2}n\right)\stackrel{(19)}{\gg}\log(n).\] Therefore, Lemma 4.2 implies that for these parameter ranges, the uniform seeding can be more dangerous, and weak switchover occurs. We conclude the section by providing the proof of Lemma 4.2. **Proof of Lemma 4.2.** For \(S\subset C\) with \(|S|\leq|C|/2\), let \(X_{S}\) be the number of edges between \(S\) and \(C\setminus S\). In the first part of the proof, we show that \(\mathbb{E}(X_{S})\geq\frac{n}{2D}c^{-\frac{3-\tau}{\tau-1}}|S|\) for every \(S\), and in the second part we prove that the variables \(X_{S}\) are all well-concentrated around their expectation. Note that since we assumed \(c\gg\frac{1}{\sqrt{n}}\), and since we know \(D=\Theta(n)\), we have that \(\min\left\{d_{\lfloor cn\rfloor}^{2},D\right\}=d_{\lfloor cn\rfloor}^{2}\geq c^{-\frac{2}{\tau-1}}\). Then, we compute the expectation of \(X_{S}\) as \[\mathbb{E}(X_{S})=\sum_{i\in S}\sum_{j\in C\setminus S}\min\left\{\frac{d_{i}d_{j}}{D},1\right\}=\sum_{i\in S}\sum_{j\in C\setminus S}\frac{d_{i}d_{j}}{D}\geq\sum_{i\in S}\sum_{j\in C\setminus S}\frac{d_{\lfloor cn\rfloor}^{2}}{D}\] \[\geq\frac{|S|(|C|-|S|)}{D}c^{-\frac{2}{\tau-1}}\stackrel{|S|\leq\frac{|C|}{2}}{=}\frac{n}{2D}c^{-\frac{3-\tau}{\tau-1}}|S|.\] Next, we use the union bound, and well-known multiplicative Chernoff bounds on binomial random variables, to prove that the random variables \(X_{S}\) are concentrated around their expectation. We bound \[\mathbb{P}\left(\exists S\subset C,|S|\leq\frac{|C|}{2}\text{ with }X_{S}\leq\frac{1}{2}\mathbb{E}(X_{S})\right)\leq\sum_{\begin{subarray}{c}S\subset C\\ |S|\leq\frac{|C|}{2}\end{subarray}}\mathbb{P}\left(X_{S}\leq\frac{1}{2}\mathbb{E}(X_{S})\right)\leq\sum_{\begin{subarray}{c}S\subset C\\ |S|\leq\frac{|C|}{2}\end{subarray}}e^{-\frac{1}{8}\mathbb{E}(X_{S})}=\sum_{\begin{subarray}{c}S\subset C\\ |S|\leq\frac{|C|}{2}\end{subarray}}e^{-\frac{n}{16D}c^{-\frac{3-\tau}{\tau-1}}|S|}.\] Set \(\eta=\frac{n}{16D}c^{-\frac{3-\tau}{\tau-1}}\). Let us change the indexing of the sum to the size of the set \(S\), and apply a standard upper bound on binomial coefficients to obtain \[\sum_{\begin{subarray}{c}S\subset C\\ |S|\leq\frac{|C|}{2}\end{subarray}}e^{-\eta|S|}=\sum_{k=1}^{\lfloor cn\rfloor/2}\binom{\lfloor cn\rfloor}{k}e^{-\eta k}\leq\sum_{k=1}^{\lfloor cn\rfloor/2}\left(\frac{enc}{k}\right)^{k}e^{-\eta k}\leq\sum_{k=1}^{\lfloor cn\rfloor/2}\left(cne^{1-\eta}\right)^{k}.\] Notice that we have arrived at a geometric series with common ratio \(cne^{1-\eta}\), which tends to zero as long as \(\eta=\frac{n}{16D}c^{-\frac{3-\tau}{\tau-1}}\gg\log(n)\); an asymptotic inequality that holds by the assumption \(c\ll\left(\log(n)\right)^{-\frac{\tau-1}{3-\tau}}\). 
Therefore, we arrived to the equation \[\mathbb{P}\left(\exists S\subset C,|S|\leq\frac{C}{2}\text{ with }X_{S}\leq \frac{n}{4D}c^{-\frac{3-r}{r-1}}|S|\right)\to 0,\] which completes the proof of the lemma. ## 5 Concluding remarks In this paper, we gave the first fully rigorous proofs of the switchover phenomenon, introduced in [6], for general classes of graphs. We showed that weak switchover exists under mild conditions on the graph, and we also showed sufficient conditions for strong switchover. One limitation of the current paper is that, in the case of the strong switchover, the size of the seed set was assumed to be fairly large, of size \(\Omega(n)\). Although for the Chung-Lu model in Section 4 we did study smaller seed sets, and we did use the machinery of the strong switchover proofs, we were only able to show the existence of the weak switchover phenomenon. This agrees with the simulations and heuristic derivations of the previous work [6], which also claimed that Chung-Lu models exhibit weak switchover, but not strong switchover. However,[6] also showed that the strong switchover phenomenon occurs with much smaller seed sets on geometric graphs, notably on the commuting network of Hungary constructed from real data, and also random graph models with an underlying geometry. Unfortunately, our current results do not say much about such geometric graphs. As in this paper, proving the existence of strong switchover with small seed sets would boil down to the distribution of component sizes in the percolated graph \(G^{\beta}\). However, contrary to this paper, we need the existence of at least medium size components also in the periphery, because if most of the peripheral nodes in \(G^{\beta}\) are contained in bounded-size components, then we need \(\Omega(n)\) seeds to have strong switchover (even to have an epidemic of size \(\Omega(n)\)). Finding appropriate conditions for such medium size components in the periphery which could lead to the existence of strong switchover with small seed sets (say, of size \(\sqrt{n}\) or even \(\log n\)) is an interesting future direction. **Acknowledgment.** The authors are thankful to Marianna Bolla for her insightful remarks. This work has been supported by the Dynasnet ERC Synergy project (ERC-2018-SYG 810115). Gergely Odor was supported by the Swiss National Science Foundation, under grant number P500PT-211129.
2310.13021
AI for Mathematics: A Cognitive Science Perspective
Mathematics is one of the most powerful conceptual systems developed and used by the human species. Dreams of automated mathematicians have a storied history in artificial intelligence (AI). Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems. In this work, we reflect on these goals from a \textit{cognitive science} perspective. We call attention to several classical and ongoing research directions from cognitive science, which we believe are valuable for AI practitioners to consider when seeking to build truly human (or superhuman)-level mathematical systems. We close with open discussions and questions that we believe necessitate a multi-disciplinary perspective -- cognitive scientists working in tandem with AI researchers and mathematicians -- as we move toward better mathematical AI systems which not only help us push the frontier of the mathematics, but also offer glimpses into how we as humans are even capable of such great cognitive feats.
Cedegao E. Zhang, Katherine M. Collins, Adrian Weller, Joshua B. Tenenbaum
2023-10-19T02:00:31Z
http://arxiv.org/abs/2310.13021v1
# AI for Mathematics: A Cognitive Science Perspective ###### Abstract Mathematics is one of the most powerful conceptual systems developed and used by the human species. Dreams of automated mathematicians have a storied history in artificial intelligence (AI). Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems. In this work, we reflect on these goals from a _cognitive science_ perspective. We call attention to several classical and ongoing research directions from cognitive science, which we believe are valuable for AI practitioners to consider when seeking to build truly human (or superhuman)-level mathematical systems. We close with open discussions and questions that we believe necessitate a multi-disciplinary perspective--cognitive scientists working in tandem with AI researchers and mathematicians--as we move toward better mathematical AI systems which not only help us push the frontier of the mathematics, but also offer glimpses into how we as humans are even capable of such great cognitive feats. ## 1 Introduction Building computational systems that understand and practice mathematics at the level of human mathematicians has been a long-standing aspiration of artificial intelligence (AI) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. The rise of large language models (LLMs) has sparked imaginations that we are closer than ever to attaining, or surpassing, human-level performance on a range of tasks [16; 17; 18]. Yet, simultaneously, something seems amiss: despite these models achieving tremendous performance in many realms of human expertise (e.g., medicine, law, creative writing), the performance of these models on _mathematics_ specifically lags behind [19; 20; 21; 22]. There are many efforts to improve the mathematical problem-solving capabilities of LLMs, such as adjusting the training data and feedback strategies [23; 24; 25; 26; 27], equipping models with expanded background knowledge at inference-time [28], or composing LLMs with existing computational mathematics systems [29; 30; 31; 32; 33]. Recent efforts to build in principles from cognitive science, e.g., the importance of learning abstractions, have also seen success [34; 35]; however, we believe that the broader AI-mathematics community still has much to draw from cognitive science: in the questions we ask and the methods by which we approach such challenges. First, we believe it is essential to reflect on what goals we are even trying to pursue. What does it _mean_ for AI systems to excel at mathematics _at or beyond_ a human level? Is simply excelling at a suite of standard benchmark datasets sufficient? While there is no doubt value in benchmarks to spur progress, humans--and human mathematicians--are capable of so much more than what can be captured in a static benchmark. We are capable of _intuitions_ and _judgments_[36], of reasoning about the world _as world_[37], of seeking deeper explanations and understandings of results [38], of flexibly developing new problem solving tactics and not just solving new problems, but _posing_ them too [39; 40]. How then should we proceed to develop human-level AI mathematicians? In the rest of this position paper, we argue that perspectives from cognitive science have a lot to offer in this new age of LLMs. 
Cognitive scientists, AI researchers, and mathematicians can productively contribute together to this vision towards growing flexible, automated mathematicians that help us push the frontiers of mathematical knowledge and reflect back on how we are even capable of remarkable achievements of mathematical cognition [41]. ## 2 Looking to cognitive science We now call attention to several classical and active research directions within cognitive science which we believe hold value for those building mathematical AI systems. ### Sample-efficient learning One of the hallmarks of human cognition is our ability to learn new concepts, knowledge, and problem-solving strategies, from little data [42; 43; 44; 45; 46; 47]. In mathematics, data paucity is a particular conundrum, e.g., it is costly and difficult to obtain high-quality data on advanced topics, and few texts may exist on the cutting-edge or more obscure branches of mathematics. On the other hand, human mathematicians, from early learners to expert-level mathematicians, do not need millions of examples to learn mathematical concepts and problem-solving strategies. Yet, even though the rote _number_ of examples developing mathematicians may be exposed to is small, that does not mean that a concept is grasped _immediately_ upon exposure. It make take a human multiple encounters with an example, extended time sitting and thinking--squeezing out a tremendous amount from a handful of examples, e.g., through active engagement like self-explanation (see below)--to master a concept or strategy, after which such knowledge can be readily generalized to new situations [44; 48; 49; 50; 51]. ### Concepts, representations, and world models It is inspiring to reflect on the sample-efficiency of human learning. If we are to obtain or surpass such capabilities in AI systems, it is important to examine _how_ humans may achieve such efficiency in the first place. Towards this end, we point to the rich cognitive science literature on concepts, their representations, and how the human mind builds rich models of the world out of concepts [52; 53; 54; 55]. In cognitive science, much research in cognitive science points to powerful inductive biases gleaned through evolution: "core knowledge" [56; 57]. It has been speculated that a core "number sense" [58] forms the foundation upon which our mathematical prowess is built. Strong evidence points to two core number systems--for reasoning about numerosity exactly, and approximately [49; 59; 58]. From these core knowledge systems, we can develop _concepts_[42; 54; 60]. Notably in mathematics, concepts have _precise definitions_, unlike other abstract concepts such as justice and knowledge or everyday concepts like chair. At the same time, mathematicians think about concepts more than in terms of definitions; they can give examples and counterexamples, draw out relationships between concepts, and so on--this type of conceptual richness is compatible with the psychological theory of conceptual-role semantics [52]. So, what are the _form(s)_ of these conceptual representations? Contemporary cognitive science has provided strong evidence for that conceptual representations may be modeled by "languages of thought" [61; 62; 63; 64], which in mathematics, may be built over core geometric primitives [65]. Closely linked with "languages of thought" is the notion of a "world model". 
In AI, many have highlighted the importance of world models, although researchers disagree about how to build such models within AI systems [66; 67; 68; 64]. It is generally accepted that a world model should support simulation of possibilities, causal and counterfactual reasoning, and calibrated judgements about belief and truth [67; 68; 69; 70; 71; 72; 73; 74]. We hypothesize that the intuitions that mathematicians acquire over years of practice can be seen as forming world models of the mathematical universe. Here, we use the famous "\(\mathsf{P}=\mathsf{NP}\)?" problem as an illustrative example. Most people believe that \(\mathsf{P}\neq\mathsf{NP}\). It seems that much evidence of such strong beliefs over an unproven statement comes from simulating what would happen if \(\mathsf{P}=\mathsf{NP}\) or \(\mathsf{P}\neq\mathsf{NP}\). If the former is true, many counter-intuitive consequence would follow, whereas we would not need to heavily adjust our other beliefs about computation if the latter is true [75; 76; 77]. This kind of simulation and argumentation, we suggest, may be powered by world models. ### Goals, planning, and agency Today, the dominant paradigm for large language models is a passive one: a (very large) training corpus is provided to the model, and the model optimizes some given objective function [78; 16]. At inference-time, a model is presented with a problem (e.g., a translation or reasoning task) and tries to make good predictions. However, this is not how humans think about or perform problem-solving. Humans are planning agents with goals spanning across different communities and timescales [79; 80]. When planning to achieve a goal, we can flexibly divide a task into sub-goals, form and leverage simplified abstract representations to inform planning, and replan [81; 82; 83; 84; 85]. Planning is crucial to success in mathematical reasoning. Consider when a teacher gives a student a problem to solve; the student needs to generate sub-goals and come up with strategies, such as looking up definitions, consider examples, examine different cases, or simply look for help. Moreover, mathematical cognition is not just about planning for set goals, but _inventing_ new goals, problems, and concepts [39; 40; 51; 41]. How do some mathematicians _form the goal_ of inventing new mathematics, and how do they achieve it? Engineering and scientific insights on these questions--drawing on cognitive science, AI, and mathematics--may drive a huge leap forward towards creative AI mathematicians. ### Cognitive limitations and resource-rationality However, humans are far imperfect planners, and they may fail to execute the plans we do embark upon. Mathematicians may become wedded to a particular proof strategy only to realize it was misguided and need to backtrack, or worse, could fall prey to functional fixedness [86; 87] and the sunk cost fallacy [88]. Such instances put a damper in the notion that humans are rational reasoners [89]. Cognitive scientists here too have developed rich frameworks to reconcile such challenges. Rather, we may be viewed as rational _in light of resource constraints_, i.e., "resource-rational" [90; 91; 92; 93]. This notion finds particular importance when thinking about humans and AI systems. Fundamentally, humans and computational systems have different resource limitations: computers are able to make calculations extremely fast, are not constrained to the same limitations on working memory, and do not succumb to daily inevitable fatigue that we humans do. 
When building mathematical AI systems then, it is prudent to question whether we should be designing AI systems to mimic human resource constraints [92]. If trying to build a computational "thought partner" to complement humans and enable us to explore greater mathematical depths than we have so far, for instance, by making more calculations and proposing possible new patterns in troves of data [15], then perhaps we do not want to curtail a model's resources. However, one could argue that perhaps, such resource limitations are not a failing, but rather an _advantage_: for instance, empowering us to judiciously _select_ which problems to solve in the first place. Indeed, mathematics communities (generally) do not waste too much time on problems that people believe to be out of reach. Studying under what settings resource limitations on mathematical cognition are advantageous, and when they are not, is a ripe space for collaboration across cognitive science, mathematics, and AI, particularly when thinking about making sensible use of limited resources even present in large-scale AI systems [94; 95]. ### Communication and explanation We close our tour of cognitive science insights to spark the imaginations of those seeking to build mathematical AI by reiterating that mathematics is a _group activity_ consisting of communities, and development of knowledge in any intellectual community depends on effective _communication_. We argue that a cognitive perspective on communication is valuable for the math-AI community for two core reasons. First, the _output_ of our communication amongst each other forms the bedrock of the data used to train LLMs. Second, insights from cognitive science reveal that communication can spur learning for the communicator [96]. We start by reflecting on the latter. Ample evidence in cognitive science reveals the power of self-explanation for improving learning and generalization [96; 97; 98; 99; 100; 101; 102]. Explanations can help the explainer identify abstractions to inform induction [96; 99] and reveal gaps in one's own knowledge [97], motivating information-seeking to resolve such gaps [100]. At first glance, recent LLM research such as chain-of-thought-prompting [103], "self-taught reasoning" [104], "self-reflection" [105] could be viewed as self-explanation to improve reasoning, but we encourage ruminating on the cognitive underpinnings. In fact, we argue that these are _not_ instances of self-explanation in the way that humans self-explain. For humans, self-explanation is something that we _want_ to do, because understanding is intrinsically valuable [99; 106]. Thus, it is desirable to not just have new prompting strategies leveraging explanations, but systems designed with explanations at their core. And what about communication to others? We externalize many of our inner thoughts, whether that be writing out the steps of a new proof, drawing diagrams to convey a concept, or debating with a friend what the largest possible number is. These externalized thoughts and interactions increasingly form the bedrock of training data for AI systems. Nonetheless, humans do not communicate _all_ of our inner thoughts; rather, we communicate what we believe is essential to convey [107]--often requiring the listener to make inferences about what the communicator _intends_ to communicate (which may differ from what they _actually_ produced) [108; 109; 110]. 
Such communication frameworks may be important for building mathematical AI systems that can adequately "read between the lines" in the data available and recognize that when providing mathematical assistance to humans, humans _are capable_ of such inferences (e.g., we do not always require overly verbose responses and in fact may find it less helpful in mathematics [19]). ## 3 Concluding remarks Catalyzing community cross-talkAs we highlight, the cognitive science community has been studying topics deeply relevant to mathematical AI. We hope our piece helps further expose AI practitioners and mathematicians to what we believe are valuable terminology and conceptual structures from cognitive science. Cognitive scientists too can sharpen our theories from further exchanges across communities; we lay out a few strategies to facilitate such conversations. First, accessibility of higher-level mathematics is perhaps one of the most pernicious barriers to effective collaboration across cognitive scientists and AI practitioners in the space. Convenings designed to engage not just the AI and mathematics community but also cognitive scientists would aid in building a shared vocabulary across these communities. Second, there is a need for improved _research tools_ to empower the study of mathematics across our communities. For more than a decade, cognitive scientists and AI practitioners alike have benefited enormously from crowdsourcing platforms such as Amazon Mechanical Turk [111] and Prolific [112]. However, at present, it is hard to find targeted domain practitioners on such sites. We suggest that it would be extremely valuable for the community to discuss the idea of a "Mechanical Turk for mathematics"; i.e., a platform where AI and cognitive scientists can post studies, questions, data gathering attempts about mathematics and mathematicians and students can participate in them. Ideally, such an effort could benefit all parties involved. Third, we note that a strong catalyst for collaboration can be a shared goal [113]. We point to _games_ as a sensible playground which may appeal to mathematicians, cognitive scientists, and AI practitioners. Games have been ripe grounds for study in both AI [114; 115; 116] and cognitive science [81; 117; 118; 119], and as recently exposed by Poesia [120], aspects of mathematics itself may be cast in the language of games. We see this as a particularly exciting framing that allows us to better understand many aspects of mathematics with the help of mathematicians. Looking forwardWith the resurgence in interest around AI and mathematics, we emphasize the value of engaging with the cognitive science community in the quest towards more powerful automated mathematicians. Engaging across the mathematics, cognitive science, and AI communities is paramount in even defining what this quest is and where we intend to go. To close, we propose several directions of inquiry that we think the nexus of the cognitive science, AI, and mathematics communities are poised to address. 
For instance, advances in AI can serve as tools to help us better understand the relationship between mathematical problem-solving capabilities and the _modalities_ of mathematical data--language (natural and formal) alongside figures and diagrams; what makes a problem easy or hard (and how this differs across humans and AI systems); what kinds of prior knowledge, including human commonsense knowledge, is necessary to learn mathematics; and what are the computational foundations of mathematical insights. We believe that steps along these directions, taken across our communities, can not only spur the development of truly powerful AI mathematicians, but also shed light on what is so special about _humans'_ feats of mathematical cognition--sparking efforts to improved tailored mathematical education and push the boundaries of what we, jointly with AI systems, understand about the wonderful world of mathematics. ## Acknowledgements We thank Timothy Gowers, Gabriel Poesia, and Roger Levy for comments on earlier drafts. We thank Noah Goodman, Raymond Wang, and Lionel Wong for discussions related to this work. We also thank Albert Jiang, Mateja Jamnik, the Human-Oriented Automated Theorem Proving System Team at Cambridge, and the Spring 2023 GPS Seminar at MIT for conversations that inspired aspects of this work. KMC acknowledges funding from the Marshall Commission and Cambridge Trust. AW acknowledges support from a Turing AI Fellowship under grant EP/V025279/1, The Alan Turing Institute, and the Leverhulme Trust via CFI. JBT acknowledges funding from AFOSR Grant #FA9550-22-1-0387 and the MIT-IBM Watson AI Lab.
2302.05452
About the atomic and molecular databases in the planetary community -- A contribution in the Laboratory Astrophysics Data WG IAU 2022 GA session
This paper corresponds to an invited oral contribution to the session 5A organised by the IAU inter-commission B2-B5 working group (WG) "Laboratory Astrophysics Data Compilation, Validation and Standardization: from the Laboratory to FAIR usage in the Astronomical Community" at the IAU 2022 General Assembly (GA). This WG provides a platform on which to discuss the Findability, Accessibility, Interoperability, Reuse (FAIR) usage of laboratory Atomic and Molecular (A&M) data in astronomy and astrophysics. A&M data play a key role in the understanding of the physics and chemistry of processes in several research topics, including planetary science and interdisciplinary research, in particular the atmospheres of planets and planetary exploration. Databases, compilations of spectroscopic parameters, and facility tools are used by computer codes to interpret and simulate spectroscopic observations. In this talk I presented existing A&M databases of interest to the planetary community, focusing on access, organisation, infrastructures, limitations and issues.
M. Rengel
2023-02-10T16:22:45Z
http://arxiv.org/abs/2302.05452v1
[ ###### Abstract This paper corresponds to an invited oral contribution to the session 5A organised by the IAU inter-commission B2-B5 working group (WG) "Laboratory Astrophysics Data Compilation, Validation and Standardization: from the Laboratory to FAIR usage in the Astronomical Community" at the IAU 2022 General Assembly (GA) Rengel (2022). This WG provides a platform where to discuss the Findability, Accessibility,Interoperability, Reuse (FAIR) usage of laboratory Atomic and Molecular (A&M) data in astronomy and astrophysics. A&M data play a key role in the understanding of the physics and chemistry of processes in several research topics, including planetary science and interdisciplinary research in particular the atmospheres of planets and planetary explorations, etc. Databases, compilation of spectroscopic parameters, and facility tools are used by computer codes to interpret spectroscopic observations and simulate them. In this talk I presented existing A&M databases of interest to the planetary community focusing on access, organisation, infrastructures, limitations and issues, etc. planets, atmospheres, exoplanets, atomic data, molecular data, laboratory astrophysics, experiment, databases, data network, data analysis A&M] About the atomic and molecular databases in the planetary community - A contribution in the laboratory Astrophysics Data WG IAU 2022 GA session Rengel et al.] M. Rengel\({}^{1}\) 2022 IAU 371 Symposium D. Soderblom & G. Nave, eds. ## 1 Introduction The talk represented a tour on A&M databases used in the planetary and exoplanetary communities not from the perspective of a developer, but of an user. A&M databases compile and provide detailed spectral information for A&M to feed codes that predict and simulate radiation in gaseous media. Between the applications, we find atmospheric physics: (exo) planetary atmospheres, comets and small bodies. These databases are a critical input for the codes which predict and interpret spectra of planetary atmospheres (hydrostatic equilibrium atmospheres and expanding comas), and space and ground-based telescopes facilities depend on the quality and extent of reference A&M parameters. Line lists typically contain hundreds to billions of individual transitions. A&M databases contains detailed A&M spectroscopic parameters of A&M like pressure-broadening (shapes), collision-induced absorption (CIA), transition intensity or cross sections, line shape parameters, rotation-vibration transition position (wavelength, frequency), parameters to describe how these vary with temperature and pressure, aerosol indices of refraction, microphysical and optical properties of atmospheric aerosols, for example. ## 2 Some existing A&M databases - Data and file formats Several groups worldwide generate and compile A&M data through measurement and/or calculation (e.g. HITRAN[Gordon _et al._ (2022)], GEISA[Jacquinet-Husson et al.(2016)], JPL Molecular Spectroscopy4, CDMS\(\|\)[Endres et al.(2016)], VAMDC 4\(\dagger\)[Dubernet _et al._ (2010), Dubernet _et al._ (2016), Albert _et al._ (2020)], ExoMol++[Tennyson _et al._ (2020)], HITEMP4[Rothman et al.(2010)], VALD http//vald.astro.uu.se, MoLLIST [http://bernath.uwaterloo.ca/molecularlists.php](http://bernath.uwaterloo.ca/molecularlists.php), Ames Molecular Spectroscopic Data for Astrophysical and Atmospheric Studies ([http://huang.seti.org](http://huang.seti.org), TheoReTs [https://theorets.univ-reims.fr/](https://theorets.univ-reims.fr/), etc.). 
Several secondary databases and information services are fed with data from such sources in a fragmented manner. A variety of data formats (cross sections, K-tables, line-by-line, super-lines) and file formats (e.g. .hdf5, .pickle, .mp4, .txt, .npy) are generated. There are online tools that enable conversion between different formats, such as HAPI ([https://hitran.org/hapi/](https://hitran.org/hapi/), Kochanov et al. (2016)) and the exo-k library for handling radiative opacities ([https://pypi.org/project/exo-k/](https://pypi.org/project/exo-k/), Leconte (2021)). As part of the spectroscopic input to atmospheric codes, the HITRAN molecular spectroscopic database is already internationally recognised as standard in the planetary community, and the ExoMol database, valid over extended temperature ranges, is widely used in the exoplanetary community. Footnote 4: [https://hitran.org](https://hitran.org) Footnote 5: [https://spec.jpl.nasa.gov](https://spec.jpl.nasa.gov) Footnote 6: [https://cdms.astro.uni-koeln.de](https://cdms.astro.uni-koeln.de) Footnote 7: [https://vamdc.org](https://vamdc.org) Footnote 8: [https://www.exomol.com](https://www.exomol.com) Footnote 9: [https://hitran.org/hitemp](https://hitran.org/hitemp) ## 4 Discussion: Needs and wish-list In spite of the tremendous advances and current efforts in the generation of databases, there is still room for improvement. A growing demand for spectroscopic data for (exo) planetary studies and other atmospheres is being driven by scientists who are interested in modelling as well as observing diverse bodies. Line lists are generated from experiments and/or ab initio calculations and may be incomplete or contain errors. Databases differ in completeness, and some do not accurately characterise high-frequency spectral regions. Sometimes there are no datasets for a specific problem at hand. Atmospheric codes used by the planetary and exoplanetary characterisation communities, which are designed to solve the radiative transfer equation describing the propagation of radiation through a medium in order to simulate observations and infer parameters, have their own methods for the computation of opacities, and there are no community standards. Furthermore, there are also no community standards in the selection of atmospheric codes in mission planning. There are needs to increase the accessibility of opacities (computation, access, visualisation, manipulation), laboratory measurements of molecular cross-sections, and pressure-broadening descriptions for some species, among many other aspects. 
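Before listing the more detailed wishes of the community, a concrete illustration of how such line-by-line data are retrieved and converted today may be useful. The sketch below uses the HAPI interface of Kochanov et al. (2016) to download a small HITRAN line list and build an absorption coefficient; it is only indicative, the molecule, wavenumber range and local folder name are arbitrary choices made here, and the exact call signatures should be checked against the HAPI version in use.

```python
# Indicative HAPI usage (see https://hitran.org/hapi/); signatures may differ between versions.
from hapi import db_begin, fetch, absorptionCoefficient_Lorentz

db_begin('hitran_data')            # local folder where fetched tables are cached
fetch('H2O', 1, 1, 3400, 4100)     # molecule 1 (H2O), isotopologue 1, 3400-4100 cm^-1

# Line-by-line data -> absorption coefficient on a wavenumber grid at ~1 atm, 296 K
nu, coef = absorptionCoefficient_Lorentz(SourceTables='H2O',
                                         Environment={'p': 1.0, 'T': 296.0})
print(nu.shape, coef.max())
```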
Furthermore, going into details, the community identifies needs in the following aspects: isotopologues; CIA (more lists of N\({}_{2}\)O, CH\({}_{4}\), SO); expansion of CIA of secondary species for wider temperature and wavelength ranges; additional experiments for CO intensities; improvement of the quality of the CH\({}_{4}\) line shape parameters; continuum absorption by water vapour; partition functions at higher temperatures (around 5000 K) for all species; more friendly ways of implementing new data in the RT codes; more aerosol refractive indexes above 500-600 K; more saturation vapour pressure data; kinetic data at high temperature; Rayleigh scattering for non-H\({}_{2}\); collisional xsecs (in particular: mid-sized organics, H\({}_{2}\)O\({}^{+}\), diatom-H\({}_{2}\), CH\({}_{3}\)OH, HCN, SO\({}_{2}\), CH\({}_{2}\)) and high-resolution xsecs (R=1E6+) for atmospheres; organic sulphide gases for the line lists (biomarkers).

| Code name | Link | Reference |
| --- | --- | --- |
| ARCiS | http://www.exoclouds.com | Min et al. (2020) |
| TauREx | https://taurex3-public.readthedocs.io/en/latest/ | Al-Refaie et al. (2021) |
| NEMESIS | https://users.ox.ac.uk/atmp0035/nemesis.html | Irwin et al. (2008) |
| petitRADTRANS | https://petitradtrans.readthedocs.io/en/latest/ | Mollière et al. (2019) |
| PSG | https://psg.gsfc.nasa.gov/ | Villanueva et al. (2018) |
| CHIMERA | https://github.com/mrline/CHIMERA | Line et al. (2013) |
| PLATON | https://github.com/ideasrule/platon | Zhang et al. (2020) |
| ATMO | https://www.erc-atmo.eu | Tremblin et al. (2015) |
| MOLIERE | Urban et al. (2012) | Urban (2004) |
| ARTS | https://radiativetransfer.org/ | Buehler et al. (2018) |
| Home made (MPS) | - | Jarchow, C. (1998) |
| Helios-r2 | https://github.com/exoclime/Helios-r2 | Kitzmann et al. (2020) |
| SCARLET | - | Benneke (2015) |
| BART | https://github.com/exosports/BART | Blecic (2016) |
| INARA | https://gitlab.com/frontierdevelopmentlab/astrobiology/inara | Soboczenski et al. (2018) |

Table 1: Some radiative transfer and inversion codes used in the (exo)planetary community.

## Acknowledgements I thank Sergey Yurchenko and Iouli Gordon for the input and discussions. I also thank the members of the SOC of the session and the members of the inter-commission B2-B5 working group, in particular Marie-Lise Dubernet. I acknowledge the support by the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" (DFG PR 36 24602/41). For the purpose of Open Access, a CC-BY-SA 4.0 public copyright licence has been applied by the author to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission.
2305.01693
A new sample of transient ultraluminous X-ray sources serendipitously discovered by Swift/XRT
Ultraluminous X-ray sources (ULXs) are our best laboratories for studying extreme super-Eddington accretion. Most studies of these objects are of relatively persistent sources, however there is growing evidence to suggest a large fraction of these sources are transient. Here we present a sample of five newly reported transient ULXs in the galaxies NGC 4945, NGC 7793 and M81 serendipitously discovered in Swift/XRT observations. Swift monitoring of these sources have provided well sampled lightcurves, allowing for us to model the lightcurves with the disk instability model of Hameury & Lasota (2020) which implies durations of 60-400 days and that the mass accretion rate through the disk is close to or greater than the Eddington rate. Of the three source regions with prior HST imaging, color magnitude diagrams of the potential stellar counterparts show varying ages of the possible stellar counterparts. Our estimation of the rates of these sources in these three galaxies is 0.4-1.3 year$^{-1}$. We find that while persistent ULXs dominate the high end of galaxy luminosity functions, the number of systems that produce ULX luminosities are likely dominated by transient sources.
Murray Brightman, Jean-Marie Hameury, Jean-Pierre Lasota, Ranieri D. Baldi, Gabriele Bruni, Jenna M. Cann, Hannah Earnshaw, Felix Fürst, Marianne Heida, Amruta Jaodand, Margaret Lazzarini, Matthew J. Middleton, Dominic J. Walton, Kimberly A. Weaver
2023-05-02T18:02:46Z
http://arxiv.org/abs/2305.01693v1
# A new sample of transient ultraluminous X-ray sources serendipitously discovered _Swift_/XRT ###### Abstract Ultraluminous X-ray sources (ULXs) are our best laboratories for studying extreme super-Eddington accretion. Most studies of these objects are of relatively persistent sources, however there is growing evidence to suggest a large fraction of these sources are transient. Here we present a sample of five newly reported transient ULXs in the galaxies NGC 4945, NGC 7793 and M81 serendipitously discovered in _Swift_/XRT observations. Swift monitoring of these sources have provided well sampled lightcurves, allowing for us to model the lightcurves with the disk instability model of Hameury & Lasota (2020) which implies durations of 60-400 days and that the mass accretion rate through the disk is close to or greater than the Eddington rate. Of the three source regions with prior _HST_ imaging, color magnitude diagrams of the potential stellar counterparts show varying ages of the possible stellar counterparts. Our estimation of the rates of these sources in these three galaxies is 0.4-1.3 year\({}^{-1}\). We find that while persistent ULXs dominate the high end of galaxy luminosity functions, the number of systems that produce ULX luminosities are likely dominated by transient sources. ## 1 Introduction Ultraluminous X-ray sources (ULXs) are powerful X-ray sources found outside the nucleus of galaxies (see Kaaret et al. (2017), Fabrika et al. (2021) and King et al. (2023) for recent reviews). They exhibit luminosities in excess of \(10^{39}\) erg s\({}^{-1}\) which is the Eddington limit of the typical 10 \(M_{\odot}\) black hole found in our Galaxy. First identified in the early 1980s by the _Einstein Observatory_(Giacconi et al., 1979), the first fully imaging X-ray telescope put into space, they were originally thought to be more massive black holes, potentially intermediate-mass black holes (\(M_{\rm BH}\)= 100-\(10^{5}\)\(M_{\odot}\), e.g. Colbert & Mushotzky, 1999). However, more recently, consensus has shifted to view these sources as lower-mass super-Eddington accretors (e.g. Middleton et al., 2015). This was famously confirmed for some sources by the detection of pulsations, revealing their central engines to be neutron stars (NSs, e.g. Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017, 2017) and not black holes at all. NSs have masses of only 1-2 \(M_{\odot}\), implying their luminosities when assuming isotropic emission to be 100 s of times the Eddington limit. ULXs are thus our best laboratories for studying extreme super-Eddington accretion. The vast majority of ULX studies have been on relatively persistent sources, i.e. sources that while some may be highly variable, are consistently active and have been detected by X-ray instruments for decades. Indeed there is evidence to suggest they have been active for much longer from the collisionally ionized bubbles surrounding sources such as Holmberg IX X-1, NGC 1313 X-2, NGC 7793 S26 and NGC 5585 ULX which have es timated dynamical ages of \(\sim 10^{5}\) years (Pakull and Mirioni, 2002; Pakull et al., 2010; Moon et al., 2011; Weng et al., 2014; Berghea et al., 2020; Soria et al., 2021). Studies of persistent ULXs have revealed their multicomponent X-ray spectra (e.g. Gladstone et al., 2009; Walton et al., 2018), coherent pulsations (e.g. Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017, 2017), ultrafast outflows (e.g. Pinto et al., 2016; Kosec et al., 2018), super-orbital periods (e.g. 
Walton et al., 2016; Hu et al., 2017; Brightman et al., 2019, 2020), cyclotron lines (Brightman et al., 2018; Walton et al., 2018), among many other things. However, in addition to persistent ULXs there are several known transient ULXs. Indeed one of these occurred in our own Galaxy, Swift J0243.6+6124 (Cenko et al., 2017; Wilson-Hodge et al., 2018), and another in the SMC, RX J0209.6-7427 (Chandra et al., 2020; Vasilopoulos et al., 2020). Both of these were found to be powered by NS accretors with a Be star companion. Type I Be X-ray binary outbursts occur when a neutron star, often in a wide eccentric orbit, accretes material as it passes through the decretion disk of its Be star companion (Reig, 2011). Type II outbursts are brighter and often reach the Eddington limit, as was the case with Swift J0243.6+6124 and RX J0209.6-7427. It is not clear if all transient ULXs are Be X-ray binaries, however M51 XT-1 (Brightman et al., 2020) would be a candidate for a non-Be X-ray binary since it peaked at an X-ray luminosity of \(10^{40}\) erg s\({}^{-1}\), much greater than seen in Be X-ray binaries. Transient ULXs are far less well studied than their persistent counterparts, potentially skewing our understanding of super-Eddington accretion and of ULXs in general (Dage et al., 2021). This is mostly due to the lack of wide-field X-ray surveys with the sensitivity to detect these mostly extragalactic sources. eROSITA was launched in 2019 and the data from its all sky surveys will have the potential to change this. Most ULXs known today have been identified serendipitously in pointed imaging X-ray observations by _XMM-Newton_, _Chandra_ and _Swift_(e.g. Liu and Mirabel, 2005; Liu and Bregman, 2005; Winter et al., 2006; Swartz et al., 2011; Walton et al., 2011; Earnshaw et al., 2019; Kovlakas et al., 2020), with the latest catalog of ULX candidates containing 1843 sources (Walton et al., 2021). However, the relative rates of persistent and transient sources is unknown. A few detailed studies of transient ULXs discovered serendipitously have been presented in the literature (e.g. Strickland et al., 2001; Soria et al., 2007; Middleton et al., 2012; Soria et al., 2012; Middleton et al., 2013; Carpano et al., 2018; Pintore et al., 2018; Liu et al., 2019; van Haaften et al., 2019; Earnshaw et al., 2019, 2019; Brightman et al., 2020; Earnshaw et al., 2020; Walton et al., 2021; Dage et al., 2021; Robba et al., 2022), however, a systematic search for transient ULXs is lacking. NASA's _Neil Gehrels Swift Observatory_ (hereafter _Swift_, Gehrels et al., 2004) observes 10 s of targets a day, many of which are monitoring observations, with the data being quickly downloaded and made public. This allows for a near real time search for transients, and detailed follow up. We have already reported on the discovery of a tidal disruption event found this way (Brightman et al., 2021), and the _Swift_ team have recently presented the Living Swift-XRT Point Source catalogue (LSXPS) and real-time transient detector (Evans et al., 2022). Here we report on results on transient ULXs from our own systematic search for X-ray transients in _Swift_/XRT observations. ## 2 The Search for New X-Ray Transients Beginning in \(\sim\)2019 October, we have routinely downloaded a selection of new _Swift_/XRT observations on a \(\sim\)daily basis. Not all observations were downloaded due to time constraints. 
We searched for sources in these observations using the detect function of the heasoft tool ximage and a signal to noise threshold of 3. The positions of the detected X-ray sources were then cross-correlated with latest versions of the _Swift_ Point Source Catalog (2SXPS, Evans et al., 2020), the Fourth _XMM-Newton_ Serendipitous Source Catalogue (4XMM, Webb et al., 2020), the _Chandra_ Source Catalog (CSC2, Evans et al., 2010) and the Second ROSAT All-Sky Survey Source Catalogue (2RXS, Boller et al., 2016). When the new _Swift_ source was found to have no close counterpart in these catalogs, we first assessed if this is because the source position was not previously observed by an imaging X-ray telescope, or it was a genuine new source. If it appeared to be a new source, we investigated further by using the online tool provided by the University of Leicester1(Evans et al., 2007, 2009) to determine the best position, and generate a lightcurve and spectrum of the source. All products from this tool are fully calibrated and corrected for effects such as pile-up and the bad columns on the CCD. All spectra were grouped with a minimum of one count per bin using the heasoft v 6.28 tool grppha and fitted in xspec v 12.11.1 (Arnaud, 1996). The C statistic was used for fitting to source spectra with the background subtracted (Cash, 1979). Since the C statistic cannot formally be used when the background is subtracted, xspec uses a modified version of the C statistic known as the W statistic to account for this. We describe the five new sources we found below. ### Swift J130456.1-493158, an X-ray transient in the field of NGC 4945 Swift J130456.1-493158 was first detected in a _Swift_/XRT observation taken on 2021 February 8 (obsID 00013908005). The target of the _Swift_ observation was NGC 4945 X-1 (Brandt et al., 1996), an ultraluminous X-ray source hosted by NGC 4945, a barred spiral galaxy in the constellation Centaurus. The enhanced position given by the online tool was R.A. = 196.23411\({}^{\circ}\), -49.53306\({}^{\circ}\)(=13h 04m 56.19s, -49deg31'59.0'') with an error radius of 3.2''(90% confidence). The position of Swift J130456.1-493158 appears to place the source in the outskirts of the galaxy (Figure 1). No X-ray source has been reported at this position previously, despite multiple _Chandra_, _XMM-Newton_, _Suzaku_, _NuSTAR_ and _Swift_ observations, the last of which was by _Swift_ only 2 weeks prior to the new X-ray source being detected, as shown in the lightcurve in Figure 2. After the source was initially detected, it declined in brightness from its peak, becoming undetected by _Swift_/XRT 60 days after its initial detection, even in stacked observations. We used the online tool to extract the stacked _Swift_/XRT spectrum of the source from 6 observations during which the source was detected. The total exposure time was 12.9 ks. The online tool fitted the spectrum with an absorbed power-law model, which yielded \(W=53.02\) with 62 DoFs where \(N_{\rm H}\)= \(1.33^{+1.12}_{-0.76}\times 10^{22}\) cm\({}^{-2}\) and \(\Gamma=2.63^{+1.06}_{-0.87}\) assuming a Galactic column density of \(2.2\times 10^{21}\) cm\({}^{-2}\)(Willingale et al., 2013). The 0.3-10 keV unabsorbed flux from this model was \(1.0^{+4.2}_{-0.6}\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), which implies a luminosity of \(1.7\times 10^{39}\) erg s\({}^{-1}\) at 3.7 Mpc. 
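The luminosity quoted above follows from the usual isotropic conversion \(L_{\rm X}=4\pi d^{2}F\). Below is a minimal Python sketch of that arithmetic, using the unabsorbed flux and distance quoted above (astropy is assumed to be available); the same conversion underlies the other luminosities quoted in this paper.

```python
import numpy as np
from astropy import units as u

# Unabsorbed 0.3-10 keV flux from the absorbed power-law fit and the
# adopted distance to NGC 4945 (values quoted in the text).
flux = 1.0e-12 * u.erg / u.cm**2 / u.s
distance = 3.7 * u.Mpc

# Isotropic luminosity: L = 4 * pi * d^2 * F
luminosity = (4 * np.pi * distance**2 * flux).to(u.erg / u.s)
print(f"L_X = {luminosity:.2e}")   # ~1.6-1.7e39 erg/s, quoted as 1.7e39 erg/s
```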
The count rate to flux conversion factor was \(1.37\times 10^{-10}\) erg cm\({}^{-2}\) count\({}^{-1}\), which we used to determine the luminosity axis in Figure 2. The deepest upper limit on the flux of Swift J130456.1-493158 prior to its detection is from _Chandra_ observations which have a sensitivity of \(1.1\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.5-8 keV band listed in CSC2 (Evans et al., 2010). This is 3 orders of magnitude lower than the flux measured above. The deepest upper limit from _XMM-Newton_ observations is \(<7.4\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.2-12 keV band listed in XSA. We also obtained a _Chandra_ DDT observation of the source which took place on 2021 March 10 (obsID 24986), with ACIS-S at the aimpoint in VFAINT mode. The source was well detected with a count rate of \(1.52\times 10^{-2}\) counts s\({}^{-1}\) in the 10 ks exposure. We extracted the _Chandra_ spectrum with specextract from circular regions, radius 1.5'' for the source and 7.5'' for the background. The spectra were grouped with a minimum of 1 count per bin with the tool grppha. We fitted the _Chandra_ spectrum of the source with the same model used to fit the _Swift_/XRT spectrum described above. Since we did not find evidence for spectral variability between _Swift_/XRT and _Chandra_, we fitted the joint _Swift_/XRT and _Chandra_ spectrum of the source in xspec, with a constant to account for cross-calibration uncertainties and the flux variability of the source. This yielded \(W=120.10\) for 174 DoFs. The cross-calibration constant for the _Swift_/XRT spectrum is set to unity, and the constant for the _Chandra_ spectrum is \(0.70^{+0.21}_{-0.16}\). We find \(N_{\rm H}\)= \(1.13^{+0.45}_{-0.38}\times 10^{22}\) cm\({}^{-2}\) and \(\Gamma=2.82^{+0.56}_{-0.51}\). The log of the 0.3-10 keV unabsorbed flux from this model corresponding to the time of the _Chandra_ observation is \(-11.84^{+0.38}_{-0.28}\), which implies a luminosity of \(2\times 10^{39}\) erg s\({}^{-1}\) at 3.7 Mpc. The spectrum is shown in Figure 3. We also trialled a diskbb model in place of the powerlaw one, which produced \(W=122.14\) for 174 DoFs, a slightly worse fit for the same number of DoFs. We find \(N_{\rm H}\)= \(5.18^{+2.72}_{-2.22}\times 10^{21}\) cm\({}^{-2}\) and \(T_{\rm in}=1.03^{+0.24}_{-0.17}\) with a normalization, \(N=1.67^{+2.40}_{-1.00}\times 10^{-2}\). The normalization is related to the inner disk radius by \(R_{\rm in}=D_{10}\times\sqrt{N/cos\theta}\), where \(R_{\rm in}\) is the inner disk radius in km, \(D_{10}\) is the distance to the source in units of 10 kpc, and \(\theta\) is the inclination angle of the disk. Assuming a face-on disk (\(\theta=0\)) yields \(R_{\rm in}=48\) km which is the innermost stable orbit of a 5\(M_{\odot}\) black hole. We note that the luminosity estimate would be a factor of 3.5 lower if this model is assumed and integrated over all energies. We also used the _Chandra_ data to acquire a more precise position of Swift J130456.1-493158. We compiled an X-ray source list of the _Chandra_ observation in the 0.5-8 keV band using wavdetect with default parameters and cross-matched this with a _Gaia_ EDR3 source list of the region (Gaia Collaboration et al., 2018), selecting sources within 1.0'' of each other, which produced four _Chandra_/_Gaia_ matched sources. We define the astrometric shifts as the mean difference in RA and Dec between these matched sources which is \(\delta\)RA= 0.34'' and \(\delta\)Dec= \(-\)0.44''. 
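The astrometric registration step described above can be sketched roughly as follows. The matched positions below are hypothetical placeholders rather than the actual _Chandra_/_Gaia_ matches, and astropy's SkyCoord offset helpers are assumed to be available.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical matched Chandra/Gaia pairs (deg); in practice these come from
# the wavdetect source list cross-matched with Gaia EDR3 as described above.
chandra = SkyCoord([196.2310, 196.2400, 196.2280, 196.2450] * u.deg,
                   [-49.5300, -49.5280, -49.5360, -49.5330] * u.deg)
gaia    = SkyCoord([196.23088, 196.23990, 196.22788, 196.24510] * u.deg,
                   [-49.52988, -49.52790, -49.53588, -49.53288] * u.deg)

# Mean tangent-plane offset that carries the Chandra frame onto the Gaia frame.
d_lon, d_lat = chandra.spherical_offsets_to(gaia)
shift_lon, shift_lat = d_lon.mean(), d_lat.mean()

# Apply the shift to the raw Chandra position of the transient (illustrative value).
raw = SkyCoord(196.23465 * u.deg, -49.53312 * u.deg)
corrected = raw.spherical_offsets_by(shift_lon, shift_lat)

# Residual scatter of the shifted pairs sets the positional uncertainty.
residual = chandra.spherical_offsets_by(shift_lon, shift_lat).separation(gaia)
print(corrected.to_string("hmsdms"), residual.mean().to(u.arcsec))
```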
The corrected position is R.A. = 13h 04m 56.350s (196.23479\({}^{\circ}\)), Decl.=-49deg 31' 59.66'' (-49.533239\({}^{\circ}\), J2000), which lies in the middle of the _Swift_ error circle. The mean residual offset between the corrected _Chandra_ positions and the _Gaia_ positions is 0.53'', which we use as our positional error. There are no sources catalogued at other wavelengths within the _Chandra_ error circle. The closest source is a near-IR \(J=18.9\) source cataloged by the VISTA Hemisphere Survey (VHS, McMahon et al., 2013) and lies 1.68'' from the _Chandra_ position, and is therefore unlikely to be related. Despite numerous _HST_ observations of NGC 4945, none of them covered the region of the source. We ran the tool uvotsource on the _Swift_/UVOT images to obtain photometry of the source in the UV bands using a 2\({}^{\prime\prime}\) radius circular region centered on the X-ray position. The source was not detected and we obtained upper limits of \(UVW2>22.7\), \(UVM2>22.5\) and \(UVW1>21.5\) taken from observations when the X-ray source was bright.

Figure 1: _Swift_/XRT (left, red is 0.3–1 keV, green is 1–2.5 keV and blue is 2.5–10 keV, smoothed with a 8\({}^{\prime\prime}\) Gaussian), _Swift_/UVOT (middle, \(UVW2\) filter), and DSS \(R\)-band image (right) of NGC 4945, with the position of Swift J130456.1-493158 marked with a cyan circle and Swift J130511.5-492933 marked with a green circle, both with 25\({}^{\prime\prime}\) radius. North is up and East is left.

Figure 2: _Swift_/XRT lightcurve of Swift J130456.1-493158, the transient in NGC 4945 (black data points). Upper limits (3\(\sigma\)) from a stack of observations pre- and post-detection are shown with black arrows. The _Chandra_ data are shown in red. The luminosity axis on the right assumes a distance of 3.7 Mpc to the source.

Figure 3: _Swift_/XRT (black) and _Chandra_ (magenta) spectra of Swift J130456.1-493158, the X-ray transient in NGC 4945, fitted simultaneously with an absorbed power-law model with all parameters tied between instruments, but with a cross-normalization constant to allow for differing responses and flux levels.

### Swift J130511.5-492933, a second X-ray transient in the field of NGC 4945

This X-ray source was also detected in a _Swift_/XRT observation of NGC 4945 X-1, and was first detected on 2021 September 24 (obsID 00013908017), 7 months after Swift J130456.1-493158 as described in Section 2.1 above. The astrometrically corrected position given by the online tool from the first 22 obsIDs where the source was detected was 196.2985\({}^{\circ}\), -49.4928\({}^{\circ}\) (=13h 05m 11.65s, -49\({}^{\circ}\) 29\({}^{\prime}\) 34.3\({}^{\prime\prime}\)) with an error radius of 2.4\({}^{\prime\prime}\) (90% confidence) and we henceforth refer to this source as Swift J130511.5-492933. No X-ray source has previously been reported within the positional error circle of Swift J130511.5-492933. The _Swift_/XRT lightcurve of the source produced by the online tool is shown in Figure 4, which shows the source declining in brightness until \(\sim 250\) days after its initial detection, after which the source was undetected by _Swift_/XRT. The XRT, UVOT and \(R\)-band images are shown in Figure 1, which shows that, similarly to Swift J130456.1-493158, Swift J130511.5-492933 appears to be in the outskirts of NGC 4945. We ran the tool uvotsource on the _Swift_/UVOT images to obtain photometry of the source in the UV and optical bands using a 2\({}^{\prime\prime}\) radius circular region centered on the X-ray position.
The source was not detected and we obtained upper limits of \(UVW2>20.9\), \(UVM2>21.2\), \(UVW1>20.8\), \(U>20.2\), \(B>19.5\), and \(V>18.8\), taken from obsID 00015017005 taken when the source was X-ray bright. As with Swift J130456.1-493158, no source at any wavelength is catalogued within the error region for this X-ray source, and none of the _HST_ observations of NGC 4945 cover the region. Once again the closest source is a \(J=14.6\) mag near-IR source which lies 4.7\({}^{\prime\prime}\) from the astrometrically corrected position of the X-ray source, outside the 90% error circle (2.4\({}^{\prime\prime}\) radius). We used the online tool to extract the stacked _Swift_/XRT spectrum of the source (first 26 observations since detection) with a total exposure time of 45.1 ks. The online tool fitted the spectrum with an absorbed power-law model, which yielded \(W=259.57\) with 252 DoFs where \(N_{\rm H}\)\(=6.7^{+2.1}_{-1.7}\times 10^{21}\) cm\({}^{-2}\) and \(\Gamma=2.23^{+0.30}_{-0.27}\) assuming a Galactic column density of \(2.2\times 10^{21}\) cm\({}^{-2}\)(Willingale et al., 2013). The 0.3-10 keV unabsorbed flux from this model \(1.02^{+0.31}_{-0.17}\times 10^{-12}\)\({\rm\ erg\,cm^{-2}\,s^{-1}}\), which implies a luminosity of \(1.7\times 10^{39}\)\({\rm\ erg\,s^{-1}}\) at 3.7 Mpc. The count rate to flux conversion factor was \(7.71\times 10^{-11}\)\({\rm\ erg\,cm^{-2}\,count^{-1}}\), which we used to determine the luminosity axis in Figure 4. Fitting in xspec, we found an improvement in the fit could be found with a multicolor disk component (diskbb) in the place of the power-law component, which yielded \(W\)=246.11 with 252 DoFs. The best-fit parameters of this model were \(N_{\rm H}\)\(=2.7^{+1.3}_{-1.0}\times 10^{21}\) cm\({}^{-2}\), \(T_{\rm in}=1.0\pm 0.2\) keV, \(N=2.3^{+1.7}_{-1.0}\times 10^{-2}\). As for Swift J130456.1-493158, if we assume a face-on disk we find \(R_{\rm in}=56\) km which is the innermost stable orbit of a 6\(M_{\odot}\) black hole. We note that the luminosity estimate would be a factor of 1.9 lower if this model is assumed and integrated over all energies. We plot the spectrum of Swift J130511.5-492933 in Figure 5. Unfortunately the _Chandra_ observation taken of Swift J130456.1-493158 as described above did not have Swift J130511.5-492933 in the field of view. The deepest upper limit on the flux of Swift J130511.5-492933 prior to its detection with _Swift_/XRT is from other _Chandra_ observations which have a sensitivity of \(6.5\times 10^{-16}\)\({\rm\ erg\,cm^{-2}\,s^{-1}}\) in the 0.5-8 keV band listed in CSC 2.0. This is \(>\)3 orders of magnitude lower than the flux measured above. The deepest historical upper limit from _XMM-Newton_ observations is \(<1.1\times 10^{-14}\)\({\rm\ erg\,cm^{-2}\,s^{-1}}\) in the 0.2-12 keV band listed in XSA. Figure 4: _Swift_/XRT lightcurve of Swift J130511.5-492933, the second transient in NGC 4945. Upper limits (3\(\sigma\)) are shown with arrows. Data from _XMM-Newton_ are shown in blue. The luminosity axis on the right assumes a distance of 3.7 Mpc to the source. Figure 5: _Swift_/XRT spectrum of Swift J130511.5-492933, the second X-ray transient in NGC 4945, fitted with an absorbed power-law model. A 150-ks _XMM-Newton_ observation of NGC 4945 took place on 2022 July 5, 284 days after Swift J130511.5-492933 was detected. The XMM data were reduced using a pipeline that utilizes v19.1.0 of the Science Analysis Software (SAS). 
The cifbuild command was used to create a CCF corresponding to the observations, and the odfingest command was used to produce a SAS summary file. The data were reduced and MOS and pn event files were created using the emproc and epproc commands, respectively. We first identify periods of high background by creating a lightcurve of the events in the 10-12 keV band, creating good time intervals where the rate was \(<0.4\) counts s\({}^{-1}\) in this band in the pn detector leaving 99 ks for the pn and 101 ks for the MOS. Events were selected with \(PATTERN\leq 4\) for the pn and \(PATTERN\leq 12\) for the MOS. Upon inspection of the images, a faint X-ray enhancement appears at the source location in both the pn and MOS1 data. To test whether this is a detection, spectra were extracted using the specextract command with circular source regions of radii 16\({}^{\prime\prime}\). Local background was accumulated from annuli of the same area just around the source regions. The resulting spectra are heavily background dominated which requires extensive modeling. We therefore calculate an upper limit on the flux of the source by assuming that the spectrum does not change and applying the best-fitting _Swift_/XRT model of a multicolor disk component (diskbb) with \(N_{\rm H}\)\(=2.7\times 10^{21}\) cm\({}^{-2}\), and \(T_{\rm in}=1.0\) keV to the source+background spectrum in xspec. This yields an upper limit on the 0.3-10 kev flux of \(7.5\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\), which implies a 0.3-10 keV unabsorbed luminosity of \(L_{\rm X}\)\(=1.2\times 10^{37}\), well below the luminosity measured by _Swift_/XRT only 30 days prior (Figure 4). ### 2SXPS J235825.7-323609, an X-ray transient in the field of NGC 7793 This X-ray source was first detected in a _Swift_/XRT observation taken on 2018 April 28 (obsID 00094097003) and not found in our real time search, rather a search through archival data. The target of the Swift observation was NGC 7793 P13, an ultraluminous X-ray source hosted by NGC 7793 (Read & Pietsch, 1999) known to be a ULX pulsar (Furst et al., 2016; Israel et al., 2017). The enhanced XRT position given by the online tool was R.A.=359.60774\({}^{\circ}\), Decl.=-32.60254\({}^{\circ}\)(=23h 58m 25.86s, -32\({}^{\circ}\)36'09.2\({}^{\prime\prime}\)) with an error radius of 2.5\({}^{\prime\prime}\)(90% confidence, Figure 6). The source is listed in the 2SXPS catalogue as 2SXPS J235825.7-323609 with a mean count rate of \(5.37\pm 0.76\times 10^{-4}\) counts s\({}^{-1}\), detected in a stack of data over the date range 2010-08-16-2018-07-28. This average count rate was 2 orders of magnitude below the newly detected count rate. We note that this average flux is from a date range that covers periods both when the source was undetected in individual observations and when it was detected in individual observations. Since this source was first catalogued in 2SXPS, we henceforth refer to it by its catalogued name 2SXPS J235825.7-323609. The lightcurve produced by the online tool is shown in Figure 7. This shows that prior to 2018 April 28, the source was not detected in stacked observations with upper limits consistent with the 2SXPS count rate. The source was not detected in the XRT observation immediately preceding April 28 on April 22. After reaching its peak, the source declined monotonically until it was no longer detected by _Swift_/XRT 180 days afterwards. 
A diskbb model in place of the powerlaw one produced an improvement in the fit of \(\Delta\)C=-9, where \(N_{\rm H}\)\(<5.9\times 10^{20}\) cm\({}^{-2}\) and \(T_{\rm in}=1.03^{+0.21}_{-0.16}\) with a normalization, \(N=1.72^{+1.65}_{-0.87}\times 10^{-2}\). The normalization corresponds to \(R_{\rm in}=50\) km which is the innermost stable orbit of a \(6M_{\odot}\) black hole when assuming a face on disk. We note that the luminosity estimate would be a factor of 1.7 lower if this model is assumed and integrated over all energies. An _XMM-Newton_ observation took place on 2018-11-27, 213 days after the initial detection by _Swift_/XRT (obsID 0823410301). We filter the data in the same way as described for Swift J130511.5-492933, which results in 17.6 ks of data. A circular region with a radius of 15\({}^{\prime\prime}\) was used to extract the source spectrum, and an annulus with inner radius 25\({}^{\prime\prime}\) and outer radius of 45\({}^{\prime\prime}\) was used to extract the background spectrum. The data were grouped with a minimum of 1 count per bin using grppha. The source was background dominated above 1 keV so we excluded these channels and the resulting average count rate in the 0.2-1 keV band was \(2.5\pm 0.5\times 10^{-3}\) counts s\({}^{-1}\). Due to the narrow bandpass, we fit the _XMM-Newton_ spectrum with the same model as for the _Swift_ spectrum, with all parameters fixed with the exception of the normalization, which yielded \(N=1.4\times 10^{-5}\) with \(W=73.04\) with 54 DoFs. The 0.3-10 keV flux is \(4.6^{+1.6}_{-1.5}\times 10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\), which implies a luminosity of \(7.9^{+1.8}_{-2.6}\times 10^{37}\) erg s\({}^{-1}\) at 3.8 Mpc (Sabbi et al., 2018). We plot this flux in Figure 7. The source was not detected in an observation only 1 month after the above _XMM-Newton_ observation (obsID 0823410401 on 2018-12-27). The upper limit on the 0.2-10 keV flux is listed as 5.9\(\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) in 4XMM from this observation, with similar upper limits provided by fur ther observations since then. We show the _Swift_ and _XMM-Newton_ spectra in Figure 8. The deepest upper limit from _XMM-Newton_ observations prior to the _Swift_ detection is \(<5.0\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.2-12 keV band listed in XSA. This is 2 orders of magnitude lower than the peak flux measured above. 4XMM DR11 lists the source from obsID 0823410301 as 4XMM J235825.9-323610 at R.A.=23h 58m 25.98s and Decl.=-32\({}^{\circ}\)36\({}^{\prime}\)10.5\({}^{\prime\prime}\) with a positional error of 1.0\({}^{\prime\prime}\), which is an improvement on the XRT position. Fortuitously, _HST_ has observed the region of 2SXPS J235825.7-323609 as part of the GHOSTS survey (Radburn-Smith et al., 2011) with the ACS and F606W and F814W filters. However, no source is listed in the _Hubble Source Catalogue_ (v3) within the _XMM-Newton_ positional error and the closest source lies 2.1\({}^{\prime\prime}\) away. We ran uvotsource on the mostly \(U\)-band UVOT data but did not detect the source in any observation to a limiting magnitude of \(U\sim 21.5\). 
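The inner-disk radii and accretor masses quoted for the diskbb fits follow from the normalization relation \(R_{\rm in}=D_{10}\sqrt{N/\cos\theta}\) given in Section 2.1, plus the assumption (made here only for the sketch) that \(R_{\rm in}\) marks the innermost stable circular orbit \(6GM/c^{2}\) of a non-spinning black hole. A short Python sketch of that arithmetic, with the values from the fit above:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, c

def diskbb_inner_radius(norm, distance, inclination=0.0 * u.deg):
    """R_in = D_10 * sqrt(N / cos(theta)), with D_10 the distance in units of 10 kpc."""
    d10 = (distance / (10 * u.kpc)).decompose().value
    return d10 * np.sqrt(norm / np.cos(inclination)) * u.km

def isco_mass(r_in):
    """Mass of a non-spinning black hole whose ISCO (6 GM/c^2) equals r_in."""
    return (r_in * c**2 / (6 * G)).to(u.Msun)

# Face-on disk, diskbb normalization and distance quoted above for 2SXPS J235825.7-323609.
r_in = diskbb_inner_radius(norm=1.72e-2, distance=3.8 * u.Mpc)
print(r_in, isco_mass(r_in))   # ~50 km, ~5-6 Msun
```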
Figure 6: _Swift_/XRT (left, red is 0.3–1 keV, green is 1–2.5 keV and blue is 2.5–10 keV, smoothed with a 8\({}^{\prime\prime}\) Gaussian), _Swift_/UVOT (middle, \(U\)-band), and DSS \(R\)-band image (right) of NGC 7793, with the position of 2SXPS J235825.7-323609 marked with a green circle with 25\({}^{\prime\prime}\) radius. North is up and East is left.

Figure 7: _Swift_/XRT lightcurve of 2SXPS J235825.7-323609, the transient in the field of NGC 7793. Upper limits (3\(\sigma\)) are shown with downward pointing arrows. The luminosity axis on the right assumes a distance of 3.8 Mpc to the source. Data from _XMM-Newton_ are shown in blue.

Figure 8: _Swift_/XRT (black) and _XMM-Newton_ (blue) spectra of 2SXPS J235825.7-323609, the X-ray transient in NGC 7793, fitted simultaneously with an absorbed power-law model with all parameters tied between instruments, but with a cross-normalization constant to allow for differing responses and flux levels.

### Swift J235749.9-323526, a second X-ray transient in NGC 7793

This X-ray source was also detected in a _Swift_/XRT observation of NGC 7793 P13, and was first detected on 2022 September 25 (obsID 00031791173), 4 years and 5 months after 2SXPS J235825.7-323609 (see Section 2.3 above). The enhanced position given by the online tool from the first 9 obsIDs where the source was detected was 359.458\({}^{\circ}\), -32.590806\({}^{\circ}\) (=23h 57m 49.92s, -32\({}^{\circ}\) 35\({}^{\prime}\) 26.9\({}^{\prime\prime}\)) with an error radius of 2.5\({}^{\prime\prime}\) (90% confidence) and we henceforth refer to this source as Swift J235749.9-323526 (Figure 9). A faint _Chandra_ source, 2CXO J235749.7-323527, has previously been reported within the positional error circle of Swift J235749.9-323526 with a flux of \(2.2\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.5-7 keV band, 3 orders of magnitude lower than the inferred XRT flux; this _Chandra_ source is also coincident with the _Gaia_ nuclear position of the galaxy (Figure 9). The _Swift_/XRT lightcurve of the source produced by the online tool is shown in Figure 10, which shows the source declining in brightness from its initial detection. A lightcurve binned by snapshot rather than observation is also shown to highlight some short-term variability seen. We used the online tool to extract the stacked _Swift_/XRT spectrum of the source (first 9 observations since detection) with a total exposure time of 14 ks. The online tool fitted the spectrum with an absorbed power-law model, which yielded \(W=176.66\) with 204 DoFs where \(N_{\rm H}\)= \(2.1^{+0.9}_{-0.8}\times 10^{21}\) cm\({}^{-2}\) and \(\Gamma=2.02^{+0.25}_{-0.24}\) assuming a Galactic column density of \(1.2\times 10^{20}\) cm\({}^{-2}\) (Willingale et al., 2013). The 0.3-10 keV unabsorbed flux from this model was \(1.72^{+0.26}_{-0.20}\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), which implies a luminosity of \(3.0\times 10^{39}\) erg s\({}^{-1}\) at 3.8 Mpc. The count rate to flux conversion factor was \(4.86\times 10^{-11}\) erg cm\({}^{-2}\) count\({}^{-1}\), which we used to determine the luminosity axis in Figure 10. We obtained a _NuSTAR_ DDT observation of Swift J235749.9-323526 which occurred on 2022 October 7 (obsID 90801526002) with an exposure of 53 ks. We used heasoft v6.28, nustardas v2.0.0 and caldb v20211115 to analyze the data. We produced cleaned and calibrated event files using nupipeline with the default settings on mode 1 data only.
We used nuproducts to produce spectral data, including source and background spectra, and response files. A circular region with a radius of 40\({}^{\prime\prime}\) was used to extract the source spectra and a radius of 80\({}^{\prime\prime}\) was used to extract the background spectra, taking care to extract the background from the same chip as the source. The source is detected with a count rate of \(5\times 10^{-3}\) counts s\({}^{-1}\) in the 3-10 keV band in each FPM, above which the background dominates the source. We used the absorbed power-law model described above to determine the flux plotted in Figure 10. We also obtained a _Chandra_ DDT observation of the source which took place on 2022 October 27 (obsID 27481), with ACIS-S at the aimpoint in VFAINT mode. The source was well detected with a count rate of \(4.09\times 10^{-2}\) counts s\({}^{-1}\) in the 10 ks exposure. We extracted the _Chandra_ spectrum with specextract from a circular region, radius 2.0\({}^{\prime\prime}\) for the source and an annulus radii of 3.1 and 6.2\({}^{\prime\prime}\) for the background. The spectra were grouped with a minimum of 1 count per bin with the tool grppha. Again we used the absorbed power-law model from the _Swift_ data to determine the flux plotted in Figure 10. We then fitted the _Swift_, _NuSTAR_ and _Chandra_ data simultaneously initially with an absorbed power-law model in xspec, with a constant to account for cross-calibration uncertainties and the flux variability of the source. This yielded \(W=580.10\) for 611 DoFs. However, this revealed structure in the data to model residuals indicating that a more complex model was required. We then trialled the addition of a high energy cut off to the power-law model (cutoffpl) which lead to an improved \(W=534.30\) for 610 DoFs. A multicolor disk blackbody model, diskpbb, similarly produced \(W=534.00\) for 610 DoFs, which we select as our best-fit model due to the slightly better fit statistic. The best-fit parameters are intrinsic line-of-sight column density \(N_{\rm H}\)= \(1.2^{+0.7}_{-0.8}\times 10^{21}\) cm\({}^{-2}\), inner disk temperature \(T_{\rm in}=1.24^{+0.17}_{-0.16}\) keV, disk temperature index \(p=0.54^{+0.10}_{-l}\) (where \({}_{-l}\) indicates the lower bound uncertainty reached the lower limit of 0.5 for the parameter) and normalization \(N=8^{+15}_{-5}\times 10^{-3}\). the total luminosity of this model is \(3.7\times 10^{39}\) erg s\({}^{-1}\). Given the normalization of the diskpbb model and assuming a face-on inclination of the disk (\(\theta=0^{\circ}\)), the implied black hole mass is 4 \(M_{\odot}\) for a non-spinning black hole. The spectra fitted with this model are shown in Figure 11. We also checked for variation of the parameters between observations by untying them in the fit, but did not find any evidence for this. We also used the _Chandra_ data to acquire a more precise position for Swift J235749.9-323526 in the same way as was done for Swift J130456.1-493158, which produced seven _Chandra_/_Gaia_ matched sources. The astrometric shifts were \(\delta\)RA= \(+0.34^{\prime\prime}\) and \(\delta\)Dec= \(+0.22^{\prime\prime}\). After subtracting these shifts, the corrected position is R.A. = 23h 57m 49.903s (359.45793\({}^{\circ}\)), Decl.=-32\({}^{\circ}\) 35\({}^{\prime}\) 27.97\({}^{\prime\prime}\) (-32.591104\({}^{\circ}\), J2000), which lies within the _Swift_ error circle. 
The mean residual offset between the corrected _Chandra_ positions and the _Gaia_ positions is 0.57\({}^{\prime\prime}\), which we use as our positional error. With this improved positional uncertainty, 2CXO J235749.7-323527 and the nucleus of NGC 7793 are excluded as counterparts to this new X-ray source since they lie 2.0\({}^{\prime\prime}\) away (Figure 9). 2CXO J235749.7-323527 was not detected in this observation, however; its reported flux was around the limiting flux of the new observation, which is the likely reason for the non-detection. There are also no other sources catalogued at other wavelengths within the _Chandra_ error circle for Swift J235749.9-323526, with the exception of eight Hubble Source Catalog (HSC) v3 sources (Whitmore et al., 2016) which we will discuss in Section 4.1. Finally, due to the possibility that Swift J235749.9-323526 was a nuclear transient, we obtained radio follow-up of the source with the Very Large Array (VLA). The VLA observations were carried out on 2022 October 27 at X-band (8-12 GHz), in C configuration. The angular resolution was 5.9\({}^{\prime\prime}\times\)2.3\({}^{\prime\prime}\), slightly larger than the nominal one due to the low declination of the source. The field of view included the entire host galaxy structure. We did not detect radio emission at the position of the source obtained by _Chandra_, resulting in a 3-\(\sigma\) upper limit of 18 \(\mu\)Jy/beam. An emitting region is visible starting \(\sim\)3\(\arcsec\) to the west of the transient position (Fig. 9), with an angular size of about 13\(\arcsec\), corresponding with optical emission from the nuclear star cluster (e.g. Carson et al., 2015; Mondal et al., 2021).

Figure 9: _Chandra_ (left) and _HST_/WFC3/UVIS (right, red is \(F814W\), green is \(F547M\) and blue is \(F275W\)) image of the nuclear region of NGC 7793, with the corrected _Chandra_ position of Swift J235749.9-323526, the X-ray transient, marked with a magenta circle. The _Gaia_ position of the nucleus is shown with a 0.8\({}^{\prime\prime}\) green circle. 2CXO J235749.7-323527, a previously catalogued X-ray source coincident with the nucleus, is also marked with a 0.8\({}^{\prime\prime}\) magenta circle. The white contours show the VLA radio emission (-3, 3, 4, 5, 6 and 7\(\times\)rms, where rms = 1\(\times\)10\({}^{-5}\) Jy/beam). North is up and East is left.

Figure 10: _Swift_/XRT lightcurve of Swift J235749.9-323526, the second transient in NGC 7793. Upper limits (3\(\sigma\)) are shown with black arrows. _NuSTAR_ and _Chandra_ data are shown in cyan and red respectively. The inset shows a zoom in around the time of the _NuSTAR_ observation with the _Swift_ data binned by snapshot to show that the source showed short term variability during this time. The luminosity axis on the right assumes a distance of 3.7 Mpc to the source.

Figure 11: _Swift_/XRT (black), _NuSTAR_ (cyan, FPMA and FPMB combined for plotting purposes) and _Chandra_ (magenta) spectra of Swift J235749.9-323526, the second X-ray transient in NGC 7793. These have been fitted simultaneously with an absorbed multicolor disk black body model with all parameters tied between instruments, but with a cross-normalization constant to allow for differing responses and flux levels.

### Swift J095520.7+690401, an X-ray transient in the field of M81

This X-ray source was first detected in a _Swift_/XRT observation taken on 2022 April 3 (obsID 00096886002). The target of the Swift observation was M81, a Seyfert 2 galaxy.
The enhanced position given by the online tool was 148.83625\({}^{\circ}\), 69.06692\({}^{\circ}\) (=09h 55m 20.70s, +69\({}^{\circ}\) 04\(\arcmin\) 00.9\(\arcsec\)), with an error radius of 3.2\(\arcsec\)(90% confidence) which appeared to place the source within the galaxy 1.1\(\arcmin\) from the nucleus (Figure 12). We will henceforth refer to this source as Swift J095520.7+690401. No X-ray source had been reported at this position previously, despite multiple _Chandra_, _XMM-Newton_, _NuSTAR_ and _Swift_ observations, the last of which was by _Swift_ 2 days prior to the new X-ray source being detected, albeit in a short (\(<\)200 s) observation. Since this source is close to the bright nucleus of M81, we do not use the automated online tool to generate the spectrum and lightcurve as for the other sources. This is to ensure that the nucleus is properly accounted for. We therefore download the observations and extracted events of the source using the heasoft v6.25 tool xselect(Arnaud, 1996). Source events were selected from a circular region with a 25\(\arcsec\) radius centered on the above coordinates. Background events were also selected from a circular region with a 25\(\arcsec\) radius placed at the same distance from the nucleus as the source in order to sample the PSF at its position. For each source spectrum, we constructed the auxiliary response file (ARF) using xrtmkarf. The relevant response matrix file (RMF) from the CALDB was used. All spectra were grouped with a minimum of 1 count per bin for spectral fitting purposes. We start by simultaneously fitting the _Swift_/XRT spectra from the first 15 observations where the source was detected. We fitted the spectra with an absorbed power-law model with a constant applied to account for the variability of the source from observation to observation. This yielded \(W=78.24\) with 94 DoFs where \(N_{\rm H}\)\(<5.2\times 10^{21}\) cm\({}^{-2}\) and \(\Gamma=2.0^{+1.9}_{-0.6}\). To produce the lightcurve, we stack the individual observations in time bins of 10 days using the tool addascaspec and again group the stacked spectra with a minimum of 1 count per bin. We then fit these with the above model where \(N_{\rm H}\) and \(\Gamma\) are frozen. We plot the flux and implied luminosity in Figure 13. We also obtained a _Chandra_ DDT observation of the source which took place on 2022 June 04 (obsID 24621), with ACIS-S at the aimpoint in FAINT mode. The source was well detected with a count rate of \(3.89\times 10^{-2}\) counts s\({}^{-1}\) in the 10 ks exposure. We extracted the _Chandra_ spectrum with specextract from a circular region, radius 2.0\(\arcsec\) for the source and an annulus radii of 2.5 and 12 \(\arcsec\) for the background. The spectra were grouped with a minimum of 1 count per bin with the tool grppha. We fitted the spectrum with an absorbed power-law model as done for the _Swift_/XRT data which yielded \(W=27.66\) with 35 DoFs where \(N_{\rm H}\)\(<2.8\times 10^{22}\) cm\({}^{-2}\) and \(\Gamma=4.6^{+3.5}_{-2.6}\), consistent with _Swift_/XRT, albeit with large uncertainties. If we fit the stacked _Swift_/XRT data and the _Chandra_ data together we find \(W=109.49\) with 128 DoFs where \(N_{\rm H}\)\(<6.8\times 10^{21}\) cm\({}^{-2}\) and \(\Gamma=2.6^{+1.6}_{-1.1}\). We show these spectra in Figure 14. 
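A joint fit of this kind (a cross-normalization constant times an absorbed power-law, minimized with the C statistic) can be set up in PyXspec roughly as follows. This is a hedged sketch rather than the exact procedure used here: the file names and starting values are placeholders, and the fits in this paper were carried out in xspec v12.11.1.

```python
# A rough PyXspec sketch (requires HEASoft); file names are placeholders.
from xspec import AllData, AllModels, Fit, Model

# Load the grouped Swift/XRT and Chandra spectra as two data groups.
AllData("1:1 swift_src.pi 2:2 chandra_src.pi")
AllData.ignore("**-0.3 10.0-**")      # restrict the fit to 0.3-10 keV

# constant*tbabs*powerlaw: the constant absorbs cross-calibration and
# flux differences between the two instruments.
m1 = Model("constant*tbabs*powerlaw")
m1.constant.factor = 1.0
m1.constant.factor.frozen = True      # reference instrument fixed at unity
m1.TBabs.nH = 0.2                     # starting value, in 10^22 cm^-2
m1.powerlaw.PhoIndex = 2.0

m2 = AllModels(2)                     # same model as applied to the Chandra group
m2.constant.factor.link = ""          # untie the cross-normalization constant
m2.constant.factor.frozen = False

Fit.statMethod = "cstat"              # with a background file loaded, xspec uses the W statistic
Fit.query = "yes"
Fit.perform()

# Model fluxes in the 0.3-10 keV band (for unabsorbed fluxes one would typically
# zero out nH or add a cflux component before calling calcFlux).
AllModels.calcFlux("0.3 10.0")
print(AllData(1).flux)
```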
A diskbb model in place of the powerlaw one produced a worsened fit with \(\Delta\)C=10, where \(N_{\rm H}\)\(<5\times 10^{20}\) cm\({}^{-2}\) and \(T_{\rm in}=1.03^{+0.18}_{-0.16}\) with a normalization, \(N=1.17^{+1.20}_{-0.52}\times 10^{-2}\). The normalization corresponds to \(R_{\rm in}=40\) km which is the innermost stable orbit of a 5 \(M_{\odot}\) black hole when assuming a face on disk. We note that the luminosity estimate would be a factor of 1.8 lower if this model is assumed and integrated over all energies. The deepest upper limit on the flux of Swift J095520.7+690401 prior to its detection with _Swift_/XRT is from _Chandra_ observations which have a sensitivity of \(9.8\times 10^{-16}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.5-8 keV band listed in CSC 2.0. This is 3 orders of magnitude lower than the flux measured above. The deepest upper limit from _XMM-Newton_ observations is \(<1.7\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the 0.2-12 keV band listed in XSA and is from a slew. We also used the _Chandra_ data to acquire a more precise position for Swift J095520.7+690401 in the same way as was done for Swift J130456.1-493158 and Swift J235749.9-323526, which produced nine _Chandra_/_Gaia_ matched sources. The astrometric shifts were \(\delta\)RA\(=-0.85\arcsec\) and \(\delta\)Dec\(=-0.75\arcsec\). After subtracting these shifts, the corrected position is R.A. = 09h 55m 20.873s (148.83697\({}^{\circ}\)), Decl.=+69\({}^{\circ}\) 04\(\arcmin\) 02.53\(\arcsec\) (+69.06737\({}^{\circ}\), J2000), which lies towards the edge of the _Swift_ error circle. The mean residual offset between the corrected _Chandra_ positions and the _Gaia_ positions is 0.33\(\arcsec\), which we use as our positional error. There are no sources catalogued at other wavelengths within the _Chandra_ error circle, with the exception of 3 Hubble Source Catalog (HSC) v3 sources (Whitmore et al., 2016) which we will discuss in Section 4.1. ## 3 What is the nature of these sources? ### Foreground sources in our Galaxy? If these transient X-ray sources are within our Galaxy, the implied peak luminosities would be \(\sim 10^{33}\) erg s\({}^{-1}\) if assuming a distance of 10 kpc. On the timescale-luminosity plot of Polzin et al. (2022) the source types most consistent with this luminosity and timescales of \(10^{2}\) days are classical/dwarf novae, however novae are usually accompanied by optical/UV emission (e.g. Page et al., 2020). This luminosity is comparable to X-ray flares from stars, but the X-ray activity lasted much longer than typical stellar flares which are not normally longer than a few hours (e.g. Pye et al., 2015). Furthermore, the lack of any stellar counterpart at other wavelengths, particularly in the HST observations of 2SXPS J235825.7-323609, make the association with a star in our Galaxy unlikely. These HST observations contained sources down to \(m_{\rm F814W}\sim 26\), which when applying a distance modulus of 15 corresponding to 10 kpc implies \(M_{\rm F814W}\sim 11\). This does not rule out a white dwarf or cool main sequence star, however. ### Sources in the background of these nearby galaxies? 
With the exception of Swift J235749.9-323526 and Swift J095520.7+690401, which appear to be in the disks of NGC 7793 and M81 respectively and are therefore not likely to be in the background of these galaxies, the other sources may be in the background of the galaxy they appear to be associated with. If so, their timescales and fluxes compare well with tidal disruption events (TDEs, e.g. Auchettl et al., 2017). Assuming a typical TDE X-ray luminosity of 10\({}^{44}\) erg s\({}^{-1}\), this puts the sources at \(z\sim 0.1\). At this distance, all known TDE host galaxies (e.g. French et al., 2020) have a \(V\)-band magnitude of 17.5-21.5. The _HST_ observation of 2SXPS J235825.7-323609 would have detected a background galaxy, since the \(F606W\) (wide \(V\) band) observations reach \(m_{\rm F606W}\sim 26\). Furthermore, TDEs are usually also bright in the optical/UV, so the lack of a UVOT counterpart also argues against a TDE. Gamma-ray bursts also have much shorter timescales than these X-ray transients, of the order of hours (e.g. Tarnopolski, 2015). The afterglows of Gamma-ray bursts are longer, but are usually accompanied by emission at other wavelengths.

Figure 12: _Swift_/XRT (left, red is 0.3–1 keV, green is 1–2.5 keV and blue is 2.5–10 keV, smoothed with a 8\({}^{\prime\prime}\) Gaussian), _Swift_/UVOT (middle, \(UVW2\)-band), and PanSTARRS \(g\)-band image (right) of M81, with the position of Swift J095520.7+690401 marked with a green circle with 25\({}^{\prime\prime}\) radius. North is up and East is left.

Figure 13: _Swift_/XRT lightcurve of Swift J095520.7+690401, the transient in M81 (black data points), with the _Chandra_ flux data point in red. The luminosity axis on the right assumes a distance of 3.7 Mpc to the source.

Figure 14: _Swift_/XRT (black) and _Chandra_ (magenta) spectra of Swift J095520.7+690401, the X-ray transient in M81, fitted simultaneously with an absorbed power-law model with all parameters tied between instruments, but with a cross-normalization constant to allow for differing responses and flux levels.

## 4 A New Population of Transient ULXs

If we can rule out foreground sources in our Galaxy, and background sources, we are left with the conclusion that these X-ray sources are associated with the galaxies close in projected separation to them, i.e. NGC 4945, NGC 7793 and M81. At the distances to these galaxies, all at 3-4 Mpc, their peak luminosities are 2-3\(\times\)10\({}^{39}\) erg s\({}^{-1}\). While supernovae can produce these X-ray luminosities on the timescales observed (Chevalier and Fransson, 2017), the lack of optical/UV emission disfavors a supernova origin for these sources. This then makes these sources likely ULXs. While these galaxies are known hosts to other ULXs, these other ULXs are relatively persistent sources, whereas the sources we have identified are transient. With the exception of 2SXPS J235825.7-323609, all our sources are within the \(D_{25}\) isophotal ellipses of their apparent host galaxies, which are traditionally used for the creation of ULX catalogs (e.g. Earnshaw et al., 2019; Walton et al., 2021). 2SXPS J235825.7-323609 is 1.7 times further from the center of NGC 7793 than the semi-major axis of that galaxy's isophotal ellipse and therefore would have been missed by these catalogs. For Swift J235749.9-323526, the _Swift_/XRT position was consistent with the nucleus, which initially made it a candidate TDE, albeit a low luminosity one.
However, the _NuSTAR_ spectrum revealed a turnover in the spectrum of the source that is characteristic of ULXs, and not seen so far in TDEs. Furthermore, the more precise position provided by _Chandra_ ruled out the nucleus, confirming that the source was indeed a ULX. ### A search for the stellar counterparts As mentioned in Section 2, _HST_ has observed the regions of 2SXPS J235825.7-323609, Swift J095520.7+690401 and Swift J235749.9-323526. 2SXPS J235825.7-323609, which is in the outskirts of NGC 7793, was observed as part of the GHOSTS survey (Radburn-Smith et al., 2011) with the ACS and F606W and F814W filters. While none of the sources detected in these observations are within the X-ray positional uncertainty region, several are nearby, the properties of which may yield clues as to the environment of the source. The position of the X-ray source is 7.6\({}^{\prime}\) from the center of NGC 7793, which implies a projected distance of 8.4 kpc assuming a distance of 3.8 Mpc to the galaxy (Figure 6). The _HST_ observations were all taken many years before the X-ray transients, so the photometry is unlikely to be contaminated by the accretion disks. In Figure 15 we present color magnitude diagrams (CMDs) with all stars in the vicinity of the ULX plotted in the black histogram in the background. The green star (or green arrows) on each CMD indicates the star closest to the ULX source position and the blue squares indicate stars within the positional error circle (with the exception of 2SXPS J235825.7-323609 where we show the stars within 10\({}^{\prime\prime}\)). Some stars had non-detections in the HST filters we plot here, and these non-detections are indicated with arrows. In the middle panel, we have two stars plotted with non-detections. The green arrows indicate a star that was not detected in either band plotted, and the star plotted with a horizontal arrow was detected in the F814W band, but not the F606W band. We include isochrones from the Padova stellar models (Marigo et al., 2008; Bressan et al., 2012; Marigo et al., 2017) at different ages, which are listed in the figure legend. The purple lines represent isochrones of various ages with no dust extinction applied (A\({}_{V}\)=0.0) and the orange lines represent isochrones with 1 magnitude of dust extinction (A\({}_{V}\)) applied using the reddening coefficients from Schlafly and Finkbeiner (2011) in the HST filters presented in each CMD. For 2SXPS J235825.7-323609 the closest stars lie on the red giant branch (RGB) of the CMD described in Radburn-Smith et al. (2011). The closest source falls in a region identified by Radburn-Smith et al. (2012) as belonging to old RGB stars with ages of 1-10 Gyr. Our isochrones imply they could be 100-300 Myr or 1-30 Myr with 1 A\({}_{V}\) of extinction. The case is similar for Swift J095520.7+690401. For Swift J235749.9-323526, the closest star lies on the main sequence with an age of 1-10 Myr. The other stars within the positional error circle may have ages of up to 30 Myr. While other transient ULXs have previously been presented in the literature, many, if not all of these sources were discovered serendipitously, and not in real time. As far as we know, this is the first study to carry out a systematic search for transient ULXs outside of our Galaxy in real time, allowing us to conduct a more detailed study with follow up observations, such as with _Swift_ to get well-sampled lightcurves, _Chandra_ to obtain more precise positions, and _NuSTAR_ to obtain a broadband spectrum. 
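As an aside, the projected separation quoted above for 2SXPS J235825.7-323609 (7.6\({}^{\prime}\) from the center of NGC 7793, or 8.4 kpc at 3.8 Mpc) is just the small-angle conversion; a minimal sketch:

```python
import astropy.units as u

separation = 7.6 * u.arcmin   # offset of 2SXPS J235825.7-323609 from the center of NGC 7793
distance = 3.8 * u.Mpc        # adopted distance to the galaxy

# Small-angle approximation: projected size = angle [rad] * distance
projected = (separation.to(u.rad).value * distance).to(u.kpc)
print(projected)              # ~8.4 kpc
```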
We have reported on two transient ULXs in NGC 4945, however two other transient X-ray sources, Suzaku J1305-4931 and Suzaku J1305-4930, with ULX or close-to-ULX luminosities were reported by Isobe et al. (2008) and Ide et al. (2020) from _Suzaku_ observations of the galaxy. The sources were detected at positions of 13h 05m 05.5s, -49\({}^{\circ}\) 31\({}^{\prime\prime}\) 39\({}^{\prime\prime}\) and 13h 05m 17s.0, -49\({}^{\circ}\) 30\({}^{\prime}\) 15\({}^{\prime\prime}\) respectively. Suzaku J1305-4931 was active in 2006 Jan and Suzaku J1305-4930 was active in 2010 July, and lasted less than 6 months. Both _Suzaku_ sources were close to our _Swift_/XRT sources (\(\sim 1^{\prime}\) from either), but closer to the plane of the galaxy. In addition to 2SXPS J235825.7-323609 and Swift J235749.9-323526 in NGC 7793, Quintin et al. (2021) reported the discovery of another transient ULX in that galaxy found while looking for long-term variability of _XMM-Newton_ sources in the 4XMM-DR9 catalogue. The source, which they name NGC 7793 ULX-4, was active for 8 months from 2012 May-Nov. The ULX had a position of 23h 57m 47s.9 -32\({}^{\circ}\) 34\({}^{\prime}\) 57\({}^{\prime\prime}\) which is close to the center of the galaxy, \(\sim 8^{\prime}\) from 2SXPS J235825.7-323609 and \(\sim 40^{\prime\prime}\) from Swift J235749.9-323526. They also reported a pulsation signal at 2.52 Hz from the _XMM-Newton_ data making it the second ULX pulsar in that galaxy. Examples of other transient ULXs presented in the literature are ones in M31 (Middleton et al., 2012, 2013), M51 (Brightman et al., 2020), M83 (Soria et al., 2012), M86 (van Haaften et al., 2019), NGC 55 (Robba et al., 2022), NGC 300 (Carpano et al., 2018), NGC 821 (Dage et al., 2021), NGC 925 (Earnshaw et al., 2020), NGC 1365 (Soria et al., 2007), NGC 3628 (Strickland et al., 2001), NGC 4157 (Dage et al., 2021), NGC 5907 (Pintore et al., 2018), NGC 6946 (Earnshaw et al., 2019) and NGC 7090 (Liu et al., 2019; Walton et al., 2021). One of the best studied transient ULXs is NGC 300 ULX1, which was classified as a supernova imposter in 2010, with an observed X-ray luminosity of 5\(\times 10^{38}\) erg s\({}^{-1}\)(Binder et al., 2011). The source was observed at lower fluxes in observations made in 2014 (Binder et al., 2016) but then reached ULX luminosities during observations made in 2016 with \(L_{\rm X}\)\(\sim 5\times 10^{39}\) erg s\({}^{-1}\) during which pulsations were detected (Carpano et al., 2018) identifying it as a ULX pulsar powered by a neutron star. Regular _Swift_ monitoring of the source in 2018 revealed that the source initially persisted at these luminosities but then went into decline. Spectral analysis showed a hard spectrum. Another source, Swift J0243.6+6124, was an X-ray transient found in our own Galaxy, identified by _Swift_/BAT (Cenko et al., 2017) and with no previously reported activity. The source reached an X-ray luminosity of \(2\times 10^{39}\) erg s\({}^{-1}\) in a period of around 30 days, before steadily declining over a period of \(\sim 100\) days (Wilson-Hodge et al., 2018). The detection of pulsations also identified it as a neutron star accretor (Kennea et al., 2017) and Kong et al. (2022) reported the detection of a cyclotron resonance scattering feature at 146 keV with HXMT, allowing for the estimation of the magnetic field strength to be \(\sim 1.6\times 10^{13}\) G. 
The source exhibited rebrightenings in the X-ray after the decline, albeit to peak luminosities around 2 orders of magnitude less than the initial outburst (van den Eijnden et al., 2019). The companion star is a known Be type. RX J0209.6-7427 is a Be X-ray binary in the SMC and also briefly became a ULX pulsar in 2019 (Chandra et al., 2020; Vasilopoulos et al., 2020; Coe et al., 2020). The spin period was 9.3 s and the source reached a luminosity of 1-2\(\times 10^{39}\) erg s\({}^{-1}\), similar to the super-Eddington outburst of SMC X-3 (Tsygankov et al., 2017). Karino (2022) discussed the possibility that a large number of ULXs are formed of Be HMXBs. We also note that the peak luminosities of these sources, disk temperatures, and implied inner disk radii are similar to the brightest outbursts from Galactic X-ray binaries such as GRO J1655-40, GX 339-4 and XTE J1550-564 when considering fits with the disk model.

Figure 15: Color-magnitude diagram of HSC sources in and around 2SXPS J235825.7-323609 (left), Swift J095520.7+690401 (middle), and Swift J235749.9-323526 (right). The closest sources are shown with a green star, and the other sources within the positional error circle are shown with blue squares. Arrows indicate upper limits. The lines represent stellar isochrones showing where stars of a certain age are expected to lie.

### Are these new transient ULXs also Be X-ray binaries?

As described above, many well known ULX transients are Be X-ray binaries in outburst, therefore it is reasonable to ask if these new systems are also Be XRBs. The peak luminosities of 2-3\(\times 10^{39}\) erg s\({}^{-1}\) are consistent with the type II outbursts from these sources; however, Be XRBs typically have longer rise times, up to 50 days from detection to peak, whereas our ULX transients have rise times of 10 days or less where the lightcurves are well sampled (Reig and Nespoli, 2013). Furthermore, Be stars are young and massive, at odds with the older stellar population that 2SXPS J235825.7-323609 and Swift J095520.7+690401 are found in. For Swift J235749.9-323526, the potential stellar counterparts are young and massive, therefore we cannot rule out a Be star in this case.

### Modeling the lightcurves with a disk-instability model

A model was presented in Hameury and Lasota (2020) to explain the transient ULX phenomenon with a disk instability model previously used to explain dwarf novae and other X-ray transients. There, the super-Eddington outbursts can be explained by thermal-viscous instabilities in large unstable disks with outer radii greater than 10\({}^{7}\) km. They showed that this model can successfully reproduce the lightcurve of the transient ULX M51 XT-1 presented in Brightman et al. (2020), with derived accretion rates of 6-15\(\times 10^{19}\) g s\({}^{-1}\) depending on the mass of the accretor. We fit the observed transient ULX lightcurves using these models. Hameury and Lasota (2020) provide analytical formulas that accurately approximate the observable properties of the outbursts. According to this model, the accretion disk in outburst is brought into a fully hot state, then the disk mass decreases until the surface density at the outer edge of the disk falls below the critical value under which quasi-stationary hot states can no longer exist. This results in a cooling wave, propagating into the disk from its outer edge, bringing the whole disk back into a low state.
When the disk is fully in the hot state, it is close to steady state with a mass accretion rate that is constant with radius and larger than the mass transfer rate from the secondary. Hameury and Lasota (2020), following Ritter and King (2001) have shown that during this phase, the accretion rate does not decrease exponentially, but according to: \[\dot{M}=\dot{M}_{\rm max}\left[1+\frac{t}{t_{0}}\right]^{-10/3}, \tag{1}\] where \(\dot{M}=\dot{M}_{\rm max}\) is the initial mass accretion rate and \(t_{0}\) is a characteristic timescale which depends on \(\dot{M}_{\rm max}\) and the disk parameters (size, viscosity): \[t_{0}=3.19\alpha_{0.2}^{-4/5}M_{1}^{1/4}r_{12}^{5/4}\dot{M}_{\rm max,19}^{-3/ 10}\ \rm yr, \tag{2}\] where \(\dot{M}_{\rm max,19}\) is \(\dot{M}_{\rm max}\) measured in units of 10\({}^{19}\) g s\({}^{-1}\), \(r_{12}\) is the disk size in units of 10\({}^{12}\) cm, \(M_{1}\) the accretor mass in solar units and \(\alpha_{0.2}\) the Shakura-Sunyaev viscosity parameter normalized to 0.2. Therefore, for given binary parameters and disk viscosity, the initial time evolution of the disk depends only on one free parameter, the initial accretion rate. Conversely, the determination of \(t_{0}\) from observations (at time \(t=t_{0}\), the mass accretion rate is one tenth of its initial value) enables one to determine the disk size. This phase lasts until the accretion rate falls below the critical rate \(M_{\rm crit}^{+}\) at which the hot solution can no longer exist at the outer radius. Using Eqs. (1), (2) and the fits for \(M_{\rm crit}^{+}\) provided by Hameury and Lasota (2020) in their Eq. (9), one finds that the duration of this phase is: \[\Delta t_{1}=t_{0}[1.38t_{0}^{-0.50}M_{1}^{0.25}\dot{M}_{19,\rm max}^{0.15}f_{ \rm irr}^{0.15}\alpha_{0.2}^{-0.4}-1] \tag{3}\] and is followed by a rapid decay phase during which the accretion rate drops steeply; the duration of this final phase is \[\Delta t_{2}=0.74\ M_{1}^{0.37}f_{\rm irr}^{0.15}\alpha_{0.2}^{-0.8}r_{12}^{0.6 2}\ \rm yr. \tag{4}\] where \(f_{\rm irr}\sim 1\) is a parameter describing the effect of irradiation on the disk. The above equations describe the time evolution of the mass accretion rate as a function of only four parameters: the initial (i.e. peak) mass accretion rate, \(t_{0}\), \(M_{1}\) and \(\alpha_{0.2}\) to which one could add \(f_{\rm irr}\) which enters only via \(f_{\rm irr}^{0.15}\). The parameters \(\Psi\) and \(\xi\), as defined in Hameury & Lasota (2020), were taken equal to 1.3 and 6.3 respectively, since, as discussed in Hameury & Lasota (2020), these values provide the best agreement between the numerical results and their analytical approximations. \(\Psi\) accounts for deviations of the opacities from the Kramers' law and \(\xi\) is the ratio between the rate at which the inner, hot disk mass decreases and the accretion rate at the inner edge; it is larger than unity because of the strong mass outflow at the cooling front. In order to compare the model predictions with observations, one must convert mass accretion rates into luminosities; we used \(L_{\rm X}=(1+\ln\dot{m})[1+\dot{m}^{2}/b]L_{\rm Edd}\) for luminosities larger than the Eddington value, where \(b\) is a constant which differs from, but is related to the beaming parameter (see King, 2009). This relation differs somewhat from the formula derived by King (2009), valid only for strong beaming; we modified it in order to account for a smooth transition with the case where beaming is negligible, as explained in Hameury & Lasota (2020). 
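A minimal numerical sketch of Eqs. (1)-(4) and the luminosity conversion is given below. The parameter values are illustrative rather than the fitted values of Table 1, and two choices are assumptions made only for this sketch: the Eddington accretion rate is taken as \(\dot{M}_{\rm Edd}=L_{\rm Edd}/(0.1c^{2})\), and \(L_{\rm X}=\dot{m}L_{\rm Edd}\) is used below the Eddington rate.

```python
import numpy as np

LEDD_PER_MSUN = 1.26e38   # erg/s, Eddington luminosity per solar mass
C = 3.0e10                # cm/s

def t0_yr(alpha02, M1, r12, mdot_max19):
    """Eq. (2): characteristic decay timescale in years."""
    return 3.19 * alpha02**-0.8 * M1**0.25 * r12**1.25 * mdot_max19**-0.3

def mdot19(t_yr, mdot_max19, t0):
    """Eq. (1): accretion rate (10^19 g/s) through the hot disk at time t."""
    return mdot_max19 * (1.0 + t_yr / t0)**(-10.0 / 3.0)

def delta_t1_yr(t0, M1, mdot_max19, alpha02, f_irr=1.0):
    """Eq. (3): duration of the slow, fully-hot decay phase (years)."""
    return t0 * (1.38 * t0**-0.50 * M1**0.25 * mdot_max19**0.15
                 * f_irr**0.15 * alpha02**-0.4 - 1.0)

def delta_t2_yr(M1, r12, alpha02, f_irr=1.0):
    """Eq. (4): duration of the rapid final decay (years)."""
    return 0.74 * M1**0.37 * f_irr**0.15 * alpha02**-0.8 * r12**0.62

def lx(mdot_19, M1, b=73.0, eta=0.1):
    """Accretion rate -> X-ray luminosity, using the beamed super-Eddington relation
    quoted in the text; below Eddington we simply assume L = mdot * L_Edd."""
    l_edd = LEDD_PER_MSUN * M1
    mdot_edd = l_edd / (eta * C**2)            # g/s, assumed efficiency eta
    m = mdot_19 * 1e19 / mdot_edd              # Eddington-scaled accretion rate
    return np.where(m > 1.0,
                    (1.0 + np.log(m)) * (1.0 + m**2 / b) * l_edd,
                    m * l_edd)

# Illustrative parameters (not the Table 1 fits): 10 Msun accretor, alpha = 0.4
# (alpha_0.2 = 2), disk radius 10^11 cm, peak accretion rate 3e19 g/s.
M1, alpha02, r12, mdot_max19 = 10.0, 2.0, 0.1, 3.0
t0 = t0_yr(alpha02, M1, r12, mdot_max19)
t = np.linspace(0.0, 400.0, 401) / 365.25      # years since the peak
lightcurve = lx(mdot19(t, mdot_max19, t0), M1)

print(f"t0 = {t0 * 365.25:.0f} d, "
      f"Delta t1 = {delta_t1_yr(t0, M1, mdot_max19, alpha02) * 365.25:.0f} d, "
      f"Delta t2 = {delta_t2_yr(M1, r12, alpha02) * 365.25:.0f} d")
print(f"peak L_X = {lightcurve[0]:.2e} erg/s")  # ~2.4e39 erg/s for these parameters
```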
If the final decay is not observed, one cannot determine the viscosity parameter, since the disk radius, which enters Eq. (2), is not known a priori. One can nevertheless obtain upper limits on \(\alpha_{0.2}\) because the observed duration of the outbursts sets a lower limit on \(\Delta t_{1}\). Moreover, one expects significant degeneracies between \(t_{0}\), \(\dot{M}_{\rm max}\) and \(M_{1}\) when the light curve does not deviate much from an exponential (i.e. when \(t\) never becomes larger than \(t_{0}\)), which is defined by two parameters only. On the other hand, useful constraints can be obtained when the final decay is observed, and therefore the duration of the hot phase is measured. Because of the rapid drop off during the final decay, one does not expect to get significant constraints from the shape of this phase, but one at least gets a determination of \(\Delta t_{1}\), and some constraint, usually in the form of an upper limit, on \(\Delta t_{2}\). This basically sets three strong constraints on four parameters, implying that degeneracies should still exist that preclude the simultaneous determination of \(M_{1}\) and \(\alpha_{0.2}\) unless observational uncertainties are very low. Table 1 shows the results of the fitting procedure, both when using the powerlaw model to convert from count rate to flux, and the diskbb model. As expected, the viscosity parameter \(\alpha\) can be determined only when the final decay has been observed, i.e. in the case of 2SXPS J235825.7-323609 and Swift J130511.5-422933. In this case, the value of \(\alpha\) we obtain is large, and typically much larger than the value \(\alpha\sim 0.1-0.2\) determined when fitting the light curves of cataclysmic variables (Smak, 1999; Kotko & Lasota, 2012). This should not come as a surprise since Tetarenko et al. (2018) found that much larger values of \(\alpha\), in the range 0.2 - 1, are obtained when considering low-mass X-ray binaries; they attributed this large value of the viscosity parameter to the existence of strong winds in these systems that carry away matter and also angular momentum. We take \(b=73\), as in King (2009), but because of the relatively large size of the error bars, the fits are not sensitive to the value of the beaming parameter. For M51 XT-1, with two data points with relatively small error bars, Hameury & Lasota (2020) were able to exclude \(b=20\), but found that \(b=200\) gives an acceptable fit. We also note that, again as expected, the primary mass cannot be determined from fitting the observed light curves. The fits are equally good for neutron star and for 10 M\({}_{\odot}\) black hole accretors. The only difference is that the lower \(M_{1}\), the shorter the duration \(\Delta t_{1}\) of the main outburst phase, and long outbursts may require unrealistically low values of \(\alpha\) in the neutron star case. The maximum accretion rate is in all cases close to or larger than the Eddington limit, except in the case of Swift J095520.7+690401 for large primary masses. The fit quality, as measured by the \(\chi^{2}\) compared to the number of degrees of freedom, is quite good in all cases, except for 2SXPS J235825.7-323609. The reason for this is the existence of a low data point at \(t\sim 50\) d, and, to a lesser extent, a slightly steeper final decay than predicted by the model. Although the \(\chi^{2}\) value is good for Swift J130511.5-422933, the observed drop between \(t=250\) d and \(t=284\) d is too sharp to be accounted for by the model.
The detection at \(t\sim 284\) d is somewhat marginal and the estimate of the X-ray luminosity becomes questionable because of uncertainties in the spectral model; it is however unlikely that the current model can reproduce the sharp cutoff observed in this source. This would mean that the cooling front propagates faster than expected when the propagation is controlled by irradiation with a constant efficiency. Dropping this hypothesis might solve this problem, at the expense of a new and uncontrolled parameter; given other oversimplifying assumptions of the model, notably about winds, this would be of limited interest. One should note that similar problems are encountered when modeling outbursts of sub-Eddington X-ray transients (Tetarenko et al., 2018). Although the \(\chi^{2}\) value is also good for Swift J235749.9-323526, the model does not reproduce what appears to be a plateau or rebrightening at \(t\sim 50\) d, which is unlikely to be due to changes in the accretion disk. The short drop at \(t\sim 10\) d is also not accounted for by the model, and the _NuSTAR_ data point has not been included in the fit. We show the fits to all sources in Figure 16. ### Implications for the wider ULX population We summarize the properties of the sources in Table 2. We find that the average \(N_{\rm H}\)\(=5.7\times 10^{21}\) cm\({}^{-2}\) with a standard deviation of \(3.8\times 10^{21}\) cm\({}^{-2}\). The average \(\Gamma\) is 2.3 with a standard deviation of 0.4. This is consistent, within the standard deviations, with the sample of persistent sources from Gladstone et al. (2009), where the average \(N_{\rm H}\)\(=2.8\times 10^{21}\) cm\({}^{-2}\) with a standard deviation of \(1.7\times 10^{21}\) cm\({}^{-2}\) and the average \(\Gamma\) is 2.3 with a standard deviation of 0.5. Therefore we do not see any significant spectral differences between our transient sources and their persistent counterparts. For Swift J235749.9-323526, where we obtained _NuSTAR_ data to extend the spectral coverage to higher energies, the diskbb model was preferred over the power-law one, as is typical of ULX spectra, as shown by Walton et al. (2018). Again, the parameters of this model were consistent with those seen in the persistent sources. We have found that two of our sources appear to lie in a population of old RGB stars. Interestingly, Wiktorowicz et al. (2017) predicted that the majority of neutron star ULXs have low-mass (\(<\)1.5 \(M_{\odot}\)), red giant donors. According to Wiktorowicz et al. (2017), red-giant donor NS-ULXs form at late times, and start with the primary becoming an oxygen-neon white dwarf. When the secondary becomes a red giant and fills its Roche lobe, the primary accretes additional mass and forms a NS due to an accretion induced collapse (AIC). Following this, the red giant refills its Roche lobe and a short (\(0.1<\Delta t<0.2\) Myr) ULX phase occurs. We have not unambiguously identified the donor stars of these sources as red giants, and neither do we know they are neutron stars, but these properties do match well, although the timescales are much shorter than suggested by Wiktorowicz et al. (2017). It has been suggested that fast radio bursts (FRBs) may be associated with ULXs (Sridhar et al., 2021). In this model, the accreting compact object is a black hole or a non-magnetar neutron star as in King (2009). One FRB with an intriguing similarity to our transient ULXs is FRB 20200120E, which was found in the outskirts of M81 (Bhardwaj et al., 2021).
The FRB was localized to a globular cluster with an old stellar population which challenged the magnetar models that invoke young magnetars formed in a core-collapse supernova but would be consistent with the (Sridhar et al., 2021) scenario. AIC of a white dwarf was also suggested as a possible formation channel (Kirsten et al., 2022). However, to date, no FRB has been reported at the position of a ULX. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{powerlaw model} & \multicolumn{4}{c}{diskbb model} \\ Source name & \(M_{1}\) & \(t_{0}\) & \(\dot{M}_{\rm max}/\dot{M}_{\rm Edd}\) & \(\alpha\) & \(\chi^{2}\)/DOF & \(t_{0}\) & \(\dot{M}_{\rm max}/\dot{M}_{\rm Edd}\) & \(\alpha\) & \(\chi^{2}\)/DOF \\ & & (days) & & & & (days) & & & \\ \hline Swift J130456.1-493158 & 1.4 & 159.8 & 18.67 & \(<1.4\) & 0.90/6 & 78.4 & 8.97 & \(<1\) & 0.61/6 \\ & 10 & 74.9 & 4.23 & \(<7\) & 0.86/6 & 96.0 & 0.89 & \(<3\) & 0.79/6 \\ Swift J130511.5-422933 & 1.4 & 395.2 & 20.89 & 0.37 & 19.06/21 & 309.6 & 14.93 & 0.33 & 15.55/21 \\ & 10 & 164.2 & 5.33 & 1.39 & 18.52/21 & 199.3 & 2.20 & 1.06 & 18.51/21 \\ 2SXPS J235825.7-323609 & 1.4 & 365.0 & 13.75 & 0.35 & 15.78/12 & 271.6 & 10.07 & 0.33 & 12.34/12 \\ & 10 & 335.3 & 1.56 & 1.06 & 18.02/12 & 322.9 & 0.96 & 0.90 & 16.63/12 \\ Swift J235749.9-323526 & 1.4 & 216.4 & 19.49 & \(<1\) & 10.71/11 & 184.9 & 15.67 & \(<0.9\) & 9.62/12 \\ & 10 & 89.4 & 4.72 & \(<6\) & 14.72/11 & 102.5 & 2.45 & \(<4\) & 14.60/12 \\ Swift J095520.7+690401 & 1.4 & 61.6 & 9.98 & \(<0.5\) & 4.59/5 & 65.6 & 4.83 & \(<0.4\) & 5.80/5 \\ & 10 & 81.2 & 0.90 & \(<1.5\) & 5.36/5 & 81.2 & 0.49 & \(<1\) & 5.36/5 \\ \hline \hline \end{tabular} \end{table} Table 1: Fits of the outburst light curves Figure 16: Lightcurves of all the transients presented here fitted with the disk instability model presented by Hameury & Lasota (2020). Luminosities are from the powerlaw spectral model. Upper limits are omitted in the plot for clarity. The solid lines represent the model assuming a 1.4 \(M_{\odot}\) accretor, whereas the dashed lines represents a 10 \(M_{\odot}\) accretor. In addition to the 5 transient ULXs we have presented here, 3 further transient ULXs were serendipitously discovered in the same galaxies from previous observations implying that the rates of these sources is potentially high. While our sample of 5 sources is small, we next attempt to estimate the rates of transient ULXs in these galaxies, and compare these to their persistent counterparts. For NGC 4945 we found 2 transient ULXs in searches of observations over 3.0 years from 2019 Dec to 2022 Dec, implying a rate of 0.7\(\pm\)0.5 year\({}^{-1}\). Using the same technique to identify the transient sources, and in the same period, we found 4 persistent sources classified as ULXs identified in SIMBAD as [CHP2004] J130518.5-492824, [BWC2008] U31, [CHP2004] J130521.2-492741 and [BWC2008] U32. For NGC 7793 we found 2 transient ULX in searches of observations over 5.0 years from 2017 Dec to 2022 Dec, implying a rate of 0.4\(\pm\)0.3 year\({}^{-1}\). In the same period, we found 2 persistent sources classified as ULXs, P9 and P13. For M81 we found 1 transient ULX in searches of observations over 9 months from 2022 Apr to 2022 Dec, implying a rate of 1.3 year\({}^{-1}\). In the same period, we found 1 persistent source classified as a ULX, [LB2005] NGC 3031 ULX2. 
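The transient rates quoted above follow from simple Poisson counting; a minimal sketch, with the numbers taken from the text and \(\sqrt{N}\) uncertainties assumed, is:

```python
# Transient-ULX rates from the searches described above, assuming
# Poisson (sqrt(N)) uncertainties on the number of detections.
searches = {"NGC 4945": (2, 3.0),   # (transients found, years searched)
            "NGC 7793": (2, 5.0),
            "M81":      (1, 0.75)}
for galaxy, (n, years) in searches.items():
    rate, err = n / years, n**0.5 / years
    print(f"{galaxy}: {rate:.1f} +/- {err:.1f} transient ULXs per year")
```

This reproduces the 0.7\(\pm\)0.5, 0.4\(\pm\)0.3 and 1.3 year\({}^{-1}\) values quoted in the text.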
If we compare the number of transient ULXs in any one snapshot to the number of persistent ULXs, as would be done when computing the X-ray luminosity function of a galaxy (e.g. Lehmer et al., 2019), the persistent sources would dominate the high end. However, if we take the total number of sources that have exceeded 10\({}^{39}\) erg s\({}^{-1}\) over the time period of our searches, the transient ULX numbers roughly equal those of the persistent ones. Further, if we integrate the derived transient ULX rates over the timescales during which the persistent sources have been detected, several decades, then the transient ULX numbers would dominate the persistent ones. In other words, the number of systems that exhibit ULX luminosities in each of these galaxies is dominated by transients rather than persistent sources. Since we have only considered galaxies where a transient ULX has been identified in our searches, we cannot extend this conclusion to all galaxies. The rates are also biased by the _Swift_ targeting and our incomplete search of observations. A more systematic search using eROSITA data could reveal the true rate. However, we note that the 6-month scanning pattern of eROSITA means some of the sources we identified could be missed. While the duration of the transient sources studied here is well determined, the duration of the persistent sources is not well known. However, evidence points to their far longer duration. For example, the collisionally ionized bubbles surrounding Holmberg IX X-1, NGC 1313 X-2, NGC 7793 S26 and NGC 5585 ULX have estimated dynamical ages of \(\sim 10^{5}\) years (Pakull and Mirioni, 2002; Pakull et al., 2010; Moon et al., 2011; Weng et al., 2014; Soria et al., 2021). ## 5 Summary and Conclusions We have presented results on five newly found X-ray transients in the fields of nearby galaxies identified in a search of _Swift_/XRT observations. Our results are as follows: * The timescales (60-400 days), fluxes (\(\sim 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\)), and lack of bright optical/UV counterparts argue against foreground sources in our Galaxy such as stars or X-ray binaries, and more distant sources such as tidal disruption events or Gamma-ray bursts. * These X-ray transients appear to be ultraluminous X-ray sources associated with the nearby galaxies of NGC 4945, NGC 7793 and M81 with peak luminosities of 2-3\(\times 10^{39}\) erg s\({}^{-1}\).
* For 4 out of 5 sources, modeling the lightcurves of these transients with the disk instability model of Hameury and Lasota (2020) implies that the mass accretion rate through the disk is greater than the Eddington rate regardless of whether a 1.4 \(M_{\odot}\) neutron star or 10 \(M_{\odot}\) black hole is assumed. * For the three sources where _HST_ imaging enables a search for a stellar counterpart, we plotted CMDs with stellar isochrones, which imply varying ages of the potential stellar counterparts. * The rate of transient ULXs for these three galaxies is in the range of 0.4-1.3 year\({}^{-1}\). While persistent ULXs dominate the high end of galaxy luminosity functions, the number of systems that produce ULX luminosities is likely dominated by transient sources. * The potential dominance of transient ULXs may imply that results on ULXs may be biased by studies of persistent sources. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Source name & Host galaxy & \multicolumn{2}{c}{Best position} & uncertainty & \(N_{\rm H}\) & \(\Gamma\) & \(L_{\rm X}\) (peak) \\ & & RA (\({}^{\circ}\)) & Dec (\({}^{\circ}\)) & (\({}^{\prime\prime}\)) & (cm\({}^{-2}\)) & & ( erg s\({}^{-1}\)) \\ \hline Swift J130456.1-493158 & NGC 4945 & 196.23479 & -49.53324 & 0.53 & \(1.1^{+0.5}_{-0.4}\times 10^{22}\) & \(2.8^{+0.6}_{-0.5}\) & \(2\times 10^{39}\) \\ Swift J130511.5-492933 & NGC 4945 & 196.2985 & -49.4928 & 2.4 & \(6.7^{+2.1}_{-1.7}\times 10^{21}\) & \(2.2\pm 0.3\) & \(2\times 10^{39}\) \\ 2SXPS J235825.7-323609 & NGC 7793 & 359.60828 & -32.60291 & 1.0 & \(2.0^{+1.2}_{-1.0}\times 10^{21}\) & \(2.0\pm 0.3\) & \(3\times 10^{39}\) \\ Swift J235749.9-323526 & NGC 7793 & 359.45793 & -32.59110 & 0.57 & \(2.1^{+0.9}_{-0.8}\times 10^{21}\) & \(2.0\pm 0.3\) & \(3\times 10^{39}\) \\ Swift J095520.7+690401 & M81 & 148.83697 & +69.06737 & 0.33 & \(<6.8\times 10^{21}\) & \(2.6^{+1.6}_{-1.1}\) & \(2\times 10^{39}\) \\ \hline \end{tabular} \end{table} Table 2: Summary of source properties Facilities: Swift (XRT, UVOT), CXO, NuSTAR, XMM, VLA. Software: CIAO (Fruscione et al., 2006), XSPEC (Arnaud, 1996). We thank the anonymous referee for the careful review of our manuscript, and their helpful comments which improved its quality. We wish to thank the _Swift_ PI, Brad Cenko, for approving the target of opportunity requests we made to observe these transient sources, as well as the rest of the _Swift_ team for carrying them out. We also acknowledge the use of public data from the _Swift_ data archive. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. We also wish to thank Patrick Slane, Director of the Chandra X-ray Center, for approving the DDT requests to observe Swift J130456.1-493158, Swift J095520.7+690401 and Swift J235749.9-323526, and the _Chandra_ team for carrying out the observations. In addition we wish to thank the _NuSTAR_ PI, Fiona Harrison, for approving the DDT request we made to observe Swift J235749.9-323526 as well as the _NuSTAR_ SOC for carrying out the observation. This work was also supported under NASA Contract No. NNG08FD60C. _NuSTAR_ is a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by the National Aeronautics and Space Administration. This research has made use of the NuSTAR Data Analysis Software (NuSTAR-DAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
This research has made use of data obtained from the Chandra Source Catalog, provided by the Chandra X-ray Center (CXC) as part of the Chandra Data Archive. This work was also based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (STECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). The Hubble Source Catalog can be accessed via DOI, and the specific observations used can be accessed via DOI. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. JMC's research was supported by an appointment to the NASA Postdoctoral Program at the NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. MH acknowledges support from an ESO fellowship and JPL
2309.00658
Contrasting the Implicit Method in Incoherent Lagrangian and the Correction Map Method in Hamiltonian
The equations of motion for a Lagrangian mainly refer to the acceleration equations, which can be obtained by the Euler--Lagrange equations. In the post-Newtonian Lagrangian form of general relativity, the Lagrangian systems can only maintain a certain post-Newtonian order and are incoherent Lagrangians since the higher-order terms are omitted. This truncation can cause some changes in the constant of motion. However, in celestial mechanics, Hamiltonians are more commonly used than Lagrangians. The conversion from Lagrangian to Hamiltonian can be achieved through the Legendre transformation. The coordinate momentum separable Hamiltonian can be computed by the symplectic algorithm, whereas the inseparable Hamiltonian can be used to compute the evolution of motion by the phase-space expansion method. Our recent work involves the design of a multi-factor correction map for the phase-space expansion method, known as the correction map method. In this paper, we compare the performance of the implicit algorithm in post-Newtonian Lagrangians and the correction map method in post-Newtonian Hamiltonians. Specifically, we investigate the extent to which both methods can uphold invariance of the motion's constants, such as energy conservation and angular momentum preservation. Ultimately, the results of numerical simulations demonstrate the superior performance of the correction map method, particularly with respect to angular momentum conservation.
Junjie Luo, Jie Feng, Hong-Hao Zhang, Weipeng Lin
2023-09-01T10:50:23Z
http://arxiv.org/abs/2309.00658v1
Contrasting the Implicit Method in Incoherent Lagrangian and the Correction Map Method in Hamiltonian ###### Abstract The equations of motion for a Lagrangian mainly refer to the acceleration equations, which can be obtained by the Euler-Lagrange equations. In the post-Newtonian Lagrangian form of general relativity, the Lagrangian systems can only maintain a certain post-Newtonian order and are incoherent Lagrangians since the higher-order terms are omitted. This truncation can cause some changes in the constant of motion. However, in celestial mechanics, Hamiltonians are more commonly used than Lagrangians. The conversion from Lagrangiano Hamiltonian can be achieved through the Legendre transformation. The coordinate momentum separable Hamiltonian can be computed by the symplectic algorithm, whereas the inseparable Hamiltonian can be used to compute the evolution of motion by the phase-space expansion method. Our recent work involves the design of a multi-factor correction map for the phase-space expansion method, known as the correction map method. In this paper, we compare the performance of the implicit algorithm in post-Newtonian Lagrangians and the correction map method in post-Newtonian Hamiltonians. Specifically, we investigate the extent to which both methods can uphold invariance of the motion's constants, such as energy conservation and angular momentum preservation. Ultimately, the results of numerical simulations demonstrate the superior performance of the correction map method, particularly with respect to angular momentum conservation. DOI:[https://doi.org/10.3390/sym15071401](https://doi.org/10.3390/sym15071401) Published by Symmetry 1 School of Physics, Sun Yat-sen University, Guangzhou 510275, China 2 School of Science, Sun Yat-sen University, Shenzhen 518107, China 3 School of Physics and Astronomy, Sun Yat-sen University, Zhuhai 519000, China ## 1 Introduction Compact binary systems, composed of neutron stars or black holes, etc., are of immense interest to experimental and theoretical researchers as sources of gravitational waves for broadband laser interferometers. The temporal progression of binary systems encompassing compact objects can be elucidated through the implementation of Einstein's equations of general relativity. Explicit symplectic integrators are supposed to be the ideal candidate with several benefits for the numerical simulations in these systems. They are designed to preserve the symplectic structure, which guarantees the precision and stability of numerical solutions over long time intervals. However, Einstein's equations of general relativity describe the motion of strong gravitational systems for which exact solutions are very difficult to obtain, but there have been some efforts to address this issue. For example, Xin Wu [X. Wu(2022)] developed the explicit symplectic integrators for Hamiltonian systems in curved spacetimes, particularly for black hole spacetimes. The papers [Zhou(2022), Zhou(2023)] discuss the motion of charged particles around the Schwarzschild black hole with an external magnetic field. Therefore, the orbital motion described in this paper is a two-body problem, involving the motion of a charged particle around a Schwarzschild black hole. Ying wang [Y. Wang(2021a), Y. Wang(2021b)] constructs the explicit symplectic integrators in general relativity, specifically for the Hamiltonian of Schwarzschild spacetime geometry. 
The integrators are useful for the long-term integration of N-body Hamiltonian systems and modeling the chaotic motion of charged particles around a black hole with an external magnetic field. Xin Wu [X. Wu(2021)] discusses the construction of explicit symplectic integrators for Kerr black holes in general relativity. The authors introduce a time transformation function to the Hamiltonian of Kerr geometry to obtain a time-transformed Hamiltonian consisting of five splitting parts whose analytical solutions are explicit functions of the new coordinate time. Wei Sun [W. Sun(2021)] proposes an explicit symplectic integrator for the Kerr spacetime geometry to simulate the nonintegrable dynamics of charged particles moving around the Kerr black hole in an external magnetic field. The algorithm shows good numerical performance and is used to study the dynamics of order and chaos of charged particles. Despite the efficacy of this method, another useful and well-developed approach is the form of the post-Newtonian (PN) Lagrangian or Hamiltonian [G. Pan(2021)] approximation [Blanchet & Iyer(2003), Tanay(2021), G. Pan(2021)]. Arun et al. [Arun(2008)] investigated inspiralling compact binaries in quasi-elliptical orbits and provided a comprehensive analysis of the third post-Newtonian energy flux. Their study focused on understanding the energy loss due to gravitational radiation and its implications for compact binary systems. Tessmer and Schafer [Tessmer(2014)] studied the eccentric motion of spinning compact binaries. They examined the dynamics of these systems with non-circular orbits, considering the effects of spin and exploring the consequences of eccentricity on the gravitational wave signals emitted during inspiral. Hinder et al. [Hinder(2018)] developed an eccentric binary black hole waveform model by combining numerical relativity simulations with post-Newtonian theory. Their work aimed to accurately describe the complete inspiral-merger-ringdown phase of eccentric binary black hole systems, providing insights into the gravitational waveforms emitted during these events. Chattaraj et al. [Chattaraj(2022)] conducted high-accuracy comparisons between post-Newtonian theory and numerical relativity simulations, specifically focusing on eccentric binary black holes. They investigated the influence of higher modes on the waveforms and developed a model that incorporates eccentricity and accurately describes the inspiral, merger, and ringdown phases. Chowdhury and Khlopov [Chowdhury(2022)] studied an eccentric binary black hole system within the framework of post-Newtonian theory. Their research aimed to understand the behavior of binary black holes with non-circular orbits, providing insights into the dynamics and gravitational wave emissions of eccentric binary systems. These approximations provide high-precision theoretical templates of gravitational waveforms, although their higher-order terms are truncated, which affects their equivalence [X. Wu(2015a), X. Wu(2015b), L. Huang(2016)]. The choice of approximation and the selection of the algorithm becomes crucial to ensuring an accurate and effective description of the trajectory evolution of compact binaries systems and matching corresponding gravitational waveforms. The PN Lagrangian equations of motion are derived from the Euler-Lagrangian equations of a PN Lagrangian formulation, denoted as \(L(\mathbf{r},\mathbf{v})\). 
By calculating the partial derivative of \(L\) with respect to velocity, we obtain the generalized momentum \(\mathbf{p}=\partial L/\partial\mathbf{v}\). Similarly, the acceleration equations \(\mathbf{a}=f(\mathbf{r},\mathbf{v},\mathbf{a})=\partial L/\partial\mathbf{r}\) can be derived and we can obtain a coherent Lagrangian [Li(2021a), Li(2019b), Li(2021b)]. By limiting the inclusion of accelerations up to a certain PN order in the Lagrangian, the accelerations \(\mathbf{a}\) in the function \(f\) will be modified to \(\mathbf{a}^{\star}\); it only has lower-order terms, i.e., \(\mathbf{a}=f(\mathbf{r},\mathbf{v},\mathbf{a}^{\star})\).The acceleration equations become incoherent, due to the higher-order PN terms disappearing, leading to a loss of some values of the constants of motion during subsequent evolution. The same problem occurs with the post-Newtonian Hamiltonian form. The error of the constant of motion can be used as an indicator to test the performance of different algorithms in both approximate forms. Various algorithms are available for the calculation of post-Newtonian Lagrangian quantities. For example, in optimizing the fifth-order Runge-Kutta method as a high-precision integrator, Zhong [Zhong(2010)] employed corrections to all integrals within the conservative 3PN order Hamiltonian. Tsang [Tsang(2015)] introduced an implicit symplectic integrator that accounts for 2.5PN gravitational radiation reaction terms in the Newtonian two-body problem. This approach effectively captures the effects of radiation reactions. Lubich [Lubich(2010)] devised an explicit and implicit mixed symplectic integration technique that facilitates the splitting of orbital and spin contributions. By employing this approach, the dynamics of both orbital and spin variables can be accurately simulated. Zhong [Zhong et al(2010)] proposed fourth-order canonical explicit and implicit mixed symplectic methods. These methods offer improved accuracy and stability in the computation of post-Newtonian quantities. Seyrich [Seyrich(2013)] developed Gauss Runge-Kutta implicit canonical symplectic schemes that preserve the structural properties of the system. These schemes ensure long-term numerical stability and accuracy. These algorithms, with their distinct methodologies, contribute to advancing the computation of post-Newtonian Lagrangian quantities, addressing specific aspects such as precision, radiation reaction, spin contributions, stability, and structural preservation. Regarding the post-Newtonian Hamiltonian, the phase-space expansion method [Pihajoki(2015), Li(2017), Li(2019a)] is a usable algorithm. The Hamiltonian lacks separability and does not possess a coordinate momentum or multiple integrable splitting components. Pihajoki [Pihajoki(2015)] extended the phase space variables by copying the coordinates and momenta. We achieved a Hamiltonian splitting form so that the explicit leapfrog algorithms become available. The permutation map of momentum was designed to suppress the interaction of the original and extended variables. Liu [Liu et al.(2016)] devised a sequential mapping of coordinate and momentum permutations and constructed fourth-order phase-space expansion explicit method compositions of two triple products of the usual second-order leapfrog. These algorithms suffer a clear failure when calculating the chaotic orbits of celestial systems. 
The interactions between the original variables and the extended ones become increasingly strong and show considerably different values, whereas they are supposed to be equivalent. Midpoint and correction maps [Luo et al.(2017), Luo et al.(2021)] have been proposed to ensure the equivalence of the original variables and the copy one. Recently, we proposed a multi-factor correction map that yields a higher accuracy of the phase-space expansion method without significant computational resource increases [Luo et al.(2022)]. This paper aims to design a multi-factor correction map for the post-Newtonian Hamiltonian and examine its performance. This article is divided into several sections. In Section 2, we revisit the Lagrangian and Hamiltonian equations of motion for compact binary systems within the post-Newtonian (PN) approximation. Section 3 presents the introduction of the phase-space expansion method and the development of a correction map for the post-Newtonian Hamiltonian. In Section 4, we conduct a comparative analysis of the accuracy of numerical solutions obtained using the implicit midpoint method in the computation of the post-Newtonian Lagrangian and the correction map method for the post-Newtonian Hamiltonian. Finally, in Section 5, we conclude. ## 2 PN Lagrangian and Hamiltonian in Compact Binary Let us consider a compact binary system governed by a PN Lagrangian \(L(\mathbf{r},\mathbf{v})\) up to the \(m\)-th order, where \(\mathbf{r}\) and \(\mathbf{v}\) represent the position and velocity vectors, respectively. The Euler-Lagrange equation is given by Equation 1, where the generalized momentum \(\mathbf{p}\) is defined by Equation 2; this expression represents a nonlinear algebraic equation for \(\mathbf{v}\). \[\frac{d\mathbf{p}}{dt}=\frac{\partial L}{\partial\mathbf{r}}. \tag{1}\] Here \[\mathbf{p}=\frac{\partial L}{\partial\mathbf{v}}; \tag{2}\] \[\frac{d\mathbf{r}}{dt}=\mathbf{v}. \tag{3}\] We note that Equations 1 and 3 are differential equations, and that \((\mathbf{r},\mathbf{p})\) are treated as the integration variables, whereas \(\mathbf{v}\) is not. However, we can substitute Equation 2 into Equation 1 to obtain the corresponding acceleration equation, given by Equation 4. Here, \(\mathbf{a}_{N},\mathbf{a}_{1PN},\mathbf{a}_{2PN},\ldots,\mathbf{a}_{mPN}\) correspond to the Newtonian term and the 1st, 2nd, up to the \(m\)-th post-Newtonian-order contributions to the acceleration. \[\frac{d\mathbf{v}}{dt}=\mathbf{a}_{N}+\mathbf{a}_{1PN}+\mathbf{a}_{2PN}+\cdots+\mathbf{a}_{mPN}. \tag{4}\] When considering terms only up to the \(m\)-th PN order in Equation 4, all terms higher than the \(m\)-th PN order are truncated. Consequently, Equation 4 does not align with the PN Lagrangian \(L\), and Equations 3 and 4 are treated as incoherent PN equations of motion of the Lagrangian \(L\). However, when utilizing Equations 3 and 4, the variables \((\mathbf{r},\mathbf{v})\) can be used as a set of integration variables instead of the variables \((\mathbf{r},\mathbf{p})\). Nevertheless, this approach does not fully maintain constants of motion, such as the energy integral expressed by \[E=\mathbf{v}\cdot\mathbf{p}-L. \tag{5}\] In this paper, \(L\) is the dimensionless post-Newtonian (PN) Lagrangian formulation for compact binaries. The evolution of binaries can be given by the expression: \[L=L_{N}+L_{1PN}. \tag{6}\] \(L_{N}\) and \(L_{1PN}\) denote the non-relativistic and 1PN contributions to the Lagrangian, respectively.
For simplicity, higher-order terms are not considered. The non-relativistic part is expressed as: \[L_{N}=\frac{\mathbf{r}^{2}}{2}+\frac{1}{r}, \tag{7}\] whereas the 1PN part is given by [Blanchet & Iyer(2003)]: \[L_{1PN}=\frac{1}{c^{2}}\left\{\frac{1}{8}(1-3\eta)v^{4}+\frac{1}{2r}[(3+\eta )v^{2}+\frac{\eta}{r^{2}}(\mathbf{r}\cdot\mathbf{v})^{2}-\frac{1}{r}]\right\}. \tag{8}\] Here \(\eta=\mu/M\) is the dimensionless mass parameter. The reduced mass, \(\mu\), is defined as \(M_{1}M_{2}/M=\beta(1+\beta)^{-2}\), and \(\beta=M_{1}/M_{2}\) is the mass ratio, where \(M_{1}\) and \(M_{2}\) represent the masses of the two bodies constituting the binary system and the total mass is denoted as \(M=M_{1}+M_{2}\). Additionally, \(c\) is the speed of light and \(G\) represents the constant of gravity given in natural units with \(c=G=1\). \(c\) is retained in some of the latter equations, and it can be ignored in the actual calculation. The equations for the evolution of the system can be derived from the Lagrangian formulation. According to Equations 1 and 2, the equation for the evolution of the momentum can be written as: \[\frac{d\mathbf{p}}{dt}=-\frac{\mathbf{r}}{r^{3}}\left\{1+\frac{1}{c^{2}}[ \frac{3\eta}{2r^{2}}(\mathbf{r}\cdot\mathbf{v})^{2}+\frac{3+\eta}{2}v^{2}- \frac{1}{r}]\right\}+\frac{\eta}{c^{2}r^{3}}(\mathbf{r}\cdot\mathbf{v}) \mathbf{v}, \tag{9}\] where \(\mathbf{r}\) is the separation vector between the two masses. The non-relativistic and 1PN contributions to this equation are, respectively, expressed in the first and second terms. The expression for the generalized momentum \(\mathbf{p}\) till 1pN is given in terms of the velocity \(\mathbf{v}\) as: \[\mathbf{p}=\mathbf{v}+\frac{1}{c^{2}}\left\{\frac{v^{2}}{2}(1-3\eta)\mathbf{v }+\frac{1}{r}[\frac{\eta}{r^{2}}(\mathbf{r}\cdot\mathbf{v})\mathbf{r}+(3+\eta )\mathbf{v}]\right\}. \tag{10}\] The value of momentum can be obtained from Equation 10 once the velocity is known and vice versa. However, velocity needs to be solved iteratively since it cannot be obtained directly from the Lagrangian. The first post-Newtonian relative acceleration equation is used to determine the value of velocity, given by: \[\frac{d\mathbf{v}}{dt}=\mathbf{a}_{N}+\mathbf{a}_{1PN}. \tag{11}\] The sub-terms are \[\mathbf{a}_{N}=-\frac{\mathbf{r}}{r^{3}}, \tag{12}\] \[\mathbf{a}_{1PN}=-\frac{1}{r^{2}c^{2}}\left\{\frac{\mathbf{r}}{r}[(1+3\eta)v^{2}- \frac{2}{r}(2+\eta)-\frac{3\eta}{2r^{2}}(\mathbf{r}\cdot\mathbf{v})^{2}]-\frac{ 2}{r}(2-\eta)(\mathbf{r}\cdot\mathbf{v})\mathbf{v}\right\}, \tag{13}\] Here, \(\mathbf{a}_{N}\) and \(\mathbf{a}_{1PN}\) describe the non-relativistic and 1PN contributions to the acceleration. Using Equations 6 and 10, the energy integral in Equation 5 can be expressed as. \[E=\frac{v^{2}}{2}-\frac{1}{r}+\frac{1}{c^{2}}\left\{\frac{3}{8}(1-3\eta)v^{4}+ \frac{1}{2r}[(3+\eta)v^{2}+\frac{\eta}{r^{2}}(\mathbf{r}\cdot\mathbf{v})^{2}+ \frac{1}{r}].\right\} \tag{14}\] In summary, the Lagrangian formulation of the evolution of binaries provides a mathematical framework for studying their motion. The momentum and velocity of the system can be determined from the equations derived from the Lagrangian, which include non-relativistic and relativistic contributions to the acceleration. The dimensionless PN Lagrangian offers valuable insights for the dynamics system, enabling the study of gravitational wave emission caused by binary systems. 
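To make the incoherent 1PN equations of motion concrete, the following sketch evaluates Equations 11-14 and advances \((\mathbf{r},\mathbf{v})\) with a single implicit-midpoint step solved by fixed-point iteration. It is only a simplified, second-order stand-in for the fourth-order \(IM_{4}\) scheme used in the paper; the function names, the fixed-point iteration count, and the choice \(c=1\) are our assumptions.

```python
import numpy as np

def accel(r, v, eta, c=1.0):
    """Newtonian + 1PN relative acceleration, Eqs. (11)-(13)."""
    rn, v2, rv = np.linalg.norm(r), v @ v, r @ v
    a_n = -r / rn**3
    a_1pn = -(1.0 / (rn**2 * c**2)) * (
        (r / rn) * ((1.0 + 3.0 * eta) * v2
                    - (2.0 / rn) * (2.0 + eta)
                    - 1.5 * eta * rv**2 / rn**2)
        - (2.0 / rn) * (2.0 - eta) * rv * v)
    return a_n + a_1pn

def energy(r, v, eta, c=1.0):
    """Energy integral of Eq. (14)."""
    rn, v2, rv = np.linalg.norm(r), v @ v, r @ v
    return (0.5 * v2 - 1.0 / rn
            + (1.0 / c**2) * (0.375 * (1.0 - 3.0 * eta) * v2**2
                              + 0.5 / rn * ((3.0 + eta) * v2
                                            + eta * rv**2 / rn**2 + 1.0 / rn)))

def implicit_midpoint_step(r, v, h, eta, iters=12):
    """One implicit-midpoint step for y' = (v, a(r, v)),
    solved by straightforward fixed-point iteration."""
    r1, v1 = r.copy(), v.copy()
    for _ in range(iters):
        rm, vm = 0.5 * (r + r1), 0.5 * (v + v1)
        r1 = r + h * vm
        v1 = v + h * accel(rm, vm, eta)
    return r1, v1
```

Monitoring `energy(r, v, eta)` along the integration gives the energy error discussed in the numerical experiments below.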
With Equations 3 and 11, we can obtain the numerical solution \((\mathbf{r}\cdot\mathbf{v})\) by using the fourth-order implicit midpoint method \((IM_{4})\). PN Hamiltonian form \(H\) can be derived through Lagrangian \(L\) using the Legendre transformation, \[H=\mathbf{p}\cdot\mathbf{\hat{r}}-L. \tag{15}\] Then we obtain the 1PN Hamiltonian, \[H=H_{N}+H_{1PN}. \tag{16}\] In order to compare the effect of higher-order PN terms on the error in the constants of motion, we introduce the 2PN post-Newton Hamiltonian, \[H^{\star}=H_{N}+H_{1PN}+H_{2PN}. \tag{17}\] The expressions for the sub-terms in Hamiltonians 16 and 17 are, respectively, given by \[H_{N}=T(\mathbf{p})+V(\mathbf{r})=\frac{\mathbf{p}^{2}}{2}-\frac{1}{r}, \tag{18}\] \[H_{1PN}=\frac{1}{8}(3\eta-1)\mathbf{p}^{4}-\frac{1}{2}[(3+\eta) \mathbf{p}^{2}\] \[+\frac{\eta}{r}(\mathbf{r}\cdot\mathbf{p})^{2}]\frac{1}{r}+\frac{1 }{2r^{2}}, \tag{19}\] \[H_{2PN} = \frac{1}{16}(1-5\eta+5\eta^{2})\mathbf{p}^{6}+\frac{1}{8}[(5-20 \eta-3\eta^{2})\mathbf{p}^{4} \tag{20}\] \[-\frac{2\eta^{2}}{r}(\mathbf{r}\cdot\mathbf{p})^{2}\mathbf{p}^{ 2}-\frac{3\eta^{2}}{r}(\mathbf{r}\cdot\mathbf{p})^{4}]\frac{1}{r}\] \[+\frac{1}{2}[(5+8\eta)\mathbf{p}^{2}+\frac{3\eta}{r}(\mathbf{r} \cdot\mathbf{p})^{2}]\frac{1}{r^{2}}\] \[-\frac{1}{4}(1+3\eta)\frac{1}{r^{3}},\] Due to the disappearance of higher-order terms, \(H\) and \(H^{\star}\) are approximately equal to \(E\), and not strictly equivalent. The integrators used in the Hamiltonian \(H\) and \(H^{\star}\) will be described in the next section. ## 3 Phase-Space Expansion Method with a Multi-Factors Correction Map Since neither the Hamiltonian \(H\) nor \(H^{\star}\) can be separated into multiple integrable parts, the symplectic leapfrog method cannot be applied directly to these Hamiltonians unless they are suitably modified to a splitting form. An effective approach to solving this problem is the phase-space expansion method. Piajoki [Piajoki(2015)] introduced a new pair of canonical and conjugate variables \((\mathbf{\tilde{r}},\mathbf{\tilde{p}})\) from the original variables \((\mathbf{r},\mathbf{p})\). This doubles the phase-space variables, \((\mathbf{r},\mathbf{\tilde{r}},\mathbf{p},\mathbf{\tilde{p}})\) and constructs a new Hamiltonian \(\widetilde{H}(\mathbf{r},\mathbf{\tilde{r}},\mathbf{p},\mathbf{\tilde{p}})\) using two identical Hamiltonians \(H_{1}\) and \(H_{2}\): \[\widetilde{H}(\mathbf{r},\mathbf{\tilde{r}},\mathbf{p},\mathbf{\tilde{p}})=H_ {1}(\mathbf{r},\mathbf{\tilde{p}})+H_{2}(\mathbf{\tilde{r}},\mathbf{p}). \tag{21}\] where both \(H_{1}\) and \(H_{2}\) should be equal to the original Hamiltonian \(H\). The new Hamiltonian \(\widetilde{H}\) already exhibits two integrable components. A conventional second-order leapfrog algorithm can be employed for its integration: \[\mathbf{A}_{2}(h)=\mathbf{H}_{2}(\frac{h}{2})\mathbf{H}_{1}(h)\mathbf{H}_{2}(\frac{ h}{2}), \tag{22}\] where \(h\) represents the time step, \(\mathbf{H}_{1}\) and \(\mathbf{H}_{2}\) are Hamiltonian operators. 
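For concreteness, here is a small numerical sketch of the extended-phase-space machinery: one \(\mathbf{A}_{2}\) step (written out component-wise in Equation 23 below), applied to two copies of the inseparable 1PN Hamiltonian of Equations 16, 18 and 19, followed by the correction factors of Equations 26 and 27 below. Central-difference gradients are used so that no derivatives have to be coded by hand; the function names, tolerances, and the reading of the tilded sums in Equation 27 as the two-copy sums of Equation 21 are our assumptions, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def T(p):  return 0.5 * (p @ p)                # kinetic part, Eq. (18)
def V(r):  return -1.0 / np.linalg.norm(r)     # potential part, Eq. (18)

def H1PN(r, p, eta):
    """1PN correction written exactly as in Eq. (19)."""
    rn, p2, rp = np.linalg.norm(r), p @ p, r @ p
    return (0.125 * (3.0 * eta - 1.0) * p2**2
            - 0.5 * ((3.0 + eta) * p2 + eta * rp**2 / rn) / rn
            + 0.5 / rn**2)

def H(r, p, eta):
    return T(p) + V(r) + H1PN(r, p, eta)

def grads(r, p, eta, eps=1e-6):
    """Central-difference gradients of H with respect to r and p."""
    gr, gp = np.zeros_like(r), np.zeros_like(p)
    for i in range(r.size):
        e = np.zeros_like(r); e[i] = eps
        gr[i] = (H(r + e, p, eta) - H(r - e, p, eta)) / (2 * eps)
        gp[i] = (H(r, p + e, eta) - H(r, p - e, eta)) / (2 * eps)
    return gr, gp

def A2_step(r, rt, p, pt, h, eta):
    """One second-order leapfrog step for Htilde = H(r, pt) + H(rt, p)."""
    gr, gp = grads(rt, p, eta)            # half step of H2(rt, p)
    r, pt = r + 0.5 * h * gp, pt - 0.5 * h * gr
    gr, gp = grads(r, pt, eta)            # full step of H1(r, pt)
    rt, p = rt + h * gp, p - h * gr
    gr, gp = grads(rt, p, eta)            # second half step of H2(rt, p)
    r, pt = r + 0.5 * h * gp, pt - 0.5 * h * gr
    return r, rt, p, pt

def correction_map_1pn(r, rt, p, pt, eta):
    """Scaling factor from Eq. (26) (applied to the momenta) and from
    Eq. (27) (applied to the coordinates), then averaging of the two
    copies as in Eq. (25)."""
    alpha = np.sqrt(2.0 * (p @ p + pt @ pt) / ((p + pt) @ (p + pt)))
    pm = 0.5 * alpha * (p + pt)
    rhs = 0.5 * (V(r) + V(rt) + H1PN(r, pt, eta) + H1PN(rt, p, eta))

    def f(g):                              # Eq. (27) as a root-finding problem
        rm = 0.5 * g * (r + rt)
        return V(rm) + H1PN(rm, pm, eta) - rhs

    g = 1.0
    for _ in range(50):                    # Newton iteration, numerical slope
        slope = (f(g + 1e-7) - f(g - 1e-7)) / 2e-7
        step = f(g) / slope
        g -= step
        if abs(step) < 1e-14:
            break
    rm = 0.5 * g * (r + rt)
    return rm.copy(), rm.copy(), pm.copy(), pm.copy()
```

Composing three such \(\mathbf{A}_{2}\) steps with the Yoshida coefficients and applying the correction map after each composed step would then reproduce the structure of the \(\mathbf{CM}_{1PN}\) scheme described below.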
The code corresponding to integration 22 from \(n\)th to \((n+1)\)th step is \[\mathbf{r}_{n+\frac{1}{2}} =\mathbf{r}_{n}+\frac{h}{2}\nabla_{\mathbf{p}}H_{2}(\widetilde{ \mathbf{r}}_{n},\mathbf{p}_{n})\] \[\widetilde{\mathbf{p}}_{n+\frac{1}{2}} =\widetilde{\mathbf{p}}_{n}-\frac{h}{2}\nabla_{\mathbf{r}}H_{2}( \widetilde{\mathbf{r}}_{n},\mathbf{p}_{n})\] \[\widetilde{\mathbf{r}}_{n+1} =\widetilde{\mathbf{r}}_{n}+h\nabla_{\widetilde{\mathbf{p}}}H_{1 }(\mathbf{r}_{n+\frac{1}{2}},\widetilde{\mathbf{p}}_{n+\frac{1}{2}})\] \[\mathbf{p}_{n+1} =\mathbf{p}_{n}-h\nabla_{\mathbf{r}}H_{1}(\mathbf{r}_{n+\frac{1 }{2}},\widetilde{\mathbf{p}}_{n+\frac{1}{2}})\] \[\mathbf{r}_{n+1} =\mathbf{r}_{n+\frac{1}{2}}+\frac{h}{2}\nabla_{\mathbf{p}}H_{2} (\widetilde{\mathbf{r}}_{n+1},\mathbf{p}_{n+1})\] \[\widetilde{\mathbf{p}}_{n+1} =\widetilde{\mathbf{p}}_{n+\frac{1}{2}}-\frac{h}{2}\nabla_{ \mathbf{r}}H_{2}(\widetilde{\mathbf{r}}_{n+1},\mathbf{p}_{n+1}). \tag{23}\] It can be seen that the calculation of the numerical solution \((\mathbf{r},\widetilde{\mathbf{p}})\) of \(H_{2}\) requires the numerical solution \((\widetilde{\mathbf{r}},\mathbf{p})\) of \(H_{1}\), and vice versa, so there is an energy exchange between \(H_{1}\) and \(H_{2}\), and even if the initial conditions are the same, \(H_{1}\) and \(H_{2}\) will become unequal in the later evolution unless the errors are constant equal to \(0\). To submit the accuracy, we construct a fourth-order algorithm using Yoshida's triplet product \[\mathbf{A}_{4}(h)=\mathbf{A}_{2}(\lambda_{3}h)\mathbf{A}_{2}(\lambda_{2}h) \mathbf{A}_{2}(\lambda_{1}h). \tag{24}\] The time coefficients \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are identical to those presented in [Yoshida(1990)] and are set to \(\lambda_{1}=\lambda_{3}=1/(2-2^{1/3})\) and \(\lambda_{2}=1-2\lambda_{1}\). Algorithm \(\mathbf{A}_{4}\) is utilized to obtain a set of numerical solutions \((\mathbf{r},\widetilde{\mathbf{r}},\mathbf{p},\widetilde{\mathbf{p}})\). It is crucial to note that the original variables \((\mathbf{r},\mathbf{p})\) and their counterparts \((\widetilde{\mathbf{r}},\widetilde{\mathbf{p}})\) are intended to be identical at each integration step, but in reality, they exhibit discrepancies. The interaction between the solutions \((\mathbf{r},\widetilde{\mathbf{p}})\) of \(H_{1}\) and \((\widetilde{\mathbf{r}},\mathbf{p})\) of \(H_{2}\) leads to their divergence over time. To ensure the equivalence of the original variables and their copy one, Pihajoki [Pihajoki(2015)] proposed a momentum permutation map, whereas Liu [Liu et al.(2016)] proposed coordinate and momentum permutation maps. These applications were successful in several examples, but not in chaotic orbits. However, our previous work [Luo et al.(2021), Luo et al.(2022)] proposed the manifold corrections map, which effectively overcame the challenge. Unlike that in the original paper [Luo et al.(2021), Luo et al.(2022)], the correction map for the 1PN Hamiltonian is \[\mathbf{M}_{1PN}=\left(\begin{array}{c}\frac{\gamma}{2},\frac{\gamma}{2}, \mathbf{0},\mathbf{0}\\ \frac{\gamma}{2},\frac{\gamma}{2},\mathbf{0},\mathbf{0}\\ \mathbf{0},\mathbf{0},\frac{\gamma}{2},\frac{\gamma}{2}\\ \mathbf{0},\mathbf{0},\frac{\gamma}{2},\frac{\gamma}{2}\\ \end{array}\right). 
\tag{25}\] Here, the momentum scaling factor \(\gamma\) and the coordinate scaling factor \(\alpha\) are incorporated into \(\mathbf{M}_{1PN}\) and can be obtained by solving the following formulations: \[T(\frac{\alpha\mathbf{p}+\alpha\widetilde{\mathbf{p}}}{2})=\frac{\widetilde{ T}(\mathbf{p},\widetilde{\mathbf{p}})}{2}=\frac{T_{1}(\widetilde{\mathbf{p}})+T_{2}( \mathbf{p})}{2}, \tag{26}\] \[V(\frac{\gamma\mathbf{r}+\gamma\widetilde{\mathbf{r}}}{2})+H_{1PN}(\frac{ \gamma\mathbf{r}+\gamma\widetilde{\mathbf{r}}}{2},\frac{\alpha\mathbf{p}+ \alpha\widetilde{\mathbf{p}}}{2})\] \[=\frac{\widetilde{V}(\mathbf{r},\widetilde{\mathbf{r}})+\widetilde{H}_{1PN}( \mathbf{r},\widetilde{\mathbf{r}},\mathbf{p},\widetilde{\mathbf{p}})}{2}. \tag{27}\] Equation 26 provides \(\alpha=\sqrt{\frac{2(\mathbf{p}^{2}+\widetilde{p}^{2})}{(\mathbf{p}+\mathbf{p} )^{2}}}\), and Newton's method is used to obtain \(\gamma\) from Equation 27. Then, the fourth-order phase-space expansion method with multi-factor correction map for 1PN Hamiltonian \(H\) is established as \[\mathbf{CM}_{1PN}(h)=\mathbf{M}_{1PN}\otimes\mathbf{A}_{4}(h). \tag{28}\] Similarly, for the 2PN Hamiltonian \(H^{*}\), the aforementioned steps are applicable. The correction map \(\mathbf{M}_{2PN}\) for the Hamiltonian \(H^{*}\) follows the same structure as \(\mathbf{M}_{1PN}\). \[\mathbf{M}_{2PN}=\left(\begin{array}{c}\frac{\gamma}{2},\frac{\gamma}{2}, \mathbf{0},\mathbf{0}\\ \frac{\gamma}{2},\frac{\gamma}{2},\mathbf{0},\mathbf{0}\\ \mathbf{0},\mathbf{0},\frac{\gamma}{2},\frac{\gamma}{2}\\ \end{array}\right). \tag{29}\] However, the solution for \(\gamma\) is replaced by the following equation \[V(\frac{\gamma\mathbf{r}+\gamma\widetilde{\mathbf{r}}}{2})+H_{1PN}( \frac{\gamma\mathbf{r}+\gamma\widetilde{\mathbf{r}}}{2},\frac{\alpha\mathbf{p} +\alpha\widetilde{\mathbf{p}}}{2})+H_{2PN}(\frac{\gamma\mathbf{r}+\gamma \widetilde{\mathbf{r}}}{2},\frac{\alpha\mathbf{p}+\alpha\widetilde{\mathbf{p}}}{2})\] \[=\frac{\widetilde{V}(\mathbf{r},\widetilde{\mathbf{r}})+\widetilde{ H}_{1PN}(\mathbf{r},\widetilde{\mathbf{r}},\mathbf{p},\widetilde{\mathbf{p}})+ \widetilde{H}_{2PN}(\mathbf{r},\widetilde{\mathbf{r}},\mathbf{p},\widetilde{ \mathbf{p}})}{2}. \tag{30}\] For convenience, such algorithms are referred to as the correction map method. The correction map method for 2PN Hamiltonian \(H^{*}\) is set up as \[\mathbf{CM}_{2PN}(h)=\mathbf{M}_{2PN}\otimes\mathbf{A}_{4}(h). \tag{31}\] The \(\mathbf{A}_{4}\) algorithm is treated as an explicit symplectic method serving to the new Hamiltonian \(\widetilde{H}\), ensuring effective preservation of the energy of \(\widetilde{H}\), i.e., \(\Delta\widetilde{H}=\Delta H_{1}+\Delta H_{2}\approx 0\). The error evolution of \(H_{1}\) and \(H_{2}\) shows a clear time-axis symmetry, as demonstrated in Figure 1. Specifically, if \(H_{1}\) calculates more energy than the initial energy, \(H2\) will calculate less, and vice versa. Taking advantage of this symmetry, we designed a manifold correction mapping approach to optimize the performance of the \(\mathbf{A}_{4}\) algorithm. The \(\mathbf{M}_{1PN}\) and \(\mathbf{M}_{2PN}\) corrections imposed on the solutions of \(\mathbf{A}_{4}\) serve three primary purposes. The \(\mathbf{A}_{4}\) algorithm serves multiple purposes in relation to the new Hamiltonian \(\widetilde{H}\). Firstly, it ensures that \(H_{1}\) is equal to \(H_{2}\) to prevent energy discrepancies that could impede the availability of numerical solutions. 
Secondly, it maintains the constancy of \(\widetilde{H}\) after the correction, effectively suppressing the growth of energy errors. Thirdly, it reduces the energy deviation of each subterm of \(H\) from half of the corresponding subterm of \(\widetilde{H}\) through the correction process. As the \(\mathbf{A}_{4}\) algorithm is an explicit symplectic method serving the new Hamiltonian \(\widetilde{H}\), it accurately calculates the total energy as well as the energy of each individual subterm in \(\widetilde{H}\); thus, the algorithm altering these energies is not desirable, as it may weaken the algorithm's stability and precision. In Section 4, we will set initial values and perform numerical simulations of post-Newtonian Lagrangian and post-Newtonian Hamiltonian to compare the differences between the algorithms in terms of maintaining the constants of motion. ## 4 Numerical Simulation This section showcases the outcomes of our numerical simulations, where we compare the post-Newtonian Lagrangian and post-Newtonian Hamiltonian algorithms in maintaining the constants of motion. To this end, we set initial values for a specific orbit, named orbit 1, with initial conditions \((\beta;\mathbf{r},\mathbf{v})=(\frac{5}{2};10,0,0,0,0.52,0)\). The initial value of the momentum \(\mathbf{p}\) in the post-Newtonian Hamiltonian is obtained from Equation (10). We use the fourth-order implicit midpoint method (\(IM_{4}\)) to calculate the 1PN Lagrangian, whereas the algorithms \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\) are used to calculate the Hamiltonian \(H\) and \(H^{*}\), respectively. We take a fixed step size of \(h=1\) and plot the energy errors in Figure 2a,b. We observe that the \(\mathbf{CM}_{1PN}\) algorithm designed for the Hamiltonian \(H\) has significantly better accuracy in terms of energy error compared to \(IM_{4}\). However, the accuracy of \(\mathbf{CM}_{1PN}\) drops considerably in the \(H^{*}\) error behavior, as expected due to the vanishing of the 2PN term, whereas the error in \(\mathbf{CM}_{2PN}\) is at an order of \(10^{-8}\). Finally, Figure 2c shows that \(\mathbf{CM}_{2PN}\) performs the best in terms of accuracy and long-term stability compared to \(\mathbf{CM}_{1PN}\) and \(IM_{4}\). Aside from ensuring energy conservation, we also track the preservation of orbital angular momentum \(\mathbf{L}=\mathbf{r}\times\mathbf{p}=\)\([1+\frac{1}{c^{2}}(\frac{1-3\eta}{2}v^{2}+\frac{3\eta}{r})]\mathbf{r}\times \mathbf{v}\), and examine its error as another performance metric for the algorithms. Figure 3 depicts the angular momentum errors \(\Delta\mathbf{L}=\mathbf{L}-\mathbf{L}_{0}\) with \(\mathbf{L}=|\mathbf{L}|\), where \(\mathbf{L}_{0}\) represents the initial value. We deduce that the performance of the \(IM_{4}\) algorithm in terms of angular momentum Figure 1: The energy errors \(\Delta E\) of the Hamiltonian in Equation 16, as computed using the \(\mathbf{A}_{4}\) algorithm after the extended phase space, can be expressed as \(\Delta\widetilde{H}=\widetilde{H}-2H(0)=\Delta H_{1}+\Delta H_{2}\), whereas \(\Delta H_{i}=H_{i}(t)-H(0)\) and \(H_{i}(t)\) represent the value of the Hamiltonian \(H_{i}\) at time \(t\). \(H(0)\) denotes the initial value of the Hamiltonian in Equation 16. Time–axis symmetry exists between \(\Delta H_{1}\)(red dot) and \(\Delta H_{2}\)(blue dash). Figure 2: Different energy errors \((\Delta H,\Delta H^{*},\Delta E)\) in orbit 1. 
(**a**) The energy error of \(H\), denoted as \(\Delta H=|H(t)-H(0)|\), where \(H(t)\) represents the value of the Hamiltonian \(H\) at time \(t\), and \(H(0)\) is the initial value. (**b**) The energy error of \(H^{*}\), denoted as \(\Delta H^{*}=|H^{*}(t)-H^{*}(0)|\). (**c**) The energy error of \(E\), denoted as \(\Delta E^{*}=|E(t)-E(0)|\). The algorithm \(\mathbf{IM}_{4}\) is drawn with a black line, whereas \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\) are drawn with red and blue lines, respectively. error is similar to its performance in energy error, with the worst accuracy but good long-term stability. Conversely, both \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\) exhibit exceptional performance in terms of angular momentum error, with very little difference between them and significantly superior to \(IM_{4}\). However, there is a noticeable error growth in \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\). Our simulations show that the algorithm \(\mathbf{CM}_{1PN}\) in the 1PN Hamiltonian has a significant advantage over the \(IM_{4}\) algorithm in maintaining the conservation of orbital angular momentum and a small accuracy advantage in maintaining energy integrals. The algorithm \(\mathbf{CM}_{2PN}\) in the 2PN Hamiltonian also has a significant advantage in maintaining angular momentum, while being comparable to \(\mathbf{CM}_{1PN}\) with some improvement in the accuracy of the energy. To validate the aforementioned conclusion, additional numerical simulations will be conducted in a different orbit, referred to as orbit 2. The initial conditions for orbit 2 are set as \((\beta;\mathbf{r},\mathbf{v})=(\frac{3}{2};10,0,0,0,0.52,0)\). In Figure 4, three categories of energy error in orbit 2 will be depicted as follows: (a) Energy error analysis of orbit 2 for \(IM_{4}\), \(\mathbf{CM}_{1PN}\), and \(\mathbf{CM}_{2PN}\) will be presented in Figure 4a. (b) Figure 4b will display the energy error analysis of orbit 2 specifically for \(IM_{4}\). (c) The energy error analysis of orbit 2, focusing on \(IM_{4}\), will be illustrated in Figure 4c. It is evident from the figures that the performance of \(IM_{4}\), \(\mathbf{CM}_{1PN}\), and \(\mathbf{CM}_{2PN}\) in orbit 2 closely resembles that of orbit 1. \(IM_{4}\) continues to exhibit the highest error, \(\mathbf{CM}_{1PN}\) demonstrates a widening gap with \(IM_{4}\), and \(\mathbf{CM}_{2PN}\) remains the most advanced in terms of accuracy. Turning to the angular momentum errors depicted in Figure 5, it is observed that there is no significant improvement for \(IM_{4}\), which still exhibits considerable deviation compared to the first-order post-Newtonian approximation, \(\mathbf{CM}_{1PN}\). Furthermore, the inclusion of the second-order post-Newtonian term in \(\mathbf{CM}_{2PN}\) does not contribute significantly to reducing the angular momentum error. Summarizing the findings from the numerical simulations conducted for both orbit 1 and orbit 2, we can conclude that \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\) perform better for the calculation of the post-Newtonian approximation to the Hamiltonian. They exhibit a slight advantage in terms of energy error while demonstrating a notably superior accuracy in the calculation of angular momentum. ## 5 Summary The exact equations of motion for a post-Newtonian Lagrangian formalism are the Euler-Lagrange equations, which consist of a coherent Lagrangian without any truncated terms. 
However, when the post-Newtonian Lagrangian form of general relativity maintains only a certain post-Newtonian order, it is referred to as the incoherent Lagrangian, with higher-order terms of the acceleration truncated. The incoherent Lagrangian can be numerically simulated using the Runge-Kutta method and implicit algorithms. Therefore, in the incoherent Lagrangian, motion constants such as energy integrals are only approximately conserved. The retention of the Hamiltonian at a certain post-Newtonian (PN) order leads to the same high-order truncation problem. If the Hamiltonian is separable, symplectic algorithms can be used, which provide excellent performance. For the case where the Hamiltonian is inseparable, symplectic-like algorithms such as the phase-space expansion method with a correction map, referred to as the correction map method, can be used; it utilizes the symmetry of the energy errors of \(H_{1}\) and \(H_{2}\) in the new Hamiltonian \(\widetilde{H}\) to improve the accuracy and stability of the algorithm. Figure 3: The angular momentum errors in orbit 2, denoted as \(\Delta L=L-L_{0}\), where \(L=|\mathbf{L}|\) and \(\mathbf{L}\) is calculated using three different methods: \(IM_{4}\) (represented by black), \(\mathbf{CM}_{1PN}\) (represented by red), and \(\mathbf{CM}_{2PN}\) (represented by blue). Figure 4: Different energy errors (\(\Delta H,\Delta H^{*},\Delta E\)) in orbit 2. (**a**) The energy error of \(H\), represented as \(\Delta H\), is calculated as the absolute difference between the value of the Hamiltonian \(H\) at time \(t\) (\(H(t)\)) and its initial value (\(H(0)\)). (**b**) The energy error of \(H^{*}\), denoted as \(\Delta H^{*}\), is determined as the absolute difference between the value of the Hamiltonian \(H^{*}\) at time \(t\) (\(H^{*}(t)\)) and its initial value (\(H^{*}(0)\)). (**c**) The energy error of \(E\), denoted as \(\Delta E^{*}\), is computed as the absolute difference between the value of \(E\) at time \(t\) (\(E(t)\)) and its initial value (\(E(0)\)). The algorithm \(IM_{4}\) is represented by a solid black line, whereas \(\mathbf{CM}_{1PN}\) and \(\mathbf{CM}_{2PN}\) are indicated by dashed red and blue lines, respectively. The performance of each algorithm in orbit 2 is similar to that of orbit 1. A comparison was made between the performance of the implicit midpoint method in the incoherent Lagrangian and the correction map method in the PN Hamiltonian. Under the 1PN approximation, the correction map method performed better in terms of energy error, exhibiting higher accuracy and comparable stability. On the other hand, with regard to angular momentum error, the correction map method was significantly more accurate, reaching errors of order \(10^{-11}\), whereas the implicit midpoint method only reached \(10^{-1}\). Similarly, under the 2PN Hamiltonian, the manifold correction mapping method further improved the accuracy of the energy error, but there was no noticeable impact on the angular momentum error. In conclusion, we compared the implicit midpoint method for solving the equations of motion in post-Newtonian Lagrangians and the correction map method for PN Hamiltonians, and investigated the extent to which both methods can uphold invariance of the motion's constants, such as energy conservation and angular momentum preservation.
Ultimately, the results of the numerical simulations demonstrate the superior performance of the correction map method, particularly with respect to angular momentum conservation. Compared with the incoherent Lagrangian approach, we therefore recommend the manifold correction map method applied to the Hamiltonian of compact binaries as a numerical tool.
2305.12189
Discovery of an Extended $\gamma$-ray Emission around the Supernova Remnant Candidate associated with PSR J0837$-$2454
Motivated by the recent discovery of a low surface brightness diffuse emission, a supernova remnant (SNR) candidate, surrounding the young pulsar PSR~J0837--2454, we carry out a likelihood analysis of the $\gamma$-ray data obtained by the \emph{Fermi} Gamma-ray Space Telescope from August 2008 to November 2022. Using a 2D Gaussian spatial template, we detect a significant extended $\gamma$-ray emission with a 68\% containment radius of $\sim1^{\circ}.8$, which is spatially coincident with the new SNR candidate at $\sim12\sigma$ confidence level. The spectrum of the extended $\gamma$-ray emission, obtained in the energy range of 0.1-500.0 GeV, shows a significant spectral curvature at $\sim$1 GeV, with a log-parabola spectral shape. Several scenarios, such as the SNR, pulsar wind nebula, and pulsar halo, are discussed as the potential origins of the extended $\gamma$-ray emission, and our model fitting results are preferred for the SNR scenario.
Pengfei Zhang, Yuliang Xin
2023-05-20T13:28:02Z
http://arxiv.org/abs/2305.12189v1
Discovery of an Extended \(\gamma\)-ray Emission around the Supernova Remnant Candidate associated with PSR J0837\(-\)2454 ###### Abstract Motivated by the recent discovery of a low surface brightness diffuse emission, a supernova remnant (SNR) candidate, surrounding the young pulsar PSR J0837-2454, we carry out a likelihood analysis of the \(\gamma\)-ray data obtained by the _Fermi_ Gamma-ray Space Telescope from August 2008 to November 2022. Using a 2D Gaussian spatial template, we detect a significant extended \(\gamma\)-ray emission with a 68% containment radius of \(\sim 1^{\circ}.8\), which is spatially coincident with the new SNR candidate at \(\sim 12\sigma\) confidence level. The spectrum of the extended \(\gamma\)-ray emission, obtained in the energy range of 0.1-500.0 GeV, shows a significant spectral curvature at \(\sim\)1 GeV, with a log-parabola spectral shape. Several scenarios, such as the SNR, pulsar wind nebula, and pulsar halo, are discussed as the potential origins of the extended \(\gamma\)-ray emission, and our model fitting results are preferred for the SNR scenario. Gamma-rays (637); Pulsars (1306); Supernova remnants (1667) ## 1 Introduction During the final stage of massive star evolution, the core of the star may undergo a powerful supernova explosion, collapsing into a rotating neutron star (i.e., a pulsar), which may lead to the creation of a supernova remnant (SNR), as the expanding gaseous remnant interacts with the surrounding circumstellar and interstellar medium. In our Galaxy, nearly 300 SNRs have been identified by radio observations (Green, 2014, 2019) at low Galactic latitudes \(\lesssim 300\) pc (Maiz-Apellaniz, 2001). Thanks to the \(\gamma\)-ray telescopes, approximately 40 SNRs have been detected with \(\gamma\)-ray emissions (Zeng et al., 2019, and references therein), including the GeV \(\gamma\)-ray SNRs detected by _Fermi_, e.g. IC 443 and W44 (Ackermann et al., 2013), and the TeV \(\gamma\)-ray SNRs detected by the ground-based Cherenkov telescopes (e.g. HESS, HAWC, VERITAS and LHAASO), such as RX J1713.7-3946 (H. E. S. S. Collaboration et al., 2018), G106.3+02.7 (Acciari et al., 2009; Albert et al., 2020; Cao et al., 2021), etc. The electromagnetic emissions of SNRs extend from MHz radio frequencies to TeV \(\gamma\)-ray energies (Zeng et al., 2019, 2021). The high-velocity shocks of SNRs can accelerate cosmic rays to very high energies (even up to hundreds of TeV). Studies of the \(\gamma\)-ray emissions from SNRs provide us with excellent tools for probing the interstellar medium and stellar evolution in our Galaxy, especially the acceleration of Galactic cosmic rays. Recently, Pol et al. (2021) reported the discovery and timing of the young pulsar PSR J0837-2454 at a high Galactic latitude, with Galactic coordinates (J2000) of \(l=247^{\circ}.6\) and \(b=9^{\circ}.8\). They presented the pulsar's timing solution by using the radio data from the Parkes radio telescope. Its spin period (\(P\)) and spin-down rate (\(\dot{P}\)) are 629.4 ms and 3.5\(\times 10^{-13}\) s s\({}^{-1}\), respectively. The characteristic age (\(\tau_{\rm c}=\frac{P}{2\dot{P}}\); Lorimer and Kramer, 2012) is 28.6 kyr, based on the assumption that magnetic-dipole braking is the only energy-loss mechanism, with a braking index of 3 and \(P_{\rm init}\ll P\) (\(P_{\rm init}\) is the pulsar's initial spin period).
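The timing-derived quantities quoted in this paragraph (the characteristic age, together with the spin-down luminosity and surface dipole field given just below) follow from the standard magnetic-dipole formulas. The short sketch below is only a consistency check: the moment of inertia \(I=10^{45}\) g cm\({}^{2}\) and the prefactor \(3.2\times 10^{19}\) G are the conventional assumptions of Lorimer and Kramer (2012), not values taken from this paper.

```
import numpy as np

P    = 0.6294          # spin period [s]
Pdot = 3.5e-13         # spin-down rate [s/s]
I    = 1e45            # assumed neutron-star moment of inertia [g cm^2]

tau_c = P / (2.0 * Pdot)                          # characteristic age [s]
E_dot = 4.0 * np.pi**2 * I * Pdot / P**3          # spin-down luminosity [erg/s]
B_s   = 3.2e19 * np.sqrt(P * Pdot)                # surface dipole field [G]

print(f"tau_c ~ {tau_c / 3.156e10:.1f} kyr")      # ~28.5 kyr
print(f"E_dot ~ {E_dot:.2e} erg/s")               # ~5.5e34 erg/s
print(f"B_s   ~ {B_s:.2e} G")                     # ~1.5e13 G
```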
Its spin-down luminosity (\(\dot{E}\)) is calculated to be \(5.5\times 10^{34}\) erg s\({}^{-1}\), and the surface dipole magnetic field strength (\(B_{\rm S}\)) is \(1.5\times 10^{13}\) G. Based on the NE2001 electron density model provided in Cordes and Lazio (2002), Pol et al. (2021) claimed that the pulsar locates at a larger distance of 6.3 kpc inferred by a DM-derived distances, which implies that PSR J0837-2454 appears at the edge of Galaxy and has a z-height above the Galactic plane of 1.1 kpc. If this value is true, PSR J0837-2454 will be the first pulsar known to be born from a runaway O/B star. Furthermore, they also claimed a discovery of a low surface brightness diffuse emission with a region of \(1^{\circ}\).5 extent concentrated around PSR J0837-2454 by using the archival Galactic and Extragalactic All-sky Murchison Widefield Array Survey (GLEAM; Hurley-Walker et al., 2017) data in 170-231 MHz bands1. And the diffuse emission has a morphology consistent with a SNR. Based on the data from GLEAM and Southern H\(\alpha\) Sky Survey Atlas (SHASSA; Gaustad et al., 2001), the distance for the diffuse emission is estimated to be \(\sim\)0.9 and 0.2 kpc, respectively, which is much smaller than that predicted by Cordes & Lazio (2002). If the diffuse emission is indeed an SNR associated with the high Galactic latitude pulsar, searching for the multi-wavelength emission of the SNR, especially in \(\gamma\)-ray, would be helpful to study the particle acceleration and probe the interstellar medium above the Galactic plane. Motivated by their report, we carried out the data analysis with the \(\gamma\)-ray data surrounding PSR J0837-2454 collected by the Large Area Telescope onboard the _Fermi_ Gamma-ray Space Telescope (_Fermi_-LAT; Atwood et al., 2009). And this paper is structured as follows: the likelihood analysis for the _Fermi_-LAT data and the main results are described in Section 2. Discussions of the probable physical origins for the extended \(\gamma\)-ray emission are shown in Section 3. In Section 4, we present a summary. Footnote 1: [https://vo.astron.nl/tgssadr/q_fits/cutout/form](https://vo.astron.nl/tgssadr/q_fits/cutout/form) ## 2 Data Analysis and Results ### _Fermi_-LAT Data and source model Around the position of PSR J0837-2454 reported by Pallanca et al. (2017) (R. A.=\(129^{\circ}\).49, decl.=\(-24^{\circ}\).91), there is a \(\gamma\)-ray point source named as 4FGL J0838.9-2502 in the Data Release 3 of the fourth Fermi-LAT source catalog (4FGL-DR3; Abdollahi et al., 2022) based on 12 yr data. In the 4FGL-DR3, 4FGL J0838.9-2502 has no associated source in other wavelengths, and its \(\gamma\)-ray spectrum is described by a log-parabola spectral shape (LP) of \(dN/dE=N_{0}(E/E_{b})^{-[\alpha+\beta\log(E/E_{b})]}\) with \(\alpha\)=2.60, \(\beta\)=0.45 and \(E_{b}\)=1.03 GeV (Abdollahi et al., 2022). The following data analysis is aiming to determine the association between this \(\gamma\)-ray source and PSR J0837-2454 or the SNR around it. Firstly, we carried out a whole data analysis in order to update the catalog's parameters for the \(\gamma\)-ray sources in the region of interest (RoI) with the 14 yr _Fermi_-LAT observations. We selected the _Fermi_-LAT Pass 8 _Front+Back_ events (evclass = 128 and evtype = 3) in the energy range of 0.1-500.0 GeV within a \(20^{\circ}\times 20^{\circ}\) RoI centered at the position of 4FGL J0838.9\(-\)2502. The observations span from August 4 2008 to November 24 2022 (MJD: 54682.687-59907.214). 
The events with zenith angles \(\geqslant 90^{\circ}\) were removed to exclude the \(\gamma\)-ray contamination from the Earth Limb. The expression of "DATA_QUAL \(>\) 0 & LAT_CONFIG == 1" was used, for _gtmktime_, to save the events having high-quality with flags of "good" in the good time intervals. In our data analysis, the instrumental response function of "P8R3_SOURCE_V3" and the software package of Fermitools-2.2.0 were used for the data reduction. Based on the newest 4FGL-DR3 catalog, we used a python script, make4FGLxml.py2, to create a model file. The model file includes all the spectral parameters of the \(\gamma\)-ray sources in 4FGL-DR3 within \(25^{\circ}\) around 4FGL J0838.9\(-\)2502. We freed the normalizations and spectral parameters for the sources within \(5^{\circ}\) of the ROI center, and the normalizations for the sources within \(5^{\circ}\)-\(10^{\circ}\), together with the ones which are \(10^{\circ}\) outside but identified as variable sources. The normalizations of Galactic and extragalactic diffuse emission components were also set free. All other parameters in the model file were fixed to be their values provided in 4FGL-DR3. Then, a binned maximum likelihood analysis was performed between the whole LAT data set and the above model file. Then we saved all the best-fit parameters as a new model file. In order to reveal the \(\gamma\)-ray emissions around 4FGL J0838.9-2502, a TS map with a region of \(6^{\circ}\times 6^{\circ}\) was created based on the new model with \(gttsmap\) by fixing all model parameters for all 4FGL-DR3 sources including the two diffuse backgrounds, and removing 4FGL J0838.9-250 from the model. The TS map is shown in the left panel of Figure 1, and a significant extended \(\gamma\)-ray emission (hereafter named as SG0837) is positional coincident with the SNR candidate reported in Pol et al. (2021). Footnote 2: [http://fermi.gsfc.nasa.gov/ssc/data/analysis/user/](http://fermi.gsfc.nasa.gov/ssc/data/analysis/user/) ### Spatial Analysis In order to study the spatial extension of the \(\gamma\)-ray emission from SG0837, we employed the _Fermi_ ScienceTools packaged in Python (_Fermipy_) to derive the best-fit position based on the assumption that the SG0837 is a point source. Its coordinate was derived to be R. A.=\(129^{\circ}\).64 and decl.=\(-24^{\circ}\).97 with a \(2\sigma\) error radius of \(0^{\circ}\).15, which is shown as a purple cross in Figure 1, together with its error circle marked by the dashed circle. Based on the new coordinate, we obtained SG0837's TS value to be \(\sim\)82.44 as a point source. Then we used two spatial models, an uniform disk and a 2D Gaussian extended template, to describe the \(\gamma\)-ray emission from SG0837. The best-fit central positions and extensions of the two extended templates are listed in Table 1. Using the positions and extensions of the two spatial models, we performed the likelihood analysis again, and the best-fit results are summarized in Table 1. In these analysis, the spectral shape of power-law (PL), \(dN/dE=N_{0}(E/E_{0})^{-\Gamma}\), for each extended spatial model is employed in order to compare their likelihood results easily. The significance of the extension for a \(\gamma\)-ray source is defined by a likelihood-ratio test as shown in Lande et al. 
(2012), which can be calculated by TS\({}_{\rm ext}\)\(=-2(\log{\cal L}_{\rm pt}-\log{\cal L}_{\rm ext})\), where \({\cal L}_{\rm pt}\) (null hypothesis) and \({\cal L}_{\rm ext}\) (alternative hypothesis) are the maximum likelihood values for point source model and spatial extended source model, respectively. For the likelihood results listed in Table 1, we found that the 2D Gaussian template is significantly preferred to the point source model with TS\({}_{\rm ext}\)\(\sim 93\), which corresponds to the significance level of \(9.6\sigma\) with one additional degree of freedom (DoF). The TS value of the 2D Gaussian template is 178.88 in the likelihood analysis based on the spectral shape of PL, corresponding to a significance level at \(\sim 12\sigma\) with five DoFs. The best-fit position of the 2D Gaussian template \begin{table} \begin{tabular}{c c c c c c} \hline \hline Template\({}^{(1)}\) & Position\({}^{(2)}\) & TS\({}^{(3)}\) & \(\Gamma^{(4)}\) & –\(\mathcal{L}^{(5)}\) & DoF\({}^{(6)}\) \\ \hline Point source & \(129^{\circ}.64\pm 0^{\circ}.06\), –\(24^{\circ}.97\pm 0^{\circ}.06\) & 82.44 & \(2.54\pm 0.09\) & –\(1018839.61\) & 4 \\ Uniform disk & \(129^{\circ}.95\pm 0^{\circ}.09\), –\(24^{\circ}.78\pm 0^{\circ}.14\); \(R_{88}=1^{\circ}.97^{\circ}_{-0^{\circ}.23}\) & 161.30 & \(2.09\pm 0.05\) & –\(1018878.54\) & 5 \\ Uniform disk+Point source & — & 158.98; 11.09 & \(2.07\pm 0.09\); \(2.33\pm 0.40\) & –\(1018882.81\) & 9 \\ 2D Gaussian & \(129^{\circ}.93\pm 0^{\circ}.14\), –\(24^{\circ}.67\pm 0^{\circ}.14\); \(R_{88}=1^{\circ}.80^{\circ}_{-0^{\circ}.22}\) & 178.88 & \(2.06\pm 0.04\) & –\(1018886.21\) & 5 \\ 2D Gaussian+Point source & — & 176.81; 9.81 & \(2.04\pm 0.06\); \(2.34\pm 0.29\) & –\(1018890.01\) & 9 \\ \hline \end{tabular} \end{table} Table 1: Spatial Analysis Results Figure 1: TS maps in 0.5–500.0 GeV covering \(6^{\circ}\times 6^{\circ}\) region around the new found SG0837, centered at the position of 4FGL J0838.9–2502 with each pixel of \(0^{\circ}.1\). Left panel: TS map for the \(\gamma\)-rays from SG0837. PSR J0837–24 is flagged with a cyan plus. The best-fit _Fermi_-LAT position of the 2D Gaussian template is marked with an orange plus, and its best-fit extension is indicated by an orange dashed circle. The best-fit positions of the uniform disk and point source models are marked with black and purple crosses, respectively. The positional uncertainty of point source is marked by the purple dashed circle. The pink contours are derived based on the TS value at each pixel, and the \(\Delta\)TS between the neighbouring contours is 8.6. All \(\gamma\)-ray sources in the 4FGL are colored with green and labeled with ‘4FGL’. Right panel: the residual TS map by subtracting the \(\gamma\)-rays from all sources inlcuding SG0837. And the two TS maps share same scaled colorbar for convenient comparison. was derived to be R. A.=\(129^{\circ}.93\) and decl.=\(-24^{\circ}.67\) with a 68% containment radius of \(\sim 1^{\circ}.8\). The position and extension are shown in Figure 1 with an orange plus and a dashed circle, respectively. From Figure 1, we can see that the angular radius visually encloses the most of extended \(\gamma\)-ray emissions from SG0837. We updated the model file with the best-fit values in this likelihood analysis with the 2D Gaussian template. Then a residual TS map was created based on the updated model by fixing all model parameters for all sources (including SG0837) in the model file, which is shown in the right panel of Figure 1. 
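As a side note, the conversion from these likelihood-ratio test statistics to the quoted significance levels can be checked with a short chi-squared calculation. The sketch below is schematic and simply plugs in the TS values and degrees of freedom given above; it is not part of the Fermitools analysis chain.

```
from scipy.stats import chi2, norm

def ts_to_sigma(ts, dof):
    """Gaussian-equivalent significance of a likelihood-ratio test statistic."""
    p = chi2.sf(ts, dof)      # chance probability under the null hypothesis
    return norm.isf(p)        # one-sided Gaussian equivalent

# TS_ext ~ 93 with one additional degree of freedom -> ~9.6 sigma
print(ts_to_sigma(93.0, dof=1))

# TS = 178.88 for the 2D Gaussian template with five degrees of freedom
# -> roughly the ~12 sigma level quoted in the text
print(ts_to_sigma(178.88, dof=5))
```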
None of obvious excess suggests that 2D Gaussian template can well describe the SG0837's \(\gamma\)-ray emission. Meanwhile, we also tested other complex models, i.e. 2D Gaussian/uniform disk plus a point source, and the likelihood results are shown in Table 1. The TS of the point source is not significant with \(\rm TS_{{}_{PS}}<16\), and these models are not favored in the further analysis. Moreover, we also performed a timing analysis by folded the _Fermi_-LAT data using the ephemeris for PSR J0837-2454 (Pol et al., 2021), while no creditable pulsation was found. These analysis make PSR J0837-2454 to be a radio loud and \(\gamma\)-ray quiet pulsar. ### Spectral Analysis To test SG0837's spectral properties in \(\gamma\)-rays, **we used** a spectral form of LP to fit the \(\gamma\)-rays from SG0837 in 0.1-500.0 GeV. And the corresponding \(\rm TS_{LP}\) value of SG0837 is calculated to be \(\sim\)198. Other parameters \(\alpha\), \(\beta\), and \(E_{\rm b}\) are fitted to be \(2.23\pm 0.22\), \(0.29\pm 0.09\), and \(1.4\pm 0.4\) GeV, respectively. The corresponding integrated photon and energy fluxes are \((1.36\pm 0.21)\times 10^{-8}\) photons cm\({}^{-2}\) s\({}^{-1}\) and \((1.26\pm 0.18)\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\), respectively. The variation of TS value between LP and PL models is \(\Delta\rm TS=TS_{LP}-TS_{PL}\sim 19\), corresponding to a significance level of \(\sim 4.4\sigma\). Hence we suggest that the LP model is relatively better to the PL one to describe the gamma-ray emission from SG0837. Then we saved the best-fit parameters into a final model file, and fixed the spectral parameters for all the sources in the model file to be the above likelihood analysis values. The normalizations for the sources within \(10^{\circ}\) around the ROI center and the two diffuse backgrounds were left free. Based on the final model file, we extracted a spectral energy distribution (SED) for SG0837 in the energy range of 0.1-500.0 GeV by adopting the spatial template of the 2D Gaussian and the global spectral shape of LP. The data were divided into 12 equal logarithmically spaced energy bins, and the individual likelihood analysis was employed for each bin. We show the \(\gamma\)-ray spectrum of SG0837 in Figure 2, in which the 95% flux upper limit is calculated for the energy bin with TS value of SG0837 lower than 10. The global fitting with LP and PL models are also plotted with the blue solid and yellow dashed lines, respectively. Comparing the best-fit LP and PL models in Figure 2, SG0837's \(\gamma\)-ray SED in 0.1-500.0 GeV is relatively well described by the LP model, which is in agreement with the result shown by \(\Delta\)TS. ## 3 Possible Gamma-Ray Origins ### Supernova Remnant According to the discovery in Pol et al. (2021), thus the origin of the SNR scenario for the \(\gamma\)-ray emission from SG0837 is considered. The \(\gamma\)-ray spectra of tens of _Fermi_-LAT observed SNRs can be basically divided into two classes (Zeng et al., 2019): one has the hard GeV \(\gamma\)-ray spectrum with the spectral curvature at \(\sim\)TeV, which corresponds to the young-aged SNR, like RX J1713.7-3946 (H. E. S. S. Collaboration et al., 2018) or RX J0852.0-4622 (H. E. S. S. Collaboration et al., 2018). And the \(\gamma\)-ray emissions from these SNRs are typically suggested to be from inverse Compton scattering (ICS) of accelerated electrons (leptonic process). 
Another class shows the spectral break at \(\sim\)GeV, which corresponds to the old-aged SNRs interacting with molecular clouds, like IC 443 and W44 (Ackermann et al., 2013). And such \(\gamma\)-ray emissions are suggested to be from the decay of neutral pions produced by the inelastic proton-proton collisions (hadronic pro Figure 2: \(\gamma\)-ray SED of SG0837 obtained from the data in 0.1–500 GeV. The best-fit LP and PL spectral shapes are shown as the blue solid and yellow dashed lines, respectively. The flux data points with \(\rm TS>10\) are shown with the black squares with the pluses as their uncertainties, and the black arrows indicate the 95% upper limits. cess). The \(\gamma\)-ray spectrum of SG0837 is similar to that of IC 443 and W44, etc, and the hadronic model is also considered here for it. Considering the observational fact that the size of the \(\gamma\)-ray emission region is much larger than that of the remnant, which is shown as in Figure 3, the escaping scenario of protons is suggested, i.e. the \(\gamma\)-ray emission is produced by the protons accelerated and escaped from the shock of SNR, like the SNR W28 (Aharonian et al., 2008; Cui et al., 2018). Here we assume instantaneous injection of protons into an uniform emission zone at \(T=28.6\) kyr. Here the age of remnant is assumed to be the characteristic age of PSR J0837-2454 (Pol et al., 2021). The spectrum of injected protons is adopted to be a power-law with an exponential cutoff \(E_{\rm cut}\): \[Q_{\rm inj}(E)=Q_{0}E^{-\Gamma}exp(-E/E_{\rm cut}). \tag{1}\] Here the spectral index and cutoff energy of protons are adopted to be \(\Gamma=2.0\) and \(E_{\rm cut}\) = 3 PeV, respectively. And the total energy of injected protons is assumed to be \(W_{\rm p,inj}\) = \(\eta E_{\rm SN}\), where \(\eta\) is the fraction of the kinetic energy of SNR, \(E_{\rm SN}\), converted into the escaped proton energy, and the typical value of \(E_{\rm SN}\) is adopted to be \(10^{51}\) erg (Woosley and Janka, 2005; Vink and Kuiper, 2006). The proton spectrum within the emission region can be derived as Liu et al. (2020): \[N_{p}(E,t)=\frac{Q(E)}{[4\pi D(E)T]^{\frac{3}{2}}}\int_{0}^{R}4\pi r^{2}dr\, \exp\left[-\frac{\rm r^{2}}{4D(E)T}\right]. \tag{2}\] And the diffusion coefficient of protons is assumed to be spatially consistent and energy dependent with \(D(E)=\chi D_{0}(E/E_{0})^{\delta}\), where \(D_{0}=3\times 10^{28}\) cm\({}^{2}\) s\({}^{-1}\) at \(E_{0}\) = 10 GeV, and \(\chi\) = 1.0 corresponds to the typical value of Galactic diffusion coefficient (Blasi, 2013). For an injected source spectrum given by \(Q(E)\propto E^{-\Gamma}\) and \(D(E)\propto E^{\delta}\), the spectrum of escaped protons, \(N_{p}(E)\), approximately equal \(Q(E)\) at low energies where the diffusion radius defined as \(r_{\rm diff}=\sqrt{4D(E)T}\) is much smaller than the size of the emission region \(R\). Here \(R\) is adopted to be 28.3/6.3 pc with the distance of 0.9/0.2 kpc. And at high energies, \(N_{p}(E)\) will follow \(N_{p}(E)\propto E^{-\left(\Gamma+\frac{3}{2}\delta\right)}\), where the spectral break shown at \(E_{\rm p,bre}\) with \(R=\sqrt{4D(E_{\rm p,bre})T}\). With the different parameters of \(\eta\), \(\chi\) and \(\delta\) adopted, the different spectra of escaped protons in the \(\gamma\)-ray emission region are produced and the corresponding \(\gamma\)-ray fluxes are calculated with the \(naima\) package (Zabalza, 2015). 
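The escaping-proton spectrum of Eq. (2) can be evaluated numerically as sketched below. This is only a schematic illustration of the shape of \(N_{p}(E)\): the injection normalization \(Q_{0}\) is set to unity instead of being tied to \(W_{\rm p,inj}=\eta E_{\rm SN}\), and the remaining parameter values are those quoted in the text for the \(d=0.9\) kpc case.

```
import numpy as np
from scipy.integrate import quad

pc, kyr = 3.086e18, 3.156e10         # cm, s
D0, E0  = 3e28, 10.0                 # cm^2/s at 10 GeV
chi, delta = 1.0, 1.0 / 3.0          # diffusion normalization and index
Gamma, E_cut = 2.0, 3e6              # injection index, cutoff (3 PeV in GeV)
T, R = 28.6 * kyr, 28.3 * pc         # age and emission-region radius (d = 0.9 kpc)

def D(E):                            # diffusion coefficient D(E) = chi*D0*(E/E0)^delta
    return chi * D0 * (E / E0)**delta

def Q(E):                            # injection spectrum, arbitrary normalization Q0 = 1
    return E**(-Gamma) * np.exp(-E / E_cut)

def N_p(E):                          # escaped-proton spectrum inside radius R (Eq. 2)
    r_diff2 = 4.0 * D(E) * T
    integral, _ = quad(lambda r: 4.0 * np.pi * r**2 * np.exp(-r**2 / r_diff2), 0.0, R)
    return Q(E) * integral / (np.pi * r_diff2)**1.5

for E in [1.0, 10.0, 1e3, 1e5]:      # GeV
    print(f"E = {E:.0e} GeV, r_diff/R = {np.sqrt(4*D(E)*T)/R:.2f}, E^2 N_p = {E**2 * N_p(E):.3e}")
```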
And the value of the ambient gas density is assumed to be \(n_{\rm gas}\) = 1.0 cm\({}^{-3}\) considering the absence of the observations of molecular clouds in this region. The resulting hadronic \(\gamma\)-ray flux with the different parameters are shown in Figure 4. Compared with the spectra of \(\delta=1/3\) with Kolmogorov turbulence for the Figure 4: Modeling of the \(\gamma\)-ray spectra with the hadronic escaping model. The different lines indicate the scenarios with the different \(\eta\), \(\chi\), \(\delta\) and distance values as shown in the legend. The cyan dotted–dashed line shows the differential sensitivity of CTA-North (50 hrs; Cherenkov Telescope Array Consortium et al., 2019). Figure 3: Radio imaging with a wide-field view around SG0837 in the stacked 170–231 MHz band data. The archival data are obtained from the TIFR GMRT sky survey, as reported in Figure 6 of Pol et al. (2021). PSR J0837–2454 is flagged with a plus, while the cross stands for the position of PSR J0838–2621 with a characteristic age of \(1.3\times 10^{5}\) kyr (Burgay et al., 2006). The pink contours show the GeV \(\gamma\)-ray emission of SG0837 drawn from Figure 1. diffusion coefficient (Ptuskin et al., 2006), the higher value with \(\delta\) = 1/2 for Kraichnan turbulence gives a much softer \(\gamma\)-ray spectrum in the high energy, which is not very consistent with the observation. With the typical value of Galactic diffusion coefficient, \(\chi\) = 1.0, the total energy of injected protons is fitted to be 2.5 \(\times\) 10\({}^{51}\) erg, which is not reasonable. And the total energy of escaped protons above 1 GeV in this region is calculated to be \(W_{\rm p}\) = 1.5\(\times\) 10\({}^{49}\) (\(n_{\rm gas}\)/1.0 cm\({}^{-3}\))\({}^{-1}\) erg. It should be noted that the much higher total energies could be attributed to the underestimated of the gas density in this region. In addition, by fixing the total energy of injected protons to be 10\({}^{50}\) erg, \(\eta\) = 0.1, the \(\gamma\)-ray spectrum with the distance of 0.9 kpc also can be explained by the hadronic escaping model with a lower diffusion coefficient, and such value is needed to be one order of magnitude lower than the typical Galactic value. And for the distance of 0.2 kpc, the total energy of injected protons need to be about 4 \(\times\) 10\({}^{50}\) erg with \(\eta\) = 0.4. And the corresponding total energy of escaped protons in the \(\gamma\)-ray emission region are estimated to be \(W_{\rm p}\) = 1.2\(\times\)10\({}^{49}\) (\(n_{\rm gas}\)/1.0 cm\({}^{-3}\))\({}^{-1}\) erg and \(W_{\rm p}\) = 8.4\(\times\)10\({}^{47}\) (\(n_{\rm gas}\)/1.0 cm\({}^{-3}\))\({}^{-1}\) erg for the distance of 0.9 kpc and 0.2 kpc, respectively. ### Pulsar Wind Nebula Taking into account the detected pulsar in the \(\gamma\)-ray emission region, a scenario of the pulsar wind nebula (PWN) driven by PSR J0837-2454 is also considered. Such extended \(\gamma\)-ray emissions are also detected in several typical PWNe, such as HESS J1825-137 (Principe et al., 2020), HESS J1640-645 (Xin et al., 2018), etc. However, these \(\gamma\)-ray PWNe detected by _Fermi_-LAT are driven by the energetic pulsars with spin-down powers between 10\({}^{36}\) and 10\({}^{39}\) erg s\({}^{-1}\)(Acero et al., 2013). And the spin-down luminosity of PSR J0837-2454 with \(\dot{E}\) = \(5.5\times 10^{34}\) erg s\({}^{-1}\) seems to be too low to produce such energetic \(\gamma\)-ray emissions around it. 
So we suggest that the PWN scenario for the observed \(\gamma\)-ray emission for SG0837 is not favored. ### Pulsar Halo Along with the evolution of the PWN into the interstellar medium (ISM), the energetic particles could escape and their transport becomes to be dominated by diffusion. And these escaped particles could form a detectable halo around the pulsar, which is defined as a pulsar halo. Such halos are first detected as the extended TeV \(\gamma\)-ray emissions around the nearby low power pulsars Geminga and PSR B0656+14 (Abeysekara et al., 2017). And the searching for the GeV \(\gamma\)-ray emission for these pulsar halos is ongoing. While only the GeV \(\gamma\)-ray emission from Geminga halo is detected by _Fermi_-LAT (Di Mauro et al., 2019), which is still under debate (Xi et al., 2019). Therefore, we also consider the possible origin as a pulsar halo for SG0837 around PSR J0837-2454 that detected here and discuss the potential TeV observation of it. For the diffusion process in the pulsar halo, the diffusion size of particles is calculated as \(r_{\rm d}=2\sqrt{Dt}\), and \(D\) represents the diffusion coefficient of particles. Considering the updated distance of 0.9 or 0.2 kpc for PSR J0837-2454 (Pol et al., 2021) and the 68% containment radius of 1\({}^{\circ}\).8, the physical size of the \(\gamma\)-ray emission region is calculated to be 28.3 or 6.3 pc. Adopting the characteristic age of PSR J0837-2454 \(\tau_{\rm c}\) = 28.6 kyr, the diffusion coefficient is estimated to be \(2\times 10^{27}\) cm\({}^{2}\)/s or 1 \(\times 10^{26}\) cm\({}^{2}\)/s for d = 0.9 or 0.2 kpc, respectively. And such values are much lower than the typical Galactic value of cosmic rays with \(D\simeq 3\times 10^{28}\) cm\({}^{2}\) s\({}^{-1}\)(Blasi, 2013). Based on the definition of an electron halo in Giacini et al. (2020), namely that of over-density of relativistic electrons around pulsar compared with the ISM, we calculated the energy density \(\epsilon_{\rm e}\) in relativistic particles around pulsar with established associated with the \(\gamma\)-ray emissions with \(\epsilon_{\rm e}\) = \(E_{\rm inj}\)/\(V\), where \(V\) is the volume of the \(\gamma\)-ray emission region. And considering the non-detection of the TeV \(\gamma\)-ray emission from SG0837, the total injected energy is calculated based on the pulsar properties with \(E_{\rm inj}\) = \(\dot{E}\tau_{\rm c}\), where \(\dot{E}\) and \(\tau_{\rm c}\) are the present spin-down power and characteristic age of the pulsar. For PSR J0837-2454, the energy density around is estimated to be 0.01 eV cm\({}^{-3}\) or 1.0 eV cm\({}^{-3}\) for the distance of \(d\) = 0.9 or 0.2 kpc. We replot the Figure 2 in Giacini et al. (2020) by adding the values of PSR J0837-2454 in our Figure 5, together with PSR J0622+3749, which is identified to be a pulsar halo by LHAASO (Aharonian et al., 2021). From Figure 5, we can see that the energy density around PSR J0837-2454 with \(d\) = 0.2 kpc and its current spin-down luminosity are close to the characteristics of TeV halo, like Geminga and PSR B0656+14. However, the characteristic age of PSR J0837-2454 is at least one order of magnitude lower than that of other halos. Hence, the extended \(\gamma\)-ray emission of SG0837 is not much favored for the halo scenario. Nonetheless, the potential TeV \(\gamma\)-ray emission from this source could be expected by the Cherenkov telescopes in the future. ## 4 Summary Pol et al. 
(2021) claimed a discovery and timing for a young pulsar PSR J0837-2454 with \(P\)=629.4 ms and \(\dot{P}\)=3.5\(\times\)10\({}^{-13}\) s s\({}^{-1}\) by using the radio data from the Parkes radio telescope. Moreover, an extended low-surface-brightness diffuse emission around PSR J0837-2454 was also detected by the radio data from the GLEAM, which suggests it to be a SNR candidate. Motivated by these, we analyzed the 14 yr \(\gamma\)-ray data from the _Fermi_-LAT observations surrounding PSR J0837-2454. Interestingly, we found a significant extended \(\gamma\)-ray emission named as SG0837 at a significance level of \(\sim 12\sigma\) (see Figure 1), which is spatially coincident with the SNR candidate shown as in Figure 3. And SG0837 has a spatial extension with a 68% containment radius of \(\sim 1^{\circ}.8\). The extension significance level is \(9.6\sigma\) with a 2D Gaussian spatial model comparing with a point source model. And its SED in 0.1-500.0 GeV can be well described by the LP model. PSR J0837-2454 is one of the relatively young pulsars compared with other cataloged pulsars. Pol et al. (2021) had shown 74 pulsars with characteristic age \(\leqslant 28.6\) kyr in their Figure 9 and summarised that 80% of 74 pulsars are often associated with a SNR and/or PWN. Considering the above percentage coupled with a diffuse emission in radio and an extended emission in \(\gamma\)-rays spatially coincident with the pulsar, we suggest that SG0837 is correlated with the SNR candidate around PSR J0837-2454. In our spatial analysis, no significant point source was found by subtracting the extended \(\gamma\)-ray emission from SG0837. And the \(\gamma\)-ray pulsation of PSR J0837-2454 was also not found in the timing analysis. These factors make PSR J0837-2454 to be a radio loud and \(\gamma\)-ray quiet pulsar. Several scenarios for the potential origins of the extended \(\gamma\)-ray emission are discussed, such as a SNR, PWN, or pulsar halo. Based on the model fitting results, see the discussion in Section 3, the \(\gamma\)-ray emission origin of the SG0837 is preferred for the SNR scenario. And the future potential detection in the TeV band by the Cherenkov Telescope Array in the northern hemisphere (CTA-North; Cherenkov Telescope Array Consortium et al., 2019) and the molecular clouds observations could be help to test the different models. We thank anonymous referee for her/his very helpful suggestions. This work is supported in part by the National Natural Science Foundation of China No. 12163006, No. 12233006 and No. 12103040, the Basic Research Program of Yunnan Province No. 202201AT070137, the joint foundation of Department of Science and Technology of Yunnan Province and Yunnan University No. 202201BF070001-020, and the Natural Science Foundation for Young Scholars of Sichuan Province, China (No. 2022NSFSC1808).
2307.04192
Self-Adaptive Sampling for Efficient Video Question-Answering on Image--Text Models
Video question-answering is a fundamental task in the field of video understanding. Although current vision--language models (VLMs) equipped with Video Transformers have enabled temporal modeling and yielded superior results, they are at the cost of huge computational power and thus too expensive to deploy in real-time application scenarios. An economical workaround only samples a small portion of frames to represent the main content of that video and tune an image--text model on these sampled frames. Recent video understanding models usually randomly sample a set of frames or clips, regardless of internal correlations between their visual contents, nor their relevance to the problem. We argue that such kinds of aimless sampling may omit the key frames from which the correct answer can be deduced, and the situation gets worse when the sampling sparsity increases, which always happens as the video lengths increase. To mitigate this issue, we propose two frame sampling strategies, namely the most domain frames (MDF) and most implied frames (MIF), to maximally preserve those frames that are most likely vital to the given questions. MDF passively minimizes the risk of key frame omission in a bootstrap manner, while MIS actively searches key frames customized for each video--question pair with the assistance of auxiliary models. The experimental results on three public datasets from three advanced VLMs (CLIP, GIT and All-in-one) demonstrate that our proposed strategies can boost the performance for image-text pretrained models. The source codes pertaining to the method proposed in this paper are publicly available at https://github.com/declare-lab/sas-vqa.
Wei Han, Hui Chen, Min-Yen Kan, Soujanya Poria
2023-07-09T14:54:30Z
http://arxiv.org/abs/2307.04192v4
# SAS Video-QA: Self-Adaptive Sampling for Efficient Video Question-Answering ###### Abstract. Video question-answering is a fundamental task in the field of video understanding. Though current state-of-the-art video-language pretrained models have yielded appealing performance, they come at the cost of huge computational power and are thus hard to deploy on many platforms with limited resources. An economical workaround simply samples a small portion of frames and tunes an image-text model on these sampled frames. However, the sampling methods adopted by these VLMs are quite simple and straightforward: such methods are aimless and often inevitably omit the key frames from which the correct answer can be deduced, and the situation becomes worse as the sampling sparsity increases, which is particularly the case when the video lengths increase. To mitigate this issue, we propose two frame sampling strategies, namely the _most dominant frames_ (MDF) and _most implied frames_ (MIF), to maximally preserve those frames that are most likely vital to the given questions. MDF passively minimizes the risk of key frame omission in a bootstrap manner, while MIF actively searches key frames customized for each video-question pair with the assistance of auxiliary models. The experimental results on three public datasets and three advanced VLMs (CLIP, GIT and All-in-one) demonstrate that our proposed strategies can boost the performance of image-text pretrained models. The source codes pertaining to the method proposed in this paper are publicly available at [https://github.com/declare-lab/sas-vqa](https://github.com/declare-lab/sas-vqa).

visual language models, video sampling, sampling strategy
## 1. Introduction In this family of approaches, image frames or clips (consecutive frames, as shown in Fig. 1d) are sampled from raw videos, cut into patches, and then encoded through a visual encoder (e.g., ViT [(7)]). X-CLIP [(25)] further inserts cross-frame communication modules (transformer blocks) to construct connections across timestamps. The output representations from the visual encoder are concatenated as prefixes which are subsequently added at the head of the embedded question sequences. The whole sequences are fed into the decoder network to generate the predicted answers. To select the best set of frames for the question, we require some sampling strategies, which also play a crucial role in the dataset preparation stage and can profoundly affect models' performance. A sampling strategy for videos can be mathematically understood as a way of generating a set of integer numbers (indices of candidate frames) within a given range (the length of that video). To the best of our knowledge, the video sampling strategies used in previous works can generally be clustered into four types, as in Fig. 1. Uniform and random sampling have the same meaning as their corresponding mathematical definitions, i.e., the frame indices are generated at equal intervals or purely at random. Wall random first splits the whole sequence into several segments of equal lengths and then performs random sampling in each segment. Clip-level sampling is another family of sampling strategies, which takes several clips (short runs of consecutive frames, Fig. 1d) out of a video. This method is preferred by video-text models due to the preservation of temporal information. However, video-QA datasets contain a large number of videos with switching shots and scenes, and on such videos these sampling strategies inevitably select a set of video frames without regard to the input question--they may be drawn to long-lasting segments regardless of whether similar content has already been sampled, and overlook the short segments that are really important for answering the given question. Take the video-question-answer triplet in Fig. 2 as an example. To answer the two questions, we have to refer to two other frames that cannot be fetched through uniform sampling (termed key frames). In this case, simple uniform sampling may incur unexpected errors. To alleviate this issue, we propose two self-adaptive and more targeted sampling strategies: the _most dominant frames_ (MDF) and _most implied frames_ (MIF). MDF selects the most (locally) dominant frames in a video, i.e., frames whose semantic content is highly similar to that of their surrounding counterparts. In other words, it focuses on the frames in the _steady state_, where objects and scenes change or move slowly, and neglects those in the _transient state_, where they change at high speed.
This could minimize the risk of missing important frames, especially in the non-transient video-QA task in which we have to depend on some static information. In contrast, MIF absorbs the knowledge from automatic captions and picks the frames that can generate the most implied caption to a given question. A pretrained textual question-answer grader is then applied to measure how much the caption implies a possible answer to that question. In a nutshell, our contribution in this paper encompasses: 1. We propose two video sampling methods, the _most dominant frames_ (MDF) and _most implied frames_ (MIF), for the video question-answering task. We will then give detailed illustrations of these two algorithms. 2. We test our proposed sampling methods on three image-text models: CLIP [(27)], GIT [(33)] and All-in-one [(32)], and on three publicly-available video-QA datasets: MSVD-QA, MSRVTT-QA [(38)] and TGIF-Frame [(11)], under typical and innovative settings. The results show a consistent accuracy increment by our two sampling methods. 3. We give a comprehensive analysis of the results and run ablative experiments in extensive application scenarios. These supplementary statistics further demonstrate the effectiveness and robustness of our proposed methods. ## 2. Related Work ### Visual Language Models Since the unprecedented success of CLIP [(27)] and ALIGN [(12)] in the field of zero-shot multimodal learning, there is a trend in training large VLMs by applying the image-text contrastive loss (ITM) [(15; 20; 42; 44)] as the target function to endow them with better multimodal understanding capability. With devised structure and pre-training procedure, these models extend the original achievement from the zero-shot image classification to many other vision-language tasks, such as image captioning, visual question answering, multimodal retrieval, etc. Early VLMs for multi-task purposes frequently adopt a bi-encoder architecture [(18; 19; 27)], where visual and textual modality are separately encoded in their dedicated encoders and finally simply combined for downstream tasks. Recent achievements turn to the more efficient GPT-style [(3)] architecture, which takes the output sequences from visual encoders as the visual prefixes and jointly tunes the decoder and visual encoder [(1; 17; 31)]. Moreover, they also find the naturally acquired capability of multimodal few-shot learning in these VLMs. Although state-of-the-art VLMs have been proven excellent at almost all image-text tasks, they are restricted when the input data type is switched to video--even though image and video are in the same modality. A generally accepted explanation is that temporal correlation can no longer be neglected, and therefore we require more advanced models to deal with video inputs. In this sense, more powerful but more expensive video encoders are on the way. ### VLM with Video Input VLMs pretrained on image-text pairs with an image encoder can hardly applied to video tasks, since they rarely capture the temporal correlation between frames. There are mainly three types of solutions for this issue: The most straightforward recipe [(29; 40)] replaces image encoders in these VLMs with video encoders that incorporate temporal convolution, like S3D [(37)] and Video-Swin-Transformer [(23)]. Alternatively, Lei et al. [(16)] downsample the original video into many clips (a clip is a set of continuous frames). 
They forward these frames through the image encoder one by one to obtain the frame-level representations, and then employ another simple neural network to fuse them into a unified representation. The third choice is to add projection modules on the top of the image encoder so that encoding results from different frames can be processed to a single representation. This routine is harder to implement because it actually changes nothing of an image-text model and requires careful tuning to yield competitive performance. Luckily, many researchers work in this way and succeed in getting excellent models. Ni et al. (2018) integrate an attention module to provide temporal modeling for the image encoder. Rasheed et al. (2019) leverage the average pooling operation to immediately get a unit-length visual representation. They show that this incredibly simple approach can boost CLIP's performance on several visual-language tasks by a large margin. Wang et al. (2019) linearly project many image vectors into the same size as text, and treat the sequence of these projected vectors as the visual prefix in the decoder-only architectures. This design moves the attention calculation into the decoder model. They reformulated downstream tasks into the generative style and also exhibit promising results. ## 3. Method In this section, we first briefly recap the definition of the video-QA task. Then we list the backbone models tested on and how the question-answering task are formulated on these models. Finally, we describe our two sampling algorithms in detail. ### Problem Definition Given a short video \(V=\{v_{1},v_{2},...,v_{T}\}\) of \(T\) frames and a literal question \(Q=\{q_{1},q_{2},...,q_{l}\}\) of \(I\) tokens, a VLM \(\mathcal{M}\) is expected to predict an answer \(A=\{a_{1},a_{2},...,a_{n}\}\) to match a reference answer which serves as a valid response to the given question. The reference answer must be some elements resided in that video, or in other words, one can only answer that question after watching the video. In the generative setting, at inference time the predicted answer should be generated along with the question sequence until the end-of-sequence ("[EOS]") token, which can be written as: \[[\hat{Q};\hat{A}]=\mathcal{M}(V^{\prime},Q) \tag{1}\] where \(V^{\prime}\subset V\) is a set of sampled frames, \(\hat{Q}\) and \(\hat{A}\) denote the question and answer part in the generated output, and \([;]\) denotes the concatenation of the two sequences. Because the decoding needs to look up the entire vocabulary rather than choose from a predefined set of several candidate answers, this type of video QA Figure 1. Commonly used video sampling strategies in previous works. The sampled frames are in black and unselected frames are in white. Figure 2. Uniformly sampled frames from a video in the msrvtt-qa (Weng et al., 2019) dataset and two of the questions. The brackets are the timestamps where we can get the cues for corresponding answers from the video. The QA-pair in the red box cannot be grounded from the four sampled frames. is also called _open-ended_ video-QA (Kang et al., 2017). For fair comparison, we keep the size of the sample set \(V^{\prime}\) fixed for all sampling methods. 
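A minimal sketch of the open-ended formulation in Eq. (1): the sampled frames act as a visual prefix, and the answer is whatever the model generates after the question up to the end-of-sequence token. The `sample` and `generate` callables below are placeholders for a frame-sampling strategy and a VLM decoder; they do not correspond to the actual CLIP/GIT/All-in-one interfaces.

```
from typing import Callable, List

def answer_video_question(frames: List, question: str,
                          sample: Callable, generate: Callable, n_frames: int = 6) -> str:
    """Open-ended video-QA as in Eq. (1): prepend sampled frames as a visual
    prefix and decode the answer that follows the question.

    `sample` and `generate` are placeholders for a frame-sampling strategy
    (e.g. MDF/MIF) and a VLM's autoregressive decoder, respectively."""
    v_prime = sample(frames, n_frames)                 # V' subset of V, fixed size for all strategies
    output = generate(visual_prefix=v_prime,
                      prompt=question, eos_token="[EOS]")
    # The generated sequence is [Q_hat; A_hat]; the answer is what follows the question.
    return output[len(question):].replace("[EOS]", "").strip()
```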
In evaluation, we use item-wise accuracy as the performance metric, defined as: \[acc=\frac{1}{|\mathbf{Q}|}\sum_{i=1}^{|\mathbf{Q}|}\mathbf{1}(A_{i}=\hat{A_{i}}) \tag{2}\] where \(\mathbf{Q}\) is the entire set of questions in the dataset and \(\mathbf{1}(\cdot)\) is the indicator function that equals 1 only if the expression is true. ### Most Dominant Frames (MDF) It has been pointed out in early video sampling papers (Zhu et al., 2017; Wang et al., 2018) that the sampling rate in each temporal region should be proportional to the object motion speed. Besides, due to the limited number (3 or 6 in our experiments) of frames in sampling, if the sampled frames are temporally too close, they are very likely to contain analogous content and the truly essential frames may be missing from the eventual sampling results. To address these two concerns, we build our solution on (1) the VLM's self-perception within its own vision module; (2) the wall setting (Fig. 1c). The first intuition comes from the theory and experience of representation learning with large pretrained models (Beng et al., 2016; Chen et al., 2017; Chen et al., 2017), which suggests that representations output by well-tuned large models encode meaningful semantic information. Since we only focus on the relative similarity between frames rather than on the content itself, we can harness the vision encoder of these VLMs (if they have one) to acquire these embeddings and calculate the pairwise similarity. We then search for those frames that have a higher similarity with their neighbours (the sense in which they "dominate" their local context). Concretely, MDF first utilizes the in-model visual encoder to encode all video frames into a sequence of vectors \(E=\{e_{1},e_{2},...,e_{T}\}\). Then it computes the cosine similarity \(S\in\mathbb{R}^{T\times T}\) between each vector pair. To kick off the searching process, we quantify _dominance_ by defining the _neighbor cumulative similarity_ (NCS) of a frame. For the \(i^{th}\) frame \(V_{i}\), NCS is calculated as \[NCS(i)=\sum_{j=i-W,j\neq i}^{i+W}S_{ij} \tag{3}\] where \(W\) is the wall width. The algorithm continues searching until it finds the \(N\) frames with the highest \(NCS\) values, as described in Alg. 1. Considering the disparity in the lengths of videos, instead of keeping a constant \(W\), we set \(W\) automatically in a self-adaptive way: \[W_{i}=L_{i}/(\lambda\cdot N) \tag{4}\] where \(L_{i}\) is the length of video \(i\) in terms of frame numbers and \(\lambda\) is the constant width-adjusting rate that controls the scope of the search at every step. Fig. 3 visualizes an example of the searching results on the similarity map. ``` Input: Video frames \(V=\{v_{1},v_{2},...,v_{T}\}\), vision model \(\mathcal{M}\), width-adjusting rate \(\lambda\) Output: Visual prefix \(F=\{f_{1},f_{2},...,f_{N}\}\) 1 Encode frames using the vision model \(E=\mathcal{M}(V)=\{e_{1},e_{2},...,e_{T}\}\) 2 Compute the frame similarity matrix \(S\), \(S_{ij}=\mathbf{cos}(e_{i},e_{j})\), and \(NCS\); set \(W\) according to eq. 3 and 4. 
3 Initialize \(i\leftarrow\arg\max_{i}NCS(i)\), \(F=\{f_{i}\}\), index set \(I=\{1,...,T\}\setminus\{i-W+1,...,i+W-1\}\) 4 while \(|F|<N\) and \(I\neq\emptyset\) do 5 \(j\leftarrow\arg\max_{i\in I}NCS(i)\); 6 \(F\gets F\cup\{f_{j}\}\) 7 \(I\gets I\setminus\{j-W+1,...,j+W-1\}\) // if we cannot find enough frames that satisfy the interval requirement, simply pick the frames with the Top-\(N\) NCS scores 8 if \(|F|<N\) then 9 \(J\gets Top(NCS,N)\) 10 return \(F=\{f_{j},j\in J\}\) 11 else 12 return \(F\) ``` **Algorithm 1** Most Dominant Frames (MDF) (a simplified sketch of this procedure is given in the code example below, after the description of MIF). Figure 3. An example of the 6-frame sampling process by MDF. The heatmap visualizes the frame similarity matrix calculated as the cosine value between pairs of frame vectors. The entry at the \(i^{th}\) row and \(j^{th}\) column represents the similarity between frame \(i\) and frame \(j\). Blue points are the eventually extracted frames in the video. ### Most Implied Frames (MIF) Distinct from the question-agnostic MDF (i.e., its samples are independent of the given question), which passively minimizes the risk of missing dominant frames, MIF more actively searches for the frames most correlated with the target question. This process is carried out by two auxiliary models--a caption model (\(\mathcal{M}_{\text{c}}\)) and a question-answer scoring model (\(\mathcal{M}_{\text{s}}\)). As depicted in Fig. 4 and Alg. 2, we reduce the computational cost by uniformly sampling \(T^{\prime}\) frames from the original video (\(N<T^{\prime}\ll T\)). The caption model \(\mathcal{M}_{\text{c}}\) takes every single frame as input and outputs a caption. Then \(\mathcal{M}_{\text{s}}\) computes the matching scores between the target question and the generated captions. The matching scores reflect how likely each frame is to imply the answer to that question. Lastly, we rank these frames according to these scores and pick the top \(N\) of them as the sampled results. Note that there is usually more than one question provided in the dataset for most of the videos. Therefore, unlike MDF, which always generates the same sampling results for different questions attached to the same video, MIF can produce a more personalized, question-specific sampling set. 
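For concreteness, the following is a minimal NumPy sketch of the MDF procedure in Alg. 1 referenced above. The frame embeddings are assumed to be precomputed by the backbone's vision encoder, all function and variable names are illustrative rather than taken from any released implementation, and the NCS window is simply clipped at the video boundaries.

```python
import numpy as np

def mdf_select(embeddings: np.ndarray, num_frames: int, lam: float = 2.0) -> list:
    """Most Dominant Frames (Alg. 1): greedily pick the frames with the highest
    neighbour cumulative similarity (NCS) while enforcing a minimum interval W."""
    T = embeddings.shape[0]
    W = max(int(T / (lam * num_frames)), 1)      # self-adaptive wall width, Eq. (4)

    # Cosine-similarity matrix S (Eq. 3 uses S_ij = cos(e_i, e_j)).
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = normed @ normed.T
    # NCS(i): sum of similarities over the window [i-W, i+W], excluding i itself.
    ncs = np.array([S[i, max(i - W, 0):min(i + W + 1, T)].sum() - S[i, i] for i in range(T)])

    selected, available = [], set(range(T))
    while len(selected) < num_frames and available:
        j = max(available, key=lambda i: ncs[i])     # remaining frame with the highest NCS
        selected.append(j)
        available -= set(range(j - W + 1, j + W))    # remove indices within the wall around j
    if len(selected) < num_frames:                   # fallback: top-N NCS regardless of interval
        selected = [int(i) for i in np.argsort(-ncs)[:num_frames]]
    return sorted(selected)
```

A question-specific MIF sampler can be sketched analogously by replacing the NCS score with the caption-question matching score produced by the scoring model, as formalised in Alg. 2 below.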
``` Input: Video frames \(V=\{v_{1},v_{2},...,v_{T}\}\), caption model \(\mathcal{M}_{\text{c}}\), question-answer scoring model \(\mathcal{M}_{\text{s}}\), question \(Q\), downsampled set size \(T^{\prime}\) or rate \(r\) Output: Visual prefix \(F=\{f_{1},f_{2},...,f_{N}\}\) 1 Downsample the frames by the downsample rate \(r\), \(V^{\prime}=downsample(V,r)\), or to the fixed length \(T^{\prime}\), \(V^{\prime}=downsample(V,T^{\prime})=\{v^{\prime}_{1},v^{\prime}_{2},...,v^{\prime}_{T^{\prime}}\}\) 2 Generate a caption for every downsampled frame, \(C=\mathcal{M}_{\text{c}}(V^{\prime})=\{c_{1},c_{2},...,c_{T^{\prime}}\}\) 3 Compute the matching score between each caption and the question, \(s_{i}=\mathcal{M}_{\text{s}}(c_{i},Q)\) 4 Rank the frames by their scores and return the top \(N\), \(F=\{f_{j},\,j\in Top(s,N)\}\) ``` **Algorithm 2** Most Implied Frames (MIF) ## 4. Experiments ### Datasets To examine our proposed method, we conduct extensive experiments on the following three commonly used benchmark datasets: _MSVD-QA and MSRVTT-QA_ [38]. These two datasets are adapted from two general video captioning datasets--the Microsoft Research Video Description Corpus (Chen et al., 2019) and the MSR-VTT dataset (Wang et al., 2019). Both datasets have five types of questions--_what, where, who, when, how_. All answers are in the format of a single word. _TGIF-QA_ [11]. The TGIF-QA dataset contains 165K QA pairs for the animated GIFs from the TGIF dataset (Kumar et al., 2019). Its question-answer pairs are annotated via crowdsourcing with a carefully designed user interface to ensure quality. TGIF-QA has three question types: frame, transition, and (repetition) count. We only test on the frame-QA task because the others do not belong to the open-ended QA category. ### Backbone Models The backbone VLMs utilized in our experiments are as follows: * **CLIP**(Kumar et al., 2019). CLIP is a large VLM that focuses on zero-shot transfer onto diverse multimodal downstream tasks. 
It is composed of two modality-specific encoders to process input modality signals separately. In our experiments, we also modify its structure by adding a single-layer transformer decoder on the top of the two encoders (dubbed "CLIP-dec", see Fig. 5). We decode for only one step to get the answer, not alike other generative VLMs that predict the whole sequence containing both the question and answer words. We also implement a baseline (CLIP, Uni) that predicts over the answer vocabulary based on the multimodal feature vectors, in order to show the benefits from structural modification directly. * **GIT**(Wang et al., 2019). GIT is one of the state-of-the-art VLMs for video question answering tasks, released by Microsoft Research. It adopts ViT-B-16(Kumar et al., 2019) as its visual encoder and has a GPT-style decoder that receives both the encoded image patches (as prefix) and textual embeddings to generate the entire sequence of the question and answer in an auto-regressive fashion. Currently the GIT family consists of four versions1. In our experiments, we tune GIT-Base on these three datasets (denoted as GIT in later context for simplicity). Footnote 1: GIT-Base, GIT-Large, GIT and GIT2, as of May 2023 * **All-in-one**(Wang et al., 2019). All-in-one is another family of VLMs which follows the philosophy of _learning-by-fusion_. The model is composed of many stacked multimodal attention layers called unified transformer that takes concatenated video-text input as the basic fusion modules. Similar to previous two VLMs, by appropriate formulation, it can employ the output embeddings to solve many downstream video-language tasks. Particularly, we use All-in-one(-Base) in all our experiments. To enforce a fair comparison, we run both training and testing stages for each VLM on a single NVIDIA RTX-A6000 GPU (except All-in-one because its implementation only has multi-GPU version, therefore we run it on 2 GPUs) while holding other hyperparameters and settings consistent with the default ones introduced in their original papers or codes (e.g., number of frames sampled per video, learning rate, training epoch, numerical precision in computation, etc). Gradient accumulation is applied to enable the large batch size (\(\geq 512\)) required in the fine-tuning process. To further reduce the computational complexity, all experiments are implemented with the pytorch Automatic Mixed Precision (AMP) 2 package. The Figure 4. MIF workflow. Here we just show an example of how it selects one frame out of two frames. checkpoints in our finetuning stage can all be found and downloaded from publicly available links. By default "CLIP" and "All-in-one" repectively denote CLIP-ViT-base-patch16 3 and All-in-one-Base 4. For GIT-related models, we follow (Srivastava et al., 2017) to finetune the pretrained GIT-Base 5 on three datasets, although there is already a released finetuning checkpoint for msrvtt-qa6). Particularly, we have yet known the exact sampling strategy adopted in GIT on three datasets. To this end, we run and examine the results on uniform sampling and find they exceed the reported numbers on 2 of 3 datasets. Hence, we treat the uniform sampling as baseline for GIT and CLIP-series (because there is not open-sourced implementation provided for CLIP on these datasets as well). All-in-one instead has the publicly available code, which clearly shows its sampling strategy. Therefore, we just simply reproduce and report the result ("All-in-one, Reproduced") using released code as baseline for comparison. 
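As mentioned above, the large effective batch sizes (\(\geq 512\)) are obtained by combining gradient accumulation with PyTorch automatic mixed precision. The snippet below is a generic sketch of that training-loop pattern, not the exact code of any of the released implementations; the model, optimizer and data loader are placeholders, and the model is assumed to return the training loss directly.

```python
import torch

def train_epoch(model, loader, optimizer, accum_steps: int = 32, device: str = "cuda"):
    """Gradient accumulation with automatic mixed precision (AMP). With a micro-batch
    of 16 examples and accum_steps=32, the effective batch size is 512."""
    scaler = torch.cuda.amp.GradScaler()
    model.train()
    optimizer.zero_grad(set_to_none=True)
    for step, (frames, questions, answers) in enumerate(loader):
        with torch.cuda.amp.autocast():
            loss = model(frames.to(device), questions, answers)  # assumed to return the LM loss
            loss = loss / accum_steps                            # average over accumulated micro-batches
        scaler.scale(loss).backward()
        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)        # unscale gradients and take an optimizer step
            scaler.update()
            optimizer.zero_grad(set_to_none=True)
```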
For all experiments, we keep the sampling strategy (including its hyperparameters, if any) unchanged in training and testing. Footnote 3: [https://huggingface.co/openai/clip-vt-base-patch16](https://huggingface.co/openai/clip-vt-base-patch16) Footnote 4: [https://github.com/showlab/all-in-one](https://github.com/showlab/all-in-one) Footnote 5: [https://huggingface.co/microsoft/git-base-msrvtt-qa](https://huggingface.co/microsoft/git-base-msrvtt-qa) Footnote 6: [https://huggingface.co/microsoft-git-base-ccco](https://huggingface.co/microsoft-git-base-ccco) In MDF, we use each model's inherent vision encoder to encode the sampled frames, and then calculate the cosine values between these vectors as the measure of frame similarity. A special case is All-in-one, which does not possess an independent visual encoder, as introduced above. Hence, we use ViT-B-16 (the same visual encoder as CLIP and GIT) as a "pseudo visual encoder", and follow the same procedure to obtain the sampled frames in each video. In MIF, we use GIT-base pretrained on COCO for captioning 7 to annotate the downsampled video frames and BERT 8 to evaluate how well the generated captions match the target questions. Footnote 7: [https://huggingface.co/microsoft-git-base](https://huggingface.co/microsoft-git-base) Footnote 8: [https://huggingface.co/microsoft-git-base-ccco](https://huggingface.co/microsoft-git-base-ccco) ### Evaluation Metrics and Baselines **Evaluation Metrics.** In all models, the sampled raw frames \(V^{\prime}\) are resized to match the model-acceptable scales and then normalized. The VLMs then take these frames as input and embed them into a sequence of vectors. Since the decoding mechanisms differ across these models, we illustrate them one by one. In the non-generative VLM (CLIP), the outputs from both modality encoders first pass through a transformer decoder layer and a classification layer: \[\hat{A}=f(E_{o},E_{q}) \tag{5}\] In the generative VLMs (CLIP-Dec, GIT), the visual embeddings (from the visual encoder, acting like a prefix prepended to the text) and the textual embeddings (from the embedding layer) constitute the input of the decoder. The decoder keeps generating the whole question and answer sequence in an auto-regressive manner: \[\log P(Q,A|V)=\sum_{t=1}^{n+l-1}\log P(y_{t+1}|y_{1},y_{2},...,y_{t},V) \tag{6}\] In All-in-one, the model first generates answer predictions \(z_{i}\) for each frame. Then, these predictions are fused together by summation to form a consensus at the video level (Srivastava et al., 2017): \[p=\frac{1}{S}\sum_{i=1}^{S}z_{i} \tag{7}\] **Baseline Models.** We compare the results of the listed image-text pretrained models to other models of similar size that have (1) an image encoder but no pretraining or a different pretraining procedure (including the pretraining task selection and design, the objective function, the datasets and the annotation methods, etc.) (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016), or (2) a video encoder tuned at training time, or that merely use feature vectors extracted from pretrained video networks (I3D (Krizhevsky et al., 2014), S3D (Srivastava et al., 2017)) (Krizhevsky et al., 2014; Srivastava et al., 2017; Srivastava et al., 2017). 
For the baselines that serve as our backbone networks and finetuning starting points, we report our reproduced results as a more accurate benchmark, since we found that many of these results differ from those reported in the original papers owing to disparities in implementation environments. In particular, since we did not find any details in the paper or in official implementations online regarding the sampling strategy of GIT, and our implementation with uniform sampling in both training and testing achieves results comparable to the reported ones (Srivastava et al., 2017) on 2 of 3 datasets, we treat this implementation as the reproduced result for GIT. ### Results **Results on CLIP and CLIP-Dec.** Encoder-only models have been shown to struggle with fine-grained tasks like visual question answering (Krizhevsky et al., 2016); therefore, we are interested in how the performance changes after augmenting this type of model with an auto-regressive decoder. As shown in Table 1, we find that the accuracy of the newly crafted architecture increases significantly on all three datasets. This may indicate that CLIP's potential on these tasks has not yet been fully exploited due to architectural limitations, and can be unlocked by subtle architecture modifications. Figure 5. The architecture of our implemented CLIP-Dec. The decoding result on the top of the decoder is just an example; its length should equal that of the ground-truth answers. When substituting uniformly sampled frames with MDF- and MIF-sampled ones, we note an increment of 1.2%\(\sim\)1.7% in the accuracy on the MSVD-QA and MSRVTT-QA datasets, and the increment is much larger on TGIF-Frame. However, the performance difference between the two proposed sampling strategies is not significant on MSVD and MSRVTT, which implies that there are some trade-offs between the passive and active pair of strategies. **Results on GIT and All-in-one.** Although we have verified that CLIP can be finetuned to perform much better on the video-QA task after architecture refinement, it still falls behind many other VLMs (compare Table 1 and Table 2) which either undergo a more complicated pretraining procedure, or have a video encoder to harness the temporal correlation. We further test our methods on these more advanced models to see how they behave. Table 2 displays the results on these two models. Firstly, it clearly shows that VLMs that have a video encoder, or that consume features pre-extracted from advanced video models, outperform their counterparts with only an image encoder, but the gap is narrowed or even reversed as VLM pretraining techniques develop: our selected benchmark models can achieve comparable or better results than them. Secondly, compared to the reproduced results under the same conditions with uniform sampling, both MDF and MIF raise the accuracy on all three datasets regardless of model architecture, especially on MSVD-QA and MSRVTT-QA--the accuracy significantly surpasses the reported and reproduced (higher than reported) values. This phenomenon is consistent with CLIP-Dec, which demonstrates that our proposed methods are broadly applicable to diverse datasets and models. Thirdly, the increment in accuracy is higher on models with more sampled frames (6 for GIT vs. 3 for All-in-one), from which we speculate that our proposed methods are possibly more suitable for VLMs that take more frames as input. 
Lastly, we notice that the improvement on TGIF-Frame by MDF and MIF over uniform sampling is more drastic than on the other two datasets. This somewhat contradicts our expectation, since a "video" (strictly, a GIF) in TGIF-Frame is much shorter, with fewer scene switches, than in the other two datasets; hence we expected it to be less sensitive to the sampling method. Meanwhile, All-in-one adopts wall-random sampling in training and uniform sampling in the testing phase, and correspondingly its reported accuracy on TGIF-Frame is higher. This fact further confirms that the TGIF-Frame dataset is more sensitive to the sampling strategy. ## 5. Analysis ### Impact of Frame Numbers We note that all baselines fix the input frame number during the experiments. Intuitively, the frame number should be regarded as a potential factor that contributes to the accuracy, since increasing it is equivalent to increasing the amount of training data used to tune a model. To see how much this factor affects the models' performance, and whether our proposed sampling methods can consistently enhance the accuracy when sampling more or fewer frames, we continue to fine-tune GIT on the MSRVTT-QA dataset with other input frame numbers. The results of this set of experiments are plotted in Fig. 6. From the figure we first observe that, as expected, the accuracy scores become higher as the number of input frames increases. Moreover, the accuracy of the two proposed sampling strategies, MDF and MIF, consistently surpasses the uniform baseline, indicating that they can indeed locate the key frames in videos even when the input length changes. ### Sampling Interval in MDF In MDF, we prevent the sampled frames from being excessively close by setting a hyperparameter \(\lambda\) (\(W=L/(\lambda\cdot N)\)). However, decreasing \(\lambda\) (enlarging the interval \(W\)) makes it more likely that the algorithm fails to sample enough frames, and in our algorithm, when this happens, the model turns to directly picking the \(K\) frames with the highest NCS scores, some of which may be too close together. In our experiments, we found that such situations would not always \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model & MSVD-QA & MSRVTT-QA & TGIF-Frame \\ \hline \multicolumn{4}{c}{**Models with an Image Encoder**} \\ \hline LGCN (Long et al., 2018) & 34.3 & - & 56.3 \\ QueST (Liu et al., 2018) & 36.1 & 34.6 & 59.7 \\ CLIP-BERT (Liu et al., 2018) & - & 37.4 & 60.3 \\ HAIR (Liu et al., 2018) & 37.5 & 36.9 & 60.2 \\ \hline \multicolumn{4}{c}{**Models with a Video Encoder**} \\ \hline HQGA (Liu et al., 2018) & 41.2 & 38.6 & 61.3 \\ VQA-T (Liu et al., 2018) & 46.3 & 41.5 & - \\ VIOLET (Liu et al., 2018) & 47.9 & 43.9 & 68.9 \\ MERLOT (Liu et al., 2018) & - & - & 69.5 \\ \hline \multicolumn{4}{c}{**GIT related**} \\ \hline GIT, Reported (Liu et al., 2018) & 51.2 & 41.0 & **69.1** \\ GIT, Uni (Reproduced) & 52.2 & 41.1 & 47.0 \\ GIT, MDF & **55.3** (\(\uparrow\)3.1) & 42.0 (\(\uparrow\)0.9) & 68.8 (\(\uparrow\)21.8) \\ GIT, MIF & 54.5 (\(\uparrow\)2.3) & **42.3** (\(\uparrow\)1.2) & 67.5 (\(\uparrow\)20.5) \\ \hline \multicolumn{4}{c}{**All-in-one related**} \\ \hline All-in-one, Reported (Liu et al., 2018) & 46.5 & 42.9 & 64.2 \\ All-in-one, Reproduced & 46.1 & 42.7 & 64.0 \\ All-in-one, MDF & **46.9** (\(\uparrow\)0.8) & 43.8 (\(\uparrow\)1.1) & **66.2** (\(\uparrow\)2.2) \\ All-in-one, MIF & 46.7 (\(\uparrow\)0.6) & **44.0** (\(\uparrow\)1.3) & 65.9 (\(\uparrow\)1.9) \\ \hline \hline \end{tabular} \end{table} Table 2. Experimental results on three datasets. Baseline results are from Zhong et al. (2019). GIT-Reported scores are from Wang et al. (2018). The numbers inside brackets are the increment over the corresponding uniform-sampling baseline. 
Best score in respective baseline categories are with an underline. Best scores on backbone models are in bold. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model & MSVD-QA & MSRVTT-QA & TGIF-Frame \\ \hline CLIP, Uni & 27.7 & 30.3 & 42.8 \\ CLIP-Dec,Uni & 33.8 & 33.7 & 47.2 \\ CLIP-Dec,MDF & **35.0** (\(\uparrow\)1.2) & 35.2 (\(\uparrow\)1.5) & **63.2** (\(\uparrow\)16.0) \\ CLIP-Dec,MIF & **35.0** (\(\uparrow\)1.2) & **35.4** (\(\uparrow\)1.7) & 61.8 (\(\uparrow\)14.6) \\ \hline \hline \end{tabular} \end{table} Table 1. Experimental results of CLIP and CLIP-Dec on three datasets. The numbers inside brackets are the increment over the uniform sampling baseline. result in performance degradation. To delve into its effects, we define the outcome where the collected \(K\) frames satisfy the interval requirements as "success" and otherwise as "failure". We calculate and plot the curve of success rate (\(r_{success}=\overline{n_{success}}/n_{total}\)) and accuracy against \(\lambda\) on three datasets produced by GT, as shown in Fig. 7. The horizontal axis denotes the hyperparameter \(\lambda\) that controls the minimal sampling interval. The figure indicates that is a critical point that failure will never happen if continuing to increase \(\lambda-\)we do not know the precise value but only mark the minimal value on the picture that we can earn 100% success. Moreover, there is no strong correlation between the success rate and model performance, but a minimum interval should be reached to ensure a promising performance. The performance peak is achieved under a hybrid sampling strategy (\(\lambda=2.3,r_{success}=79.1\%\)). ### Auto-generated Captions in MIF In MIF, we invoke a captioning model and anticipate it to provide an precise and informative annotation to each frame. Since intuitively, the question-answering matching judgement model can not probably differentiate nuance in two sentences if their pattern looks quite similar. However, the actual results are opposite to our expectation. Take our randomly selected video from MSVD-QA in Table 3 as an example, where Q1 and Q2 represent two questions "what does a small dog wildly play with?" and "what wildly plays with a ball?". First we observe that the titles generated by the VLM looks similar to each other, i.e., "_\(\sim\)noum- \(\sim\)verb- \(\sim\)prep phrases-_", suggesting that a model may tend to generate captions in a nearly fixed pattern. Moreover, the sentence similarity among these captions confuse the QA pair scoring model--Q1 and Q2 describe nearly the same scenario and should share some cue frames, but the key frame (the \(12th\) frame) is captured by Q1 but overlooked by Q2, as well as the secondary important frame (the \(3rd\) frame). Therefore, we believe that a captioning model that can provide diversified output and a robust scoring model that can offer objective and fair ratings to question-answer pairs are necessary to guarantee sampling effectiveness which is vulnerable to possible intermediate noises. ## 6. Conclusion In this paper, we focus on the frame sampling issue inhering in the task of video question-answering and propose two simple and effective methods--most dominant frames (MDF) and most implied frames (MIF). The core idea behind these two sampling strategies target at avoiding missing any key frames or actively seeking and locating them. We test the two methods on three advanced VLMs in different architectures and three popular datasets. 
The results indicate that both methods can significantly enhance the models' performance, supporting our claim that the sampling strategy has a considerable impact on the video question-answering task. Moreover, the success of these sampling strategies on models ranging from CLIP to All-in-one demonstrates the broad range of application scenarios for our proposed methods. \begin{table} \begin{tabular}{c|c|c|c|} \hline \hline ID & Caption & Q1 & Q2 \\ \hline 1 & a puppy playing with toys. & & \\ \hline 2 & a white puppy playing with a toy. & & \\ \hline 3 & a white puppy with black eyes and a blue ball. & ✓ & \\ \hline 4 & a puppy that is laying down on the floor. & & \\ \hline 5 & a puppy playing with a blue ball. & & \\ \hline 6 & a puppy that was found in a house. & & ✓ \\ \hline 7 & a puppy that is laying down on the floor. & & \\ \hline 8 & a puppy that is sitting on the floor. & & ✓ \\ \hline 9 & a puppy is sitting on the floor. & ✓ & ✓ \\ \hline 10 & a white puppy sitting on a table. & & ✓ \\ \hline 11 & a white puppy laying on the floor. & ✓ & ✓ \\ \hline 12 & a puppy playing with a blue ball. & ✓ & \\ \hline 13 & a white dog standing on top of a floor. & ✓ & ✓ \\ \hline 14 & a white dog walking on the floor. & ✓ & \\ \hline 15 & a small white dog playing with a ball. & & \\ \hline 16 & a dog chewing on a toy in a cage. & & \\ \hline \hline \end{tabular} \end{table} Table 3. An example of frame captions and sampling results. "✓" means this frame is chosen to constitute the input together with the question of that column. Figure 6. Accuracy vs. number of input frames on the MSRVTT-QA dataset by GIT. Figure 7. Accuracy vs. sampling intervals in MDF on the MSRVTT-QA dataset by GIT.
2302.01634
A model for slip and drag in turbulent flows over superhydrophobic surfaces with surfactant
Superhydrophobic surfaces (SHSs) can reduce the friction drag in turbulent flows. In the laminar regime, it has been shown that trace amounts of surfactant can negate this drag reduction, at times rendering these surfaces no better than solid walls (Peaudecerf et al., Proc. Natl. Acad. Sci. USA 114(28), 7254-9, 2017). However, surfactant effects on the drag-reducing properties of SHSs have not yet been studied under turbulent flow conditions, where predicting the effects of surfactant in direct numerical simulations remains expensive by today's standards. We present a model for turbulent flow inclusive of surfactant, in either a channel or boundary-layer configuration, over long but finite-length streamwise ridges that are periodic in the spanwise direction, with period $P$ and gas fraction $\varphi$. We adopt a technique based on a shifted log law to acquire an expression for the drag reduction. The average streamwise and spanwise slip lengths are derived by introducing a local laminar model within the viscous sublayer, whereby the effect of surfactant is modelled by modifying the average streamwise and spanwise slip lengths. Our model agrees with available laboratory experimental data from the literature when conditions are clean (surfactant-free), or when there are low surfactant levels. However, we find an appreciable drag increase for larger background surfactant concentrations that are characteristic of turbulent flows over SHSs for marine applications.
Samuel D. Tomlinson, François Peaudecerf, Fernando Temprano-Coleto, Frederic Gibou, Paolo Luzzatto-Fegiz, Oliver E. Jensen, Julien R. Landel
2023-02-03T10:05:15Z
http://arxiv.org/abs/2302.01634v1
# A model for slip and drag in turbulent flows over superhydrophobic surfaces with surfactant ###### Abstract Superhydrophobic surfaces (SHSs) can reduce the friction drag in turbulent flows. In the laminar regime, it has been shown that trace amounts of surfactant can negate this drag reduction, at times rendering these surfaces no better than solid walls (Peaudecerf et al., Proc. Natl. Acad. Sci. USA 114(28), 7254-9, 2017). However, surfactant effects on the drag-reducing properties of SHSs have not yet been studied under turbulent flow conditions, where predicting the effects of surfactant in direct numerical simulations remains expensive by today's standards. We present a model for turbulent flow inclusive of surfactant, in either a channel or boundary-layer configuration, over long but finite-length streamwise ridges that are periodic in the spanwise direction, with period \(P\) and gas fraction \(\phi\). We adopt a technique based on a shifted log law to acquire an expression for the drag reduction. The average streamwise and spanwise slip lengths are derived by introducing a local laminar model within the viscous sublayer, whereby the effect of surfactant is modelled by modifying the average streamwise and spanwise slip lengths. Our model agrees with available laboratory experimental data from the literature when conditions are clean (surfactant-free), or when there are low surfactant levels. However, we find an appreciable drag increase for larger background surfactant concentrations that are characteristic of turbulent flows over SHSs for marine applications. keywords: Drag reduction, Superhydrophobic surfaces, Marangoni effects + Footnote †: journal: International Journal of Heat and Fluid Flow ## 1 Introduction Superhydrophobic surfaces (SHSs) combine hydrophobic chemistry and surface roughness to entrap gas layers in their texture, reducing the drag when compared to solid walls. Harnessing this feature in turbulent flows could benefit a number of marine, industrial and environmental applications. For example, SHSs could help reduce energy consumption and associated gas emissions in the shipping industry, which is responsible for around 2.5% of global greenhouse gas emissions and 13% of NOx and SOx emissions (Smith et al., 2015). Early investigations into laminar flows over SHSs modelled the liquid-solid and liquid-gas interfaces as a mixture of no-slip and shear-free boundaries (where the liquid-gas interface is often assumed to be flat), thereby predicting large reductions in drag (Rothstein, 2010). However, recent experimental studies in laminar flow conditions have shown that trace amounts of surfactant can strongly impair the drag-reducing effect of SHSs (Kim and Hidrovo, 2012; Bolognesi et al., 2014; Peaudecerf et al., 2017; Song et al., 2018). Motivated by these findings, laminar theories have been constructed and compared with numerical simulations inclusive of surfactant (Landel et al., 2020; Temprano-Coleto et al., 2023), which demonstrate that surfactant effects should be taken into account to improve model predictions of the drag in channels bounded by SHSs. In this study, we are interested in quantifying the effect of surfactant on the drag reduction in _turbulent_ flows over SHSs with long but finite-length streamwise ridges that are periodic in the spanwise direction, for marine applications (see Fig. 1). 
Surfactant traces have been measured in many natural settings, such as seawater (Pereira et al., 2018; Frossard et al., 2019), rivers, estuaries and fog (Lewis, 1991; Facchini et al., 2000). Surfactants can adsorb at liquid-gas interfaces and lower the surface tension between liquid and gas (Manikantan and Squires, 2020). They are transported by the flow and accumulate at stagnation points (liquid-gas-solid contact lines), inducing an adverse Marangoni stress at the interface which increases the drag (see Fig. 1). In order to model flows inclusive of surfactant, Landel et al. (2020) assumed that the surfactant concentration is small, and therefore, that there is a uniform interfacial concentration gradient and shear rate along the liquid-gas interface. They constructed a scaling theory to model the average streamwise slip and drag in a two-dimensional channel with periodic streamwise ridges in the low-Reynolds-number regime. The theory described in Landel et al. (2020) was extended to three dimensions by Temprano-Coleto et al. (2023). In particular, Temprano-Coleto et al. (2023) found that for many small-scale applications, the detrimental effect of surfactants essentially depends on a ratio between a surfactant mobilization length and the grating length. The mobilization length depends on the normalised surfactant concentration, Marangoni number, Damkohler number and Biot number. For most small-scale applications, the mobilization length is of the order of centimetres. If the grating length is larger than the mobilization length, substantial slip, and thus significant drag reduction, can occur, as confirmed by laminar flow experiments (Peaudecerf et al., 2017; Song et al., 2018; Temprano-Coleto et al., 2023). Direct numerical simulations (DNSs) that resolve the SHS texture have been used to analyse the mechanisms behind drag reduction in turbulent channel flows with SHS ridges and posts, exclusive of surfactant (Park et al., 2013; Turk et al., 2014; Jelly et al., 2014; Rastegari and Akhavan, 2015; Egan et al., 2021). Park et al. (2013) performed DNS to examine the average streamwise slip length and drag in a turbulent channel flow with streamwise grooves that are periodic in the spanwise direction, whilst varying the gas fraction (\(\phi\in[0.5,\,0.94]\)) and the ratio of the SHS texture period to the channel height (\(P/H\in[0.09,\,3]\)). As the period in wall units is increased the viscous sublayer shrinks and the drag reduction appears to converge to the gas fraction of the SHS. Turk et al. (2014) carried out DNS to study the dependence of the drag reduction on the spanwise period of the SHSs in a turbulent channel flow (\(P/H\in[0.04,\,1.56]\)). When the period of the SHS is small, they find that the average streamwise slip length can be predicted by Stokes flow theory (Philip, 1972); they also show that this approximation breaks down when the period of the SHS becomes larger than approximately twenty wall units. Rastegari and Akhavan (2015) used DNS to investigate the mechanisms behind turbulent drag reduction for both SHS ridges and posts. The drag reduction is decomposed into a gain from the average streamwise slip length and a loss due to modifications to turbulent dynamics and secondary mean flows; these contribute to approximately 80% and 20% of the total drag reduction, respectively, for the friction Reynolds number of the no-slip flow (\(Re_{\tau_{0}}=223\)) considered in Rastegari and Akhavan (2015). 
Experimental studies have investigated the performance of SHSs in internal and external turbulent flows (Daniello et al., 2009; Park et al., 2014; Xu et al., 2021). Daniello et al. (2009) found a significant drag reduction in a turbulent channel flow bounded by SHSs with streamwise ridges that are periodic in the spanwise direction, when the viscous sublayer thickness is comparable to the period of the SHS. As discussed in Park et al. (2013), the drag reduction measured by Daniello et al. (2009) appears to reach a plateau as the viscous sublayer thickness reduces. They hypothesised that the drag reduction should asymptote towards the gas fraction, as the viscous sublayer thickness becomes small compared to the SHS texture period. Park et al. (2014) measured the drag reduction in a turbulent boundary layer flow over a longitudinally ridged SHS test section, which they find increases with increasing gas fraction and period of the SHS. However, they did not vary the boundary layer thickness by moving the test section with respect to the upstream origin of the boundary layer or by changing the Reynolds number. Xu et al. (2021) investigated the stability of the liquid-gas interface using a towing plate with a SHS test section made of periodic streamwise ridges in open water. They measured the drag reduction for varying Reynolds numbers, such that at large Reynolds numbers, they observed that a portion of the upstream region of the SHS grooves became wet. They found that reducing the streamwise length of the ridges can improve the drag reduction, due to the enhanced stability of the liquid-gas interface (however, results for laminar flows outlined in Temprano-Coleto et al., 2023, imply that shorter ridges would also make the SHS more susceptible to surfactant effects). A review by Gose et al. (2018) of fourteen experimental studies into the turbulent drag reduction for flows over SHSs shows broad discrepancies: the drag reduction ranges from \(-90\%\) (i.e. drag increase) to \(+90\%\), with five studies finding little (\(<20\%\)) or no drag reduction. A number of possible causes may explain these discrepancies, as discussed in detail in the review by Park et al. (2021). For example, the liquid-gas interface at the SHS can deform due to pressure differences in the fluid and gas cavity, which has been shown to alter the drag reduction in laminar and turbulent flows over SHSs depending on the protrusion angle (Teo and Khoo, 2009; Rastegari and Akhavan, 2018). Alternatively, the turbulence intensity may induce partial or complete wetting of the grooves containing the gas subphase, where the flow would no longer benefit from a flat shear-free interface (Rastegari and Akhavan, 2019; Xu et al., 2021). We neglect both of these features of SHSs here for simplicity, and instead focus on the effect of surfactants. As previously mentioned, surfactants have been shown to limit the drag-reducing effect of SHSs in laminar flows with a flat liquid-gas interface (Peaudecerf et al., 2017; Landel et al., 2020; Temprano-Coleto et al., 2023). However, their effect in turbulent flow conditions is yet to be investigated using theory, DNS or experiments. Figure 1: Diagram showing the mechanism by which the presence of surfactant can negatively impact the drag reduction for a flow over a SHS, with period \(P\), gas ridge (plastron) width \(W\) and gas fraction \(\phi=W/P\). 
A buildup of surfactant at the downstream stagnation point of a long but finite-length grating induces an adverse Marangoni force due to the reduction in surface tension (Peaudecerf et al., 2017). The adverse Marangoni force acts to reduce the average streamwise slip length \(\lambda_{x}\) and slip velocity \(U_{s}\) at the interface. The smaller average streamwise slip length (or slip velocity) reduces the drag reduction when compared to a surfactant-free flow over a SHS. By exploiting data from DNS which impose average streamwise and spanwise slip lengths at the SHS, semi-empirical models based on a shifted log law have been constructed that predict the drag reduction for turbulent channel flows over SHSs with streamwise ridges that are periodic in the spanwise direction (Fukagata et al., 2006; Busse and Sandham, 2012). Fukagata et al. (2006) proposed two independent mechanisms that can alter the drag and split the log-law shift into two contributions. Their model assumes that the characteristic size of the SHS texture is much smaller than the smallest length scale in the turbulent flow, so that the turbulent flow experiences a spatially averaged slip effect, averaged in planes parallel to the SHS. The spatially-averaged streamwise slip length increases the mean velocity and decreases the drag. The average spanwise slip length decreases the log law velocity and increases the drag. They found that the effect of the spanwise slip length on the drag reduction saturates as the spanwise slip length becomes large, following a nonlinear empirical relationship. The empirical relationship between the average spanwise slip length and the log law velocity shift proposed by Fukagata et al. (2006) was refined in Busse and Sandham (2012), who performed DNS for flows in SHS channels with streamwise grooves that are periodic in the spanwise direction, where the average slip lengths in the streamwise and spanwise directions are imposed at the boundary. Applying the average slip lengths that were imposed as boundary conditions in their DNS to the shifted log law model, both Fukagata et al. (2006) and Busse and Sandham (2012) found good agreement between their model and DNS. However, neither Fukagata et al. (2006) nor Busse and Sandham (2012) related the average streamwise and spanwise slip length to the geometry of the SHS texture, namely the gas fraction and the spanwise period of the SHS, in order to acquire a predictive model that requires only known input parameters. Luchini (2015) related the average slip length to the geometry of the SHS using the laminar solutions due to Philip (1972). Luchini's model could provide predictions to compare with experiments, where the average slip lengths are not known in general and can be hard to measure due to the size of the SHS texture. His model predictions for the drag reduction compare well with texture-resolving DNS simulations of turbulent flows over SHS. However, his predictions agree with DNS results for texture period in wall units up to roughly 30. The poor comparison at larger values may be due to the fact that the log law velocity shift used by Luchini (2015) does not saturate, as suggested by the DNS performed by Fukagata et al. (2006) and Busse and Sandham (2012). Here, we will combine the models proposed by Fukagata et al. (2006), Busse and Sandham (2012) and Luchini (2015) to relate the drag reduction to the relevant non-dimensional input parameters related to the flow and liquid properties and to the geometry, in the case without surfactant. 
Then, we will discuss how this model can be modified to include surfactant effects in order to predict their impact on the drag reduction for turbulent flows over SHS, which is the main objective of our study. This study investigates the potential effects of surfactant in turbulent flows, for both internal and external geometries, over SHSs made of long but finite-length streamwise ridges that are periodic in the spanwise direction (see Fig. 1). We use an existing laminar theory from the literature (Landel et al., 2020) to relate the shear rate at the liquid-gas interface to properties of the fluid, flow, geometry and surfactant. This allows us to construct a predictive model that relates the shear rate at the liquid-gas interface to the drag reduction, by combining elements from previous theories (Fukagata et al., 2006; Busse and Sandham, 2012; Luchini, 2015). We compare our model with available texture-resolving DNS (exclusive of surfactants) and laboratory experimental data in the literature. We use our model to discuss the potential role of surfactant in the drag-reduction performance of SHSs for applications in marine transport, where the surfactant concentrations found in natural environments may be much greater than those found in laboratory conditions (Pereira et al., 2018; Frossard et al., 2019; Temprano-Coleto et al., 2023). In Section 2, we formulate the problem and introduce the quantities used to assess the performance of a SHS: the average streamwise slip length and drag reduction. In Section 3, we formulate a model to assess the performance of a SHS. The model is based on the shifted log law for turbulent flow and uses slip lengths that include surfactant effects provided by laminar theories. In Section 4, we present results that compare our model to texture-resolving DNS and laboratory experiments in the literature. We then discuss the predictions of our model inclusive of surfactant in relation to the application of drag-reducing SHS in marine environments. In Section 5, we outline key outcomes and extensions of this theory. ## 2 Formulation ### Superhydrophobic surface flow configuration We consider a channel flow bounded by symmetric SHSs with channel height \(2H\) (see Fig. 2a) and a boundary layer flow over a single SHS with boundary layer thickness \(H=H(x)\) (see Fig. 2b). The SHS texture consists of long but finite-length ridges aligned with the main flow direction, where the ridges are periodic in the spanwise direction. The liquid is suspended over the SHS texture in the Cassie-Baxter state (Rothstein, 2010). The liquid, assumed incompressible and Newtonian, has dynamic viscosity \(\mu\) and density \(\rho\). A no-slip boundary condition is assumed at the ridge walls. We assume that the liquid-gas interfaces (referred to hereafter as 'plastrons') are flat, impermeable and have a constant Marangoni shear rate \(\gamma_{Ma}\); the Marangoni shear rate is generated by the concentration gradient that arises from surfactant build-up at the downstream stagnation point (Landel et al., 2020). We give a description of how Landel et al. (2020) relate the \(\gamma_{Ma}\) to the fluid, flow, geometry and surfactant in Appendix A. The three-dimensional time-dependent velocity field is defined by \(\mathbf{u}=u\mathbf{e}_{x}+v\mathbf{e}_{y}+w\mathbf{e}_{z}\), where \(\mathbf{e}_{x}\), \(\mathbf{e}_{y}\) and \(\mathbf{e}_{z}\) are the unit vectors that describe the streamwise (\(x\)), wall-normal (\(y\)) and spanwise (\(z\)) directions in a Cartesian coordinate frame. 
The origin of the Cartesian coordinate frame is at the bottom SHS, located at \(y=0\), on the right-hand-side corner of a ridge at \(z=0\). A plastron lies at \(y=0\) for \(0<z<G\), and a ridge lies at \(y=0\) for \(G<z<P=G+W\), with \(G\) the plastron width, \(W\) the ridge width and \(P\) the period of the SHS texture. The velocity vector is decomposed into time-averaged and fluctuating components, assumed to be of the form \(\mathbf{u}=(U,\,V,\,W)(\mathbf{x})+\mathbf{u}^{\prime}(\mathbf{x},\,t)\), to arrive at the Reynolds-averaged Navier-Stokes equations for a turbulent flow (Pope, 2000). We assume that the streamwise length of the ridges \(L\) is finite in order to generate the surfactant gradient that impedes the drag reduction, however, we also assume that \(L\) is much larger than \(G\), \(W\), \(P\) and \(H\), such that the flow is statistically invariant in the \(x\)-direction, \(U=U(\text{y},\,z)\), and \(|V|\), \(|W|\ll|U|\). In the channel flow configuration only, \(U\) is assumed to be symmetric in the \(y\)-direction with respect to \(y=H\). The friction velocity (or shear velocity) is denoted \(U_{\tau}=\sqrt{\tau/\rho}\) (\(U_{\tau_{0}}=\sqrt{\tau_{0}/\rho}\) for the no-slip flow), and the viscous length scale is written as \(\delta_{\tau}=\nu/U_{\tau}\) (\(\delta_{\tau_{0}}=\nu/U_{\tau_{0}}\) for the no-slip flow), with \(\nu=\mu/\rho\) the kinematic viscosity. Normalizing length scales and velocity scales using \(\delta_{\tau}\) and \(U_{\tau}\) for the SHS flow defines non-dimensional quantities in wall units, which we denote using a superscript +. To avoid confusion, we typically use the superscript notation with + only for the SHS flow, whereas for the no-slip flow, the normalisation is written explicitly (e.g. we use \(y/\delta_{\tau_{0}}\) rather than, say, \(y^{+0}\)). ### No-slip flow configuration As is commonly done in the literature, we compare the SHS flow to a reference flow with conventional no-slip walls, referred to hereafter as the 'no-slip flow'. More specifically, in the no-slip channel, the SHS texture is replaced by a no-slip wall for all \(x\) and \(z\). Hereafter, we use the subscript \(0\) to refer to quantities related to the no-slip flow, which differ from the corresponding quantities for the SHS flow. For instance, the time-averaged velocity field in the no-slip flow is \(U_{0}(\text{y})\), which is invariant in both \(x\) and \(z\). ### Constant flow rate and constant pressure gradient conditions Two flow conditions have been used in the literature to drive the flow in the SHS and no-slip channels, in order to set up a comparison. The SHS and no-slip flows can be driven by imposing the same constant flow rate (CFR), such that the bulk average velocities in both flows are equal and constant, \(\overline{U}=\overline{U}_{0}\). The overbar \(\tau\) represents a spatial average in both the \(y\) and \(z\) directions. Alternatively, the SHS and no-slip flows can be driven by imposing the same constant pressure gradient (CPG), such that the average shear stresses at the boundaries in both flows are equal and constant, \(\tau=\tau_{0}\), where \(\tau=\mu\langle\partial U/\partial y\rangle\) at \(y=0\) is the time- and space-averaged wall shear stress of the SHS flow, \(\tau_{0}\) is the time-averaged wall shear stress of the no-slip flow, and \(\langle\cdot\rangle\) represents a spatial average in the spanwise \(z\) direction. 
We include a description of both these conditions here as we convert DNS data from studies performed under CPG conditions to CFR conditions in Section 4. ### Independent non-dimensional parameters For the purposes of our study, the SHS flow has four independent non-dimensional parameters, which encode the SHS geometry, surfactant strength and driving condition, whilst the no-slip flow has only one independent non-dimensional parameter, which expresses the driving condition. For the SHS flow, two non-dimensional geometric parameters are related to the SHS texture, namely \(P/H\) and \(\phi\), which express the ratio of the SHS texture period to the wall-normal height and the gas fraction, respectively. The non-dimensional parameter that represents the surfactant strength, namely \(\gamma_{Ma}^{+}=\gamma_{Ma}/(\tau/\mu)=\gamma_{Ma}/(U_{\tau}/\delta_{\tau})\), is the time- and space-averaged interfacial shear rate due to surfactant divided by the wall shear rate of the SHS flow. If the two flows are driven under the CFR condition, the non-dimensional parameters are \(Re=H\overline{U}/\nu\) and \(Re_{0}=H\overline{U}_{0}/\nu\), which denote the bulk Reynolds numbers of the SHS and no-slip flows, respectively. Under the CFR condition, \(Re=Re_{0}\). Alternatively, if the two flows are driven under the CPG condition, the remaining non-dimensional parameters are \(Re_{\tau}=HU_{\tau}/\nu\) and \(Re_{\tau_{0}}=HU_{\tau_{0}}/\nu\), which denote the friction Reynolds numbers of the SHS and no-slip flows, respectively. Under the CPG condition, \(Re_{\tau}=Re_{\tau_{0}}\). Figure 2: Schematic of the (a) symmetric channel flow with channel height \(2H\) and (b) boundary layer flow with boundary layer thickness \(H(x)\). The top and bottom walls are made of long but finite superhydrophobic ridges that are periodic in the spanwise direction, such that the liquid is in the Cassie-Baxter state. A shear-rate condition due to the surfactant gradient is assumed at the plastrons and a no-slip condition is assumed at the ridges. The time-averaged fully-developed flow velocity in the streamwise \(x\) direction \(U\) is assumed invariant with \(x\) and periodic in the \(z\) direction with period \(P\). In this study, we model the drag-reducing effect of the SHS on the flow field, varying Reynolds number, SHS texture geometry and surfactant effects in the turbulent regime. We focus on the periodic flow region for \(0\leq z\leq P\), and in the channel flow configuration, we focus on the symmetric region for \(0\leq y\leq H\), at any \(x\). ### Superhydrophobic surface performance There are three main quantities of interest, commonly used in the literature, that characterise the local and global performance of the SHS flow compared to the no-slip flow. These quantities are functions of the non-dimensional parameters stated above. Firstly, the spanwise-averaged streamwise slip length (hereafter designated as the average streamwise slip length) is defined, dimensionally, as \[\lambda_{x}=\frac{U_{s}}{\langle\gamma_{I}\rangle}, \tag{1}\] where \(U_{s}=\langle U_{I}\rangle\) is the spanwise-averaged slip velocity at the SHS boundary, \(U_{I}(z)\) is the local time-averaged velocity at the SHS boundary \(y=0\) (see Fig. 1) and \(\gamma_{I}(z)=\partial U/\partial y\) is the local time-averaged shear rate at \(y=0\). The average streamwise slip length \(\lambda_{x}\) represents the extrapolated distance, below the wall, where \(U\) vanishes.
The slip length \(\lambda_{x}\) can be normalised with a relevant length scale, usually either \(H\) or \(\delta_{\tau}\), depending on whether the effect of local slip is being compared to the bulk flow, or to the viscous sublayer, respectively. Secondly, for flows under the CFR condition \(\overline{U}=\overline{U}_{0}\) (i.e. \(Re=Re_{0}\)), the drag reduction is defined as \[DR=\frac{\tau_{0}-\tau}{\tau_{0}}=1-\frac{Re_{\tau}^{2}}{Re_{\tau_{0}}^{2}}. \tag{2}\] Thirdly, for flows under the CPG condition \(\tau=\tau_{0}\) (i.e. \(Re_{\tau}=Re_{\tau_{0}}\)), one defines the added flux, or the relative increase in the bulk-averaged velocity, \[\frac{\Delta\overline{U}}{\overline{U}_{0}}=\frac{\overline{U}-\overline{U}_ {0}}{\overline{U}_{0}}=\frac{Re}{Re_{0}}-1. \tag{3}\] For turbulent flows under the CFR condition, the impact on \(DR\) of the turbulent flow interactions with the SHS texture can be difficult to interpret for flows near laminar-turbulent transition (Turk et al., 2014). As the friction Reynolds number of the SHS flow is lower than for the no-slip flow (i.e. \(Re_{\tau}<Re_{\tau_{0}}\)), the SHS flow may relaminarize and no longer offer a meaningful comparison to the no-slip flow. In contrast, the added flux \(\Delta\overline{U}/\overline{U}_{0}\) compares the SHS and no-slip flow under the CPG condition, such that the friction Reynolds numbers are the same, i.e. \(Re_{\tau}=Re_{\tau_{0}}\). Under the CPG condition, the bulk Reynolds number of the SHS flow increases, i.e. \(Re>Re_{0}\), such that a no-slip turbulent flow will correspond to a turbulent SHS flow. This increase in bulk Reynolds number tends to have a lesser impact on the global performance of the SHS, as measured through \(\Delta\overline{U}/\overline{U}_{0}\), owing to the homogeneity of the bulk turbulence properties of both SHS and no-slip flows, provided \(Re\) and \(Re_{0}\) are both sufficiently large for the turbulent flows to be fully developed. In this study, we assess the global performance of SHSs using \(DR\) as it is most usually calculated and discussed in experimental studies (Daniello et al., 2009; Park et al., 2014; Xu et al., 2021). However, some of the numerical results from the literature (Turk et al., 2014; Egan et al., 2021), which will be compared to model predictions, give only \(\Delta\overline{U}/\overline{U}_{0}\), and therefore, we include its discussions and outline a procedure to convert the data to from \(\Delta\overline{U}/\overline{U}_{0}\) to \(DR\) in Appendix B. To evaluate the global performance of a SHS texture, the relationships between \(DR\) and the relevant independent non-dimensional parameters is sought in the form \[DR=f\left(Re,\,\frac{P}{H},\,\phi,\,\gamma_{Ma}^{\dagger}\right), \tag{4}\] where \(Re=Re_{0}\) under the CFR condition and \(f\) is a function to be determined. For turbulent flows, \(DR\) in (4) could also be given as a function of \(P^{+}=P/\delta_{\tau}\), instead of \(P/H\)(e.g. Park et al., 2013). As mentioned earlier, for Stokes flows and stable laminar flows, the dependence on the Reynolds number can be neglected in (4) as \(Re\) is found to have negligible influence on \(DR\)(Park et al., 2013). ### Reference turbulent no-slip flow model For completeness, the canonical turbulent no-slip flow model is reported here. 
A log-law velocity profile holds for \(y\gg\delta_{\tau_{0}}\), \[\frac{U_{0}}{U_{\tau_{0}}}=\frac{1}{\kappa}\ln\left(\frac{y}{\delta_{\tau_{0}} }\right)+B+\Pi\left(\frac{y}{H}\right), \tag{5}\] where \(\kappa=0.41\) is the von Karman constant and \(B\approx 5.3\) is an empirical constant (Pope, 2000), and \(\Pi\) is the wake function. Note that the net effect of the wake function is expected to be small in our study, as we will discuss in SS3.2.1 when comparing flows with no-slip and SHS boundaries. In the viscous sublayer (\(y\lesssim 10\,\delta_{\tau_{0}}\)), the velocity field of the no-slip flow follows \[\frac{U_{0}}{U_{\tau_{0}}}=\frac{y}{\delta_{\tau_{0}}}. \tag{6}\] The bulk Reynolds number of the no-slip flow is defined as \[Re_{0}=\frac{\overline{U}_{0}H}{\nu}=\frac{1}{\nu}\int_{y=0}^{H}U_{0}\,\mathrm{ dy}. \tag{7}\] The bulk Reynolds number can be found by integrating the velocity profile. A common approximation is to neglect the flux associated with the viscous sublayer, thereby integrating the log law from \(y=0\) to \(H\)(Pope, 2000). To facilitate comparisons with SHS results at relatively low \(Re_{\tau_{0}}\), we retain the viscous sublayer in the calculation and switch from (5) to (6) at the value of \(y\) for which the two expressions for \(U_{0}\) are equal, which we write as \(y=\beta\delta_{\tau_{0}}\), where \(\beta=(\ln\beta)/\kappa+B\approx 11.2\)(Pope, 2000). Therefore, \(Re_{0}\) is calculated as \[Re_{0} = \frac{1}{\delta_{\tau_{0}}}\int_{y=0}^{\beta\delta_{\tau_{0}}} \frac{y}{\delta_{\tau_{0}}}\mathrm{dy}+\frac{1}{\kappa\delta_{\tau_{0}}}\int_ {y=\beta\delta_{0}}^{H}\left[\ln\left(\frac{y}{\delta_{\tau_{0}}}\right)+ \kappa B\right]\mathrm{d}y, \tag{8}\] \[= \beta\left(\frac{1}{\kappa}-B+\frac{\beta}{2}-\frac{\ln(\beta)}{ \kappa}\right)+Re_{\tau_{0}}\left[\frac{\ln\left(Re_{\tau_{0}}\right)}{\kappa}+ B-\frac{1}{\kappa}\right]. \tag{9}\] The relative contribution from the first integral in (8), accounting for the viscous sublayer, is usually negligible for no-slip flows (e.g. approximately 0.9% of the total \(Re_{0}\) at \(Re_{\tau_{0}}=180\)). However, this term can become significant for SHS flows, where the near-wall fluid can move much faster. ## 3 Model ### Low Reynolds number laminar model #### 3.1.1 Laminar slip lengths At low Reynolds numbers, for laminar flows, the slip velocity can be found by solving the incompressible Stokes equation for a linear shear flow in a semi-infinite domain with free-stream shear rate \(\tau\). At the solid wall, we have no slip. Following Landel et al. (2020), at the liquid-gas interface, the tangential stress balance in the streamwise (\(x\)) direction can be linearised for small surfactant concentrations, and therefore, we can assume that the surfactant gradient generates a uniform dimensional average Marangoni shear rate denoted by \(\gamma_{Ma}\) in the streamwise direction. Using transformations detailed in Appendix C, we can solve for the mean streamwise velocity field when \(\gamma_{Ma}\neq 0\), building on the solution previously found by Philip (1972) for the case \(\gamma_{Ma}=0\). The average streamwise slip length including surfactant effects is \[\lambda_{x}=\frac{P}{\pi}\left(1-\frac{\gamma_{Ma}}{\tau/\mu}\right)\ln\left( \sec\left(\frac{\pi\phi}{2}\right)\right). 
\tag{10}\] If we define \(\gamma_{Ma}^{+}=\gamma_{Ma}/(\tau/\mu)\), when \(\gamma_{Ma}^{+}=0\) the interface is unaffected by the surfactant (the average streamwise slip length \(\lambda_{x}\) is maximised) and when \(\gamma_{Ma}^{+}=1\) the interface is immobilised by surfactant (\(\lambda_{x}=0\)). However, we leave (10) in terms of \(\gamma_{Ma}\) because we will use the laminar scaling theory from Landel et al. (2020) to relate \(\gamma_{Ma}\) to the properties of the flow, geometry, liquid and surfactant, as detailed in Appendix A. If we consider the flow that is perpendicular to the ridges in clean conditions (surfactant-free), the average spanwise slip length is given by (Philip, 1972) \[\lambda_{z}=\frac{\lambda_{x}}{2}=\frac{P}{2\pi}\ln\left(\sec\left(\frac{\pi\phi}{2}\right)\right)\quad\text{when}\quad\gamma_{Ma}=0. \tag{11}\] However, when surfactants are present, the short spanwise length scale of the SHS implies that the liquid-gas interface is immobilised, or close to immobilisation, in the spanwise direction, as the threshold to achieve immobilisation over short distances is very low (Peaudecerf et al., 2017; Temprano-Coleto et al., 2023), such that \(\lambda_{z}=0\) when \(\gamma_{Ma}\neq 0\), i.e. as soon as small amounts of surfactants are present. Therefore, the average spanwise slip length is given by \[\lambda_{z}=\begin{cases}0\quad\text{when}\quad\gamma_{Ma}\neq 0,\\ \\ \frac{P}{2\pi}\ln\left(\sec\left(\frac{\pi\phi}{2}\right)\right)\quad\text{when}\quad\gamma_{Ma}=0.\end{cases} \tag{12}\]

#### 3.1.2 Channel flow configuration

In order to make a comparison with DNS studies in Section 4, we compare the laminar flow in a SHS channel to the no-slip flow in a no-slip channel, in the limit of \(H\gg P\). In general, the drag reduction can be computed numerically, or using separation of variables and dual series techniques (see e.g. Teo and Khoo, 2009). To compute \(DR\), one starts from the CFR condition \(\overline{U}_{0}=\overline{U}\), where \[\overline{U}_{0}=\frac{1}{HP}\int_{y=0}^{H}\int_{z=0}^{P}U_{0}\,\mathrm{d}y\,\mathrm{d}z, \tag{13}\] and \[\overline{U}=\frac{1}{HP}\int_{y=0}^{H}\int_{z=0}^{P}U\,\mathrm{d}y\,\mathrm{d}z. \tag{14}\] The flow fields, \(U_{0}(y)\) and \(U(y,z)\), are given by the solution to the incompressible Stokes equations. The velocity field of the no-slip flow is the canonical Poiseuille solution, leading to \[\langle U_{0}\rangle=U_{0}=\frac{1}{2\mu}\frac{\mathrm{d}p_{0}}{\mathrm{d}x}y\left(y-2H\right), \tag{15}\] with \(\mathrm{d}p_{0}/\mathrm{d}x\) the uniform pressure gradient in the no-slip flow. In the limit \(P/H\ll 1\) for the SHS channel, we can replace the mixed shear-rate/no-slip boundary condition by the homogenised boundary condition \(U_{s}=\langle U_{I}\rangle=\lambda_{x}\langle\gamma_{I}\rangle\), such that the SHS flow has velocity \[\langle U\rangle=\frac{1}{2\mu}\frac{\mathrm{d}p}{\mathrm{d}x}\left(y^{2}-2H\left(\lambda_{x}+y\right)\right), \tag{16}\] with \(\mathrm{d}p/\mathrm{d}x\) the uniform pressure gradient in the SHS flow. Calculating (13) and (14) using (15) and (16), the bulk average velocities are \[\overline{U}_{0}=\frac{1}{H}\int_{y=0}^{H}\langle U_{0}\rangle\,\mathrm{d}y=-\frac{\mathrm{d}p_{0}}{\mathrm{d}x}\frac{H^{2}}{3\mu}, \tag{17}\] and \[\overline{U}=\frac{1}{H}\int_{y=0}^{H}\langle U\rangle\,\mathrm{d}y=-\frac{\mathrm{d}p}{\mathrm{d}x}\frac{H(H+3\lambda_{x})}{3\mu}.
\tag{18}\] Then, using the definition (2), the drag reduction can be computed under the CFR condition, \(\overline{U}_{0}=\overline{U}\), to give \(\mathrm{d}p_{0}/\mathrm{d}x=(3\lambda_{x}/H+1)\mathrm{d}p/\mathrm{d}x\) and \[DR=\frac{3\lambda_{x}}{H+3\lambda_{x}}. \tag{19}\] ### Turbulent flow model #### 3.2.1 Shifted log law profile We assume that the bulk Reynolds numbers \(Re\) and \(Re_{0}\) are sufficiently high for the establishment of a fully-developed turbulent flow in both the SHS and no-slip configurations. To analyze the effect of surfactants on the drag reduction in the turbulent flow regime, we derive a model based on the (surfactant-free) shifted log-law technique proposed by Fukagata et al. (2006), and refined by Busse and Sandham (2012) and Luchini (2015). The shifted log-law technique is closed using the laminar solutions for the average streamwise (\(\lambda_{x}\)) and spanwise (\(\lambda_{z}\)) slip lengths based on semi-infinite shear flows (Philip, 1972). The streamwise and spanwise average slip lengths can be related to a uniform surfactant-induced Marangoni shear stress \(\gamma_{Ma}\) as shown in equations (10) and (12) for \(\lambda_{x}\) and \(\lambda_{z}\), respectively. Based on classical wall turbulent boundary layer flows, we assume that the turbulent boundary layer flow over the SHS contains two regions of variation close to the SHS boundary: an inner viscous sublayer and an outer log-law layer (see Fig. 1). We assume that \(P^{+}=P/\delta_{\tau}\ll 10\). This assumption implies that the viscous sublayer thickness, of order \(10\delta_{\tau}\), is much larger than the SHS texture period \(P\). In practice, however, models of this form provide reasonable approximations up to \(P^{+}\lessapprox 25\)(Fairhall et al., 2019). The flow near the SHS is homogenised by viscosity within the viscous sublayer since the layer affected by the SHS texture has a thickness of order \(P\)(Philip, 1972; Ybert et al., 2007). Thus, the SHS texture affects the turbulent bulk flow via homogenised quantities, such as the average streamwise and spanwise slip lengths. In the outer region, corresponding to \(y^{+}\gg 1\), the bulk flow velocity over the SHS is assumed to follow the shifted log-law model (Fukagata et al., 2006) \[U^{+}=\frac{1}{\kappa}\ln\left(y^{+}\right)+B+\Delta U^{+}(\lambda_{x}^{+}, \lambda_{z}^{+}), \tag{20}\] where \(U^{+}=U/U_{\tau}\). For the boundary layer flows considered herein, the log laws (5, 20) could be extended to include a wake function (Pope, 2000). However, if we assume that the wake function is the same over both a SHS and solid wall, then these terms will have a small effect on the drag reduction calculation. The term \(\Delta U^{+}\) is modelled as (Busse and Sandham, 2012) \[\Delta U^{+}(\lambda_{x}^{+},\lambda_{z}^{+})=U_{s}^{+}-\Delta U_{\text{loss} }^{+}=\lambda_{x}^{+}-\frac{4\lambda_{z}^{+}}{4+\lambda_{z}^{+}}. \tag{21}\] In (21), \(U_{s}^{+}\) describes the gain (positive shift in \(U^{+}\)) due to the streamwise slip length, since \(U_{s}^{+}=\lambda_{x}^{+}\) in wall units by definition. The term \(\Delta U_{\text{loss}}^{+}\) reflects the losses (negative shift in \(U^{+}\)) due to spanwise turbulent momentum transfer. The quantity \(\Delta U_{\text{loss}}^{+}\) is related to the normalised spanwise slip length, \(\lambda_{z}^{+}\), through the empirical relationship proposed by Busse and Sandham (2012), that is \(\Delta U_{\text{loss}}^{+}=4\lambda_{z}^{+}/(4+\lambda_{z}^{+})\). 
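To make the closure above concrete, the short Python sketch below evaluates the laminar slip lengths (10) and (12), the log-law shift (21) and, for reference, the laminar channel drag reduction (19). It is our own minimal illustration of these formulas, not code from the original study; the function names and example values are ours.

```python
import numpy as np

def slip_lengths(P, phi, gamma_Ma_plus):
    """Average streamwise and spanwise slip lengths, eqs. (10) and (12).

    P may be the dimensional period or P^+ in wall units; the slip lengths are
    returned in the same units.  gamma_Ma_plus = gamma_Ma / (tau/mu).
    """
    base = (P / np.pi) * np.log(1.0 / np.cos(np.pi * phi / 2.0))   # (P/pi) ln(sec(pi phi/2))
    lam_x = (1.0 - gamma_Ma_plus) * base
    lam_z = 0.5 * base if gamma_Ma_plus == 0.0 else 0.0            # eq. (12): spanwise immobilised if surfactant present
    return lam_x, lam_z

def delta_U_plus(lam_x_plus, lam_z_plus):
    """Shift of the log law, eq. (21), with the Busse & Sandham (2012) closure."""
    return lam_x_plus - 4.0 * lam_z_plus / (4.0 + lam_z_plus)

def laminar_DR(lam_x, H):
    """Laminar channel drag reduction, eq. (19), valid for P/H << 1."""
    return 3.0 * lam_x / (H + 3.0 * lam_x)

# Example: clean interface, P^+ = 40, phi = 0.5
lx, lz = slip_lengths(40.0, 0.5, 0.0)
print(lx, lz, delta_U_plus(lx, lz))
```

For instance, for a clean interface with \(P^{+}=40\) and \(\phi=0.5\), (10) gives \(\lambda_{x}^{+}\approx 4.4\).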
An alternative relationship for \(\Delta U_{\text{loss}}^{+}\) in (21) was proposed by Fukagata et al. (2006), based on an exponential dependence with \(\lambda_{z}^{+}\). We choose to employ the relationship of Busse and Sandham (2012) because of its simplicity and accuracy. We note that the modelling approach above is inspired by the work of Luchini et al. (1991) on riblets. Riblets are another type of passive drag-reducing surface using geometrical surface undulations at the boundary, which can modify the turbulent flow near the boundary to reduce drag. Luchini et al. (1991) proposed that for riblets \(\Delta U^{+}=\lambda_{x}^{+}-\lambda_{z}^{+}\), which is the linearised form of (21) and does not account for the saturation effects later proposed by Fukagata et al. (2006) and Busse and Sandham (2012) for SHSs (see Ibrahim et al., 2021, for a recent review on riblets and SHSs). #### 3.2.2 Average slip lengths The model in (20) must be closed to provide a fully predictive relationship, in the form (4), for the drag reduction \(DR\) as a function of the relevant input non-dimensional parameters: \(Re\) the bulk Reynolds number, \(P/H\) or \(P^{+}\) the non-dimensional texture period, \(\phi\) the gas fraction, and \(\gamma_{Ma}^{+}\) the non-dimensional Marangoni shear rate due to the effect of surfactant, which could be set to zero for surfactant-free flows. We close the model in (20) and (21) following the approach proposed by Luchini et al. (1991) for riblets (see also Luchini, 2015, for SHS). Since the flow in the viscous sublayer is dominated by viscosity, we assume that the average streamwise and spanwise slip lengths \(\lambda_{x}^{+}\) and \(\lambda_{z}^{+}\) follow the Stokes flow solutions (10) and (12) (normalised in wall units), which provide the dependence on \(P/H=P^{+}/Re_{\tau}\), \(\phi\) and \(\gamma_{Ma}^{+}\). We couple the flow within the viscous sublayer with the turbulent flow in the log-layer (20) through the characteristic shear rate driving the Stokes flow problems leading to \(\lambda_{x}^{+}\) and \(\lambda_{z}^{+}\) in (10) and (12). The shear rate \(\tau\) can then be related to \(Re\) by integrating the velocity profile (20), as shown in the next section, thereby fully closing the model for \(DR\). The normalisation of the average streamwise slip length in wall coordinates is well defined through \(\lambda_{x}^{+}=\lambda_{x}/\delta_{\tau}\). However, the normalisation of the average spanwise slip length in wall coordinates, \(\lambda_{z}^{+}\), is more subtle (Turk et al., 2014; Seo and Mani, 2016). Since the average shear stress in the spanwise direction is zero, by definition in this problem, it is unclear what the imposed stress should be for the spanwise Stokes flow leading to (11), and thus how \(\lambda_{z}\) should be normalised. To resolve this uncertainty, we note that \(\Delta U_{\text{loss}}^{+}\) in (21) represents the homogenised effect of the spanwise turbulent momentum transfer related to the turbulent flow interactions and the SHS texture through the viscous sublayer. We assume that the spanwise velocity fluctuations at the origin of the spanwise turbulent momentum transfer scale with the streamwise velocity fluctuations. This assumption is commonly made for wall turbulent boundary layers (Pope, 2000). 
It implies that the outer flow is homogenised in such a way that the average and fluctuating bulk shear stress in the streamwise and spanwise directions are of the same order of magnitude as the prescribed streamwise shear stress \(\tau\) (the only characteristic shear stress in the problem). Therefore, we normalise both the streamwise and spanwise average slip lengths using \(\delta_{\tau}\), with \(\lambda_{x}\) from (10) and \(\lambda_{z}\) from (12).

#### 3.2.3 Drag reduction

To compute \(DR\) and determine the relationship with known input parameters (4), we impose the CFR condition, \(\overline{U}_{0}=\overline{U}\), or equivalently \(Re_{0}=Re\), with \[Re=\frac{1}{\nu}\int_{y=0}^{H}\langle U\rangle\,\mathrm{d}y. \tag{22}\] We decompose the SHS flow into the outer turbulent bulk flow, which follows the shifted log law (20) and has bulk Reynolds number \(Re_{\text{log}}\), and the flow in the inner viscous sublayer, which we approximate by the Stokes solution described in Section 3.1 and which has bulk Reynolds number \(Re_{\rm sub}\), such that \(Re=Re_{\rm log}+Re_{\rm sub}\), where \[Re_{\rm log}=\frac{1}{\nu}\int_{y=\beta\delta_{\tau}}^{H}U\,{\rm d}y \tag{23}\] \[=\frac{\beta}{\kappa}\left\{1-\kappa\left[B+\Delta U^{+}(\lambda_{x}^{+},\lambda_{z}^{+})\right]-\ln(\beta)\right\}+\frac{Re_{\tau}}{\kappa}\left\{\ln\left(Re_{\tau}\right)+\kappa\left[B+\Delta U^{+}(\lambda_{x}^{+},\lambda_{z}^{+})\right]-1\right\}, \tag{24}\] with \(\Delta U^{+}(\lambda_{x}^{+},\lambda_{z}^{+})\) given in (21), and \(\lambda_{x}\) and \(\lambda_{z}\) given in (10) and (11)-(12), respectively; and \[Re_{\rm sub}=\frac{1}{P\delta_{\tau}}\int_{z=0}^{P}\int_{y=0}^{\beta\delta_{\tau}}U_{P}^{+}\,{\rm d}y\,{\rm d}z. \tag{25}\] The velocity field \(U_{P}\) inside the viscous sublayer is given by (see Appendix C), \[U_{P}^{+}=y^{+}+\left(1-\gamma_{Ma}^{+}\right)\Im\left(\frac{P^{+}}{\pi}\arccos\left(\frac{\cos\left(\frac{\pi\theta^{+}}{P^{+}}\right)}{\cos\left(\frac{\pi\phi}{2}\right)}\right)-\theta^{+}\right), \tag{26}\] where \(\theta^{+}=z^{+}+iy^{+}\), \(i^{2}=-1\) and \(\Im(\cdot)\) denotes the imaginary part. Combining these with the CFR condition \(Re_{0}=Re\), we have an implicit equation relating \(Re_{\tau}\) and \(Re_{\tau_{0}}\), as well as all the other relevant non-dimensional parameters \(Re\), \(P^{+}\), \(\phi\) and \(\gamma_{Ma}^{+}\), \[Re_{0}\left(Re_{\tau_{0}}\right)=Re_{\rm log}\left(Re_{\tau},\,P^{+},\,\phi,\,\gamma_{Ma}^{+}\right)+Re_{\rm sub}\left(Re_{\tau},\,P^{+},\,\phi,\,\gamma_{Ma}^{+}\right), \tag{27}\] where \(Re_{0}\) is given by (8), \(Re_{\rm log}\) by (24) and \(Re_{\rm sub}\) by (25). Incidentally, the contribution from \(Re_{\rm sub}\) can often be more than 5% of the total. We solve (27) numerically to compute the ratio \(Re_{\tau}/Re_{\tau_{0}}\) and calculate \(DR=1-(Re_{\tau}/Re_{\tau_{0}})^{2}\) according to (2), as a function of \(Re=Re_{0}\), \(P^{+}\), \(\phi\) and \(\gamma_{Ma}^{+}\).

## 4 Results

flow model, as also found by Turk et al. (2014). In Fig. 3(b), the drag reduction predicted by the laminar model using (19) does not vary with \(P^{+}\), as expected since for laminar flows the drag reduction does not depend on the Reynolds number. In contrast, the drag reduction predicted by the turbulent model using (2, 27) increases rapidly with \(P^{+}\), also in agreement with the turbulent DNS data by Park et al. (2013); Turk et al. (2014); Rastegari and Akhavan (2015); Egan et al. (2021).
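The turbulent model predictions discussed in this section can be reproduced with a short numerical routine. The sketch below, written in Python with NumPy and SciPy, assembles (9), (10), (12), (21) and (24)-(26) and solves the CFR closure (27) for \(Re_{\tau}\) by root finding. It is our own illustrative implementation, not the authors' code; the function names, grid resolution, root-finding bracket and example inputs are arbitrary choices, and the sublayer integral (25) is evaluated by simple quadrature.

```python
import numpy as np
from scipy.optimize import brentq

kappa, B, beta = 0.41, 5.3, 11.2   # log-law constants and matching point of (5)-(6)

def Re0_of(Re_tau0):
    """Bulk Reynolds number of the no-slip flow, eq. (9)."""
    return (beta * (1/kappa - B + beta/2 - np.log(beta)/kappa)
            + Re_tau0 * (np.log(Re_tau0)/kappa + B - 1/kappa))

def slip_lengths_plus(P_plus, phi, gMa_plus):
    """Laminar slip lengths (10) and (12) in wall units."""
    base = (P_plus/np.pi) * np.log(1/np.cos(np.pi*phi/2))
    return (1 - gMa_plus)*base, (0.5*base if gMa_plus == 0 else 0.0)

def Re_log(Re_tau, dU):
    """Outer-flow contribution, eq. (24), with dU the log-law shift (21)."""
    return (beta/kappa*(1 - kappa*(B + dU) - np.log(beta))
            + Re_tau/kappa*(np.log(Re_tau) + kappa*(B + dU) - 1))

def Re_sub(Re_tau, P_plus, phi, gMa_plus, n=200):
    """Viscous-sublayer contribution, eq. (25), with U_P^+ from (26)."""
    yp = np.linspace(1e-9, beta, n)        # start just above the wall (physical branch of the complex arccos)
    zp = np.linspace(0, P_plus, n)
    Z, Y = np.meshgrid(zp, yp)
    theta = Z + 1j*Y
    w = np.arccos(np.cos(np.pi*theta/P_plus) / np.cos(np.pi*phi/2))   # complex-valued arccos
    Up = Y + (1 - gMa_plus)*np.imag(P_plus/np.pi*w - theta)
    return Up.mean() * beta                # (1/P^+) * double integral on a uniform grid

def drag_reduction(Re_tau0, P_over_H, phi, gMa_plus):
    """Solve the CFR closure (27) for Re_tau and return DR = 1 - (Re_tau/Re_tau0)^2."""
    Re_target = Re0_of(Re_tau0)            # CFR: Re = Re_0
    def residual(Re_tau):
        P_plus = P_over_H * Re_tau         # P/H = P^+/Re_tau
        lx, lz = slip_lengths_plus(P_plus, phi, gMa_plus)
        dU = lx - 4*lz/(4 + lz)            # eq. (21)
        return Re_log(Re_tau, dU) + Re_sub(Re_tau, P_plus, phi, gMa_plus) - Re_target
    # bracket chosen to comfortably contain the root for the parameter ranges considered here
    Re_tau = brentq(residual, 2.0, 1.05*Re_tau0)
    return 1 - (Re_tau/Re_tau0)**2

# Example: Re_tau0 = 180, P/H = 0.2, phi = 0.5, clean interface
print(drag_reduction(180.0, 0.2, 0.5, 0.0))
```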
This change in drag-reduction behaviour, from laminar to turbulent flow, is associated with the development of a turbulent boundary layer near the SHS boundary, where the viscous sublayer thickness (\(\sim 10\delta_{\tau}\)) replaces the channel height \(H\) as the relevant length scale when evaluating the drag reduction (Rothstein, 2010). Due to the limited amount of turbulent DNS data for \(\phi>0.5\), the dependence of the laminar-turbulent transition on the gas fraction is not entirely clear. Nevertheless, the DNS data suggest a similar transition at all gas fractions studied, which is captured by the laminar and turbulent model predictions at different gas fractions (shown with different colours). In Fig. 3, both the average streamwise slip length in wall units (panel a) and the drag reduction (panel b) increase as the gas fraction of the SHS increases. Next, we comment on the other regime transition that takes place at large \(P^{+}\) as the turbulent DNS data change trend, which has been observed and discussed previously (Rothstein, 2010; Park et al., 2013; Turk et al., 2014; Seo et al., 2015, 2018; Fairhall et al., 2019; Rastegari and Akhavan, 2019; Park et al., 2021). This change in trend also corresponds to a departure from the turbulent theory. Indeed, turbulent model predictions (solid lines) and the DNS data (solid symbols) agree well for both \(\lambda_{x}^{+}\) and \(DR\) for \(P^{+}=P/\delta_{\tau}\lessapprox 50\) at all gas fractions (Fig. 3). In this regime, the viscous sublayer thickness (\(\sim 10\delta_{\tau}\)) is large or comparable to the SHS period \(P\), which effectively corresponds to, and even extends, the regime of validity of the model, which was assumed to be valid for \(P^{+}\ll 10\). However, for \(P^{+}\gtrapprox 50\), the viscous sublayer becomes thin compared with the SHS texture period and near-wall turbulent structures interact directly with the texture of the SHS. This enhances turbulent momentum transfer close to the SHS and increases the drag. The spanwise slip mechanism is less dominant than the streamwise slip mechanism, as the ridges are much longer in the streamwise direction than in the spanwise direction.

### Comparison with laboratory experiments

#### 4.2.1 Turbulent model excluding surfactant

We compare the drag reduction predicted by our turbulent model using (2, 27) (excluding surfactant effects, such that \(\gamma_{Ma}=0\)) with the available experimental data in the literature for turbulent flows over SHSs (Daniello et al., 2009; Park et al., 2014; Xu et al., 2021), as a function of the Reynolds number \(Re\in[1000,10000]\) and gas fraction \(\phi=0.31\) (green symbols and lines), \(\phi=0.5\) (blue and red), \(\phi=0.52\) (yellow), \(\phi=0.61\) (orange), \(\phi=0.9\) (grey), \(\phi=0.91\) (brown) and \(\phi=0.96\) (pink). We note that no surfactant was added artificially in the experiments above from the literature. However, surfactants may have been present in these experiments in small amounts from contamination due to laboratory conditions and equipment (e.g. microfluidic devices made of PDMS have been shown to lead to surfactant effects in Hourlier-Fargette et al., 2018). In contrast to Section 4.1, and based on the information presented in these experimental studies, we cannot present results on the average streamwise slip length. Local quantities, such as \(\lambda_{x}^{+}\), are much harder to measure than global quantities (i.e. \(DR\)) in experiments because of the small length scales associated with flows over SHSs.
We first discuss how the experimental configuration changes the turbulent drag reduction for flows over SHSs and then use this to explain the non-monotonicity of \(DR\) with respect to \(\phi\) in Figure 5. Similar to the texture-resolving DNS results presented in Section 4.1, Daniello et al. (2009) considered an internal flow configuration bounded by SHSs with streamwise ridges that are periodic in the spanwise direction with \(\phi=0.5\) (red and blue symbols and lines). The experimental works of Park et al. (2014) and Xu et al. (2021) consider turbulent flows over a test section with finite streamwise ridges that are periodic in the spanwise direction (pink, brown, yellow, orange, green and grey). The turbulent boundary layer thickness must first be obtained in order to evaluate the drag reduction using (2, 27) (see Fig. 2b). A boundary layer originates from the leading edge of the channel in Park et al. (2014) and the plate in Xu et al. (2021), developing over approximately \(45\,\mathrm{cm}\) and \(1.1\,\mathrm{m}\), respectively, measured from the leading edge to the centre of the SHS test section. For the purpose of this study, we will assume that the turbulent boundary layer thickness \(H=H(x)\) can be approximated by the classical result from turbulent boundary-layer theory (Schlichting and Gersten, 2003), \(H=0.37x/Re_{x}^{1/5}\), where \(Re_{x}=Ux/\nu\) is the boundary layer Reynolds number and \(x\) is the distance from the leading edge to the centre of the test section, as done in Xu et al. (2021). We now use the above boundary-layer approximation to highlight an important difference between configurations with varying \(H/P\) in external flows to explain the non-monotonicity of \(DR\) with respect to \(\phi\). The ratio \(H/P\) varies significantly if we compare the experimental setup that generates the brown curve of Park et al. (2014) (where the distance from the leading edge to the centre of the test section is \(45\,\mathrm{cm}\)) and the experimental setup that generates the grey curve of Xu et al. (2021) (\(1.1\,\mathrm{m}\)). This change in \(H/P\) causes the drag reduction to be smaller in Xu et al. (2021) even though the gas fraction \(\phi=0.9\) and texture period \(P=50\,\mathrm{\SIUnitSymbolMicro m}\) do not change across the two experiments. The model in (2, 27) captures the increase in drag reduction with increasing gas fraction in the experiments of Park et al. (2014) (see the green, orange, yellow, brown and pink curves in Fig. 5). The orange data where \(\phi=0.61\) exhibit a smaller drag reduction than the yellow data where \(\phi=0.52\), as the texture period has decreased from \(P=100\,\mathrm{\SIUnitSymbolMicro m}\) to \(P=50\,\mathrm{\SIUnitSymbolMicro m}\)(Park et al., 2014), reducing the area of the liquid-gas interface at the SHS. The same effect is noticed in the experimental data from Daniello et al. (2009) by comparing results for \(P=30\,\mathrm{\SIUnitSymbolMicro m}\) (red) and \(P=50\,\mathrm{\SIUnitSymbolMicro m}\) (blue). There is a significant spread in the original experimental data presented in Daniello et al. (2009), which could be due to a number of features of SHSs, such as the liquid-gas interface curvature, the gas subphase, loss of plastron and ridge misalignment (Park et al., 2021). In Fig. 5 we show an ensemble average (error bars) of the drag-reduction data extracted from Daniello et al. (2009) over all Reynolds numbers in order to simplify the comparison between these data and the other experiments. 
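For the external-flow configurations, the conversion from the reported experimental conditions to the model inputs can be done in a few lines of code. The snippet below uses example values that are ours and are not the exact experimental conditions; it evaluates the classical flat-plate estimate quoted above and the corresponding bulk Reynolds number.

```python
# Illustrative estimate of the turbulent boundary-layer thickness H(x) used as the
# wall-normal height of the model for external flows (values are examples only).
nu = 1.0e-6            # kinematic viscosity of water, m^2/s
U, x = 5.0, 1.1        # free-stream speed (m/s) and fetch from the leading edge (m)
Re_x = U * x / nu
H = 0.37 * x / Re_x**0.2    # H = 0.37 x / Re_x^(1/5) (Schlichting & Gersten, 2003)
Re = U * H / nu             # bulk Reynolds number based on H
print(f"H = {H*1e3:.1f} mm, Re_x = {Re_x:.2e}, Re = {Re:.2e}")
```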
Figure 5: Comparison of our turbulent model predictions for the drag reduction (\(DR\)) using (2, 27) with experimental results in the literature for internal turbulent channel flows with SHS (Daniello et al., 2009) and external turbulent boundary layer flows with SHS (Park et al., 2014; Xu et al., 2021), whilst varying the Reynolds number (\(Re\)) and gas fraction (\(\phi\)), where the average Marangoni shear rate \(\gamma_{Ma}=0\). Filled symbols and solid lines represent turbulent experiments and theory.

#### 4.2.2 Turbulent model including surfactant

We now investigate the potential effect of surfactants on the drag reduction of flows over SHSs in experiments reported in the literature. Since surfactants have generally not been added artificially in experiments in the literature, the concentration level and type of any potential surfactant present in the experiments are unknown. In Fig. 6, we compute theoretical predictions from our model using (2, 27) for varying average Marangoni shear rate \(\gamma_{Ma}\), thereby simulating different surfactant conditions. We compare our turbulent model predictions inclusive of surfactant with available experimental data from: Daniello et al. (2009) in panel (a), Park et al. (2014) in panel (b) and Xu et al. (2021) in panel (c), as a function of the bulk Reynolds number \(Re\in[1000,\,10000]\), gas fraction \(\phi\in[0.1,\,0.9]\) and friction Reynolds number \(Re_{\tau_{0}}\in[0,\,8000]\), respectively. Our theoretical model shows that the effect of surfactant is stronger at lower Reynolds numbers, where the average Marangoni shear rate \(\gamma_{Ma}\) at the liquid-gas interface is large compared to the average wall shear stress. This can be seen in panels (a) and (c), where \(DR\) decreases more rapidly with increasing \(\gamma_{Ma}\) (coloured solid lines) at smaller \(Re\) and \(Re_{\tau_{0}}\), respectively. When \(DR=0\), the surfactant is strong enough to immobilise the liquid-gas interface, such that the mean streamwise slip velocity is zero. For those configurations where the liquid-gas interface becomes immobilised at a fixed finite Reynolds number, the interface remains immobilised at all Reynolds numbers that are smaller than this value (see, for example, the purple curve in Fig. 6(a), where the interface is immobilised for all \(Re\lessapprox 3500\)). As the gas fraction of the SHS decreases in the limit \(\phi\to 0\), there is no interface for the surfactant to adsorb to, and therefore, the curves for different \(\gamma_{Ma}\) collapse and \(DR\to 0\). Overall, we find that, theoretically, the inclusion of surfactant effects in our turbulent model can clearly impair drag reduction. Nevertheless, the limited amount of experimental data is not sufficient to confirm or rule out the impact of surfactants in the experiments we have analysed. The experimental data plotted in Fig. 5 and Fig. 6 do not strongly deviate from the model assuming \(\gamma_{Ma}=0\), thus suggesting weak or negligible surfactant impact in the laboratory experiments that we have analysed from Daniello et al. (2009), Park et al. (2014) and Xu et al. (2021). In Table 1, surfactant effects are quantified via the root mean squared (RMS) error \(\epsilon_{\text{RMS}}\), which compares the drag reduction predicted by our model \(DR_{\text{Model}}\) to the drag reduction predicted by experimental data \(DR_{\text{Data}}\). We see that for the experimental data in Daniello et al. (2009), Park et al. (2014) and Xu et al.
(2021), the predictions for weak surfactant effects with a small non-zero Marangoni shear rate \(\gamma_{Ma}\) give rise to a smaller RMS error than those for a clean channel where \(\gamma_{Ma}=0\). Conversely, the predictions for moderate or strong surfactant effects with a larger \(\gamma_{Ma}\) have a greater \(\epsilon_{\text{RMS}}\) than those for channels with \(\gamma_{Ma}=0\). The limited data and lack of experiments including surfactant make these experimental results difficult to interpret. One would expect the effect of surfactants to be more prominent in fieldwork than in a laboratory setting, where the water is relatively clean. We discuss our model predictions when surfactant concentrations and ridge lengths are characteristic of marine applications in Section 4.3. More experiments that vary the surfactant concentration are therefore required to infer whether surfactants are important in turbulent applications. As previously mentioned, several additional features of flows over SHSs could be involved and cause the changes in drag; e.g. liquid-gas interface curvature, the gas subphase, loss of plastron or ridge misalignment (Park et al., 2021).

\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \multicolumn{5}{c}{Daniello _et al._ (2009)} \\ \hline \(\gamma_{Ma}\) (s\({}^{-1}\)) & 0 & \(2.5\times 10^{-2}\) & \(5\times 10^{-2}\) & \(1\times 10^{-1}\) \\ \(\epsilon_{\text{RMS}}\) & 0.0173 & 0.0023 & 0.0001 & 0.0173 \\ \hline \hline \multicolumn{5}{c}{Park _et al._ (2014)} \\ \hline \(\gamma_{Ma}\) (s\({}^{-1}\)) & 0 & \(1\times 10^{-1}\) & \(7.5\times 10^{-1}\) & \(1.5\times 10^{0}\) \\ \(\epsilon_{\text{RMS}}\) & 0.0052 & 0.0052 & 0.0090 & 0.0210 \\ \hline \hline \multicolumn{5}{c}{Xu _et al._ (2021)} \\ \hline \(\gamma_{Ma}\) (s\({}^{-1}\)) & 0 & \(5\times 10^{-2}\) & \(2.5\times 10^{-1}\) & \(1\times 10^{0}\) \\ \(\epsilon_{\text{RMS}}\) & 0.0010 & 0.0009 & 0.0019 & 0.0215 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The RMS error of our model, \(\epsilon_{\text{RMS}}\), comparing the drag reduction predicted by our model (\(DR_{\text{Model}}\)) using \((2,\,27)\) to the drag reduction predicted by laboratory experimental data (\(DR_{\text{Data}}\)), considering experimental results in internal (Daniello et al., 2009) and external flows (Park et al., 2014; Xu et al., 2021) from the literature, for different average Marangoni shear rates (\(\gamma_{Ma}\)).

Figure 6: Comparison of our turbulent model predictions (solid lines) for the drag reduction \(DR\) using \((2,\,27)\) at various Marangoni shear rates \(\gamma_{Ma}\) (colours) with experimental data (symbols) from the literature: (a) Daniello et al. (2009) for varying Reynolds number (\(Re\)); (b) Park et al. (2014) for varying gas fraction (\(\phi\)); (c) Xu et al. (2021) for varying no-slip friction Reynolds number (\(Re_{\tau_{0}}\)).

### Model predictions for marine applications

We finally investigate how the drag reduction varies with respect to the average Marangoni shear rate \(\gamma_{Ma}\), which arises due to surfactant accumulation (with background concentration \(c_{0}\)) at the downstream stagnation point of the long but finite streamwise ridges (with length \(L\)) that are periodic in the spanwise direction. We compute predictions for \(DR\) across a range of \(\gamma_{Ma}\) in Fig. 7, using the model in (2, 27) with \(\phi=0.5\) (blue curves), \(\phi=0.75\) (red) and \(\phi=0.94\) (green), for a range of length and velocity scales characteristic of marine applications.
In Table 2, we present these typical length and velocity scales that are characteristic of marine applications, such as a tanker or submarine. The data in Table 2 is used to calculate the bulk Reynolds number \(Re\) and turbulent boundary-layer thickness \(H\), which is approximated using \(H=0.37x/Re_{x}^{1/5}\)(Schlichting and Gersten, 2003), for the equivalent no-slip flow. The approximate turbulent boundary-layer thickness is evaluated at the streamwise mid-point of the marine vessels considered in Table 2, such that it lies within the range \(0.15\,\mathrm{m}\leq H\leq 0.35\,\mathrm{m}\). We choose a value for the SHS texture period \(P\) based on those SHSs that have been reported to maintain a stable Cassie-Baxter state in experiments in the literature (Daniello et al., 2009; Jung and Bhushan, 2010; Park et al., 2014; Woolford et al., 2009; Xu et al., 2021): \(100\,\mathrm{\SIUnitSymbolMicro m}\leq P\leq 200\,\mathrm{\SIUnitSymbolMicro m}\), i.e. we take \(P=150\,\mathrm{\SIUnitSymbolMicro m}\). We also estimate the average Marangoni shear rate \(\gamma_{Ma}\) in lab and ocean environments using the theory outlined in Landel et al. (2020), with the characteristic velocities \(U\), boundary layer thicknesses \(H\), streamwise ridge lengths \(L\) and background concentrations \(c_{0}\) that are summarised in Table 2. The scaling theory derived in Landel et al. (2020) approximates the surfactant dynamics using a linear equation of state and adsorption-desorption kinetics (see Appendix A). In order to use this model, we have assumed that the spanwise variations in the velocity and concentration fields are negligible compared to the streamwise variation. Indeed, in the experiments conducted by Xu et al. (2021), the gas fraction is large, \(\phi=0.9\), and therefore, we would expect three-dimensional effects to be small. The validity of the above assumptions in turbulent flows over SHSs with surfactant is left for future study. We base the streamwise ridge length of the SHS on the configuration in Xu et al. (2021) where a stable liquid-gas interface was mostly maintained; these experiments took place for \(Re\in[2.3\times 10^{6},\,1.12\times 10^{7}]\), which is closest to the marine applications that we investigate in this study. First, we let the total length of the streamwise ridges be \(L=0.035\,\mathrm{m}\) and the length of the solid region between ridges to be \(30\,\mathrm{\SIUnitSymbolMicro m}\). We then allow for the possibility of longer ridges than those considered in Xu et al. (2021), i.e. \(L=0.35\,\mathrm{m}\), primarily to demonstrate how \(\gamma_{Ma}\) depends on \(L\). We plot the average Marangoni shear rate that we estimate to be characteristic of laboratory environments \(\gamma_{Ma}=0.14\,\mathrm{s}^{-1}\) when \(L=0.35\,\mathrm{m}\) and \(\gamma_{Ma}=1.25\,\mathrm{s}^{-1}\) when \(L=0.035\,\mathrm{m}\) (leftmost vertical black dashed lines), where we expect surfactant concentrations to be low, i.e. \(c_{0}=1\times 10^{-4}\,\mathrm{mol}\,\mathrm{m}^{-3}\), as estimated in lab conditions by Temprano-Coleto et al. (2023). We also plot the average Marangoni shear rate that we assume to be characteristic of ocean environments \(\gamma_{Ma}=23.97\,\mathrm{s}^{-1}\) when \(L=0.35\,\mathrm{m}\) and \(\gamma_{Ma}=86.08\,\mathrm{s}^{-1}\) when \(L=0.035\,\mathrm{m}\) (rightmost vertical black dashed lines) where the surfactant concentration can be much higher, i.e. \(c_{0}=1\,\mathrm{mol}\,\mathrm{m}^{-3}\), as measured in ocean conditions by Frossard et al. (2019). In Fig. 
7, we find that the surfactant concentrations that are characteristic of clean laboratory conditions are not high enough to develop an appreciable surfactant gradient and increase the drag for flows with this particular SHS geometry. Hence, our model predicts that surfactant effects are weak in this regime. The surface velocity is large in turbulent flows, which means that the shear rate of the SHS flow is greater than the shear rate due to surfactant, and the liquid-gas interface is effectively shear-free. However, the higher surfactant concentrations that are present for marine applications in the ocean mean that the shear rate due to surfactant increases, and therefore, a surfactant gradient might develop at the liquid-gas interface that generates an appreciable increase in the drag for flows with this particular SHS geometry. Hence, our model predicts that surfactant effects are moderate to strong in this regime. For example, in Fig. 7, surfactant effects are strong and the interface is immobilised (i.e. \(DR=0\)) when the background concentration is larger than a threshold we estimate at \(c_{0}\gtrsim 1\,\mathrm{mol}\,\mathrm{m}^{-3}\). Immobilisation occurs for a smaller \(\gamma_{Ma}\) for tanker applications when compared to submarine applications, as the characteristic velocities are typically slower (see Table 2).

Figure 7: Turbulent model predictions for the drag reduction (\(DR\)) using (2, 27) in laboratory environments (where the bulk concentration \(c_{0}=1\times 10^{-4}\,\mathrm{mol}\,\mathrm{m}^{-3}\) and the streamwise ridge length \(0.035\,\mathrm{m}\leq L\leq 0.35\,\mathrm{m}\), indicated by the leftmost vertical black dotted lines) and ocean environments (where \(c_{0}=1\,\mathrm{mol}\,\mathrm{m}^{-3}\) and \(0.035\,\mathrm{m}\leq L\leq 0.35\,\mathrm{m}\), indicated by the rightmost vertical black dotted lines), whilst varying the average Marangoni shear rate (\(\gamma_{Ma}\)), for different gas fractions (\(\phi\)) and applications detailed in Table 2. For each gas fraction, the upper bound gives \(DR\) for a submarine and the lower bound gives \(DR\) for a tanker.

## 5 Conclusions

Motivated by recent developments that demonstrate the importance of surfactants in laminar flows over SHSs (Kim and Hidrovo, 2012; Bolognesi et al., 2014; Peaudecerf et al., 2017; Song et al., 2018; Landel et al., 2020; Temprano-Coleto et al., 2023), we have proposed a model for turbulent flow over SHSs with long but finite streamwise ridges that are periodic in the spanwise direction, including surfactant effects, based on the shifted-log-law theory applied to SHSs by Fukagata et al. (2006). We consider both internal and external flows over SHSs, in order to compare with the wide range of numerical (Park et al., 2013; Turk et al., 2014; Rastegari and Akhavan, 2015; Egan et al., 2021) and experimental (Daniello et al., 2009; Park et al., 2014; Xu et al., 2021) data in the literature and predict the drag reduction for marine applications. The turbulent model assumes that the viscous sublayer thickness is much larger than the SHS texture period \(P\), and therefore, that the SHS texture affects the turbulent bulk flow via the average streamwise and spanwise slip lengths. Our model employs an empirical relationship for the saturation of the log-law shift due to the average spanwise slip length based on riblet theory (Luchini et al., 1991; Ibrahim et al., 2021) and applied to SHSs by Busse and Sandham (2012).
We close the model using laminar solutions due to Philip (1972), where we extend the solutions in Philip (1972) to include surfactant effects. This provides us with a fully predictive relationship for the turbulent drag reduction, which we can use to relate the turbulent drag reduction to the geometry of the SHS, the flow, the fluid and the properties of the surfactant, using a laminar scaling theory outlined in Landel et al. (2020). We compare our model predictions with direct numerical simulations (DNS), where there is good agreement in the drag reduction for small \(P^{+}\) (in wall units \(+\)), i.e. when the viscous sublayer is thick compared to the period of the SHS (Fig. 3). The model captures the dependence of the drag reduction on the cross-plane geometry of the SHS, i.e. the gas fraction \(\phi\), texture wavelength \(P\) and the wall-normal height \(H\), where the streamwise and spanwise slip mechanisms that give rise to the drag reduction can be examined using the flow field (Fig. 4). The agreement between the model and DNS holds for \(P^{+}\lessapprox 50\) until we transition into a different regime that is dominated by turbulence for \(P^{+}\gtrapprox 50\), where the drag reduction from the DNS asymptotically approaches the gas fraction for \(P^{+}\rightarrow\infty\), as also discussed by (Daniello et al., 2009; Rothstein, 2010; Park et al., 2021). We calculate the streamwise slip length that corresponds to this empirical asymptote to improve model predictions at large \(P^{+}\). We also compare our model predictions with experimental data in nominally clean (i.e. where no surfactants were added artificially) laboratory settings (Fig. 5), which allows us to investigate any potential contaminant surfactant effects in turbulent flows over SHSs. The theory demonstrates that the presence of surfactant is detrimental to drag reduction, where greater increases in drag are seen at smaller Reynolds numbers. By comparing the surfactant-inclusive model and the laboratory experimental data found in the literature, our model shows that surfactants did not affect significantly the drag reduction performance of the SHSs studied in laboratory conditions (Fig. 6), as expected from clean experimental conditions. For shorter gratings, which are necessary at high speeds to maintain a stable liquid-gas interface (see e.g. \(L=0.035\,\mathrm{m}\) in Xu et al., 2021), and higher surfactant concentrations which have been measured in the ocean (Frossard et al., 2019), our model predicts that surfactant can become important again for velocities and length scales characteristic of marine applications (Fig. 7). Both DNS including surfactant dynamics and experimental studies with surfactant concentrations that are typical of ocean environments are required to further disentangle the effect of surfactants in turbulent flows over SHSs. ## Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements We acknowledge support from CBET-EPSRC (EPSRC Ref. EP/T030739/1, NSF #2054894), as well as partial support from ARO MURI W911NF-17-1-0306. ## Appendix A Scaling theory for the average Marangoni shear rate For completeness, we outline one of the main results from the scaling theory derived in Landel et al. 
(2020), so that we can discuss the dependence of the average Marangoni shear rate \(\gamma_{Ma}\) on the wall-normal height \(H\), streamwise length of the ridges \(L\), characteristic velocity \(U\) and background bulk surfactant concentration \(c_{0}\). Landel et al. (2020) consider a steady, two-dimensional pressure-driven channel flow bounded by a single SHS, which is contaminated with a small concentration of surfactant. They linearise the equation of state and adsorption-desorption kinetics and perform a scaling analysis on the resulting governing equations. By solving for the two-dimensional velocity field using dual series techniques and combining this with the scaling analysis results, they find that \[\gamma_{Ma}=\frac{a_{1}kMaF_{0}U}{H\left(\frac{1}{Pe_{I}}+\frac{a_{2}L^{2}Bi\chi}{\chi+BiPe\delta}+a_{1}kMaF_{0}\right)}, \tag{A1}\] where \(a_{1}\approx 2.3\) and \(a_{2}\approx 0.32\) are empirical parameters that are fitted using simulations, \(k=k_{a}c_{0}/k_{d}\) is the normalised bulk concentration, \(Ma=nRT\Gamma_{m}/(\mu U)\) is the Marangoni number, \(F_{0}\) is the interfacial velocity of the clean flow (see Landel et al., 2020, for more details), \(Pe_{I}=HU/D_{I}\) is the interfacial Peclet number, \(Bi=k_{d}H/U\) is the Biot number, \(\chi=k_{d}H/(k_{a}\Gamma_{m})\) is the kinetics number, \(Pe=HU/D\) is the bulk Peclet number and \(\delta\approx 1.68(L/H)(1+0.05(L/H)^{2}Pe)^{-1/3}\) is the typical thickness of the diffusive layer of bulk surfactant. The dimensional surfactant parameters that are used to calculate the above non-dimensional numbers and generate the results in Fig. 7 are given in Table 3. From (A1), we observe that as the bulk surfactant concentration increases, the dimensionless group \(a_{1}kMaF_{0}\) increases and the average Marangoni shear rate increases. Conversely, we observe that as the streamwise ridge length increases, the dimensionless group \(a_{2}L^{2}Bi\chi/(\chi+BiPe\delta)\) increases and the average Marangoni shear rate decreases.

\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Vessel & Length & Speed & \(Re\) & \(H\) & \(P\) & \(L\) & \(c_{0}\) \\ & m & m s\({}^{-1}\) & - & m & m & m & mol m\({}^{-3}\) \\ \hline Tanker & 400 & 8.5 & \(4.3\times 10^{8}\) & 0.35 & \(1.5\times 10^{-4}\) & \([0.035,0.35]\) & \([0.0001,1]\) \\ Submarine & 150 & 13 & \(2.4\times 10^{8}\) & 0.15 & \(1.5\times 10^{-4}\) & \([0.035,0.35]\) & \([0.0001,1]\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Table showing the typical length, speed, bulk Reynolds number (based on the speed and length of the vessel and the kinematic viscosity of water), boundary-layer thickness, pitch (based on Daniello et al., 2009; Jung and Bhushan, 2010; Park et al., 2014; Woolford et al., 2009; Xu et al., 2021), streamwise ridge length (Xu et al., 2021) and background concentration (Frossard et al., 2019; Temprano-Coleto et al., 2023), for various marine vessels. These length, velocity and concentration scales are used to evaluate the drag reduction for a tanker and submarine in Fig. 7.

## Appendix B Converting direct numerical simulation data

In general, studies in the literature reporting direct numerical simulations of SHS flows similar to our problem (see Fig. 3) provide results only for the drag reduction \(DR\) or the added flux \(\Delta\overline{U}/\overline{U}_{0}\), but not both (Park et al., 2013; Turk et al., 2014; Rastegari and Akhavan, 2015; Egan et al., 2021).
The quantities \(DR\) and \(\Delta\overline{U}/\overline{U}_{0}\), defined in (2) and (3) respectively, are two independent measurements of the performance of the SHS flow. Studies providing \(DR\) were performed under the CFR condition (Park et al., 2013; Rastegari and Akhavan, 2015), whilst studies providing \(\Delta\overline{U}/\overline{U}_{0}\) were performed under the CPG condition (Turk et al., 2014; Egan et al., 2021). In order to compare the numerical results for \(DR\) with the largest data set from the literature, we have converted the results given for \(\Delta\overline{U}/\overline{U}_{0}\) into \(DR\). In the following, we describe our procedure to convert data for \(\Delta\overline{U}/\overline{U}_{0}\) into data for \(DR\). To minimise the conversion error, the procedure uses the log law for the no-slip flows (Pope, 2000), whilst using the original published data for the SHS flows. We convert the data for \(\Delta\overline{U}/\overline{U}_{0}\), obtained under the CPG condition, into data for \(DR\). In these simulations (Turk et al., 2014; Egan et al., 2021) the input parameters include the prescribed stress \(\tau=\tau_{0}\) and the wall-normal height \(H\). To compute \(DR\), we find the no-slip wall shear stress or friction Reynolds number obtained at the same bulk average velocity, i.e. \(\tau_{0}(\overline{U})\) or \(Re_{\tau_{0}}(\overline{U})\). If not given, we first seek \(\overline{U}_{0}\) from (9), knowing \(Re_{\tau_{0}}\) and \(H\). Then, we can obtain \(\overline{U}\) from \(\Delta\overline{U}/\overline{U}_{0}\) through (3). Finally, \(\tau_{0}(\overline{U})\) or \(Re_{\tau_{0}}(\overline{U})\) can be obtained using (9), using \(\overline{U}\) and \(H\). The drag reduction is then computed as \(DR=1-\tau(\overline{U})/\tau_{0}(\overline{U})\). We note that, as long as the simulated no-slip flows are well resolved numerically, the conversion procedure above should have a relatively small error as it only requires the use of classical log-law theory. The classical log-law theory should closely model the simulated no-slip flows in all the studies from which we have used data (Park et al., 2013; Turk et al., 2014; Rastegari and Akhavan, 2015; Egan et al., 2021). ## Appendix C Laminar streamwise velocity field including surfactant effects The laminar streamwise velocity field including surfactant effects can be found by solving the incompressible Stokes equation for a linear shear flow in a semi-infinite domain with free-stream shear-rate \(\tau\). The flow is assumed steady and homogeneous in the streamwise direction with a negligible pressure gradient. The streamwise velocity is given by Laplace's equation \[\frac{\partial^{2}U}{\partial y^{2}}+\frac{\partial^{2}U}{\partial z^{2}}=0. \tag{14}\] The wall-normal and spanwise velocities are negligible as the streamwise length of the ridges is much larger than the cross-plane length scales (see Section 2). We solve (14) subject to a shear-rate condition at the liquid-gas interface (which is derived from the linearised streamwise component of the tangential stress balance, where full details are given in Temprano-Coleto et al., 2023) \[\frac{\partial U}{\partial y}(y=0,\,0\leq z\leq\phi)=\gamma_{Ma}, \tag{15}\] no-slip conditions at the solid wall \[U(y=0,\,\phi\leq z\leq P)=0, \tag{16}\] symmetry conditions \[\frac{\partial U}{\partial z}(y,\,z=0)=\frac{\partial U}{\partial z}(y,\,z=P)=0, \tag{17}\] and a free stream shear rate \[\lim_{y\rightarrow\infty}\frac{\partial U}{\partial y}=\frac{\tau}{\mu}. 
\tag{18}\] Utilising superposition, we decompose the streamwise velocity field into one- and two-dimensional components. We can then solve for the two-dimensional component, using superposition to modify the conformal mapping solution due to Philip (1972) to include surfactant effects through \(\gamma_{Ma}\). Together, we have that \[U=\frac{\tau y}{\mu}+\left(\frac{\tau}{\mu}-\gamma_{Ma}\right)\Im\left(\frac{P}{\pi}\arccos\left(\frac{\cos\left(\frac{\pi\theta}{P}\right)}{\cos\left(\frac{\pi\phi}{2}\right)}\right)-\theta\right), \tag{19}\] where \(\theta=z+iy\), \(i^{2}=-1\) and \(\Im(\cdot)\) denotes the imaginary part. Taking the limit as \(y\rightarrow\infty\) of the difference between the one- and two-dimensional components, i.e. \(U-\tau y/\mu\), we can evaluate the average streamwise slip length, \(\lambda_{x}\), as (10). This can then be used to evaluate the turbulent drag reduction, using the methodology outlined in Section 3. As \(\gamma_{Ma}\to 0\), we recover the original solution due to Philip (1972) for a shear-free liquid-gas interface. As \(\gamma_{Ma}\rightarrow\tau/\mu\), the liquid-gas interface is immobilised and \(U=\tau y/\mu\).

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Quantity & Symbol & Units & Value \\ \hline Adsorption rate & \(k_{a}\) & m\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\) & 89.5 \\ Desorption rate & \(k_{d}\) & s\({}^{-1}\) & 500 \\ Salinity parameter & \(n\) & - & 2 \\ Ideal gas constant & \(R\) & J mol\({}^{-1}\) K\({}^{-1}\) & 8.31 \\ Temperature & \(T\) & K & 296 \\ Packing concentration & \(\Gamma_{m}\) & mol m\({}^{-2}\) & \(3.9\times 10^{-6}\) \\ Dynamic viscosity & \(\mu\) & kg m\({}^{-1}\) s\({}^{-1}\) & \(8.9\times 10^{-4}\) \\ Surface diffusivity & \(D_{I}\) & m\({}^{2}\) s\({}^{-1}\) & \(7\times 10^{-10}\) \\ Bulk diffusivity & \(D\) & m\({}^{2}\) s\({}^{-1}\) & \(7\times 10^{-10}\) \\ \hline \end{tabular}
\end{table}
Table 3: Parameters appearing in the scaling theory for the average Marangoni shear rate (A1) from Landel et al. (2020), alongside their values used in the model prediction for marine applications in Fig. 7.
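As a quick consistency check of this solution, the short Python sketch below (ours; the parameter values are arbitrary examples, not data from this study) evaluates the velocity field far above the texture and recovers the average streamwise slip length (10) from its far-field offset.

```python
import numpy as np

# Example parameters (illustrative only), SI units
mu, tau, gamma_Ma = 8.9e-4, 0.1, 10.0   # viscosity, far-field shear stress, Marangoni shear rate
P, phi = 150e-6, 0.5                    # texture period and gas fraction

def U(y, z):
    """Laminar streamwise velocity field of Appendix C (surfactant included)."""
    theta = z + 1j * y
    w = np.arccos(np.cos(np.pi * theta / P) / np.cos(np.pi * phi / 2))   # complex arccos
    return tau * y / mu + (tau / mu - gamma_Ma) * np.imag(P / np.pi * w - theta)

# Far above the texture, U ~ (tau/mu) y + (tau/mu - gamma_Ma) (P/pi) ln(sec(pi phi/2)),
# so lambda_x follows from the z-averaged far-field offset divided by tau/mu.
y_far, z = 5 * P, np.linspace(0.0, P, 401)
lam_x_numeric = (U(y_far, z).mean() - tau * y_far / mu) / (tau / mu)
lam_x_eq10 = (P / np.pi) * (1 - gamma_Ma / (tau / mu)) * np.log(1 / np.cos(np.pi * phi / 2))
print(lam_x_numeric, lam_x_eq10)        # the two values should agree closely
```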
2306.02812
Weak representability of actions of non-associative algebras
We study the categorical-algebraic condition that internal actions are weakly representable (WRA) in the context of varieties of (non-associative) algebras over a field. Our first aim is to give a complete characterization of action accessible, operadic quadratic varieties of non-associative algebras which satisfy an identity of degree two and to study the representability of actions for them. Here we prove that the varieties of two-step nilpotent (anti-)commutative algebras and that of commutative associative algebras are weakly action representable, and we explain that the condition (WRA) is closely connected to the existence of a so-called amalgam. Our second aim is to work towards the construction, still within the context of algebras over a field, of a weakly representing object $E(X)$ for the actions on (or split extensions of) an object $X$. We actually obtain a partial algebra $E(X)$, which we call external weak actor of $X$, together with a monomorphism of functors ${\operatorname{SplExt}(-,X) \rightarrowtail \operatorname{Hom}(U(-),E(X))}$, which we study in detail in the case of quadratic varieties. Furthermore, the relations between the construction of the universal strict general actor $\operatorname{USGA}(X)$ and that of $E(X)$ are described in detail. We end with some open questions.
Jose Brox, Xabier García-Martínez, Manuel Mancini, Tim Van der Linden, Corentin Vienne
2023-06-05T12:08:02Z
http://arxiv.org/abs/2306.02812v2
# Weak representability of actions ###### Abstract. We study the categorical-algebraic condition that _internal actions are weakly representable_ (WRA) in the context of varieties of (non-associative) algebras over a field. It is known that such a variety is action accessible if and only if it is algebraically coherent, and it is also known that (both) these conditions are implied by (WRA) in this context. Our first aim is to separate them, by giving examples of action accessible varieties which do not satisfy (WRA): we prove that the varieties of commutative associative algebras, of Jacobi-Jordan algebras and of anti-commutative anti-associative algebras (over a field of characteristic different from 2) are such. Thus we answer a question asked by George Janelidze. We further show that amongst quadratic varieties of (anti-)commutative algebras, these are the only examples. Our second aim is to work towards the construction, still within the context of algebras over a field, of a weakly representing object \(\mathscr{E}(X)\) for the actions on (or split extensions of) an object \(X\). We actually obtain a _partial_ algebra \(\mathscr{E}(X)\), which we call _external weak actor_ of \(X\), together with a monomorphism of functors \(\operatorname{SplExt}(-,X)\rightharpoonup\operatorname{Hom}(-,\mathscr{E}(X))\), which we study in detail in the case of quadratic varieties. We end with some open questions. Key words and phrases:Action representable category, split extension, non-associative algebra, partial algebra, quadratic operad 2020 Mathematics Subject Classification: 08A35; 08C05; 16B50; 16W25; 17A32; 17A36; 18C05; 18E13 The first author is supported by Ministerio de Economia y Competitividad (Spain) with grant number PID2021-127075NA-I00. The second author is supported by the University of Palermo and by the "National Group for Algebraic and Geometric Structures, and their Applications" (GNSAGA - INdAM). The third author is a Senior Research Associate of the Fonds de la Recherche Scientifique-FNRS. The fourth author is supported by the Fonds Thelam of the Fondation Roi Baudouin. First comes the concept of an _action accessible_ category due to D. Bourn and G. Janelidze [5]: it is weak enough to include all Orzech categories of interest [25], as proved by A. Montoli in [24]. Alternatively, the properties of the representing object \([X]\) may be weakened; this is the aim in [7], where it is shown that each Orzech category of interest admits a so-called _universal strict general actor (USGA)_. Our present article focuses on a concept which was more recently introduced, by G. Janelidze in [16]: _weak representability of actions (WRA)_. Instead of asking that for each object \(X\) in a semi-abelian category \(\mathscr{C}\) we have an object \([X]\) and a natural isomorphism \(\operatorname{Act}(-,X)\cong\operatorname{Hom}_{\mathscr{C}}(-,[X])\), we require the existence of an object \(T\) and a monomorphism of functors \[\tau\colon\operatorname{Act}(-,X)\rightarrow\operatorname{Hom}_{\mathscr{C }}(-,T).\] Such an object \(T\) is then called a _weak actor_ of \(X\), and when each \(X\) admits a weak actor, \(\mathscr{C}\) is said to be _weakly action representable_. For instance, if in an Orzech category of interest, each \(\operatorname{USGA}(X)\) is an object of the category, then this category is weakly action representable [9]. This is the case of the category **Assoc** of associative algebras [16] or the category **Leib** of Leibniz algebras [9] over a field. J. R. A. 
Gray observed in [14] that an Orzech category of interest need not be weakly action representable. One of our aims in the present article is to sharpen this result, by studying the condition (WRA) in the context of varieties of (non-associative) algebras over a field. (We recall basic definitions and results concerning this setting in Section 1.) It is known that such a variety is action accessible if and only if it is algebraically coherent [13], and it is also known [16] that action accessibility is implied by (WRA). Here we separate these conditions, by giving examples of action accessible varieties which do not satisfy (WRA): we prove that the variety of commutative associative algebras, the variety of Jacobi-Jordan algebras and the variety of anti-commutative anti-associative algebras (over a field of characteristic different from 2) are such (Corollary 2.5, Corollary 2.10 and Corollary 2.14, respectively). Thus we answer a question asked by G. Janelidze in [16]. We further show that these are the only examples, as long as the variety is (anti-)commutative and has no non-trivial identities of degree larger than 3. We start this work in Section 2 where we consider varieties with an identity of degree 2 (so commutative or anti-commutative algebras). In Section 3 this is extended to general algebras over a field. Our second aim is to work towards the construction, still within the context of algebras over a field, of a weakly representing object \(\mathscr{E}(X)\) for the actions on/split extensions of an object \(X\). In Definition 3.3 we actually obtain a _partial_ algebra \(\mathscr{E}(X)\), which we call _external weak actor_ of \(X\), together with a monomorphism of functors \(\operatorname{SplExt}(-,X)\rightarrow\operatorname{Hom}(-,\mathscr{E}(X))\), which we study in detail in the case of quadratic varieties (Section 4). We end the article with some open questions (Section 5). ## 1. Preliminaries Our work takes place in _semi-abelian categories_ which were introduced in [17] in order to capture categorical-algebraic properties of non-abelian algebraic structures. A category is _semi-abelian_ if it is pointed, admits binary coproducts, is protomodular and Barr-exact. Well-known examples are the category \(\mathbf{Gp}\) of groups, the category \(\mathbf{Rng}\) of not necessarily unitary rings, or any variety \(\mathscr{V}\) of non-associative algebras over a field \(\mathbb{F}\). From now on, when we consider a category \(\mathscr{C}\), we assume it to be semi-abelian; when we consider a variety \(\mathscr{V}\), we assume the field \(\mathbb{F}\) is fixed, so that we may drop it from our notation. ### Internal actions and their representability A central notion which appears in the semi-abelian context is that of _internal actions_. In fact, in any semi-abelian category, the functor \(\operatorname{Ker}_{B}\colon\operatorname{\mathbf{Pt}}_{B}(\mathscr{C}) \to\mathscr{C}\) sending a point \((A,\pi\colon A\to B,s\colon B\to A)\) over \(B\) (where \(\pi\circ s=1_{B}\)) to the kernel of \(\pi\) is monadic. The corresponding monad on \(\mathscr{C}\) is the functor \[B\flat(-)\colon\mathscr{C}\to\mathscr{C}\colon X\mapsto B\flat X,\] where the object \(B\flat X\) is the kernel of \(\langle 1_{B},0\rangle\colon B+X\to B\), together with certain natural transformations \(\eta^{B}\colon 1_{\mathscr{C}}\Rightarrow B\flat(-)\) and \(\mu^{B}\colon B\flat(B\flat(-))\Rightarrow B\flat(-)\). 
An **(internal) \(B\)-action** is a \(B\flat(-)\)-algebra, which is a pair \((X,\xi)\) consisting of an object \(X\) and a morphism \(\xi\colon B\flat X\to X\) called an **action** of \(B\) on \(X\), such that the diagrams commute. We write \(\operatorname{Act}(B,X)\) for the set of (internal) actions of \(B\) on \(X\). One equivalent way of viewing actions uses split extensions. We recall that a **split extension** of \(B\) by \(X\) is a diagram in \(\mathscr{C}\) such that \(\pi\circ s=1_{B}\) and \((X,i)\) is the kernel of \(\pi\). **Lemma 1.1** ([3, 4]).: _Given two objects \(B\) and \(X\) in \(\mathscr{C}\), there is a bijection_ \[\operatorname{SplExt}(B,X)\cong\operatorname{Act}(B,X)\] _between the set of isomorphism classes of split extensions of \(B\) by \(X\) and the set of actions of \(B\) on \(X\). _ In fact, from any action \(\xi\colon B\flat X\to X\) one can construct a split extension of \(B\) by \(X\) and vice versa. Then, the object \(A\) in the previous split extension is called the **semi-direct product** of \(B\) with \((X,\xi)\). We denote it by \(B\ltimes_{\xi}X\). Let \(X\) be an object in the category, then both \[\operatorname{Act}(-,X)\colon\mathscr{C}^{\operatorname{op}}\to\operatorname{ \mathbf{Set}}\] and \[\operatorname{SplExt}(-,X)\colon\mathscr{C}^{\operatorname{op}}\to\operatorname {\mathbf{Set}}\] define a contravariant functor from \(\mathscr{C}\) to \(\operatorname{\mathbf{Set}}\). In [2], the authors proved that Lemma 1.1 extends to an isomorphism of functors \(\operatorname{SplExt}(-,X)\cong\operatorname{Act}(-,X)\). **Definition 1.2** ([3]).: A semi-abelian category \(\mathscr{C}\) is said to be **action representable** if for every object \(X\) in it the functor \(\operatorname{Act}(-,X)\) is representable. In other words, there exists an object \([X]\) in \(\mathscr{C}\), called the **actor** of \(X\), and a natural isomorphism \[\operatorname{Act}(-,X)\cong\operatorname{Hom}_{\mathscr{C}}(-,[X]).\] Basic examples of semi-abelian categories which satisfy action representability are the category \(\operatorname{\mathbf{Gp}}\) of groups with the actor of \(X\) being \(\operatorname{Aut}(X)\), the category \(\operatorname{\mathbf{Lie}}\) of Lie algebras with the actor of \(X\) being \(\operatorname{Der}(X)\), and any abelian category with the actor of \(X\) being the zero object. The representability of actions of the category \(\mathbf{Assoc}\) of associative algebras was studied in [13] where the authors proved that it is not action representable. It is explained in [2] that action representability is equivalent to the condition that for any object \(X\) in \(\mathscr{C}\) the category \(\mathbf{SplExt}(X)\) of split extensions in \(\mathscr{C}\) with kernel \(X\) has a terminal object of the form We can weaken this condition assuming instead that for any \(X\), every object in \(\mathbf{SplExt}(X)\) is accessible (i.e. it has a morphism into a subterminal or so-called _faithful_ object, see [5]). In this way, we encompass a wider class of examples that did not satisfy representability of actions such as the category \(\mathbf{Pois}\) of (non-commutative) Poisson algebras or the category \(\mathbf{CAssoc}\) of commutative associative algebras. This notion called **action accessibility** was introduced by D. Bourn and G. Janelidze [5] in order to calculate centralisers of subobjects or of equivalence relations. It was then proven by A. Montoli that any Orzech category of interest is an action accessible category [24]. 
Since by definition the existence of a terminal object in \(\mathbf{SplExt}(X)\) is stronger than every object being accessible, it is immediate that _action representability \(\Rightarrow\) action accessibility._ Recently, in [16], G. Janelidze introduced an intermediate notion: _weak representability of actions_. **Definition 1.3**.: A semi-abelian category \(\mathscr{C}\) is said to be **weakly action representable** if for every object \(X\) in it, there exists an object \(T\) of \(\mathscr{C}\) and a monomorphism of functors \[\tau\colon\operatorname{Act}(-,X)\rightarrowtail\operatorname{Hom}_{\mathscr{ C}}(-,T).\] Such an object \(T\) is called a **weak actor** of \(X\). We call a morphism \(\varphi\colon B\to T\) in the image of \(\tau_{B}\) an **acting morphism**. It is clear from the definitions that every action representable category is weakly action representable. Also in [16], it is proven that the category \(\mathbf{Assoc}\) is weakly action representable with a weak actor of \(X\) given by the associative algebra \[\operatorname{Bim}(X)=\{(f*-,-*f)\in\operatorname{End}(X)\times \operatorname{End}(X)^{\operatorname{op}}\mid f*(xy)=(f*x)y,\] \[(xy)*f=x(y*f),\;x(f*y)=(x*f)y,\;\forall x,y\in X\}\] of _bimultipliers_ of \(X\) (see [21]). The case of the category \(\mathbf{Leib}\) of Leibniz algebras was studied in [9], where the authors showed that a weak actor of a Leibniz algebra \(X\) is the Leibniz algebra \[\operatorname{Bider}(X)=\{(d,D)\in\operatorname{End}(X)^{2}\mid d(xy)=d(x)y+xd (y),\] \[D(xy)=D(x)y-D(y)x,\;xd(y)=xD(y),\;\forall x,y\in X\}\] of _biderivations_ of \(X\) (see [20] and [22]), where the bilinear operation is defined by \[[(d,D),(d^{\prime},D^{\prime})]=(d\circ d^{\prime}-d^{\prime}\circ d,D\circ d^ {\prime}-d^{\prime}\circ D).\] In the same paper, a counterexample was studied: it was proven that the category \(\mathbf{CPois}\) of commutative Poisson algebras is not weakly action representable. Another important observation made by G. Janelidze is that every weak action representable category is action accessible. We thus have that _action representability \(\Rightarrow\) weak action representability \(\Rightarrow\) action accessibility._ A first example of a category which is action accessible but not weakly action representable was given by J. R. A. Gray in [14]. We give two more in Section 2. **Varieties of non-associative algebras.** We now recall the algebraic setting we are working in: _varieties of non-associative algebras_ over a field \(\mathbb{F}\). We think of those as collections of algebras satisfying a chosen set of polynomial equations. The interested reader can find a more detailed presentation of the subject in [28]. By a **(non-associative) algebra**\(A\) we mean a vector space \(A\) equipped with a bilinear operation \(A\times A\to A\colon(x,y)\mapsto xy\) which we call the _multiplication_. The existence of a unit element is not assumed, nor are any other conditions on the multiplication besides its bilinearity. Let \(\mathbf{Alg}\) denote the category of non-associative algebras where morphisms are the linear maps which preserve the multiplication. We consider the _free algebra_ functor \(\mathbf{Set}\to\mathbf{Alg}\) which sends a set \(S\) to the _free algebra_ generated by elements of \(S\). This functor has the forgetful functor as a right adjoint. 
Moreover, it factorises through the _free magma functor_ FM\(\colon\mathbf{Set}\to\mathbf{Mag}\), which sends a set \(S\) to the magma \(\operatorname{FM}(S)\) of non-associative words in \(S\), and the _magma algebra functor_\(\mathbb{F}[-]\colon\mathbf{Mag}\to\mathbf{Alg}\). Let \(S\) be a set. An element \(\varphi\) of \(\mathbb{F}[\operatorname{FM}(S)]\) is called a **non-associative polynomial** on \(S\). We say that such a polynomial is a **monomial** if it is a scalar multiple of an element in \(\operatorname{FM}(S)\). For example, if \(S=\{x,y,z,t\}\), then \((xy)t+(zy)x\), \(xx+yz\) and \((xt)(yz)\) are polynomials in \(S\) and only the last one is a monomial. For a monomial \(\varphi\) on a set \(\{x_{1},\dots,x_{n}\}\), we define its **type** as the \(n\)-tuple \((k_{1},\dots,k_{n})\in\mathbb{N}^{n}\) where \(k_{i}\) is the number of times \(x_{i}\) appears in \(\varphi\), and its **degree** as the natural number \(k_{1}+\dots+k_{n}\). A polynomial is said to be **multilinear** if all monomials composing it have the same type of the form \((1,\dots,1)\). Among the examples above, only the last one is multilinear. **Definition 1.4**.: An **identity** of an algebra \(A\) is a non-associative polynomial \(\varphi=\varphi(x_{1},\dots,x_{n})\) such that \(\varphi(a_{1},\dots,a_{n})=0\) for all \(a_{1}\), \(\dots\), \(a_{n}\in A\). We say that the algebra \(A\)_satisfies_ the identity \(\varphi\). Let \(I\) be a subset of \(\mathbb{F}[\operatorname{FM}(S)]\) with \(S\) being a set of variables. The **variety of algebras** determined by \(I\) is the class of all algebras which satisfy all the identities in \(I\). We say that a variety **satisfies the identities in \(I\)** if all algebras in this variety satisfy the given identities. In particular, if the variety is determined by a set of multilinear polynomials, then we say that the variety is **operadic**. If there exists a set of identities of degree \(2\) and \(3\) that generate all the identities of \(\mathscr{V}\), we say that the variety is **quadratic**. Recall--see for instance [10] where this is explained in detail--that an operadic, quadratic variety of algebras over a field can be viewed as a variety determined by a quadratic operad. Any variety of non-associative algebras can, of course, be seen as a category where the morphisms are the same as in \(\mathbf{Alg}\). In particular, any such variety is a semi-abelian category. _Remark 1.5_.: Whenever the characteristic of the field \(\mathbb{F}\) is zero, any variety of non-associative algebras over \(\mathbb{F}\) is operadic. This is due to the well-known multilinearisation process, see [26, Corollary 3.7]. The reason behind the name "operadic" is explained in [27, Section 2]. _Examples 1.6_.: 1. We write \(\mathbf{AbAlg}\) for the variety of _abelian_ algebras determined by the identity \(xy=0\). Seen as a category, this variety is isomorphic to the category \(\mathbf{Vec}\) of vector spaces over \(\mathbb{F}\). It is the only variety of non-associative algebras which is an abelian category; this explains the terminology. 2. We write \(\mathbf{Assoc}\) for the variety of _associative_ algebras determined by the identity of _associativity_ which is \(x(yz)-(xy)z=0\), or equivalently \(x(yz)=(xy)z\). 3. We write \(\mathbf{AAssoc}\) for the variety of _anti-associative_ algebras, determined by the _anti-associative_ identity \(x(yz)=-(xy)z\). 4. 
We write \(\mathbf{Com}\) for the variety of _commutative_ algebras determined by the identity of _commutativity_ which is \(xy-yx=0\), or equivalently \(xy=yx\). 5. We write \(\mathbf{ACom}\) for the variety of _anti-commutative_ algebras determined by _anti-commutativity_ which is \(xy+yx=0\), or equivalently \(xy=-yx\). 6. We write \(\mathbf{CAssoc}\) for the variety of commutative associative algebras. 7. We write \(\mathbf{ACAAssoc}\) for the variety of anti-commutative anti-associative algebras. 8. We write \(\mathbf{Lie}\) for the variety of _Lie algebras_ determined by _anti-commutativity_ and the _Jacobi identity_, which respectively are \(xy+yx=0\) and \(x(yz)+y(zx)+z(xy)=0\). 9. One can see that all the previous examples are operadic varieties. Let us provide a non-operadic example: the variety \(\mathbf{Bool}\) of _Boolean rings_, which may be seen as associative \(\mathbb{Z}_{2}\)-algebras satisfying \(xx=x\). This variety is action representable. 10. We write \(\mathbf{JJord}\) for the variety of _Jacobi-Jordan algebras_ which is determined by commutativity and the Jacobi identity. Jacobi-Jordan algebras are the commutative counterpart of Lie algebras. Indeed, if we have an associative algebra \((X,*)\), then the bilinear maps \[(x,y) \mapsto[x,y]=x*y-y*x\] \[(x,y) \mapsto x\circ y=x*y+y*x\] define respectively a Lie algebra structure and a Jacobi-Jordan algebra structure on \(X\). The name of Jordan in the definition of a Jacobi-Jordan algebra is justified by the fact that every Jacobi-Jordan algebra is a Jordan algebra (see [6]). Sometimes this variety is also known as _mock-Lie_ algebras. 11. We write \(\mathbf{Leib}\) for the variety of _(right) Leibniz algebras_ determined by the _(right) Leibniz identity_ which is \((xy)z-(xz)y-x(yz)=0\). 12. Taking any variety \(\mathscr{V}\), one can look at a subvariety of it by adding further identities to be satisfied. For example, let \(\mathscr{V}\) be a variety determined by a set of identities \(I\) and let \(k\) be any positive natural number, then we write \(\mathbf{Nil}_{k}(\mathscr{V})\) for the variety of \(k\)_-step nilpotent algebras in \(\mathscr{V}\)_ determined by the identities in \(I\) and the identities of the form \(x_{1}\cdots x_{k+1}=0\) with all possible choices of parentheses. We now want to explain how we may describe actions in a variety of non-associative algebras. As we already mentioned before, in a semi-abelian category, actions are split extensions. **Definition 1.7**.: Let \[\begin{CD}0@>{}>{}>X@>{i}>{}>A@>{\pi}>{s}>B@>{}>{}>0\end{CD} \tag{1.1}\] be a split extension in the variety \(\mathscr{V}\). The pair of bilinear maps \[l\colon B\times X\to X,\qquad r\colon X\times B\to X\] defined by \[b*x=s(b)i(x),\quad x*b=i(x)s(b),\quad\forall b\in B,\;\forall x\in X\] where \(b*-=l(b,-)\) and \(-*b=r(-,b)\), is called the _derived action_ of \(B\) on \(X\) associated with (1.1). Given a pair of bilinear maps \[l\colon B\times X\to X,\qquad r\colon X\times B\to X\] with \(B\), \(X\) objects of \(\mathscr{V}\), we may define a multiplication on the direct sum of vector spaces \(B\oplus X\) by \[(b,x)\cdot(b^{\prime},x^{\prime})=(bb^{\prime},xx^{\prime}+b*x^{\prime}+x*b^{ \prime}) \tag{1.2}\] with \(b*x^{\prime}\coloneqq l(b,x^{\prime})\) and \(x*b^{\prime}\coloneqq r(x,b^{\prime})\). This construction allows us to build the split extension in **Alg** \[\begin{CD}0@>{}>{}>X@>{i_{2}}>{}>B\oplus X@>{\pi_{1}}>{i_{1}}>B@>{}>{}>0\end{CD} \tag{1.3}\] with \(i_{2}(x)=(0,x)\), \(i_{1}(b)=(b,0)\) and \(\pi_{1}(b,x)=b\). 
This is a split extension in \(\mathscr{V}\) if and only if \((B\oplus X,\cdot)\) is an object of \(\mathscr{V}\), i.e. it satisfies the identities which determine \(\mathscr{V}\). In other words, we have the following result analogous to [25, Theorem 2.4] and [13, Lemma 1.8]: **Lemma 1.8**.: _In a variety of non-associative algebras \(\mathscr{V}\), given a pair of bilinear maps_ \[l\colon B\times X\to X,\qquad r\colon X\times B\to X,\] _we define the multiplication on \(B\oplus X\) as above in (1.2). Then, the pair \((l,r)\) is a derived action of \(B\) on \(X\) if and only if \((B\oplus X,\cdot)\) is in \(\mathscr{V}\). In this case, we call \(B\oplus X\) the **semi-direct product** of \(B\) and \(X\) (with respect to the derived action) and we denote it by \(B\ltimes X\)._ _Remark 1.9_.: Notice that, for any split extension (1.1) and the corresponding derived action \((l,r)\), there is an isomorphism of split extensions where \(\theta\colon B\ltimes X\to A\colon(b,x)\mapsto s(b)+i(x)\). Thus, when we write \(b*x\) (resp. \(x*b\)), one can think of it as the multiplication \((b,0)\cdot(0,x)\) (resp. \((0,x)\cdot(b,0)\)) in \(B\ltimes X\). ### Categorical Consequences Let \(\mathscr{V}\) be an operadic variety of non-associative algebras. We recall two results which will be useful for understanding the rest of the paper. **Theorem 1.10** ([11, 12]).: _The following conditions are equivalent:_ 1. \(\mathscr{V}\) _is_ _algebraically coherent__[_8_]__;_ 2. \(\mathscr{V}\) _is an Orzech category of interest;_ 3. \(\mathscr{V}\) _is action accessible;_ 4. _there exist_ \(\lambda_{1}\)_,_ \(\dots\)_,_ \(\lambda_{8}\)_,_ \(\mu_{1}\)_,_ \(\dots\)_,_ \(\mu_{8}\) _in_ \(\mathbb{F}\) _such that_ \[x(yz)=\lambda_{1}(xy)z+\lambda_{2}(yx)z+\lambda_{3}z(xy)+\lambda _{4}z(yx)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \lambda_{5}(xz)y+\lambda_{6}(zx)y+\lambda_{7}y(xz)+\lambda_{8}y(zx)\] _and_ \[(yz)x=\mu_{1}(xy)z+\mu_{2}(yx)z+\mu_{3}z(xy)+\mu_{4}z(yx)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mu_{5}( xz)y+\mu_{6}(zx)y+\mu_{7}y(xz)+\mu_{8}y(zx)\] _are identities in_ \(\mathscr{V}\)_._ We call the two previous identities together the \(\lambda/\mu\)**-rules**. Since (WRA) implies action accessibility in general, the existence of the \(\lambda/\mu\)-rules is a necessary condition for the variety \(\mathscr{V}\) to be weakly action representable. **Theorem 1.11** ([13]).: _The following conditions are equivalent:_ 1. \(\mathscr{V}\) _is action representable;_ _ 2. \(\mathscr{V}\) _is either the variety_ \(\mathbf{Lie}\) _or the variety_ \(\mathbf{AbAlg}\)_._ \(\square\)__ Theorem 1.11 helps motivating our interest in the condition (WRA). In fact, in our context, there is only one non-trivial example of a variety which is action representable. Therefore, in order to study the representability of actions, it makes sense to weaken our assumptions. However, action accessibility may not be enough to study some kind of (weak) actor. The next result explains one way of understanding weak action representability for any variety. 
**Proposition 1.12**.: _A variety of non-associative algebras \(\mathscr{V}\) is weakly action representable if and only if for any object \(X\) in it, there exists an object \(T\) of \(\mathscr{V}\) such that for every derived action of an object \(B\) of \(\mathscr{V}\) on \(X\)_ \[l\colon B\times X\to X,\qquad r\colon X\times B\to X,\] _there exists a unique morphism \(\varphi\in\operatorname{Hom}_{\mathscr{V}}(B,T)\) and a derived action \((l^{\prime},r^{\prime})\) of \(\varphi(B)\) on \(X\) such that_ \[l^{\prime}(\varphi(b),x)=l(b,x),\quad r^{\prime}(x,\varphi(b))=r(x,b),\] _for every \(b\in B\) and for every \(x\in X\)._ Proof.: \((\Rightarrow)\) If \(\mathscr{V}\) is weakly action representable, then for any object \(X\) in it there exists a weak representation \((T,\tau)\). Let \(B\) be an object of \(\mathscr{V}\) which acts on \(X\) and let \(\varphi\colon B\to T\) be the corresponding acting morphism. Consider the split extension diagram where \(\widetilde{\varphi}\) is the corestriction of \(\varphi\) to its image, \(i^{\prime}(x)=(0,x)\), \(s^{\prime}(\varphi(b))=(\varphi(c),0)\), where \((c,0)=s(b)\), and \(f(b,x)=(\varphi(b),x)\). Then the action of \(\varphi(B)\) on \(X\) is defined by the pair of bilinear maps \[l^{\prime}\colon\varphi(B)\times X\to X,\qquad r^{\prime}\colon X\times\varphi(B)\to X\] where \[l^{\prime}(\varphi(b),x)=s^{\prime}(\varphi(b))i^{\prime}(x)=s(b)i(x)=l(b,x)\] and \[r^{\prime}(x,\varphi(b))=i(x)s^{\prime}(\varphi(b))=i(x)s(b)=r(x,b),\] for every \(b\in B\) and for every \(x\in X\). \((\Leftarrow)\) Conversely, given an object \(X\) of \(\mathscr{V}\), a weak representation of \(\operatorname{SplExt}(-,X)\) is given by \((T,\tau)\), where the component \[\tau_{B}\colon\operatorname{SplExt}(B,X)\rightarrow\operatorname{Hom}_{\mathscr{V}}(B,T)\] sends every action of \(B\) on \(X\) to the corresponding morphism \(\varphi\). Moreover, \(\tau_{B}\) is an injection since the morphism \(\varphi\) is uniquely determined by the action of \(B\) on \(X\). Thus \(\tau\) is a monomorphism of functors. \(\square\) ### Partial Algebras We end this section with a construction we shall use throughout the text. **Definition 1.13**.: A _partial algebra_ over \(\mathbb{F}\) is an \(\mathbb{F}\)-vector space \(X\) endowed with a _bilinear partial operation_ \[\cdot\colon\Omega\to X,\] where \(\Omega\) is a vector subspace of \(X\times X\). When \(\Omega=X\times X\) we say that the algebra is _total_. Let \((X,\cdot,\Omega)\) and \((X^{\prime},*,\Omega^{\prime})\) be partial algebras. A homomorphism of partial algebras is a linear map \(f\colon X\to X^{\prime}\) such that \(f(x\cdot y)=f(x)*f(y)\) whenever \((x,y)\in\Omega\)--which tacitly implies that \((f(x),f(y))\in\Omega^{\prime}\). (In other words, both \(x\cdot y\) and \(f(x)*f(y)\) are defined.) We denote by \(\mathbf{PAlg}\) the category whose objects are partial algebras and whose morphisms are partial algebra homomorphisms. We say that a partial algebra \(X\) _satisfies an identity_ when that identity holds wherever the bilinear partial operation is well defined. For instance, a partial algebra \(X\) is associative if \[x(yz)=(xy)z\] for all \(x\), \(y\), \(z\in X\) such that \((x,y)\), \((y,z)\), \((x,yz)\), \((xy,z)\in\Omega\). ## 2. Commutative and anti-commutative algebras In this section we aim to study the (weak) representability of actions of some varieties of non-associative algebras which satisfy the commutative law or the anti-commutative law.
As explained in Section 1, we may assume our variety to satisfy the \(\lambda/\mu\)-rules, or equivalently to be action accessible. When \(\mathscr{V}\) is either a variety of commutative or anti-commutative algebras, i.e. \(xy=\varepsilon yx\) is an identity of \(\mathscr{V}\), with \(\varepsilon=\pm 1\), the \(\lambda/\mu\)-rules reduce to \[x(yz)=\alpha(xy)z+\beta(xz)y,\] for some \(\alpha\), \(\beta\in\mathbb{F}\). The following proposition is a representation theory exercise: **Proposition 2.1**.: _Let \(\mathscr{V}\) be a non-abelian, action accessible, operadic variety of non-associative algebras._ 1. _If_ \(\mathscr{V}\) _is a variety of commutative algebras, then_ \(\mathscr{V}\) _is a either a subvariety of_ \(\mathbf{CAssoc}\) _or a subvariety of_ \(\mathbf{JJord}\)_._ 2. _If_ \(\mathscr{V}\) _is a variety of anti-commutative algebras, then_ \(\mathscr{V}\) _is either a subvariety of_ \(\mathbf{Lie}\) _or a subvariety of_ \(\mathbf{ACAAssoc}\)_._ \(\square\)__ **Corollary 2.2**.: _Let \(\mathscr{V}\) be a non-abelian, action accessible, operadic, quadratic variety of non-associative algebras._ 1. _If_ \(\mathscr{V}\) _is commutative, then it has to be one of the following varieties:_ \(\mathbf{Com}\)_,_ \(\mathbf{JJord}\)_, or their intersection,_ \(\mathbf{Nil}_{2}(\mathbf{Com})\)_._ 2. _If_ \(\mathscr{V}\) _is anti-commutative, then it has to be one of the following varieties:_ \(\mathbf{Lie}\)_,_ \(\mathbf{ACAAssoc}\)_, or their intersection,_ \(\mathbf{Nil}_{2}(\mathbf{ACom})\)_._ \(\square\)__ We already know that \(\mathbf{Lie}\) is action representable and that the actor of a Lie algebra \(X\) is the Lie algebra \(\mathrm{Der}(X)\) of derivations of \(X\). Therefore, we shall study the weakly representability of actions of the other five varieties listed in the corollary. ### Commutative associative algebras The representability of actions of the variety of commutative associative algebras over a field was studied in [3], where F. Borceux, G. Janelidze and G. M. Kelly proved that it is not action representable. We want to extend this result proving that the variety \(\mathbf{Cassoc}\) is not weakly action representable. We start by recalling the following result. **Lemma 2.3** ([3]).: _Let \(X\) be a commutative associative algebra. There exists a natural isomorphism_ \[\mathrm{SplExt}(-,X)\cong\mathrm{Hom}_{\mathbf{Assoc}}(-,\mathrm{M}(X)),\] _where_ \[\mathrm{M}(X)=\{f\in\mathrm{End}(X)\mid f(xy)=f(x)y,\quad\forall x,y\in X\}\] _is the associative algebra of multipliers of \(X\), endowed with a product induced by the usual composition of functions (see [7, 21])._ \(\square\)__ We observe that \(\mathrm{M}(X)\) in general does not need to be a commutative algebra. For instance, let \(X=\mathbb{F}^{2}\) be the abelian two-dimensional algebra, then \(\mathrm{M}(X)=\mathrm{End}(X)\) which is not commutative. However there are special cases where \(\mathrm{M}(X)\) is an object of \(\mathbf{CAssoc}\), such as when the _annihilator_ of \(X\) (which coincides with the categorical notion of centre) \[\mathrm{Ann}(X)=\{x\in X\mid xy=0,\;\forall y\in X\}\] is trivial or when \(X^{2}=X\), where \(X^{2}\) denotes the subalgebra of \(X\) generated by the products \(xy\) where \(x\), \(y\in X\). We refer the reader to [7] for further details. **Theorem 2.4**.: _Let \(X\) be a commutative associative algebra. The following statements are equivalent:_ 1. \(\mathrm{M}(X)\) _is a commutative associative algebra;_ 2. _the functor_ \(\mathrm{SplExt}(-,X)\) _admits a weak representation;_ 3. 
\(\mathrm{M}(X)\) _is the actor of_ \(X\)_, hence_ \(\mathrm{SplExt}(-,X)\) _is representable._ Proof.: (i) \(\Rightarrow\) (iii). If \(\mathrm{M}(X)\) is an object of \(\mathbf{CAssoc}\), we have a natural isomorphism \[\rho\colon\;\mathrm{SplExt}(-,X)\cong\mathrm{Hom}_{\mathbf{CAssoc}}(-,\mathrm{M}(X)).\] (iii) \(\Rightarrow\) (ii). If \(\mathrm{M}(X)\) is the actor of \(X\), then trivially \(\mathrm{Act}(-,X)\) admits a weak representation. (ii) \(\Rightarrow\) (i). Finally, if we suppose that the functor \(\mathrm{SplExt}(-,X)\) admits a weak representation \((T,\tau)\), then, by composition, we have a monomorphism of functors \[i^{*}\circ\tau\circ\rho^{-1}\colon\;\mathrm{Hom}_{\mathbf{Assoc}}(-,\mathrm{M}(X))\rightsquigarrow\mathrm{Hom}_{\mathbf{Assoc}}(-,T),\] where the monomorphism of functors \[i^{*}\colon\;\mathrm{Hom}_{\mathbf{CAssoc}}(-,T)\rightsquigarrow\mathrm{Hom}_{\mathbf{Assoc}}(-,T)\] is induced by the full inclusion of the category \(\mathbf{CAssoc}\) in \(\mathbf{Assoc}\). By the _Yoneda Lemma_, it follows that \(\mathrm{M}(X)\) is a subobject of \(T\) in the category \(\mathbf{Assoc}\). But \(T\) is also an object of \(\mathbf{CAssoc}\), thus \(\mathrm{M}(X)\) is a commutative associative algebra. Since we have examples where \(\mathrm{M}(X)\) is not commutative, we obtain: **Corollary 2.5**.: _The variety \(\mathbf{CAssoc}\) of commutative associative algebras over a field is not weakly action representable. _ This answers one of the open questions formulated by G. Janelidze in [16]: Is the converse of the known implication _weakly action representable category \(\Rightarrow\) action accessible category_ valid in the context of varieties of non-associative algebras over a field? The answer is "no" and a counterexample is given by the variety \(\mathbf{CAssoc}\). ### Jacobi-Jordan algebras As already mentioned in Section 1, every split extension of \(B\) by \(X\) in \(\mathbf{Lie}\) is represented by a homomorphism \(B\to\mathrm{Der}(X)\). For Jacobi-Jordan algebras, the role that derivations play in \(\mathbf{Lie}\) is taken over by the so-called _anti-derivations_. **Definition 2.6**.: Let \(X\) be a Jacobi-Jordan algebra. An _anti-derivation_ is a linear map \(d\colon X\to X\) such that \[d(xy)=-d(x)y-d(y)x,\quad\forall x,y\in X.\] The (left) multiplications \(L_{x}\) for \(x\in X\) are particular anti-derivations, called _inner anti-derivations_. We denote by \(\mathrm{ADer}(X)\) the space of anti-derivations of \(X\) and by \(\mathrm{Inn}(X)\) the subspace of the inner anti-derivations. Anti-derivations play a significant role in the study of cohomology of Jacobi-Jordan algebras: see [1] for further details. We now want to make explicit what the derived actions in the category \(\mathbf{JJord}\) are and how they are related to anti-derivations. The following is an easy application of Lemma 1.8. **Proposition 2.7**.: _Let \(X\) and \(B\) be two Jacobi-Jordan algebras. Given a pair of bilinear maps_ \[l\colon B\times X\to X,\qquad r\colon X\times B\to X,\] _we construct \((B\oplus X,\cdot)\) as in (1.2). Then \((B\oplus X,\cdot)\) is a Jacobi-Jordan algebra if and only if_ 1. \(b*x=x*b\)_;_ 2. \(b*(xx^{\prime})=-(b*x)x^{\prime}-(b*x^{\prime})x\)_;_ 3. 
\((bb^{\prime})*x=-b*(b^{\prime}*x)-b^{\prime}*(b*x)\)_;_ _for all \(b\), \(b^{\prime}\in B\) and \(x\), \(x^{\prime}\in X\)._ In an equivalent way, a derived action of \(B\) on \(X\) in the variety \(\mathbf{JJord}\) is given by a linear map \[B\to\operatorname{ADer}(X)\colon b\mapsto b*-\] which satisfies \[(bb^{\prime})*-=-\langle b*-,b^{\prime}*-\rangle,\quad\forall b,b^{\prime}\in B, \tag{2.1}\] where \[\langle-,-\rangle\colon\operatorname{ADer}(X)\times\operatorname{ADer}(X) \to\operatorname{End}(X),\quad\langle f,f^{\prime}\rangle=f\circ f^{\prime}+ f^{\prime}\circ f\] denotes the _anti-commutator_ between two anti-derivations of \(X\). _Remark 2.8_.: The vector space \(\operatorname{ADer}(X)\) endowed with the anti-commutator is not in general a Jacobi-Jordan algebra. For instance, if \(X=\mathbb{F}\) is the abelian one-dimensional algebra, then \(\operatorname{ADer}(X)=\operatorname{End}(X)\cong\mathbb{F}\) (every linear endomorphism of \(X\) is of the form \(\varphi_{\alpha}\colon x\mapsto\alpha x\), for some \(\alpha\in\mathbb{F}\)) does not satisfy the Jacobi identity. Nevertheless, there are some subspaces of \(\operatorname{ADer}(X)\) that are Jacobi-Jordan algebras. For instance, the subspace \(\operatorname{Inn}(X)\) of all inner anti-derivations of \(X\). Indeed, the linear map \[X\to\operatorname{Inn}(X)\colon x\mapsto L_{x},\] is a Jacobi-Jordan algebra homomorphism. This is true in general for the image of any linear map \(B\to\operatorname{ADer}(X)\) satisfying equation (2.1). Thus we need to use an algebraic structure which includes the space of anti-derivations endowed with the anti-commutator and which allows us to describe categorically the representability of actions of the variety \(\mathbf{JJord}\). The answer is given by _partial algebras_. Indeed, the vector space \(\operatorname{ADer}(X)\) endowed with the anti-commutator \(\langle-,-\rangle\) is a commutative partial algebra. In this case \(\Omega\) is the preimage \[\langle-,-\rangle^{-1}(\operatorname{ADer}(X))\] of the inclusion \(\operatorname{ADer}(X)\hookrightarrow\operatorname{End}(X)\). **Theorem 2.9**.: _Let \(X\) be a Jacobi-Jordan algebra._ 1. _There exists a natural isomorphism_ \[\rho\colon\operatorname{SplExt}(-,X)\cong\operatorname{Hom}_{\mathbf{PAlg}}(-,\operatorname{ADer}(X));\] 2. _if_ \(\operatorname{ADer}(X)\) _is a Jacobi-Jordan algebra, then the functor_ \(\operatorname{SplExt}(-,X)\) _is representable and_ \(\operatorname{ADer}(X)\) _is the actor of_ \(X\)_;_ 3. _if the functor_ \(\operatorname{SplExt}(-,X)\) _admits a weak representation, then_ \(\operatorname{ADer}(X)\) _satisfies the Jacobi identity._ Proof.: (1) For a Jacobi-Jordan algebra \(B\), we define the component \[\rho_{B}\colon\operatorname{SplExt}(B,X)\to\operatorname{Hom}_{\mathbf{PAlg}}(B, \operatorname{ADer}(X))\] as the functor which sends any split extension to the morphism \(B\to\operatorname{ADer}(X)\colon b\mapsto b*-\). The transformation \(\rho\) is natural. Indeed, for any Jacobi-Jordan algebra homomorphism \(f\colon B^{\prime}\to B\), it is easy to check that the diagram in \(\mathbf{Set}\) where \(\operatorname{Hom}(-,-)=\operatorname{Hom}_{\mathbf{Alg}}(-,-)\), is commutative. Moreover, for any Jacobi-Jordan algebra \(B\), the morphism \(\rho_{B}\) is an injection, as each element of \(\operatorname{SplExt}(B,X)\) is uniquely determined by the corresponding action of \(B\) on \(X\). Thus \(\rho\) is a monomorphism of functors. 
Finally \(\rho\) is a natural isomorphism since, given any Jacobi-Jordan algebra \(B\) and any homomorphism of partial algebras \(\varphi\colon B\to\operatorname{ADer}(X)\), the bilinear maps \(l_{\varphi}\colon B\times X\to X\colon(b,x)\mapsto\varphi(b)(x)\), \(r_{\varphi}=l_{\varphi}\) define a (unique) derived action of \(B\) on \(X\) such that \(\rho_{B}(l_{\varphi},r_{\varphi})=\varphi\). (2) If \(\operatorname{ADer}(X)\) is a Jacobi-Jordan algebra, then by (1) we have a natural isomorphism \[\operatorname{SplExt}(-,X)\cong\operatorname{Hom}_{\mathbf{JJord}}(-, \operatorname{ADer}(X)),\] hence \(\operatorname{ADer}(X)\) is the actor of \(X\). (3) If the functor \(\operatorname{SplExt}(-,X)\) admits a weak representation \((T,\tau)\), then, by composition, we have a monomorphism of functors \[i^{*}\circ\tau\circ\rho^{-1}\colon\operatorname{Hom}_{\mathbf{PAlg}}(-, \operatorname{ADer}(X))\rightharpoonup\operatorname{Hom}_{\mathbf{PAlg}}(-,T),\] where the monomorphism \[i^{*}\colon\operatorname{Hom}_{\mathbf{JJord}}(-,T)\rightharpoonup \operatorname{Hom}_{\mathbf{PAlg}}(-,T)\] is induced by the full inclusion of the variety \(\mathbf{JJord}\) in \(\mathbf{PAlg}\). From the _Yoneda Lemma_, it follows that \(\operatorname{ADer}(X)\) is a subobject of \(T\) in the category \(\mathbf{PAlg}\). But \(T\) is also an object of \(\mathbf{JJord}\), thus the Jacobi identity holds in \(\operatorname{ADer}(X)\). **Corollary 2.10**.: _The variety \(\mathbf{JJord}\) of Jacobi-Jordan algebras over a field is not weakly action representable. _ ### Two-step nilpotent commutative algebras We now analyse the case where \(\mathscr{V}\) is a subvariety of both \(\mathbf{CASsoc}\) and \(\mathbf{JJord}\), i.e. \(\mathscr{V}\) is the variety \(\mathbf{Nil}_{2}(\mathbf{Com})\) of two-step nilpotent commutative algebras. We recall this means that \(xyz=0\) is an identity of \(\mathscr{V}\). An example of such an algebra is the _Kronecker algebra_\(\mathfrak{k}_{1}\) (see [18]), which is the three-dimensional algebra with basis \(\{e_{1},e_{2},e_{3}\}\) and multiplication determined by \(e_{1}e_{2}=e_{2}e_{1}=e_{3}\). We shall show that \(\mathbf{Nil}_{2}(\mathbf{Com})\) is the only example of a weakly action representable, operadic, quadratic variety of commutative algebras. **Proposition 2.11**.: _Let \(X\) and \(B\) be two algebras in \(\mathbf{Nil}_{2}(\mathbf{Com})\). Given a pair of bilinear maps_ \[l\colon B\times X\to X,\qquad r\colon X\times B\to X,\] _we construct \((B\oplus X,\cdot)\) as in (1.2). Then \((B\oplus X,\cdot)\) is in \(\mathbf{Nil}_{2}(\mathbf{Com})\) if and only if_ 1. \(b*x=x*b\)_;_ 2. \(b*(xx^{\prime})=(b*x)x^{\prime}=0\) _ 3. \((bb^{\prime})*x=b*(b^{\prime}*x)=0\)_;_ _for all \(b\), \(b^{\prime}\in B\) and \(x\), \(x^{\prime}\in X\). _ The second equation of Proposition 2.11 states that, for every \(b\in B\), the linear map \(b*-\) belongs to the vector space \[[X]_{2}=\{f\in\operatorname{End}(X)\mid f(xy)=f(x)y=0,\;\forall x\in X\}.\] Moreover, seeing \([X]_{2}\) as an abelian algebra (i.e. \(\langle f,g\rangle=0_{\operatorname{End}(X)}\), for every \(f,g\in[X]_{2}\)), from the third equation we deduce that the linear map \[B\to[X]_{2}\colon b\mapsto b*-\] is an algebra homomorphism. 
On the other hand, given a morphism of algebras \[\varphi\colon B\to[X]_{2},\quad\varphi(b)=b*-\] satisfying \[b*(b^{\prime}*x)=0,\quad\forall b,b^{\prime}\in B,\;\forall x\in X,\] we can consider the split extension where the two-step nilpotent commutative algebra structure of \(B\oplus X\) is given by \[(b,x)*_{\varphi}(b^{\prime},x^{\prime})=(bb^{\prime},xx^{\prime}+b*x^{\prime }+b^{\prime}*x),\quad\forall(b,x),(b^{\prime},x^{\prime})\in B\oplus X.\] We can now claim the following result. **Theorem 2.12**.: 1. _Let_ \(B\) _and_ \(X\) _be two-step nilpotent commutative algebras. The isomorphism classes of split extensions of_ \(B\) _by_ \(X\) _are in bijection with the algebra homomorphisms_ \[B\to[X]_{2}\colon b\mapsto b*-\] _satisfying_ \[b*(b^{\prime}*x)=0,\quad\forall b,b^{\prime}\in B,\;\forall x\in X.\] (2.2) 2. _The variety_ \(\mathbf{Nil}_{2}(\mathbf{Com})\) _is weakly action representable. For any object_ \(X\) _of_ \(\mathbf{Nil}_{2}(\mathbf{Com})\)_, a weak representation of_ \(\operatorname{Act}(-,X)\) _is given by_ \[\tau\colon\operatorname{Act}(-,X)\to\operatorname{Hom}_{\mathbf{Nil}_{2}( \mathbf{Com})}(-,[X]_{2}),\] _where_ \(\tau_{B}\) _is the injection which sends any split extension of_ \(B\) _by_ \(X\) _to the corresponding homomorphism_ \(B\to[X]_{2}\)_, defined by_ \(b\mapsto b*-\) _as above._ 3. _A homomorphism_ \(B\to[X]_{2}\) _is an acting morphism if and only if it satisfies Equation (_2.2_)._ Proof.: (1) It follows from the analysis above. (2) We observe that \(\tau\) is a natural transformation. Indeed, for every morphism \(f\colon B^{\prime}\to B\) in \(\mathbf{Nil}_{2}(\mathbf{Com})\), we can check that the diagram in \(\mathbf{Set}\) is commutative. Moreover \(\tau_{B}\) is an injection since every isomorphism class of split extensions of \(B\) by \(X\) is uniquely determined by the corresponding derived action. Thus \(\tau\) is a monomorphism of functors and \(\mathbf{Nil}_{2}(\mathbf{Com})\) is a weakly action representable category. (3) Finally, \(\varphi\colon B\to[X]_{2}\) is an acting morphism if and only if it defines a split extension of \(B\) by \(X\) in \(\mathbf{Nil}_{2}(\mathbf{Com})\), i.e. it satisfies equation (2.2). Let us observe that not every morphism \(B\to[X]_{2}\) defines a split extension of \(B\) by \(X\). For instance, if \(B=\mathbb{F}\{b,b^{\prime}\}\) and \(X=\mathbb{F}\{x\}\cong\mathbb{F}\) are abelian algebras, then \([X]_{2}=\operatorname{End}(X)\) and the homomorphism \(\varphi\colon B\to[X]_{2}\), defined by \[\varphi(b)=\varphi(b^{\prime})=1_{X}\] is not an acting morphism. Indeed, \[\varphi(b)(\varphi(b^{\prime})(x))=1_{X}(1_{X}(x))=x\neq 0.\] ### Anti-commutative anti-associative algebras For the variety \(\mathbf{ACAAssoc}\) of anti-commutative anti-associative algebras, a similar description of split extensions and derived actions can be made as for the variety \(\mathbf{JJord}\). The role of the anti-derivations is played here by the endomorphisms in the associative partial algebra \[[X]\coloneqq\{f\in\operatorname{End}(X)\mid f(xy)=-f(x)y,\;\forall x\in X\},\] whose bilinear partial operation is given by \[\langle f,g\rangle=-f\circ g.\] It is easy to see that \(\langle-,-\rangle\) does not define, in general, a total algebra structure on \([X]\), nor need it be anti-commutative or anti-associative. An example is given by the abelian two-dimensional algebra \(X=\mathbb{F}^{2}\), where \([X]=\operatorname{End}(X)\). 
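Indeed, taking \(f=g=1_{X}\in[X]=\operatorname{End}(X)\) in this example, anti-commutativity of \(\langle-,-\rangle\) would force \(2\langle 1_{X},1_{X}\rangle=0\), whereas \(\langle 1_{X},1_{X}\rangle=-1_{X}\neq 0\); so the bracket fails to be anti-commutative on \([X]\) as soon as the characteristic of \(\mathbb{F}\) is different from \(2\).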
We may check that a derived action of \(B\) on \(X\) in the variety \(\mathbf{ACAAssoc}\) is the same thing as a partial algebra homomorphism \[B\to[X]\colon b\mapsto b*-\] which satisfies \[(bb^{\prime})*-=-b*(b^{\prime}*-),\quad\forall b,b^{\prime}\in B.\] Moreover, we obtain the following result whose proof is similar to the one of Theorem 2.9. **Theorem 2.13**.: _Let \(X\) be an object of \(\mathbf{ACAAssoc}\)._ 1. _There exists a natural isomorphism_ \[\operatorname{SplExt}(-,X)\cong\operatorname{Hom}_{\mathbf{PAlg}}(-,[X]);\] 2. _if_ \([X]\) _is an anti-commutative anti-associative algebra, then the functor_ \(\operatorname{SplExt}(-,X)\) _is representable and_ \([X]\) _is the actor of_ \(X\)_;_ 3. _if the functor_ \(\operatorname{SplExt}(-,X)\) _admits a weak representation, then_ \([X]\) _satisfies the anti-commutative law._ Again, this allows us to conclude: **Corollary 2.14**.: _The variety \(\mathbf{ACAAssoc}\) is not weakly action representable. _ ### Two-step nilpotent anti-commutative algebras We conclude this section by studying the representability of actions of the variety \(\mathbf{Nil}_{2}(\mathbf{ACom})\). An important example of a two-step nilpotent anti-commutative algebra is the \((2n+1)\)-dimensional _Heisenberg algebra_ \(\mathfrak{h}_{2n+1}\), that is, the algebra with basis \[\{e_{1},\dots,e_{n},f_{1},\dots,f_{n},h\}\] and non-trivial products \(e_{i}f_{j}=-f_{j}e_{i}=\delta_{ij}h\), for all \(i\), \(j=1,\dots,n\), where \(\delta_{ij}\) is the _Kronecker delta_. A similar analysis can be done as in the case of two-step nilpotent commutative algebras, so we may simply state the following theorem: **Theorem 2.15**.: 1. _Let_ \(B\) _and_ \(X\) _be two-step nilpotent anti-commutative algebras. The isomorphism classes of split extensions of_ \(B\) _by_ \(X\) _are in bijection with the algebra homomorphisms_ \[B\to[X]_{2}\colon b\mapsto b*-\] _where_ \([X]_{2}\) _is defined as in the commutative case, which satisfy the condition_ \[b*(b^{\prime}*x)=0,\quad\forall b,b^{\prime}\in B,\ \forall x\in X.\] (2.3) 2. _The variety_ \(\mathbf{Nil}_{2}(\mathbf{ACom})\) _is weakly action representable. For any object_ \(X\) _of_ \(\mathbf{Nil}_{2}(\mathbf{ACom})\)_, a weak representation of_ \(\mathrm{Act}(-,X)\) _is given by_ \[\tau\colon\mathrm{Act}(-,X)\rightarrowtail\mathrm{Hom}_{\mathbf{Nil}_{2}(\mathbf{ACom})}(-,[X]_{2}),\] _where_ \(\tau_{B}\) _is the injection which associates with any split extension of_ \(B\) _by_ \(X\) _the corresponding homomorphism_ \(B\to[X]_{2}\colon b\mapsto b*-\) _as in (_1_)._ 3. _A homomorphism_ \(B\to[X]_{2}\) _is an acting morphism if and only if it satisfies Equation (_2.3_)._ Again, if \(B=\mathbb{F}\{b,b^{\prime}\}\) is the abelian two-dimensional algebra and \(X=\mathbb{F}\) is the abelian one-dimensional algebra, the linear map \(\varphi\colon B\to[X]_{2}=\mathrm{End}(X)\), defined by \(\varphi(b)=\varphi(b^{\prime})=1_{X}\) is an example of a morphism in \(\mathbf{Nil}_{2}(\mathbf{ACom})\) which is not an acting morphism. ## 3. Representability of actions of non-associative algebras We want to extend the results obtained in the previous section by studying the (weak) representability of actions of a general variety of non-associative algebras over a field \(\mathbb{F}\). Again, we assume that \(\mathscr{V}\) is an action accessible, operadic variety of non-associative algebras over \(\mathbb{F}\).
Thus \(\mathscr{V}\) satisfies a set of multilinear identities \[\Phi_{k,i}(x_{1},\dots,x_{k})=0,\quad i=1,\dots,n,\] where \(k\) is the degree of the polynomial \(\Phi_{k,i}\). We fix \(\lambda_{1}\),..., \(\lambda_{8}\), \(\mu_{1}\),..., \(\mu_{8}\in\mathbb{F}\) which determine a choice of \(\lambda/\mu\) rules, i.e. \[x(yz)=\lambda_{1}(xy)z +\lambda_{2}(yx)z+\lambda_{3}z(xy)+\lambda_{4}z(yx)\] \[+\lambda_{5}(xz)y+\lambda_{6}(zx)y+\lambda_{7}y(xz)+\lambda_{8}y(zx)\] and \[(yz)x=\mu_{1}(xy)z +\mu_{2}(yx)z+\mu_{3}z(xy)+\mu_{4}z(yx)\] \[+\mu_{5}(xz)y+\mu_{6}(zx)y+\mu_{7}y(xz)+\mu_{8}y(zx)\] which are identities in \(\mathscr{V}\). Note that these are not unique, but fixed for our purposes. For any object \(X\) of \(\mathscr{V}\), we want to define a vector space \(\mathscr{E}(X)\) such that \[\mathrm{Inn}(X)\leq\mathscr{E}(X)\leq\mathrm{End}(X)^{2},\] where \(\mathrm{Inn}(X)=\{(L_{x},R_{x})\mid x\in X\}\) is the vector space of left and right multiplications of \(X\), and we want to endow it with a bilinear partial operation \[\langle-,-\rangle\colon\Omega\subseteq X\times X\to X,\] such that we can associate in a natural way a homomorphism of partial algebras \(B\to\mathscr{E}(X)\), with every split extension of \(B\) by \(X\) in \(\mathscr{V}\). To do this, we describe derived actions in \(\mathscr{V}\) in a similar fashion as in the previous section. **Proposition 3.1**.: _Let \(X\) and \(B\) be two algebras in \(\mathscr{V}\). Given a pair of bilinear maps_ \[l\colon B\times X\to X,\qquad r\colon X\times B\to X,\] _we construct \((B\oplus X,\cdot)\) as in (1.2). Then \((B\oplus X,\cdot)\) is an object of \(\mathscr{V}\) if and only if_ \[\Phi_{k,i}(\alpha_{1},\dots,\alpha_{k})=0,\quad\forall i=1,\dots,n,\] _where at least one of the \(\alpha_{1}\),..., \(\alpha_{k}\) is an element of of the form \((0,x)\), with \(x\in X\), and the others are of the form \((b,0)\), with \(b\in B\). The resulting algebra is the semi-direct product of \(B\) and \(X\), denoted by \(B\ltimes X\). _ Using the same notation of Remark 1.9, we obtain the following: **Corollary 3.2**.: _When every identity of \(\mathscr{V}\) can be deduced from the \(\lambda/\mu\) rules, \((B\oplus X,\cdot)\) is an object of \(\mathscr{V}\) if and only if_ 1. \(b*(xx^{\prime})=\lambda_{1}(b*x)x^{\prime}+\dots+\lambda_{8}x(x^{\prime}*b)\)_;_ 2. \((xx^{\prime})*b=\mu_{1}(b*x)x^{\prime}+\dots+\mu_{8}x(x^{\prime}*b)\)_;_ 3. \(x(x^{\prime}*b)=\lambda_{1}(xx^{\prime})*b+\dots+\lambda_{8}x^{\prime}(b*x)\)_;_ 4. \((x^{\prime}*b)x=\mu_{1}(xx^{\prime})*b+\dots+\mu_{8}x^{\prime}(b*x)\)_;_ 5. \(x(b*x^{\prime})=\lambda_{1}(x*b)x^{\prime}+\dots+\lambda_{8}b*(x^{\prime}x)\)_;_ 6. \((b*x^{\prime})x=\mu_{1}(x*b)x^{\prime}+\dots+\mu_{8}b*(x^{\prime}x)\)_;_ 7. \(x*(bb^{\prime})=\lambda_{1}(x*b)*b^{\prime}+\dots+\lambda_{8}b*(b^{\prime}*x)\)_;_ 8. \((bb^{\prime})*x=\mu_{1}(x*b)*b^{\prime}+\dots+\mu_{8}b*(b^{\prime}*x)\)_;_ 9. \(b*(b^{\prime}*x)=\lambda_{1}(bb^{\prime})*x+\dots+\lambda_{8}b^{\prime}*(x*b)\)_;_ 10. \((b^{\prime}*x)*b=\mu_{1}(bb^{\prime})*x+\dots+\mu_{8}b^{\prime}*(x*b)\)_;_ 11. \(b*(x*b^{\prime})=\lambda_{1}(b*x)*b^{\prime}+\dots+\lambda_{8}x*(b^{\prime}b)\)_;_ 12. \((x*b)*b^{\prime}=\mu_{1}(b*x)*b^{\prime}+\dots+\mu_{8}x*(b^{\prime}b)\)_,_ _for all \(b\), \(b^{\prime}\in B\) and \(x\), \(x^{\prime}\in X\). 
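For instance, for \(\mathscr{V}=\mathbf{Lie}\) one possible choice of \(\lambda/\mu\) rules is \(\lambda_{1}=\lambda_{7}=1\), \(\mu_{1}=\mu_{7}=-1\) and all other coefficients zero, since anti-commutativity and the Jacobi identity give \(x(yz)=(xy)z+y(xz)\) and \((yz)x=-(xy)z-y(xz)\). With this choice, condition 1 above becomes \[b*(xx^{\prime})=(b*x)x^{\prime}+x(b*x^{\prime}),\] i.e. every \(b\in B\) acts on \(X\) by a derivation, which recovers the classical description of Lie algebra actions. (This worked instance is added here as an illustration; it is not part of the list of examples below.)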
_ **Definition 3.3**.: For every object \(X\) of \(\mathscr{V}\), we define \(\mathscr{E}(X)\) as the subspace of all pairs \((f*-,-*f)\in\operatorname{End}(X)^{2}\) satisfying \[\Phi_{k,i}(\alpha_{1},\dots,\alpha_{k})=0,\quad\forall i=1,\dots,n,\] for each choice of \(\alpha_{j}=f\) and \(\alpha_{t}\in X\), where \(t\neq j\in\{1,\dots,k\}\) and \(fx\coloneqq f*x\), \(xf\coloneqq x*f\). We endow it with the bilinear map \(\langle-,-\rangle\colon\mathscr{E}(X)\times\mathscr{E}(X)\to\operatorname{ End}(X)^{2}\) \[\langle(f*-,-*f),(g*-,-*g)\rangle=(h*-,-*h),\] where \[x*h=\lambda_{1}(x*f)*g +\lambda_{2}(f*x)*g+\lambda_{3}g*(x*f)+\lambda_{4}g*(f*x)\] \[+\lambda_{5}(x*g)*f+\lambda_{6}(g*x)*f+\lambda_{7}f*(x*g)+\lambda _{8}f*(g*x)\] and \[h*x=\mu_{1}(x*f)*g +\mu_{2}(f*x)*g+\mu_{3}g*(x*f)+\mu_{4}g*(f*x)\] \[+\mu_{5}(x*g)*f+\mu_{6}(g*x)*f+\mu_{7}f*(x*g)+\mu_{8}f*(g*x).\] When every identity of \(\mathscr{V}\) is a consequence of the \(\lambda/\mu\) rules, \(\mathscr{E}(X)\) becomes the subspace of all pairs \((f*-,-*f)\in\operatorname{End}(X)^{2}\) satisfying 1. \(f*(xx^{\prime})=\lambda_{1}(f*x)x^{\prime}+\dots+\lambda_{8}x(x^{\prime}*f)\); 2. \((xx^{\prime})*f=\mu_{1}(b*x)x^{\prime}+\dots+\mu_{8}x(x^{\prime}*f)\); 3. \(x(x^{\prime}*f)=\lambda_{1}(xx^{\prime})*f+\dots+\lambda_{8}x^{\prime}(f*x)\); 4. \((x^{\prime}*f)x=\mu_{1}(xx^{\prime})*f+\dots+\mu_{8}x^{\prime}(f*x)\); 5. \(x(f*x^{\prime})=\lambda_{1}(x*f)x^{\prime}+\dots+\lambda_{8}f*(x^{\prime}x)\); 6. \((f*x^{\prime})x=\mu_{1}(x*f)x^{\prime}+\dots+\mu_{8}f*(x^{\prime}x)\), for every \(x\), \(x^{\prime}\in X\). Note that the choice of \(\lambda/\mu\) rules does not affect to the definition of the underlying vector space of \(\mathscr{E}(X)\), but it does play an important role in the bilinear map \(\langle-,-\rangle\). In general, the vector space \(\mathscr{E}(X)\) endowed with the bilinear map \(\langle-,-\rangle\) is not an object of \(\mathscr{V}\). It may happen that \(\langle-,-\rangle\) does not even define a bilinear operation on \(\mathscr{E}(X)\), i.e. there exist \((f*-,-*f),(g*-,-*g)\in\mathscr{E}(X)\) such that \[\langle(f*-,-*f),(g*-,-*g)\rangle\notin\mathscr{E}(X)\] or that \((\mathscr{E}(X),\langle-,-\rangle)\) is a non-associative algebra which does not satisfy some identity of \(\mathscr{V}\). _Example 3.4_.: We may check that, if \(\mathscr{V}=\mathbf{Assoc}\), then \(\mathscr{E}(X)\cong\operatorname{Bim}(X)\) as vector spaces. Moreover, with the standard choice of \(\lambda/\mu\) rules \(\lambda_{1}=\mu_{8}=1\) and the rest equal to zero, it is also an isomorphism of associative algebras. _Example 3.5_.: Let \(\mathscr{V}=\mathbf{Leib}\), it is easy to see that \(\mathscr{E}(X)\cong\operatorname{Bider}(X)\) as vector spaces. Choosing the \(\lambda/\mu\) rules as \[x(yz) =(xy)z-(xz)y,\] \[(yz)x =(yx)z-y(xz),\] we get the standard multiplication defined in \(\operatorname{Bider}(X)\) as in [20], that defines a weak actor in \(\mathbf{Leib}\). On the other hand, choosing the \(\lambda/\mu\) rules as \[x(yz) =(xy)z-(xz)y,\] \[(yz)x =(yx)z+y(zx),\] we get the non-associative algebra structure defined in [7, Definition 5.2], which, in general, is not a Leibniz algebra. 
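Since \(\mathscr{E}(X)\) is cut out of \(\operatorname{End}(X)^{2}\) by linear conditions, it can be computed mechanically once a basis and structure constants for \(X\) are fixed. The following minimal sketch is our own illustration (it assumes the SymPy library and a toy two-dimensional algebra; the function names are ours): it computes \(\mathscr{E}(X)\cong\operatorname{Bim}(X)\) for \(\mathscr{V}=\mathbf{Assoc}\) by solving the bimultiplier conditions of Example 3.4 as a linear system.

```python
# Sketch (ours, not from the paper): compute E(X) ≅ Bim(X) for V = Assoc and the
# two-dimensional associative algebra X = <a, b> with a*a = b, all other products 0.
# The bimultiplier conditions  f*(xy)=(f*x)y,  (xy)*f=x(y*f),  x(f*y)=(x*f)y
# are imposed on a pair of unknown matrices (L, R) = (f*-, -*f).
import sympy as sp

n = 2
# structure constants: mul[i][j][k] = coefficient of e_k in e_i * e_j
mul = [[[0, 1], [0, 0]],
       [[0, 0], [0, 0]]]

def prod(u, v):
    """Product of two coordinate vectors of X."""
    return [sum(u[i] * v[j] * mul[i][j][k] for i in range(n) for j in range(n))
            for k in range(n)]

L = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"l{i}{j}"))   # L = f*-
R = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"r{i}{j}"))   # R = -*f
act = lambda M, u: list(M * sp.Matrix(n, 1, u))

basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
eqs = []
for x in basis:
    for y in basis:
        xy = prod(x, y)
        eqs += [p - q for p, q in zip(act(L, xy), prod(act(L, x), y))]   # f*(xy)=(f*x)y
        eqs += [p - q for p, q in zip(act(R, xy), prod(x, act(R, y)))]   # (xy)*f=x(y*f)
        eqs += [p - q for p, q in zip(prod(x, act(L, y)), prod(act(R, x), y))]  # x(f*y)=(x*f)y
eqs = [e for e in eqs if e != 0]

sol = sp.linsolve(eqs, list(L) + list(R))
dim = len(set().union(*(s.free_symbols for s in next(iter(sol)))))
print("dim E(X) =", dim)   # number of free parameters in the solution space
```

For this particular \(X\) the solution space turns out to be three-dimensional; the same scaffolding applies to the other varieties above by replacing the three conditions with the identities defining \(\mathscr{E}(X)\) there.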
_Example 3.6_.: If \(\mathscr{V}=\mathbf{Nil}_{k}(\mathbf{Assoc})\), with \(k\geq 3\), then \[\mathscr{E}(X)=\{(f*-,-*f)\in\operatorname{Bim}(X)\mid f*(x_{1}\cdots x_{k})= (x_{1}\cdots x_{k})*f=0\}.\] With the same choice of \(\lambda/\mu\) rules as in Example 3.4, the bilinear operation \(\langle-,-\rangle\) becomes \[\langle(f*-,-*f),(g*-,-*g)\rangle=(f*(g*-),(-*f)*g)\] which makes \(\mathscr{E}(X)\) an associative algebra, but not a \(k\)-step nilpotent algebra. For instance, let \(X\) be the abelian one-dimensional algebra, then \[\mathscr{E}(X)=\operatorname{End}(X)\times\operatorname{End}(X)^{\operatorname {op}}\cong\mathbb{F}^{2}\] which is not nilpotent. Indeed, every linear endomorphism of \(X\) is of the form \(\varphi_{\alpha}\colon x\mapsto\alpha x\), for some \(\alpha\in\mathbb{F}\) and \[\langle(\varphi_{\alpha},\varphi_{\beta}),(\varphi_{\alpha^{\prime}},\varphi_{ \beta^{\prime}})\rangle=(\varphi_{\alpha}\circ\varphi_{\alpha^{\prime}}, \varphi_{\beta^{\prime}}\circ\varphi_{\beta})=(\varphi_{\alpha\alpha^{\prime}}, \varphi_{\beta\beta^{\prime}}).\] The construction of \(\mathscr{E}(X)\) gives rise to an alternative characterisation of the split extensions in \(\mathscr{V}\). In fact, a split extension of \(B\) by \(X\) in \(\mathscr{V}\) is the same as a linear map \[B\to\mathscr{E}(X)\colon b\mapsto(b*-,-*b),\] such that \(((bb^{\prime})*-,-*(bb^{\prime}))=\langle(b*-,-*b),(b^{\prime}*-,-*b^{\prime})\rangle\) and \[\Phi_{k,i}(\alpha_{1},\dots,\alpha_{k})=0,\quad i=1,\dots,n,\] where \(\alpha_{1}\),..., \(\alpha_{k}\) are as in Proposition 3.1. We remark also that the bilinear map \[\langle-,-\rangle\colon\mathscr{E}(X)\times\mathscr{E}(X)\to\operatorname{ End}(X)^{2}\] defines a partial operation \(\langle-,-\rangle\colon\Omega\to\mathscr{E}(X)\), where \(\Omega\) is the preimage \[\langle-,-\rangle^{-1}(\mathscr{E}(X))\] of the inclusion \(\mathscr{E}(X)\hookrightarrow\operatorname{End}(X)^{2}\). Now we are ready to announce and prove our main result about the weak representability of actions of non-associative algebras. **Theorem 3.7**.: _Let \(\mathscr{V}\) be an action accessible, operadic variety of non-associative algebras over a field \(\mathbb{F}\)._ 1. _Let_ \(X\) _be an object of_ \(\mathscr{V}\)_. There exists a monomorphism of functors_ \[\tau\colon\operatorname{SplExt}(-,X)\mapsto\operatorname{Hom}_{\mathbf{PAlg}}( -,\mathscr{E}(X)),\] _where, for every_ \(B\) _of_ \(\mathscr{V}\)_,_ \(\tau_{B}\) _is the injection which sends an element of_ \(\operatorname{SplExt}(B,X)\) _to the corresponding partial algebra homomorphism_ \[B\to\mathscr{E}(X)\colon b\mapsto(b*-,-*b).\] 2. _Let_ \(B\)_,_ \(X\) _be objects of_ \(\mathscr{V}\)_. The homomorphism of partial algebras_ \[B\to\mathscr{E}(X)\colon b\mapsto(b*-,-*b)\] _belongs to_ \(\operatorname{Im}(\tau_{B})\) _if and only if_ \(\Phi_{k,i}(\alpha_{1},\ldots,\alpha_{k})=0\)_, as in Proposition_ 3.1_._ 3. _If_ \((\mathscr{E}(X),\langle-,-\rangle)\) _is an object of_ \(\mathscr{V}\)_, then_ \((\mathscr{E}(X),\tau)\) _becomes a weak representation of_ \(\operatorname{SplExt}(-,X)\)_._ 4. 
_If_ \(\mathscr{V}\) _is a variety of commutative or anti-commutative algebras, then_ \(\mathscr{E}(X)\) _is isomorphic to the partial algebra_ \[\{f\in\operatorname{End}(X)\mid\Phi_{k,i}(f,x_{2},\ldots,x_{k})=0,\;\forall x_ {2},\ldots,x_{k}\in X\}\] _endowed with the bilinear partial operation_ \(\langle f,g\rangle=\alpha(f\circ g)+\beta(g\circ f)\)_, where_ \(\alpha\)_,_ \(\beta\in\mathbb{F}\) _are given by the_ \(\lambda/\mu\) _rules._ Because of these results, we can give the following definitions. **Definition 3.8**.: Let \(X\) be an object of an action accessible, operadic variety of non-associative algebras \(\mathscr{V}\) with a choice of \(\lambda/\mu\) rules. The partial algebra \(\mathscr{E}(X)\) is called _external weak actor_ of \(X\). The pair \((\mathscr{E}(X),\tau)\) is called _external weak representation_ of the functor \(\operatorname{SplExt}(-,X)\). When \(\tau\) is a natural isomorphism, we say that \(\mathscr{E}(X)\) is an _external actor_ of \(X\). Proof.: (1) The collection \(\{\tau_{B}\}_{B}\) gives rise to a natural transformation since, for every algebra homomorphism \(f\colon B^{\prime}\to B\), the diagram in \(\mathbf{Set}\) where \(\operatorname{Hom}(-,-)=\operatorname{Hom}_{\mathbf{PAlg}}(-,-)\), is commutative. Moreover, for every object \(B\) of \(\mathscr{V}\), the map \(\tau_{B}\) is an injection, since every element of \(\operatorname{SplExt}(B,X)\) is uniquely determined by the corresponding derived action of \(B\) on \(X\), i.e. by the pair of bilinear maps \[l\colon B\times X\to X,\qquad r\colon X\times B\to X\] defined as in Definition 1.7. Thus \(\tau\) is a monomorphism of functors. (2) Let \(B\), \(X\) be objects of \(\mathscr{V}\). A homomorphism of partial algebras \(B\to\mathscr{E}(X)\) belongs to \(\operatorname{Im}(\tau_{B})\) if and only if it defines a split extension of \(B\) by \(X\) in \(\mathscr{V}\). This is equivalent to saying that \[\Phi_{k,i}(\alpha_{1},\ldots,\alpha_{k})=0,\quad\forall i=1,\ldots,n,\] where \(\alpha_{1},\ldots,\alpha_{k}\) are as in Proposition 3.1 (3) If \((\mathscr{E}(X),\langle-,-\rangle)\) is an object of \(\mathscr{V}\), then we have a monomorphism of functors \[\tau\colon\operatorname{SplExt}(-,X)\mapsto\operatorname{Hom}_{\mathscr{V}}( -,\mathscr{E}(X)),\] and \((\mathscr{E}(X),\tau)\) is a weak representation of \(\operatorname{SplExt}(-,X)\). (4) If \(\mathscr{V}\) is a variety of commutative (resp. anti-commutative) algebras, then for every object \(X\) of \(\mathscr{V}\), \(\mathscr{E}(X)\) consists of pairs of the form \((f*-,-*f)\) with \(x*f=f*x\) (resp. \(x*f=-f*x\)), for every \(x\in X\). Thus, an explicit isomorphism \[\{f\in\operatorname{End}(X)\mid\Phi_{k,i}(f,x_{2},\dots,x_{k})=0\}\to\mathscr{E }(X)\] is given by \(f\mapsto(f,\pm f)\). _Example 3.9_.: We may check that, we the _obvious_ choices of the \(\lambda/\mu\) rules, 1. if \(\mathscr{V}=\mathbf{AbAlg}\), then \(\mathscr{E}(X)=0\) is the actor of \(X\); 2. if \(\mathscr{V}=\mathbf{CAssoc}\), then \(\mathscr{E}(X)\cong\operatorname{M}(X)\) is an external actor of \(X\) (see Lemma 2.3); 3. if \(\mathscr{V}=\mathbf{JJord}\), then as observed in Theorem 2.9, the external actor \(\mathscr{E}(X)\) is isomorphic to the partial algebra \(\operatorname{ADer}(X)\) of anti-derivations of \(X\); 4. if \(\mathscr{V}=\mathbf{Lie}\), then \(\mathscr{E}(X)\cong\operatorname{Der}(X)\) is the actor of \(X\); 5. 
if \(\mathscr{V}=\mathbf{ACAAssoc}\), then as observed in Theorem 2.13, the external actor \(\mathscr{E}(X)\) is isomorphic to the partial algebra \([X]\); 6. if \(\mathscr{V}=\mathbf{Nil}_{2}(\mathbf{Com})\) or \(\mathscr{V}=\mathbf{Nil}_{2}(\mathbf{ACom})\), then \(\mathscr{E}(X)\cong[X]_{2}\) is a weak actor of \(X\). _Remark 3.10_.: The construction of the vector space \(\mathscr{E}(X)\) can be done also in a variety of non-associative algebras \(\mathscr{V}\) which is not action accessible. However, there is no canonical way to endow \(\mathscr{E}(X)\) with a bilinear map \(\langle-,-\rangle\) as in Definition 3.3 so we only have a monomorphism of functors \[\tau\colon\operatorname{SplExt}(-,X)\to\operatorname{Hom}_{\mathbf{Vec}}(-, \mathscr{E}(X)).\] _Remark 3.11_.: As described in [9, Section 3], for every Orzech category of interest \(\mathscr{C}\) and for every object \(X\) of \(\mathscr{C}\), it is possible to define a monomorphism of functors \[\mu\colon\operatorname{SplExt}(-,X)\mapsto\operatorname{Hom}_{\mathscr{C}^{ \prime}}(-,\operatorname{USGA}(X)),\] where \(\mathscr{C}^{\prime}\) is a category which contains \(\mathscr{C}\) as a full subcategory and \(\operatorname{USGA}(X)\) is an object of \(\mathscr{C}^{\prime}\) called the _universal strict general actor_ of \(X\)[7]. We further recall that \(\operatorname{USGA}(X)\) is unique up to isomorphism, once the presentation of the Orzech category of interest \(\mathscr{C}\) is fixed. For a variety of non-associative algebras \(\mathscr{V}\), a presentation is given by a choice of constants \(\lambda_{1}\),..., \(\lambda_{8}\), \(\mu_{1}\),..., \(\mu_{8}\in\mathbb{F}\) which determine the \(\lambda/\mu\) rules. In this case, it turns out that \(\mathscr{V}^{\prime}=\mathbf{Alg}\). Thus we have monomorphism of functors \[\mu\colon\operatorname{SplExt}(-,X)\to\operatorname{Hom}_{\mathbf{Alg}}(-, \operatorname{USGA}(X))\] and, by Theorem 3.7, another monomorphism of functors \[\tau\colon\operatorname{SplExt}(-,X)\to\operatorname{Hom}_{\mathbf{Alg}}(-, \mathscr{E}(X)).\] We may check that \(\operatorname{USGA}(X)\) is the algebraic closure of the external weak actor \(\mathscr{E}(X)\) with respect to the bilinear partial operation \(\langle-,-\rangle\). When \(\langle-,-\rangle\) is well defined on \(\mathscr{E}(X)\times\mathscr{E}(X)\), then \(\operatorname{USGA}(X)=\mathscr{E}(X)\). However, it is often more convenient to work with the external weak actor \(\mathscr{E}(X)\), since it is easier to construct than the universal strict general actor \(\operatorname{USGA}(X)\). In fact, in the next section we shall present the construction of \(\mathscr{E}(X)\) in different varieties of non-associative algebras. ## 4. The quadratic case In this section we introduce a systematic approach to finding the explicit structure of \(\mathscr{E}(X)\) in the setting of operadic, quadratic varieties of algebras. Here we shall denote an element \((f*-,-*f)\) of \(\mathscr{E}(X)\) by the symbol \(f\); this means that \(fx\coloneqq f*x\) and \(xf\coloneqq x*f\). Let \(\mathscr{V}\) be a action accessible, operadic, quadratic variety of non-associative algebras with no identities of degree \(2\). Let us consider the free non-associative algebra generated by the symbols \(f\), \(x\) and \(y\), and let us focus on its multilinear component of degree \(3\). 
There are \(12\) possible monomials which we order as follows: \[f(xy)>f(yx)>(xy)f>(yx)f>(fy)x>(fx)y>(yf)x>(xf)y>x(fy)>y(fx)>x(yf)>y(xf).\] Permuting the variables determines an action of the symmetric group \(\mathbb{S}_{3}\) on this space. For a given variety of algebras \(\mathscr{V}\), we can write the orbit under the \(\mathbb{S}_{3}\)-action of its defining equations in matrix form, where each row corresponds to an equation and each column corresponds to a monomial, ordered as above. Let us denote this matrix by \(M_{3}\), and its reduced row echelon form by \(RM_{3}\). Action accessibility implies the following:

**Lemma 4.1**.: _The rank of \(M_{3}\) is at least \(4\). Moreover, the \(4\times 4\) minor located on the top left of \(RM_{3}\) is the identity matrix._

The vector space \(\mathscr{E}(X)\) will be the subspace of \(\operatorname{End}(X)^{2}\) formed by the pairs that satisfy the identities coming from \(RM_{3}\). Our task now is to endow this vector space with a partial multiplication, induced by action accessibility, and to provide strategies to check 1. when this multiplication is total; 2. when it induces a \(\mathscr{V}\)-algebra structure on \(\mathscr{E}(X)\). Let us rename the tags on the columns of \(M_{3}\) by the following rule: \(f\mapsto x\), \(x\mapsto f\) and \(y\mapsto g\). Then, the third and first columns of \(RM_{3}\) will give us equations of the form \[(fg)x=\lambda_{1}(fg)x+\lambda_{2}(fx)g+\lambda_{3}(gf)x+\lambda_{4}(xf)g+\lambda_{5}x(fg)+\lambda_{6}g(fx)+\lambda_{7}x(gf)+\lambda_{8}g(xf)\] and \[x(fg)=\mu_{1}(fg)x+\mu_{2}(fx)g+\mu_{3}(gf)x+\mu_{4}(xf)g+\mu_{5}x(fg)+\mu_{6}g(fx)+\mu_{7}x(gf)+\mu_{8}g(xf).\] At first glance, these rules seem to yield a way of multiplying two elements \(f\) and \(g\) belonging to \(\mathscr{E}(X)\). However, this choice might not be unique. If the rank of \(M_{3}\) is strictly larger than \(4\), the lower rows will have zeroes in the first four positions, so adding any linear combination of them will produce a new bracket in \(\mathscr{E}(X)\). Let us exemplify this with a concrete variety:

_Example 4.2_.: The most common presentation of the variety of right Leibniz algebras is given by the identity \((xy)z-(xz)y-x(yz)=0\). Then, with the columns labelled by \[f(xy)\quad f(yx)\quad(xy)f\quad(yx)f\quad(fy)x\quad(fx)y\quad(yf)x\quad(xf)y\quad x(fy)\quad y(fx)\quad x(yf)\quad y(xf),\] the matrix \(M_{3}\) is \[\left(\begin{array}{cccccccccccc}-1&0&0&0&-1&1&0&0&0&0&0&0\\ 0&-1&0&0&1&-1&0&0&0&0&0&0\\ 0&0&-1&0&0&0&0&1&-1&0&0&0\\ 0&0&1&0&0&0&0&-1&0&0&-1&0\\ 0&0&0&-1&0&0&1&0&0&-1&0&0\\ 0&0&0&1&0&0&-1&0&0&0&0&-1\end{array}\right),\] while its reduced row echelon form is \[\left(\begin{array}{cccccccccccc}1&0&0&0&1&-1&0&0&0&0&0&0\\ 0&1&0&0&-1&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&-1&0&0&-1&0\\ 0&0&0&1&0&0&-1&0&0&0&0&-1\\ 0&0&0&0&0&0&0&0&1&0&1&0\\ 0&0&0&0&0&0&0&0&0&1&0&1\end{array}\right).\] Removing the rows in odd position--which we are entitled to, thanks to the obvious symmetry--we obtain that \(\mathscr{E}(X)\) is formed by the elements of \(\operatorname{End}(X)^{2}\) satisfying the following identities: \[\begin{split} f(xy)&=(fx)y-(fy)x\\ (xy)f&=(xf)y+x(yf)\\ x(fy)&=-x(yf)\end{split} \tag{4.1}\] These are exactly the identities satisfied by biderivations. 
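This reduction is a small, finite linear-algebra computation, so it can be checked mechanically. The sketch below is an illustration (assuming SymPy is available); it hard-codes the monomial ordering fixed above, rebuilds \(M_{3}\) for the right Leibniz identity from its \(\mathbb{S}_{3}\)-orbit, and prints the reduced row echelon form \(RM_{3}\) displayed above.

```python
# Sketch (illustration only): rebuild M_3 for the right Leibniz identity
# (xy)z - (xz)y - x(yz) = 0 and verify its reduced row echelon form RM_3.
from itertools import permutations
import sympy as sp

# Multilinear degree-3 monomials in f, x, y, in the order fixed in the text.
monomials = ["f(xy)", "f(yx)", "(xy)f", "(yx)f", "(fy)x", "(fx)y",
             "(yf)x", "(xf)y", "x(fy)", "y(fx)", "x(yf)", "y(xf)"]
col = {m: i for i, m in enumerate(monomials)}

def left(a, b, c):   # the monomial (ab)c
    return f"({a}{b}){c}"

def right(a, b, c):  # the monomial a(bc)
    return f"{a}({b}{c})"

rows = []
for a, b, c in permutations("fxy"):          # S_3-orbit of the defining identity
    row = [0] * 12
    row[col[left(a, b, c)]] += 1             # +(ab)c
    row[col[left(a, c, b)]] -= 1             # -(ac)b
    row[col[right(a, b, c)]] -= 1            # -a(bc)
    rows.append(row)

M3 = sp.Matrix(rows)
RM3, pivots = M3.rref()
print("rank:", M3.rank(), "pivot columns:", pivots)   # rank 6 > 4: two extra rows
print(sp.pretty(RM3))
```

The last two rows of the output are the relations \(x(fy)+x(yf)=0\) and \(y(fx)+y(xf)=0\), i.e. exactly the rows with zeroes in the first four positions that give rise to the free parameters below.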
With the change of tag in the columns described before, the columns are labelled by \[x(fg)\quad x(gf)\quad(fg)x\quad(gf)x\quad(xg)f\quad(xf)g\quad(gx)f\quad(fx)g\quad f(xg)\quad g(xf)\quad f(gx)\quad g(fx),\] and we obtain \[\left(\begin{array}{cccccccccccc}1&0&0&0&1&-1&0&0&0&0&0&0\\ 0&1&0&0&-1&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&-1&0&0&-1&0\\ 0&0&0&1&0&0&-1&0&0&0&0&-1\\ 0&0&0&0&0&0&0&0&1&0&1&0\\ 0&0&0&0&0&0&0&0&0&1&0&1\end{array}\right).\] Therefore, the multiplication \[\begin{split}&(fg)x=(fx)g+f(gx)+\alpha_{1}\big(f(xg)+f(gx)\big)+\alpha_{2}\big(g(xf)+g(fx)\big)\\ &x(fg)=(xf)g-(xg)f+\beta_{1}\big(f(xg)+f(gx)\big)+\beta_{2}\big(g(xf)+g(fx)\big)\end{split} \tag{4.2}\] induces a partial algebra structure on \(\mathscr{E}(X)\), for any choice of \(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\in\mathbb{F}\). Now that we have a partial algebra structure induced on a general \(\mathscr{E}(X)\), the next step is to verify when it is total. To do so, we have to focus on a partial subset of the set of consequences of degree 4. Let us consider the 120-dimensional space formed by the multilinear monomials of degree 4 in the free non-associative algebra generated by the symbols \(f,g,x,y\). To gather all the consequences of the identities in degree 3, we have three different ways of operating. Let us take any identity from \(RM_{3}\). The first way is to multiply it from the right or from the left by \(g\). The second way is to substitute \(x\) by \((gx)\) or \((xg)\). Finally, we can substitute \(y\) by \((gy)\) or by \((yg)\). Doing all these substitutions together with the permutations of \(f\) and \(g\), we obtain all the consequences. Note that the terms \((fg)\) and \((gf)\) do not appear in any of these identities. Now we need to check if the defining bracket satisfies the identities of \(\mathscr{E}(X)\). To do so, we take again the identities from \(RM_{3}\), substitute \(f\) by \((fg)\) and expand it by the already defined product. The bracket will be closed if and only if these newly obtained equations are linear combinations of the previously obtained consequences. To conclude, we shall check when the bracket satisfies the identities of the variety. This can be done just by directly substituting elements of \(\mathscr{E}(X)\) in the defining equations of the variety. After applying them to a generic element \(x\), once on the left and once on the right, it is a matter of substituting the bracket on \(\mathscr{E}(X)\) when necessary.

_Example 4.3_.: Continuing with the Leibniz algebra Example 4.2, applying the procedure described before to the first equation in (4.1) yields: \[\begin{split} g(f(xy))&=g((fx)y)-g((fy)x),\\ (f(xy))g&=((fx)y)g-((fy)x)g,\\ f((gx)y)&=(f(gx))y-(fy)(gx),\\ f((xg)y)&=(f(xg))y-(fy)(xg),\\ f(x(gy))&=(fx)(gy)-(f(gy))x,\\ f(x(yg))&=(fx)(yg)-(f(yg))x.\end{split}\] It is a straightforward computation to check that the rank of the matrix formed by all the consequences is 72. Then, we have to compare it with the multiplication defined in Equation (4.2). 
For instance, taking again the first equation in Equation (4.1) we have to expand the identity \[(fg)(xy)=((fg)x)y-((fg)y)x,\] which gives us \[f(g(xy))+(f(xy))g+\alpha_{1}\big{(}f(g(xy))+f((xy)g)\big{)}+\alpha _{2}\big{(}g(f(xy))+g((xy)f)\big{)}\] \[=(f(gx))y+((fx)g)y+\alpha_{1}\big{(}(f(gx))y+(f(xg))y\big{)}+ \alpha_{2}\big{(}(g(fx))y+(g(xf))y\big{)}\] \[-(f(gy))x+((fy)g)x+\alpha_{1}\big{(}(f(gy))x+(f(yg))x\big{)}+ \alpha_{2}\big{(}(g(fy))x+(g(yf))x\big{)}.\] After a linear algebra computation it can be checked that no matter which \(\alpha_{1}\) and \(\alpha_{2}\) we choose, it belongs to the subspace generated by the consequences. In fact, this will be true for all the identities (4.1), so any \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), \(\beta_{2}\in\mathbb{F}\) will produce a total multiplication on \(\mathscr{E}(X)\). To check whether the induced bracket endows \(\mathscr{E}(X)\) with a Leibniz algebra structure, we just need to check when the following identities hold \[(f(gh))x =((fg)h)x-((fh)g)x\] \[x(f(gh)) =x((fg)h)-x((fh)g).\] A quick computation tells us that this is only true when \((\alpha_{1},\alpha_{2},\beta_{1},\beta_{2})=(1,0,0,0)\), so that we recover exactly the multiplication defined in [7, Definition 5.1]. We consider some further examples. _Example 4.4_.: In the case of associative algebras any choice of bracket will induce a total algebra structure, but only the already known example of bimultipliers will be an associative algebra. _Example 4.5_.: The variety of _symmetric Leibniz algebras_ is formed by the intersection between the varieties of right and left Leibniz algebras, i.e. the variety determined by \((xy)z-(xz)y-x(yz)=0\) (right Leibniz identity) and \(z(xy)-(xz)y-x(zy)=0\) (left Leibniz identity). The space generated by its bilinear identities of degree 3 has dimension 10, which means that there are 12 parameters to define a product in \(\mathscr{E}(X)\). With the help of a computer algebra system such as Macaulay2[15] we check that any choice will give us a total algebra structure, and the set of variables that induces a symmetric Leibniz algebra structure on \(\mathscr{E}(X)\) forms an affine variety of dimension 2. _Example 4.6_.: Following the algorithm proposed before, it can be checked easily that the variety of two-step nilpotent (non-commutative) algebras is weakly action representable. In fact, a weak actor may be given by the expected structure \[\mathscr{E}(X)=\{f\in\operatorname{End}(X)^{2}\mid f(xy)=(xy)f=0=(fx)y=x(yf)\}\] with product \(fg=0=gf\). Nevertheless, this is not the only product that can be induced. Since the space generated by its bilinear identities of degree \(3\) has maximum dimension \(12\), there are \(16\) parameters that can be taken into account to define a product in \(\mathscr{E}(X)\). All of them induce a total multiplication on it, and the set of parameters which induce a two-step nilpotent algebra on \(\mathscr{E}(X)\) forms an affine variety of dimension \(3\). Note that these algebras were studied and classified in [18] and [19]. _Example 4.7_.: Although commutative Poisson algebras are usually defined as a variety with two operations, in [23] it was shown that with the depolarisation technique they can be seen as a quadratic variety (with one operation), so they fit in the scope of this section. 
The algorithmic approach presented before shows that it is possible to induce several total algebra structures on \(\mathscr{E}(X)\) (specifically a \(3\)-parametric family), however, in accordance with [9] where it is proven that it is not a weakly action representable category, none of these total algebra structures satisfy the Poisson identity. _Example 4.8_.: The varieties of Novikov algebras or anti-associative algebras do not allow a total algebra structure on their respective \(\mathscr{E}(X)\) induced by action accessibility. Even though this might suggest that they are not weakly action representable, it is still an open problem to prove the veracity of this claim. ## 5. Open Questions ### Converse to point (3) of Theorem 3.7 We know that if \(\mathscr{E}(X)\) is an object of the variety \(\mathscr{V}\) for each \(X\) in \(\mathscr{V}\), then \(\mathscr{V}\) is a weakly action representable category. Is the converse true? In Example 3.6 and Example 4.8, we have instances of this situation. ### Subvarieties We do not know how the condition (WRA) behaves under taking subvarieties (especially in the non-quadratic case when the degree of the identities may be higher than \(3\)). For instance, we know that the variety \(\mathbf{Assoc}\) is weakly action representable, but we do not know whether or not the subvariety \(\mathbf{Nil}_{k}(\mathbf{Assoc})\) with \(k\geq 3\) satisfies the same condition. We recall that in this case, \(\mathscr{E}(X)\) is an associative algebra, but it is not \(k\)-step nilpotent in general (see Example 3.6). ### Initial weak representation In the article [16] it is shown that a variety is weakly action representable if and only if it is _initially_ weakly action representable, which means that for every object \(X\), the functor \(\operatorname{SplExt}(-,X)\) admits an initial weak representation. We do not know whether or not the weak representations that occur in this article (for Leibniz algebras or associative algebras, for instance) are initial, or how we would check this in practice. ## Acknowledgements We are grateful to Abdenacer Makhlouf for recommending us to study the representability of actions for Jacobi-Jordan algebras and to Giuseppe Metere for suggesting us the name _external weak actor_. We would like to express our sincere gratitude to the Institut de Recherche en Mathematique et Physique (IRMP) for the warm reception we received during our visits to Louvain-la-Neuve. We would also like to extend our heartfelt appreciation to the Universities of Santiago de Compostela and Vigo for the generous support and welcoming atmosphere provided during our time there.
2304.14809
Low-Rank Structured MMSE Channel Estimation with Mixtures of Factor Analyzers
This work proposes a generative modeling-aided channel estimator based on mixtures of factor analyzers (MFA). In an offline step, the parameters of the generative model are inferred via an expectation-maximization (EM) algorithm in order to learn the underlying channel distribution of a whole communication scenario inside a base station (BS) cell. Thereby, the wireless channels are effectively modeled on a piecewise linear subspace which is achieved by the low-rank structure of the learned covariances of the MFA. This suits the low-rank structure of wireless channels at high frequencies and additionally saves parameters and prevents overfitting. Afterwards, the trained MFA model is used online to perform channel estimation with a closed-form solution of the estimator which asymptotically converges to the minimum mean square error (MMSE) estimator. Numerical results based on real-world measurements demonstrate the great potential of the proposed approach for channel estimation.
Benedikt Fesl, Nurettin Turan, Wolfgang Utschick
2023-04-28T12:35:04Z
http://arxiv.org/abs/2304.14809v2
# Low-Rank Structured MMSE Channel Estimation with Mixtures of Factor Analyzers ###### Abstract This work proposes a generative modeling-aided channel estimator based on mixtures of factor analyzers (MFA). In an offline step, the parameters of the generative model are inferred via an expectation-maximization (EM) algorithm in order to learn the underlying channel distribution of a whole communication scenario inside a base station (BS) cell. Thereby, the wireless channels are effectively modeled on a piecewise linear subspace which is achieved by the low-rank structure of the learned covariances of the MFA. This suits the low-rank structure of wireless channels at high frequencies and additionally saves parameters and prevents overfitting. Afterwards, the trained MFA model is used online to perform channel estimation with a closed-form solution of the estimator which asymptotically converges to the minimum mean square error (MMSE) estimator. Numerical results based on real-world measurements demonstrate the great potential of the proposed approach for channel estimation. Mixtures of factor analyzers, channel estimation, low-complexity, variational inference, machine learning. ## I Introduction The concept of generative models has been existing for a long time. [1]. One prominent example is the Gaussian mixture model (GMM), which has the ability for universal approximation [2]. With the advent of deep learning, generative models based on neural networks, such as variational autoencoders (VAEs) [3], have become highly successful. Therefore, generative models play a key role in modern signal processing and wireless communication applications, particularly when domain and system knowledge is incorporated to solve inference tasks [4]. In this respect, the requirements for accurate channel estimation in the next generation of cellular systems (6G) are of significant interest [5]. In recent works, both GMMs and VAEs were leveraged to learn the underlying channel distribution of a whole communication scenario and to utilize this information to yield a tractable prior information for channel estimation, showing great improvements over state-of-the-art approaches [6, 7]. The advantages are the possibilities to incorporate structural features and even learn from imperfect data [8, 9]. A key feature of both approaches is the parameterization of the local distribution of the mobile terminals (MTs) inside a BS cell in a tractable manner by the learned parameters. This allows to utilize closed-form solutions for the estimation task. Although these methods show promising results, they face some challenges which can potentially hinder the application in real systems. First, the number of learned parameters is high, entailing demanding memory requirements. For instance, the number of parameters scales quadratically in the number of antennas for the GMM; Likewise, the VAE is comprised of deep neural networks (NNs) with typically even more parameters. While structural features imposed by antenna arrays can reduce the number of parameters, mutual coupling between the antennas or other hardware imperfections can corrupt this structure in real systems. Second, the generative abilities of GMMs are known to be somewhat limited due to the discrete nature of the latent space, whereas VAEs are generally lacking interpretability due to the elaborate design of nested nonlinearities. 
A powerful generative model that is related to GMMs and VAEs is the MFA model, which contains both a discrete and continuous latent variable, where the latter is modeled Gaussian just as in the VAE [10, Ch. 12, 11]. This motivated the usage of MFA for several applications, e.g. [12, 13]. In contrast to the VAE with a nonlinear latent space, the latent space of the MFA is piecewise linear with tractable expressions for the inference of the latent samples. From a different perspective, the MFA model can be interpreted as a GMM with low-rank structured covariances [10, Ch. 12]. This results in having less parameters and being less prone to overfitting, thereby matching the structural features of channels at high frequencies [14]. Altogether, the MFA model has the potential to excellently match the requirements for accurate channel estimation in 6G systems with low overhead. _Contributions:_ We propose to employ the MFA model to learn the unknown underlying channel distribution of a whole BS cell with low-rank structured covariances which effectively models the channel distribution on a piecewise linear subspace. The resulting channel estimator can be computed in closed form by a convex combination of linear MMSE (LMMSE) estimates, which asymptotically converges to the generally intractable mean square error (MSE)-optimal solution. We validate the effectiveness of the approach through simulation results based on real-world measurement data, which demonstrate that the MFA achieve great results in channel estimation performance. ## II System Model and Measurement Campaign We consider a single-input multiple-output (SIMO) communication scenario where the BS equipped with \(N\) antennas receives uplink training signals from a single-antenna MT. In particular, at the BS, after decorrelating the pilot signal, noisy observations of the form \[\mathbf{y}=\mathbf{h}+\mathbf{n}\in\mathbb{C}^{N} \tag{1}\] are received where the channel \(\mathbf{h}\in\mathbb{C}^{N}\), which follows an unknown distribution \(p(\mathbf{h})\), is corrupted by additive white Gaussian noise (AWGN) \(\mathbf{n}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\sigma^{2}\mathbf{I})\). Although the underlying channel distribution \(p(\mathbf{h})\) is unknown, we assume the availability of a training dataset \(\mathcal{H}=\{\mathbf{h}_{t}\}_{t=1}^{T}\) of \(T\) channel samples that represent the channel distribution of the whole BS cell. A common practice is to use simulation tools which are based on sophisticated models of the underlying communication scenario to generate a dataset, however, these models do not fully capture the characteristics of the real world. Therefore, we use real-world data from a measurement campaign which is described in the following. The measurement campaign was conducted at the Nokia campus in Stuttgart, Germany, in October/November 2017. As can be seen in Fig. 1, the receive antenna with a down-tilt of \(10^{\circ}\) was mounted on a rooftop about \(20\,\mathrm{m}\) above the ground and comprises a uniform rectangular array (URA) with \(N_{v}=4\) vertical and \(N_{h}=16\) horizontal single polarized patch antennas. The horizontal spacing is \(\lambda/2\) and the vertical spacing equals \(\lambda\), where the geometry of the BS antenna array was adapted to the urban microcell (UMi) propagation scenario. The carrier frequency is \(2.18\,\mathrm{GHz}\). The BS transmitted time-frequency orthogonal pilots using \(10\,\mathrm{MHz}\) orthogonal frequency-division multiplexing (OFDM) waveforms. 
In particular, \(600\) subcarriers with \(15\,\mathrm{kHz}\) spacing were used, which resembles typical Long Term Evolution (LTE) numerology. The pilots were sent continuously with a periodicity of \(0.5\,\mathrm{ms}\) and were arranged in \(50\) separate subbands, with \(12\) consecutive subcarriers each, for channel sounding purposes. For the duration of one pilot burst the propagation channel was assumed to remain constant. A single monopole receive antenna, which mimics the MT, was mounted on top of a moving vehicle at a height of \(1.5\,\mathrm{m}\). The maximum speed was \(25\,\mathrm{kmph}\). Synchronization between the transmitter and receiver was achieved using GPS. The data was collected by a TSMW receiver and stored on a Rohde & Schwarz IQR hard disk recorder. In a post-processing step, by the correlation of the received signal with the pilot sequence a channel realization vector with \(N=N_{v}N_{h}\) coefficients per subband was extracted. The measurement was conducted at a high signal-to-noise ratio (SNR), which ranged from \(20\,\mathrm{dB}\) to \(30\,\mathrm{dB}\). Thus, the measured channels are regarded as ground truth. In this work, we will therefore consider a system where we artificially corrupt the measured channels with AWGN at specific SNRs and thereby obtain noisy observations \(\mathbf{y}=\mathbf{h}+\mathbf{n}\). We note that we investigate a single-snapshot scenario, i.e., the coherence interval of the covariance matrix and of the channel is identical. ## III Mixture of Factor Analyzers We start by briefly revising the factor analysis (FA) which is the basis for the MFA. The generative model is given as \[\mathbf{h}^{(L)}=\mathbf{W}\mathbf{z}+\mathbf{u} \tag{2}\] where \(\mathbf{W}\in\mathbb{C}^{N\times L}\) is the so called factor loading matrix, \(\mathbf{z}\in\mathbb{C}^{L}\) is the latent variable, and \(\mathbf{u}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{\Psi})\) is an additive term. The prior distribution is assumed to be standard Gaussian, i.e., \(p(\mathbf{z})=\mathcal{N}_{\mathbb{C}}(\mathbf{z};\mathbf{0},\mathbf{I})\). The key assumptions for this model are that the latent variable \(\mathbf{z}\) is low-dimensional, i.e., \(L<N\) holds, and that the covariance \(\mathbf{\Psi}\) is diagonal. The rationale is that the data are modeled on a linear subspace described by \(\mathbf{W}\), which explains the common factors/features and their correlations, and the additional term \(\mathbf{u}\) accounts for both unique factors and noise in the data [10, Ch. 12]. Note that if \(\mathbf{\Psi}=\psi^{2}\mathbf{I}\) or \(\mathbf{\Psi}=\mathbf{0}\), the FA degenerates to the (probabilistic) principle component analysis (PCA). Moreover, the FA is a low-rank parameterization of a Gaussian, i.e., \(\mathbf{h}^{(L)}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{W}\mathbf{W}^{\mathrm{H}}+ \mathbf{\Psi})\), cf. [10, Ch. 12]. Since the linearity of the latent space and the Gaussian assumption of the data is too restrictive for modeling real-world communication scenarios, we aim for a more powerful generative model which is realized by the MFA. 
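To make the low-rank-plus-diagonal structure of the FA model concrete, the following toy sketch (an illustration with arbitrary dimensions \(N\), \(L\) and noise levels, not the setup used in this work) draws samples from (2) and checks that their sample covariance approaches \(\mathbf{W}\mathbf{W}^{\mathrm{H}}+\mathbf{\Psi}\).

```python
# Sketch (toy dimensions): draw samples from the FA model (2), h = W z + u with
# z ~ CN(0, I_L) and u ~ CN(0, Psi), and compare the empirical covariance with
# the implied low-rank-plus-diagonal covariance W W^H + Psi.
import numpy as np

rng = np.random.default_rng(0)
N, L, T = 8, 2, 200_000

def crandn(*shape):
    """Circularly-symmetric complex Gaussian samples with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

W = crandn(N, L)                       # factor loading matrix
psi = 0.05 + 0.1 * rng.random(N)       # diagonal of Psi (positive)
z = crandn(L, T)                       # latent factors
u = np.sqrt(psi)[:, None] * crandn(N, T)
h = W @ z + u                          # FA samples

C_model = W @ W.conj().T + np.diag(psi)
C_sample = (h @ h.conj().T) / T
print(np.linalg.norm(C_sample - C_model) / np.linalg.norm(C_model))  # small
```

The mixture model described next replaces this single linear subspace by \(K\) of them.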
Thereby, a mixture of \(K\) linear subspaces is considered which yields the following generative model for the \(k\)th mixture component: \[\mathbf{h}^{(K,L)}\mid k=\mathbf{W}_{k}\mathbf{z}+\mathbf{u}_{k}+\mathbf{\mu}_{k} \tag{3}\] where \(\mathbf{W}_{k}\in\mathbb{C}^{N\times L}\) is the factor loading matrix of mixture \(k\), \(\mathbf{u}_{k}\sim\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{\Psi}_{k})\), and \(\mathbf{\mu}_{k}\) is the mean of mixture \(k\). The resulting generative model then follows the distribution \[p^{(K,L)}(\mathbf{h})=\sum_{k=1}^{K}\int p(\mathbf{h}\mid\mathbf{z},k)p(\mathbf{z}\mid k)p(k)\mathrm{d}\mathbf{z} \tag{4}\] where \(p(\mathbf{z}\mid k)=p(\mathbf{z})=\mathcal{N}_{\mathbb{C}}(\mathbf{0},\mathbf{I})\) and \(p(k)\) follows a categorical distribution [10, Ch. 12]. An important property for our considerations is that, given the latent variables, the distribution is conditionally Gaussian, i.e., \[p^{(K,L)}(\mathbf{h}\mid\mathbf{z},k)=\mathcal{N}_{\mathbb{C}}(\mathbf{h};\mathbf{\mu}_{k}+\mathbf{W}_{k}\mathbf{z},\mathbf{\Psi}_{k}). \tag{5}\] By integrating out the latent variable \(\mathbf{z}\) in (4), one can interpret the MFA as a GMM with low-rank structured covariances, i.e., \[p^{(K,L)}(\mathbf{h})=\sum_{k=1}^{K}p(k)\mathcal{N}_{\mathbb{C}}(\mathbf{h};\mathbf{\mu}_{k},\mathbf{W}_{k}\mathbf{W}_{k}^{\mathrm{H}}+\mathbf{\Psi}_{k}). \tag{6}\] This model generally has far fewer parameters than a GMM with full covariances since \(L<N\).

Fig. 1: Measurement setup on the Nokia campus in Stuttgart, Germany.

Note that there exist different restrictions on the diagonal covariance matrix \(\mathbf{\Psi}_{k}\), e.g., it is possible to choose a common matrix for all components as \(\mathbf{\Psi}_{k}=\mathbf{\Psi},\ \forall k\in\{1,\ldots,K\}\), or to choose a scaled identity \(\mathbf{\Psi}_{k}=\psi_{k}^{2}\mathbf{I}\), yielding different degrees of freedom. Given a training dataset \(\mathcal{H}\), an EM algorithm can be used to fit the parameters of the MFA model [11]. Interestingly, the MFA model meets all formal requirements for the universal approximation property from [2] in the sense that, for an infinite number of mixture components, any continuous probability density function (PDF) can be asymptotically approximated arbitrarily well by means of MFA as \[\lim_{K\rightarrow\infty}\|p(\mathbf{h})-p^{(K,L)}(\mathbf{h})\|_{\infty}=0. \tag{7}\] It is worth noting that this result holds for any number \(L\) of latent dimensions, which is a powerful basis for our consideration of learning a tractable approximation of the underlying channel distribution by means of the MFA. In view of this property, in this work we investigate the asymptotic behavior of the MFA-based channel estimator and the relationship between the number of latent dimensions \(L\) and the number of mixture components \(K\) for a given number of training samples \(T\).

## IV Channel Estimation

In this section, we derive an approximation of the generally intractable MMSE estimator via the MFA model. The MSE-optimal estimator for an arbitrary channel distribution is the conditional mean estimator (CME) which is given as \[\hat{\mathbf{h}}_{\text{CME}}(\mathbf{y})=\mathrm{E}[\mathbf{h}\mid\mathbf{y}]=\int\mathbf{h}p(\mathbf{h}\mid\mathbf{y})\mathrm{d}\mathbf{h}. \tag{8}\] Note that this estimator cannot be computed generally since the conditional distribution \(p(\mathbf{h}\mid\mathbf{y})\) is unknown. 
Even if the channel distribution \(p(\mathbf{h})\) would be known, computing the CME via the integral is intractable in real-time systems. To this end, one needs tractable expressions of the involved distributions such that the resulting estimator can be computed with a manageable complexity. ### _MFA-based Channel Estimator_ We aim to find a tractable expression of the CME by using the learned MFA model and introduce the discrete latent variable via the law of total expectation: \[\hat{\mathbf{h}}^{(K,L)}(\mathbf{y})=\mathrm{E}[\mathbf{h}^{(K,L)}\mid\mathbf{y}]=\mathrm{E} \left[\mathrm{E}[\mathbf{h}^{(K,L)}\mid\mathbf{y},k]\mid\mathbf{y}\right]. \tag{9}\] We note that, since conditioned on the component \(k\) the model is Gaussian, i.e., \(\mathbf{h}^{(K,L)}\mid k\sim\mathcal{N}_{\mathbb{C}}(\mathbf{\mu}_{k},\mathbf{W}_{k}\mathbf{W} _{k}^{\mathrm{H}}+\mathbf{\Psi}_{k})\), cf. (6), we can use the well-known LMMSE formula to solve the inner expectation in closed form, which yields \[\begin{split}\mathrm{E}[\mathbf{h}^{(K,L)}\mid\mathbf{y},k]& =\mathbf{\mu}_{k}+(\mathbf{W}_{k}\mathbf{W}_{k}^{\mathrm{H}}+\mathbf{\Psi}_{k}) \\ &\left(\mathbf{W}_{k}\mathbf{W}_{k}^{\mathrm{H}}+\mathbf{\Psi}_{k}+\sigma^{2} \mathbf{I}\right)^{-1}(\mathbf{y}-\mathbf{\mu}_{k}).\end{split} \tag{10}\] The outer expectation in (9) is then, due to the discrete nature of the latent variable, a convex combination of the linear filters from (10), given as \[\hat{\mathbf{h}}^{(K,L)}(\mathbf{y})=\sum_{k=1}^{K}p(k\mid\mathbf{y})\mathrm{E}[\mathbf{h}^{(K,L)}\mid\mathbf{y},k] \tag{11}\] where \(p(k\mid\mathbf{y})\) is the responsibility of the \(k\)th component for the pilot observation \(\mathbf{y}\) which is computed as \[p(k\mid\mathbf{y})=\frac{p(k)\mathcal{N}_{\mathbb{C}}(\mathbf{y};\mathbf{\mu}_{k},\mathbf{W}_{ k}\mathbf{W}_{k}^{\mathrm{H}}+\mathbf{\Psi}_{k}+\sigma^{2}\mathbf{I})}{\sum_{i=1}^{K}p(i) \mathcal{N}_{\mathbb{C}}(\mathbf{y};\mathbf{\mu}_{i},\mathbf{W}_{i}\mathbf{W}_{i}^{\mathrm{H} }+\mathbf{\Psi}_{i}+\sigma^{2}\mathbf{I})}. \tag{12}\] In the next subsections, we investigate the asymptotic behavior of the estimator for an increasing number of mixture components and discuss the complexity and memory requirements of the estimator based on the low-rank structure. ### _Asymptotic Optimality_ In [6, Theorem 2], it is shown that the CME approximation via an estimator based on a GMM which is learned on the underlying channel distribution is asymptotically converging to the true CME for large numbers \(K\) of mixture components. The key prerequisite for this result is the convergence of the approximate distribution to the true channel distribution as a consequence of the universal approximation property of the GMM. Due to space limitations we do not restate [6, Theorem 2] in this work. Building on this result, we can state the asymptotic optimality of the MFA-based channel estimator (11) as a direct consequence of [6, Theorem 2] by using the universal approximation property. **Corollary 1**.: _Let \(p(\mathbf{h})\) be any continuous PDF which vanishes at infinity. 
For an arbitrary number of latent dimensions \(L\), the MFA-based channel estimator (11) converges to the true MSE-optimal CME (8) in the sense that_ \[\lim_{K\rightarrow\infty}\|\hat{\mathbf{h}}_{\text{CME}}(\mathbf{y})-\hat{\mathbf{h}}^{(K,L)}(\mathbf{y})\|=0 \tag{13}\] _holds for any given \(\mathbf{y}\)._ Proof.: The result is a direct consequence of [6, Theorem 2] by using the universal approximation ability (7) of the MFA model which follows from [2, Theorem 5]. Corollary 1 shows the powerful abilities of the MFA-based channel estimator in combination with the possibility to reduce the latent dimension \(L\). That said, the practicability of the estimator for a fixed number \(L\) of latent dimensions and a finite number \(K\) of mixtures is yet to be investigated. We will therefore show simulation results that demonstrate the strong performance of the estimator for real-world data for different numbers of mixtures and latent dimensions in Section V. ### _Baseline Channel Estimators_ We first discuss non-data-based channel estimators, one of whom is the least squares (LS) estimator which simply computes \(\hat{\mathbf{h}}_{\text{LS}}=\mathbf{y}\) in our case. Another technique is compressive sensing, where the channel is assumed to be (approximately) sparse such that we have \(\mathbf{h}\approx\mathbf{\Delta}\mathbf{s}\) for a sparse vector \(\mathbf{s}\in\mathbb{C}^{M}\). The dictionary \(\mathbf{\Delta}\in\mathbb{C}^{N\times M}\) is typically an oversampled discrete Fourier transform (DFT) matrix [15]. A baseline algorithm is orthogonal matching pursuit (OMP) [16] which recovers an estimate \(\hat{\mathbf{s}}\) of \(\mathbf{s}\) assuming \(\mathbf{y}=\mathbf{\Delta}\mathbf{s}+\mathbf{n}\) and estimates the channel as \(\hat{\mathbf{h}}_{\text{OMP}}=\mathbf{\Delta}\hat{\mathbf{s}}\) for which OMP needs to know the sparsity order. Since order estimation is a difficult problem, we use a genie-aided approach: OMP gets access to the true channel to choose the optimal sparsity. We employ the OMP algorithm with \(M=4N\). Next, we investigate state-of-the-art data-based algorithms. An important baseline is the LMMSE formula based on the sample covariance matrix. For this, we use all \(T\) training data from \(\mathcal{H}\) to compute \(\boldsymbol{C}=\frac{1}{T}\sum_{t=1}^{T}\boldsymbol{h}_{t}\boldsymbol{h}_{t}^{ \text{H}}\) and then estimate channels as \(\hat{\boldsymbol{h}}_{\text{LMMSE}}=\boldsymbol{C}(\boldsymbol{C}+\sigma^{2} \mathbf{I})^{-1}\boldsymbol{y}\). A convolutional neural network (CNN)-based channel estimator was introduced in [17] whose architecture is derived via insights about the channel/system model. The activation function is the rectified linear unit (ReLU) and we use the \(2N\times N\) truncated DFT matrix as input transform, cf. [17, eq. (43)]. Thereby, an independent CNN is trained for each SNR value. A related technique to the proposed MFA approach is the GMM-based channel estimator from [6, 8] which learns a GMM of the form \(p^{(K)}(\boldsymbol{h})=\sum_{k=1}^{K}p(k)\mathcal{N}_{\mathbb{C}}(\boldsymbol {h};\boldsymbol{\mu}_{k},\boldsymbol{C}_{k})\) via an EM algorithm to approximate the underlying channel distribution with generally full covariances \(\boldsymbol{C}_{k}\). 
In [8], the case of circulant and Toeplitz structured covariances is discussed where the underlying EM algorithm is adapted to have covariances of the form \(\boldsymbol{C}_{k}=\boldsymbol{Q}^{\text{H}}\operatorname{diag}(\boldsymbol {c}_{k})\boldsymbol{Q}\) where \(\boldsymbol{Q}\) is a (truncated) DFT matrix for the (Toeplitz) circulant case. The resulting estimator then is of the form \[\hat{\boldsymbol{h}}_{\text{GMM}}=\sum_{k=1}^{K}p(k\mid\boldsymbol{y})\left( \boldsymbol{\mu}_{k}+\boldsymbol{C}_{k}\boldsymbol{C}_{\boldsymbol{y},k}^{-1} (\boldsymbol{y}-\boldsymbol{\mu}_{k})\right) \tag{14}\] where \(\boldsymbol{C}_{\boldsymbol{y},k}=\boldsymbol{C}_{k}+\sigma^{2}\mathbf{I}\) and \(p(k\mid\boldsymbol{y})\) is the responsibility of component \(k\) for pilot \(\boldsymbol{y}\), cf. [6, 8]. In [18], the GMM-based estimator was already evaluated on measurement data. ### _Memory and Complexity Analysis_ In this subsection, we analyze the memory requirements and the computational (online) complexity of the MFA-based channel estimator. We emphasize that the fitting of the parameters via the EM algorithm is done exclusively in an initial offline phase. In the online phase, the channel estimate in (11) is computed for a given pilot observation \(\boldsymbol{y}\). We first note that the calculation of the \(K\) LMMSE filters in (10) as well as of the \(K\) responsibilities in (12) can be parallelized which is of great importance in real-time systems. Furthermore, due to the specific structure of the parameterized covariances, the inverse that appears in (10) and in (12)--for evaluating the Gaussian density--can be computed less expensively by means of the inversion lemma as \[(\boldsymbol{W}_{k}\boldsymbol{W}_{k}^{\text{H}}+\boldsymbol{\Psi}_{k}+\sigma ^{2}\mathbf{I})^{-1}=\boldsymbol{D}_{k}-\boldsymbol{D}_{k}\boldsymbol{W}_{k} \boldsymbol{A}_{k}\boldsymbol{W}_{k}^{\text{H}}\boldsymbol{D}_{k} \tag{15}\] where \(\boldsymbol{D}_{k}=(\boldsymbol{\Psi}_{k}+\sigma^{2}\mathbf{I})^{-1}\) is a diagonal matrix and \(\boldsymbol{A}_{k}=(\mathbf{I}+\boldsymbol{W}_{k}^{\text{H}}\boldsymbol{D}_{k} \boldsymbol{W}_{k})^{-1}\) is an \(L\times L\) matrix of lower dimension. Thus, the overall order of complexity of the channel estimator is \(\mathcal{O}(K(N^{2}+NL^{2}))\) when taking the computation of the matrix products into account. However, since the LMMSE filters are fixed for a given SNR value, they can be pre-computed such that only matrix-vector products have to be evaluated. In this case, the complexity reduces to \(\mathcal{O}(KN^{2})\) which can be computed in \(K\) parallel processes. The memory requirement is determined by the number of parameters for \(\left\{\boldsymbol{W}_{k},\boldsymbol{\Psi}_{k},\boldsymbol{\mu}_{k},p(k) \right\}_{k=1}^{K}\), which depends on the choice of the diagonal covariances \(\boldsymbol{\Psi}_{k}\). As discussed also later, the option of having scaled identities \(\boldsymbol{\Psi}_{k}=\psi_{k}^{2}\mathbf{I},\ \forall k\in\{1,\ldots,K\}\) saves memory overhead and does not affect the channel estimation performance. In this case, the total number of parameters is \(K(LN+N+2)\). In Table I, we compare the number of parameters with the related GMM estimator from [6, 8] where we exemplarily depict the number of parameters for a setting with \(K=N=64\) for different numbers \(L\) of latent dimensions. This particular setting is evaluated in Section V in terms of the channel estimation performance. 
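For illustration, a compact sketch of the online estimation step is given below. It is a rendering of (10)-(12) combined with the low-rank inversion (15), not the implementation used for the experiments; the array shapes and the function name `mfa_estimate` are arbitrary choices, and the MFA parameters are assumed to come from a previously fitted model.

```python
# Sketch: MFA-based channel estimation, eqs. (10)-(12), with the covariance
# inverses evaluated via the matrix-inversion lemma (15).
# Assumed inputs:
#   W   : (K, N, L) complex factor loadings        Psi : (K, N) real diagonals
#   mu  : (K, N) complex component means           p   : (K,) mixing weights
#   y   : (N,) complex pilot observation           sigma2 : noise variance
import numpy as np

def mfa_estimate(y, W, Psi, mu, p, sigma2):
    K, N, L = W.shape
    log_resp = np.empty(K)
    h_hat_k = np.empty((K, N), dtype=complex)
    for k in range(K):
        d = Psi[k] + sigma2                     # diagonal of D_k^{-1} = Psi_k + sigma^2 I
        DW = W[k] / d[:, None]                  # D_k W_k
        A = np.linalg.inv(np.eye(L) + W[k].conj().T @ DW)   # A_k, an L x L matrix
        e = y - mu[k]
        Cinv_e = e / d - DW @ (A @ (DW.conj().T @ e))        # C_{y,k}^{-1} e via (15)
        # LMMSE estimate (10): mu_k + (C_{y,k} - sigma^2 I) C_{y,k}^{-1} (y - mu_k)
        h_hat_k[k] = mu[k] + e - sigma2 * Cinv_e
        # log of p(k) N_C(y; mu_k, C_{y,k}) up to a common constant, for eq. (12)
        logdet_C = np.sum(np.log(d)) - np.linalg.slogdet(A)[1]
        log_resp[k] = np.log(p[k]) - logdet_C - np.real(e.conj() @ Cinv_e)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()                          # responsibilities p(k | y), eq. (12)
    return resp @ h_hat_k                       # convex combination, eq. (11)
```

Since the \(K\) component filters depend only on the SNR, the per-component inverses in this sketch could also be precomputed and cached, as noted above.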
In comparison to the GMM estimator with full covariances, the number of parameters is drastically reduced, which is beneficial against overfitting as shown later. Although the GMM with structured covariances can have an even lower number of parameters, the performance in this case might suffer from too restrictive assumptions on the underlying structure which is also verified by the simulation results. Altogether, the MFA model allows for an adaptivity in the number of parameters which reflects a trade-off between memory overhead, computational complexity, and estimation performance which can be conveniently optimized for a certain application scenario. ## V Simulation Results We conducted numerical experiments for the data from the measurement campaign as discussed in Section II with \(N=64\) antennas. The channels are normalized such that \(\text{E}[\|\boldsymbol{h}\|_{2}^{2}]=N\), and the SNR is defined as \(1/\sigma^{2}\). The MSE between the true and estimated channels is normalized by \(N\). For the MFA model we restrict the covariances to \(\boldsymbol{\Psi}_{k}=\psi_{k}^{2}\mathbf{I}\ \forall k\). All data-based approaches are trained on the same dataset \(\mathcal{H}\). Fig. 2 shows the MSE performance of the MFA-based channel estimator with \(K=64\) components in comparison with the baseline estimators introduced in Section IV-C (the GMM variants also have \(K=64\) components) for \(T=100{,}000\) (top) and \(T=10{,}000\) (bottom) training samples. We observe that the approaches "LS", "LMMSE", and "genie-OMP" as introduced in Section IV-C do not vary in the performance between both cases. The compressive sensing approach genie-OMP, although having genie knowledge of the sparsity order, does not perform well which indicates that the sparsity assumption in the DFT dictionary is too restrictive for the measurement data, cf. [18]. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Name** & **Parameters** & \(L=2\) & \(L=8\) & \(L=16\) \\ \hline MFA & \(K(LN+N+2)\) & \(1.24\cdot 10^{4}\) & \(3.67\cdot 10^{4}\) & \(6.98\cdot 10^{4}\) \\ \hline GMM full & \(K(\frac{1}{2}N^{2}+2N+1)\) & & \(1.39\cdot 10^{5}\) & \\ \hline GMM deep & \(K(5N+1)\) & & \(2.05\cdot 10^{4}\) & \\ \hline GMM circ & \(K(2N+1)\) & & \(8.26\cdot 10^{3}\) & \\ \hline \end{tabular} \end{table} TABLE I: Analysis of the number of parameters of the MFA model as compared to a (structured) GMM with example numbers for \(K=N=64\). The CNN-based channel estimator, labelled "CNN ReLU", cf. Section IV-C, performs better, but is almost always worse than the GMM- and MFA-based channel estimators, highlighting their strong performance. For the case of \(T=100{,}000\) training samples in Fig. 2 (top), the GMM-based estimator with full covariances ("GMM full") performs better than the Toeplitz ("GMM toep") or circulant ("GMM circ") structured versions, however, it is outperformed by the MFA-based estimator with \(L=32\) latent dimensions over the whole SNR range. Reducing the latent dimension further leads to a worse MSE, but the performance gap is small, especially in the low SNR regime, and the structured GMM variants are still outperformed for all SNRs. In Fig. 2 (bottom), the case of \(T=10{,}000\) training samples is shown where the GMM-based estimator with full covariances suffers from overfitting, especially in the high SNR regime, cf. [6]. Although the structured GMM variants are less prone to overfitting, they are outperformed by the MFA-based estimator with \(L=8\) or \(L=16\) latent dimensions over the whole SNR range. 
Increasing the latent dimension to \(L=32\) leads to a slightly worse performance due to overfitting. Altogether, these results highlight the importance of the adaptive nature of the proposed MFA-based estimator, which allows choosing a suitable latent dimension for a limited amount of training data. In Fig. 3, we further investigate the impact of the latent dimension \(L\) on the performance for \(K=64\) mixture components and for different numbers of training samples and SNR values. We compare the results with the baseline estimators which have a constant behavior with respect to the latent dimensions. In the top plot, the case of \(T=100{,}000\) and \(\text{SNR}=15\,\mathrm{dB}\) is evaluated, in which the MSE of the MFA-based estimator is consistently decreasing for higher latent dimensions, but with a saturation above \(L=32\). The approaches LS, LMMSE, and genie-OMP are outperformed already with a single latent dimension. All GMM-based variants and the CNN estimator are outperformed for \(L=20\) latent dimensions or more. The middle plot shows that for a small amount of training data \(T=10{,}000\) and \(\text{SNR}=15\,\mathrm{dB}\), the unstructured GMM-based estimator performs even worse than the MFA model with only a single latent dimension. Furthermore, increasing the latent dimension up to \(L=16\) yields a consistently better MSE performance, whereas a degradation can be observed for higher latent dimensions due to overfitting effects. In addition, even the structured GMM-based variants and the CNN estimator are outperformed for \(L\in[8,24]\) latent dimensions. In the bottom plot, a lower SNR value of \(0\,\mathrm{dB}\) for \(T=100{,}000\) training samples is evaluated. The plot indicates that for lower SNR values, generally fewer latent dimensions are necessary since a saturation already occurs above \(L=12\). All baseline estimators are outperformed with \(L=8\) or more latent dimensions.

Fig. 3: MSE performance over the number \(L\) of latent dimensions for \(K=64\) components. Top: \(T=100{,}000\) and \(\text{SNR}=15\,\mathrm{dB}\). Middle: \(T=10{,}000\) and \(\text{SNR}=15\,\mathrm{dB}\). Bottom: \(T=100{,}000\) and \(\text{SNR}=0\,\mathrm{dB}\).

Fig. 2: MSE performance for \(T=100{,}000\) (top) and \(T=10{,}000\) (bottom) training samples for \(K=64\) components.

In Fig. 4, the combined impact of both the latent dimensions \(L\in[1,32]\) and mixture components \(K\in[1,128]\) on the MSE of the MFA model is shown for SNR \(=15\,\mathrm{dB}\). The case of \(K=1\) refers to the FA model as described in Section III. In the top plot for \(T=10{,}000\) it can be observed that the resulting MSE is minimal at \((K,L)=(8,28)\) and it increases for higher numbers of latent dimensions and mixture components as expected due to overfitting. In contrast, for \(T=100{,}000\) training samples, the MSE is consistently decreasing for increasing latent dimensions and mixture components with an overall saturation effect. This analysis validates the asymptotic analysis up to a certain extent, since even if the latent dimension is fixed, the MSE can be further decreased by increasing the number of mixture components \(K\). We note that this behavior inherently depends on the available amount of training data.

## VI Conclusion and Outlook

In this work, we have proposed to employ the MFA model for channel estimation which is asymptotically optimal in the sense that the convergence to the MSE-optimal CME is guaranteed. 
Thereby, the underlying (unknown) channel distribution of a whole communication scenario is learned offline by fitting the MFA parameters. Afterwards, the MFA model is leveraged for online channel estimation where especially the number of parameters and computational complexity is reduced due to the inherent low-rank structure. Simulation results based on measurement data showed great estimation performances. In future work, we plan to further investigate the MFA model for physical layer applications. Particularly interesting is to utilize the subspace information that is provided by the MFA, e.g., for interference channels, feedback applications [19], or in reconfigurable intelligent surface (RIS)-aided systems [20]. In addition, we want to adapt the approach to deal with imperfect training data and apply it to channels at higher frequencies.
2301.08426
$η$-pairing on bipartite and non-bipartite lattices
The $\eta$-pairing is a type of Cooper pairing state in which the phase of the superconducting order parameter is aligned in a staggered manner, in contrast to the usual BCS superconductors with a spatially uniform phase. In this study, we search for a characteristic $\eta$-pairing state in a triangular lattice where a simple staggered alignment of the phase is not possible. As an example, we consider the attractive Hubbard model on both the square and triangular lattices under strong external Zeeman field. Using the mean-field approximation, we have identified several $\eta$-pairing states. Additionally, we have examined the electromagnetic stability of the pairing state by calculating the Meissner kernel. Odd-frequency pairing plays a crucial role in achieving diamagnetic response if the electrons experience a staggered superconducting phase during the propagation of current.
Yutaro Misu, Shun Tamura, Yukio Tanaka, Shintaro Hoshino
2023-01-20T05:37:47Z
http://arxiv.org/abs/2301.08426v1
# \(\eta\)-pairing on bipartite and non-bipartite lattices ###### Abstract The \(\eta\)-pairing is a type of Cooper pairing state in which the phase of the superconducting order parameter is aligned in a staggered manner, in contrast to the usual BCS superconductors with a spatially uniform phase. In this study, we search for a characteristic \(\eta\)-pairing state in a triangular lattice where a simple staggered alignment of the phase is not possible. As an example, we consider the attractive Hubbard model on both the square and triangular lattices under strong external Zeeman field. Using the mean-field approximation, we have identified several \(\eta\)-pairing states. Additionally, we have examined the electromagnetic stability of the pairing state by calculating the Meissner kernel. Odd-frequency pairing plays a crucial role in achieving diamagnetic response if the electrons experience a staggered superconducting phase during the propagation of current. ## I Introduction The diversity of superconducting phenomena has been attracting continued attention. The superconducting state of matter is characterized by the properties of Cooper pairs, which can be classified based on their space-time and spin structures. With regard to their space structure, Cooper pairs are typically classified as \(s\)-wave, \(p\)-wave, or \(d\)-wave pairs depending on their relative coordinate structure. As for their center-of-mass coordinate, while it is usually assumed to be zero in most superconductors, it is possible to consider the existence of a finite center-of-mass momentum. One example of this is the Flude-Ferrell-Larkin-Ovchinnikov (FFLO) state [1; 2], in which the Cooper pair has a small but finite center-of-mass momentum under the influence of a magnetic field. More generally, the magnitude of the center-of-mass momentum can be larger and of the order of the reciprocal lattice vector \(\sim\pi/a\), where \(a\) is a lattice constant. This type of pairing state is known as \(\eta\)-pairing, a concept first proposed by C. N. Yang, which forms a staggered alignment of the superconducting phase on a bipartite lattice [3]. The spatially modulating order parameter is known also as the pair density wave, and has been discussed in relation to cuprate superconductors [4]. The actual realization of the \(\eta\)-pairing has been proposed for the correlated electron systems such as the attractive Hubbard (AH) model with the magnetic field [5], the single- and two-channel Kondo lattices [6; 7], the Penson-Kolb model [8], and also the non-equilibrium situation [9; 10; 11; 12; 13; 14]. Since the phase of the superconducting order parameter can be regarded as the XY spin, the \(\eta\)-pairing is analogous to an antiferromagnetic state of the XY spin model. Hence, the \(\eta\)-pairing state should be strongly dependent on the underlying lattice structure and we naively expect a variety of the \(\eta\)-pairing state if we consider the geometrically frustrated lattice such as the triangular lattice since the simple staggered state cannot be realized. In this paper, we deal with the AH model on the non-bipartite lattice in order to search for possible new superconducting states depending on the feature of the non-bipartite lattice structure in equilibrium. Already in the normal state without superconductivity, it has been pointed out that the non-bipartite lattice generates a non-trivial state of matter. 
For example in the Kondo lattice, a partial-Kondo-screening, which has a coexisting feature of Kondo spin-singlet and antiferromagnetism, is realized [15]. Also in the AH model at half-filling, charge-density-wave (CDW) is suppressed due to the frustration effect [16]. The \(\eta\)-pairing that appears in a photodoped Hubbard model on the triangular lattice has been studied recently [14]. In the equilibrium situation, the properties of the AH model have been studied on bipartite lattices [5], but the model on a non-bipartite lattice has not been explored. As shown in the rest of this paper, there are several types of \(\eta\)-pairings on the triangular lattice of the AH model under the Zeeman field. One of the \(\eta\)-pairing states is regarded as a \(120^{\circ}\)-Neel state. Since the relative phase between the nearest neighbor sites is neither parallel nor anti-parallel, the inter-atomic Josephson current is spontaneously generated. This state can also be regarded as a staggered flux state, where the flux is created by the atomic-scale superconducting loop current. While the staggered flux state has been studied so far [17; 18; 19; 20; 21; 22; 23], the staggered flux in this paper is induced by the Josephson effect associated with superconductivity and has a different origin. For the analysis of the AH model, we employ the mean-field approximation in this paper. It has been suggested that a simple \(\eta\)-pairing shows a paramagnetic Meissner state [24]. Hence it is necessary to investigate the electromagnetic stability of the solution for superconductivity. We evaluate the Meissner kernel whose sign corresponds to the diamagnetic (minus) or paramagnetic (plus) response of the whole system, where the physically stable state should show diamagnetism. We confirm that if the mean-field \(\eta\)-pairing state has the lowest energy compared to the other ordered states, the calculation of the Meissner kernel shows the diamagnetic response. It is also notable that the odd-frequency pairing amplitude, which has an odd functional form with respect to the frequency [6; 25; 26; 27; 28; 29; 30], can contribute to the diamagnetism in the \(\eta\)-pairing state. This is in contrast to the usual superconductivity with the uniform phase where the conventional even-frequency pairing contributes to the diamagnetism. It has been shown that the odd-frequency pairing induced at the edge, interface or junctions [31; 32; 33; 34; 35; 36] shows a paramagnetic response [37; 38; 39; 40; 41]. In this paper, by contrast, we consider the odd-frequency pairing realized in bulk, which shows a qualitatively different behavior. This paper is organized as follows. We explain the model and method for the AH model in Sec. II, and the Meissner kernel in Sec. III. The numerical results for the AH model are shown in Sec. IV, and we summarize the paper in Sec. V. ## II Attractive Hubbard model ### Hamiltonian We consider the Hamiltonian of the AH model with magnetic field \(\mathbf{h}\) which induce Zeeman effect only (Zeeman field) : \[\mathcal{H}=-t\sum_{\langle i,j\rangle\sigma}c_{i\sigma}^{\dagger} c_{j\sigma}+\text{H.c.} +U\sum_{i}n_{i\uparrow}n_{i\downarrow}\\ -\mu\sum_{i}n_{i}-\mathbf{h}\cdot\sum_{i}\mathbf{s}_{i}, \tag{1}\] where \(c_{i\sigma}^{\dagger}\) and \(c_{i\sigma}\) are the creation and annihilation operators of the \(i\)-th site with spin \(\sigma\), respectively. The symbol \(\langle i,j\rangle\) represents a pair of the nearest-neighbor sites. 
Here, the parameter \(t\) is the nearest-neighbor single-electron hopping integral. \(U\) (\(=-|U|\)) is the on-site attractive interaction. The spin operator is defined as \(\mathbf{s}_{i}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}c_{i\sigma}^{\dagger} \mathbf{\tau}_{\sigma\sigma^{\prime}}c_{i\sigma^{\prime}}\), where \(\mathbf{\tau}\) is the Pauli matrix, and the number operator of electrons is denoted as \(n_{i}=n_{i\uparrow}+n_{i\downarrow}=\sum_{\sigma}c_{i\sigma}^{\dagger}c_{i \sigma}\). The electron concentration is controlled by adjusting the chemical potential \(\mu\). The AH model has been successfully used to elucidate several important and fundamental issues in superconductors [42]. The model on a bipartite lattice at half filling is theoretically mapped onto the repulsive Hubbard model by the following partial particle-hole transformation [43] \[c_{i\uparrow}^{\dagger}\to\ c_{i\uparrow}^{\dagger},\ c_{i\downarrow}^{ \dagger}\to\ c_{i\downarrow}\text{e}^{\text{i}\mathbf{Q}\cdot\mathbf{R}_{i}}. \tag{2}\] The reciprocal vector \(\mathbf{Q}\) satisfies the condition \(\text{e}^{\text{i}\mathbf{Q}\cdot\mathbf{R}_{i}}=(-1)^{i}\) that takes \(\pm 1\) depending on \(\mathbf{R}_{i}\) belonging to A or B sublattice on the bipartite lattice. Then, the \(\eta\)-pairing appears in the region that corresponds to a ferromagnet with transverse magnetization in the repulsive model [5]. In a mean-field theory, the phase diagram for the repulsive Hubbard model without the magnetic field is shown in the left panel of Fig. 1[44]. From this figure, we find that the ferromagnet is located in the regime where the repulsive interaction \(U>0\) is large and the electron concentration is not half-filled. Hence, the \(\eta\)-pairing phase is located in the regime where the attractive interaction \(U<0\) is large and the magnetization is finite. The phase diagram of the AH model at half filling is shown in the right panel of Fig. 1. In principle, an attractive interaction large enough to realize \(\eta\)-pairing could be realized in artificial cold atom systems [45]. The Cooper pair is formed by the two electrons with \((\mathbf{k}\uparrow,\ -\mathbf{k}+\mathbf{q}\downarrow)\) where \(\mathbf{q}\) is the center-of-mass momentum. The FFLO state and the \(\eta\)-pairing are distinguished by the magnitude of \(|\mathbf{q}|\). In \(\eta\)-pairing, the center-of-mass momentum of the Cooper pair is the order of the reciprocal lattice vector, while the momentum of the FFLO state is much smaller and the spatial modulation is slowly-varying compared to the atomic scale. Although the large center-of-mass momentum is usually not energetically favorable, a strong attractive interaction can make it stable. ### Mean-field theory By applying the mean-field approximation, we obtain the mean-field Hamiltonian \[\mathcal{H}^{\text{MF}} =-t\sum_{\langle i,j\rangle\sigma}c_{i\sigma}^{\dagger}c_{j\sigma }+\text{H.c.}-\mu\sum_{i}n_{i}-\mathbf{h}\cdot\sum_{i}\mathbf{s}_{i}\] \[\quad-\sum_{i}\left(v_{i}n_{i}+\mathbf{H}_{i}\cdot\mathbf{s}_{i}-\Delta_ {i}c_{i\uparrow}^{\dagger}c_{i\downarrow}^{\dagger}-\Delta_{i}^{*}c_{i \downarrow}c_{i\uparrow}\right). \tag{3}\] Figure 1: Sketches of the phase diagrams for the repulsive Hubbard model [44] (left panel) and AH model (right panel). \(n_{c}\) is the electron concentration and \(m\) is the magnetization. When the interaction \(|U|\) is large, the ground state in the repulsive Hubbard model is ferromagnet (FM), while the ground state in the AH model is \(\eta\)-pairing. 
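As a purely illustrative aside, the quadratic mean-field Hamiltonian (3) can be cast as a Bogoliubov-de Gennes (BdG) matrix and diagonalized numerically. The minimal sketch below is not part of the analysis in this paper: it drops the Zeeman field and the spin mean field \(\mathbf{H}_{i}\) for brevity, works on a periodic one-dimensional chain instead of the square or triangular lattice, and imposes a staggered pair potential by hand to mimic the \(\eta\)-pairing phase pattern; all parameter values are arbitrary.

```python
# Minimal sketch: BdG form of the quadratic mean-field Hamiltonian on a periodic
# 1D chain, with the Zeeman and spin mean-field terms omitted and a hand-imposed
# staggered pair potential Delta_i = Delta0 * (-1)^i (phases 0 and pi alternating).
import numpy as np

N_sites, t, mu, Delta0 = 100, 1.0, 0.0, 0.5

# Spin-independent single-particle part: nearest-neighbor hopping + chemical potential.
h = np.zeros((N_sites, N_sites))
for i in range(N_sites):
    h[i, (i + 1) % N_sites] = h[(i + 1) % N_sites, i] = -t
h -= mu * np.eye(N_sites)

# Staggered (eta-pairing-like) on-site pair potential.
Delta = Delta0 * np.exp(1j * np.pi * np.arange(N_sites))

# BdG matrix in the Nambu basis (c_{i,up}, c^dagger_{i,down}).
H_bdg = np.block([[h,                      np.diag(Delta)],
                  [np.diag(Delta).conj(), -h]])
E = np.linalg.eigvalsh(H_bdg)

# The quasiparticle spectrum comes in +/- pairs in this spin-degenerate setting.
assert np.allclose(np.sort(E), np.sort(-E))
print("smallest positive quasiparticle energy:", E[E > 0].min())
```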
The order parameters are given by the self-consistent equations \[v_{i}\equiv\frac{|U|}{2}\langle n_{i}\rangle, \tag{4}\] \[\Delta_{i}\equiv-|U|\langle c_{i1}c_{i\uparrow}\rangle,\] (5) \[\mathbf{m}_{i}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\langle c_{i \sigma}^{\dagger}\mathbf{\tau}_{\sigma\sigma^{\prime}}c_{i\sigma^{\prime}}\rangle, \mathbf{H}_{i}= -2|U|\mathbf{m}_{i}, \tag{6}\] where \(\langle A\rangle=\mathrm{Tr}\left[A\mathrm{e}^{-\mathcal{H}^{\mathrm{MF}}/T} \right]/\mathrm{Tr}\left[\mathrm{e}^{-\mathcal{H}^{\mathrm{MF}}/T}\right]\) is a quantum statistical average with the mean-field Hamiltonian and \(T\) is temperature. \(\Delta_{i}\) is the order parameter for \(s\)-wave singlet superconductivity (pair potential). The phase \(\theta_{i}\in[0,2\pi)\) of the pair potential \(\Delta_{i}=|\Delta_{i}|\mathrm{e}^{\mathrm{i}\theta_{i}}\) is dependent on the site index and will be represented by the arrow in a two-dimensional space. The mean-fields for the charge and spin are given by \(v_{i}\) and \(\mathbf{H}_{i}\), respectively, at each site. The derivation of the self-consistent equations is summarized in Appendix A. We will consider the AH model both on the two-dimensional square and triangular lattices. ## III Meissner kernel for a general tight-binding lattice ### Definition As we explained in Sec. I, it is necessary to calculate the Meissner kernel to determine whether the mean-field solution for \(\eta\)-pairing is electromagnetically stable. In the tight-binding model, the electromagnetic field appears as Peierls phase: \[\mathcal{H}_{\mathrm{kin}}=-t\sum_{\langle i,j\rangle\sigma}\mathrm{e}^{ \mathrm{i}A_{ij}}c_{i\sigma}^{\dagger}c_{j\sigma}+\mathrm{H.c.}. \tag{7}\] The Meissner effect is examined by the _weak_ external orbital magnetic field applied perpendicular to the plane, while the \(\eta\)-pairing is stabilized only under a _strong_ Zeeman field. In order to make these compatible, we apply the Zeeman field parallel to the plane \(\mathbf{h}=(h,0,0)\), which does not create the orbital motion of the tight-binding electrons. Thus, the weak magnetic field that triggers the Meissner effect is applied perpendicular to the plane in addition to the in-plane magnetic field. While the out-of-plane Zeeman effect is also induced by the weak additional field, it is neglected since the dominant Zeeman field already exists by the strong in-plane magnetic field. Let us formulate the Meissner response kernel on a general tight-binding model. We apply the formulation in Refs. [46; 47; 48] to the present case with sublattice degrees of freedom. The current density operator between two sites is defined as \[\mathbf{j}_{ij} =\frac{\partial\mathcal{H}_{\mathrm{kin}}}{\partial A_{ij}}\mathbf{ \delta}_{ij}\] \[=-\mathrm{i}t\sum_{\sigma}\left(c_{i\sigma}^{\dagger}c_{j\sigma} \mathrm{e}^{\mathrm{i}A_{ij}}-c_{j\sigma}^{\dagger}c_{i\sigma}\mathrm{e}^{- \mathrm{i}A_{ij}}\right)\mathbf{\delta}_{ij}, \tag{8}\] where \(\mathbf{\delta}_{ij}=\mathbf{R}_{i}-\mathbf{R}_{j}\) is the inter-site lattice vector between \(i\)-th and \(j\)-th sites, and hat (\(\hat{\cdot}\)) symbol means a unit vector. 
In the linear response theory, the current operator which appears as a response to the static magnetic field in equilibrium is written as \[\mathbf{j}_{ij} \simeq-\mathrm{i}t\sum_{\sigma}(c_{i\sigma}^{\dagger}c_{j\sigma}-c _{j\sigma}^{\dagger}c_{i\sigma})\mathbf{\delta}_{ij}\] \[\qquad+t\sum_{\sigma}(c_{i\sigma}^{\dagger}c_{j\sigma}+c_{j\sigma }^{\dagger}c_{i\sigma})\mathbf{\delta}_{ij}A_{ij}\] \[\equiv\mathbf{j}_{ij}^{\mathrm{para}}+\mathbf{j}_{ij}^{\mathrm{dia}}. \tag{9}\] The first term is called the paramagnetic term and the second term is diamagnetic. The Fourier-transformed paramagnetic and diamagnetic current density operators are written as \(\mathbf{j}^{\mathrm{para}}(\mathbf{q})\) and \(\mathbf{j}^{\mathrm{dia}}(\mathbf{q})\). The linear response kernel is then defined by \(\langle j_{\nu}(\mathbf{q})\rangle=\sum_{\mu}K_{\nu\mu}(\mathbf{q})A_{\mu}(\mathbf{q})\), where \(\nu,\mu=x,y\) is the direction. We evaluate the kernel \(K_{\nu\mu}(\mathbf{q}\to\mathbf{0})\equiv K_{\nu\mu}\) when investigating the stability of superconductivity. This is called the Meissner kernel, which is proportional to the superfluid density. The Meissner kernel is separated into paramagnetic and diamagnetic terms as \(K_{\nu\mu}=\left(K_{\mathrm{para}}\right)_{\nu\mu}+\left(K_{\mathrm{dia}} \right)_{\nu\mu}\). The paramagnetic kernel is given by \[\left(K_{\mathrm{para}}\right)_{\nu\mu}=\frac{1}{N}\int_{0}^{1/T} d\tau\langle j_{\nu}^{\mathrm{para}}(\mathbf{q}=0,\tau)j_{\mu}^{\mathrm{para}}(\mathbf{q}=0)\rangle, \tag{10}\] where \(N=\sum_{i}1\) is the number of sites. The Heisenberg representation with the imaginary time \(\tau\) is defined as \(A(\tau)=\mathrm{e}^{\mathcal{H}\tau}A\mathrm{e}^{-\mathcal{H}\tau}\). The form of the diamagnetic kernel is obvious from Eq. (9). We note that if the sign of the Meissner kernel \(K\) is negative, the superconducting state is electromagnetically stable and is also called a diamagnetic Meissner state, which expels magnetic flux. On the other hand, if the sign is positive, the superconducting state is called the paramagnetic Meissner state, which attracts magnetic flux. For a stable thermodynamic superconducting state, the negative value of \(K\) is required. ### Method of evaluation The actual evaluation of the kernels is performed based on the wave-vector representation. Here, the physical quantities are described by the operator \(c_{\mathbf{k}\sigma}^{\alpha}\) where \(\alpha\) distinguishes the sublattice. Note that the Brillouin zone is folded by \(\sum_{\alpha}1\) times. The diamagnetic kernel is rewritten as \[\left(K_{\rm dia}\right)_{\nu\mu}=\frac{1}{N}\sum_{\alpha,\beta}\sum_{\mathbf{k} \sigma}\left(m^{-1}_{\mathbf{k}\alpha\beta}\right)_{\nu\mu}\langle c^{\alpha\dagger} _{\mathbf{k}\sigma}c^{\beta}_{\mathbf{k}\sigma}\rangle. \tag{11}\] The inverse mass tensor \(m^{-1}_{\mathbf{k}\alpha\beta}\), which reflects the characteristics of the lattice shape, are given by \[\left(m^{-1}_{\mathbf{k}\alpha\beta}\right)_{\nu\mu}\equiv t\sum_{ \langle i_{\alpha},j\beta\rangle}\left(\hat{\mathbf{\delta}}_{i_{\alpha}j_{\beta} }\right)_{\nu}\left(\hat{\mathbf{\delta}}_{i_{\alpha}j_{\beta}}\right)_{\mu}{\rm e} ^{-i\mathbf{k}\cdot\mathbf{R}_{i\alpha j_{\beta}}}, \tag{12}\] where \(i_{\alpha}\) is the \(i\)-th unit cell with sublattice \(\alpha\). 
The symbol \(\langle i_{\alpha},j_{\beta}\rangle\) represents a pair of the nearest-neighbor sites and \(\mathbf{R}_{i_{\alpha}j_{\beta}}\) is the vector between the unit lattice with the \(i\)-th sublattice \(\alpha\) and the unit lattice with the \(j\)-th sublattice \(\beta\). The paramagnetic term has the form of a current-current correlation function. We can calculate this term by using the Green's function matrix \[\tilde{\mathcal{G}}_{\mathbf{k}}(\tau)\equiv-\langle T_{\tau}\mathbf{\psi}_{\mathbf{k}}( \tau)\mathbf{\psi}^{\dagger}_{\mathbf{k}}\rangle \tag{13}\] where \(\mathbf{\psi}_{\mathbf{k}}=(c^{\alpha}_{\mathbf{k}\uparrow},c^{\alpha\dagger}_{-\mathbf{k} \downarrow},\cdots)^{T}\) is the Nambu-spinor. \(T_{\tau}\) is time-ordering operator regrading \(\tau\). Each component of the Green's function matrix is given by the diagonal and off-diagonal Green's functions: \[G^{\alpha\beta}_{\sigma\sigma^{\prime}}(\mathbf{k},\tau) \equiv-\langle T_{\tau}c^{\alpha}_{\mathbf{k}\sigma}(\tau)c^{\beta \dagger}_{\mathbf{k}\sigma^{\prime}}\rangle, \tag{14}\] \[G^{\alpha\beta}_{\sigma\sigma^{\prime}}(\mathbf{k},\tau) \equiv-\langle T_{\tau}c^{\alpha\dagger}_{\mathbf{k}\sigma}(\tau)c^{ \beta}_{\mathbf{k}^{\prime}\sigma^{\prime}}\rangle,\] (15) \[F^{\alpha\beta}_{\sigma\sigma^{\prime}}(\mathbf{k},\tau) \equiv-\langle T_{\tau}c^{\alpha}_{\mathbf{k}\sigma}(\tau)c^{\beta}_{-\mathbf{k} \sigma^{\prime}}\rangle,\] (16) \[F^{\alpha\beta\dagger}_{\sigma\sigma^{\prime}}(\mathbf{k},\tau) \equiv-\langle T_{\tau}c^{\alpha\dagger}_{-\mathbf{k}\sigma}(\tau)c^{ \beta\dagger}_{\mathbf{k}\sigma^{\prime}}\rangle. \tag{17}\] The anomalous part of Green's function [Eq. (16)] is also called the pair amplitude. The paramagnetic kernel in Eq. (10) can be divided into the normal (\(G\)) and anomalous (\(F\)) Green's function contributions as \[\left(K_{\rm para}\right)_{\nu\mu} =-\frac{1}{N}\sum\int_{0}^{1/T}{\rm d}\tau\left(\mathbf{v}_{\mathbf{k} \alpha\beta}\right)_{\nu}\cdot\left(\mathbf{v}_{\mathbf{k}\alpha^{\prime}\beta^{ \prime}}\right)_{\mu}\times\left(G^{\alpha\beta^{\prime}}_{\sigma\sigma^{ \prime}}(\mathbf{k},\tau)G^{\alpha^{\prime}\beta}_{\sigma\sigma^{\prime}}(\mathbf{k},\tau)+G^{\alpha\beta^{\prime}}_{\sigma\sigma^{\prime}}(-\mathbf{k},\tau)G^{ \alpha^{\prime}\beta}_{\sigma\sigma^{\prime}}(-\mathbf{k},\tau)\right)\] \[\quad-\frac{1}{N}\sum\int_{0}^{1/T}{\rm d}\tau\left(\mathbf{v}_{\mathbf{k} \alpha\beta}\right)_{\nu}\cdot\left(\mathbf{v}_{-\mathbf{k}\alpha^{\prime}\beta^{ \prime}}\right)_{\mu}\times\left(F^{\beta\alpha\dagger}_{\sigma\sigma}(\mathbf{k},-\tau)F^{\alpha^{\prime}\beta^{\prime}}_{\sigma,\sigma^{\prime}}(\mathbf{k}, \tau)+F^{\beta\alpha\dagger}_{\sigma^{\prime}\sigma}(-\mathbf{k},-\tau)F^{\alpha ^{\prime}\beta^{\prime}}_{\sigma,\sigma^{\prime}}(-\mathbf{k},\tau)\right)\] \[\equiv K^{G}_{\rm para}+K^{F}_{\rm para}. \tag{18}\] The summation \(\sum\) is performed over the indices which appears only in the right-hand side. The velocity vector \(\mathbf{v}_{\mathbf{k}\alpha\beta}\) is defined by \[\left(\mathbf{v}_{\mathbf{k}\alpha\beta}\right)_{\nu}\equiv t\sum_{\langle i_{\alpha}, j_{\beta}\rangle}\left(\hat{\mathbf{\delta}}_{i_{\alpha}j_{\beta}}\right)_{\nu}{\rm e} ^{-i\mathbf{k}\cdot\mathbf{R}_{i\alpha j_{\beta}}}. \tag{19}\] In order to perform the integral with respect to \(\tau\) in Eq. 
(18), we define the Fourier-transformed Green's function as \[g_{\mathbf{k}}({\rm i}\omega_{n})\equiv\int_{0}^{1/T}d\tau g_{\mathbf{k}}(\tau){\rm e} ^{{\rm i}\omega_{n}\tau}, \tag{20}\] where \(g_{\mathbf{k}}\) represents one of Eqs. (14)-(17) and \(\omega_{n}=(2n+1)\pi T\) is fermionic Mastubara frequency. Moreover, the Fourier-transformed Green's function matrix is given by using the matrix representation of mean-field Hamiltonian Eq. (3) as \[\tilde{\mathcal{G}}_{\mathbf{k}}({\rm i}\omega_{n})=\left[{\rm i}\omega_{n}\check {1}-\tilde{\mathcal{H}}^{\rm MF}_{\mathbf{k}}\right]^{-1}=\check{U}_{\mathbf{k}}\left[{ \rm i}\omega_{n}\check{1}-\check{\Lambda}_{\mathbf{k}}\right]^{-1}\check{U}^{ \dagger}_{\mathbf{k}}, \tag{21}\] where \(\check{\Lambda}_{\mathbf{k}}\) and \(\check{U}_{\mathbf{k}}\) are, respectively, a diagonal eigenvalue matrix and a unitary matrix satisfying \(\check{U}^{\dagger}\tilde{\mathcal{H}}^{\rm MF}_{\mathbf{k}}\check{U}=\check{ \Lambda}_{\mathbf{k}}=\text{diag}(\lambda_{\mathbf{k}1},\lambda_{\mathbf{k}2},\ldots)\). From Eq. (21), \(K_{\rm para}\) can be calculated as \[\left(K_{\rm para}\right)_{\nu\mu}=-\frac{1}{N}\sum\left[\left(\mathbf{v}_{\mathbf{k} \alpha\beta}\right)_{\nu}\cdot\left(\mathbf{v}_{\mathbf{k}\alpha^{\prime}\beta^{ \prime}}\right)_{\mu}\mathcal{U}^{\beta^{\prime}\sigma^{\prime},\alpha\sigma}_{\bm {k}p^{\prime}}\mathcal{U}^{\alpha^{\prime}\sigma,\beta\sigma^{\prime}}_{\mathbf{k}p^ {\prime}}+\left(\mathbf{v}_{\mathbf{k}\alpha\beta}\right)_{\nu}\cdot\left(\mathbf{v}_{-\mathbf{k} \alpha^{\prime}\beta^{\prime}}\right)_{\mu}\mathcal{U}^{\beta\sigma^{\prime}, \alpha\sigma}_{\mathbf{k}p}\mathcal{U}^{\alpha^{\prime}\sigma,\beta^{\prime}\sigma^{ \prime}}_{\mathbf{k}p^{\prime}}\right]\frac{f\left(\lambda_{\mathbf{k}p}\right)-f \left(\lambda_{\mathbf{k}p^{\prime}}\right)}{\lambda_{\mathbf{k}p}-\lambda_{\mathbf{k}p^{ \prime}}}+{\rm c.c.} \tag{22}\] where \(f(\lambda_{\mathbf{k}p})=\frac{1}{{\rm e}^{\lambda_{\mathbf{k}p}/T}+1}\) is the Fermi-Dirac distribution function and we have defined the coefficient \(\mathcal{U}^{\alpha\sigma,\beta\sigma^{\prime}}_{\mathbf{k}p}\equiv\left[\check{U}_{ \mathbf{k}}\right]_{\alpha\sigma,\mathbf{p}}\left[\check{U}^{\dagger}_{\mathbf{k}}\right]_{p, \beta\sigma^{\prime}}\). The anomalous part of Eq. (18) \(K^{F}_{\rm para}\) is further decomposed into the contributions \(K^{\rm EFF}\) and \(K^{\rm OFP}\) from the even-frequency pair (EFP) and odd-frequency pair (OFP) amplitudes defined by \[F^{\text{EFP}}(\mathbf{k},\text{i}\omega_{n}) \equiv\frac{F(\mathbf{k},\text{i}\omega_{n})+F(\mathbf{k},-\text{i}\omega_{ n})}{2}, \tag{23}\] \[F^{\text{OFP}}(\mathbf{k},\text{i}\omega_{n}) \equiv\frac{F(\mathbf{k},\text{i}\omega_{n})-F(\mathbf{k},-\text{i}\omega_ {n})}{2}. \tag{24}\] Then, we obtain \(K^{\text{EFP}}\) and \(K^{\text{OFP}}\) by using Eqs. 
(23) and (24) as \[K^{\text{EFP},\text{OFP}}_{\nu\mu} =-\frac{1}{2N}\sum_{\mathbf{k}}\sum_{\alpha\beta\alpha^{\prime}\beta^{\prime}}\left(\mathbf{v}_{\mathbf{k}\alpha\beta}\right)_{\nu}\cdot\left(\mathbf{v}_{-\mathbf{k}\alpha^{\prime}\beta^{\prime}}\right)_{\mu}\] \[\times\sum_{\sigma\sigma^{\prime}}\sum_{pp^{\prime}}\mathcal{U}^{\beta\sigma^{\prime},\alpha\sigma}_{\mathbf{k}p}\mathcal{U}^{\alpha^{\prime}\sigma,\beta\sigma^{\prime}}_{\mathbf{k}p^{\prime}}\] \[\times\left[\frac{f\left(\lambda_{\mathbf{k}p}\right)-f\left(\lambda_{\mathbf{k}p^{\prime}}\right)}{\lambda_{\mathbf{k}p}-\lambda_{\mathbf{k}p^{\prime}}}\mp\frac{f\left(\lambda_{\mathbf{k}p}\right)-f\left(-\lambda_{\mathbf{k}p^{\prime}}\right)}{\lambda_{\mathbf{k}p}+\lambda_{\mathbf{k}p^{\prime}}}\right]\] \[+\text{c.c.}, \tag{25}\] where the minus \((-)\) sign in the square bracket is taken for the EFP contribution and the plus \((+)\) sign for the OFP contribution. These quantities are numerically calculated as shown in the next section. Note that the cross term of the EFP and OFP terms of Green's functions vanishes after the summation with respect to the Matsubara frequency. ### Paramagnetic Meissner response of a simple \(\eta\)-pairing state Before we show the results of the AH model, let us show that a simple \(\eta\)-pairing state leads to a paramagnetic response, which would not arise from thermodynamically stable states [49; 24]. We consider a simple bipartite lattice with staggered ordering vector \(\mathbf{Q}\). The anomalous contribution to the Meissner kernel may be written as [49] \[K^{F}_{\text{para},xx} =-T\sum_{n\mathbf{k}\mathbf{k}^{\prime}\sigma\sigma^{\prime}}v^{x}_{\mathbf{k}}v^{x}_{\mathbf{k}^{\prime}}F^{*}_{\sigma^{\prime}\sigma}(\mathbf{k}^{\prime},\mathbf{k},\text{i}\omega_{n})F_{\sigma\sigma^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\text{i}\omega_{n}). \tag{26}\] This contribution must be negative (diamagnetic response) in order to dominate over the paramagnetic contribution. For a purely \(\eta\)-pairing state, we assume the relation \(F_{\sigma\sigma^{\prime}}(\mathbf{k},\mathbf{k}^{\prime})=F_{\sigma\sigma^{\prime}}(\mathbf{k})\delta_{\mathbf{k}^{\prime},-\mathbf{k}-\mathbf{Q}}\), and obtain \[K^{F}_{\text{para},xx} =-T\sum_{n\mathbf{k}\sigma\sigma^{\prime}}(v^{x}_{\mathbf{k}})^{2}F^{*}_{\sigma^{\prime}\sigma}(\mathbf{k},\text{i}\omega_{n})F_{\sigma\sigma^{\prime}}(\mathbf{k},\text{i}\omega_{n}), \tag{27}\] where we have used \(v^{x}_{-\mathbf{k}-\mathbf{Q}}=v^{x}_{\mathbf{k}}\), valid for the square lattice, which is in contrast to the relation \(v^{x}_{-\mathbf{k}}=-v^{x}_{\mathbf{k}}\) for the uniform pairing with an additional minus sign [24]. We separate the spin-singlet and triplet parts as \(F_{\sigma\sigma^{\prime}}=F_{\text{s}}\,\mathrm{i}\tau^{y}_{\sigma\sigma^{\prime}}+\mathbf{F}_{t}\cdot(\mathbf{\tau}\,\mathrm{i}\tau^{y})_{\sigma\sigma^{\prime}}\), and then obtain \[K^{F}_{\text{para},xx} =2T\sum_{n\mathbf{k}}(v^{x}_{\mathbf{k}})^{2}\Big{[}|F_{s}(\mathbf{k},\text{i}\omega_{n})|^{2}-|\mathbf{F}_{t}(\mathbf{k},\text{i}\omega_{n})|^{2}\Big{]}. \tag{28}\] If we consider a simple \(\eta\)-pairing with only the spin-singlet part (\(\mathbf{F}_{t}=\mathbf{0}\)), it leads to a paramagnetic response (positive). Thus, a simple \(s\)-wave spin-singlet \(\eta\)-pairing is unlikely to be realized as a stable state. On the other hand, in the AH model with a magnetic field, a spin-triplet pair contribution is substantially generated by the Zeeman field, which plays an important role in the diamagnetic response as shown below. 
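The sign structure of Eq. (28) can be checked numerically. The following minimal Python sketch is an illustration only (it is not code from this work): it inserts toy, non-self-consistent pair amplitudes of a simple Lorentzian form into Eq. (28) on a square lattice and confirms that a purely spin-singlet amplitude gives a positive (paramagnetic) anomalous kernel while a dominant spin-triplet amplitude gives a negative one. The lattice, temperature, and amplitude parameters are illustrative assumptions, and the frequency dependence of the toy amplitudes is not meant to reproduce the odd-frequency pairing discussed later.

```python
import numpy as np

# Minimal sketch (assumption): sign of the anomalous kernel in Eq. (28),
#   K^F ~ 2T sum_{n,k} (v_k^x)^2 [ |F_s(k, iw_n)|^2 - |F_t(k, iw_n)|^2 ],
# evaluated per site with toy (non-self-consistent) pair amplitudes.
# All parameters are illustrative and are not taken from the paper.
t, T, nk, n_max = 1.0, 0.05, 80, 200
ks = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(ks, ks)
eps = -2 * t * (np.cos(kx) + np.cos(ky))      # square-lattice dispersion
vx = 2 * t * np.sin(kx)                       # velocity v_k^x = d eps / d kx
wn = (2 * np.arange(-n_max, n_max) + 1) * np.pi * T   # Matsubara frequencies

def K_F(delta_s, delta_t):
    """Toy amplitudes F_{s,t} = delta_{s,t} / (w_n^2 + eps_k^2 + delta^2)."""
    total = 0.0
    for w in wn:
        denom = w**2 + eps**2 + delta_s**2 + delta_t**2
        Fs, Ft = delta_s / denom, delta_t / denom
        total += 2 * T * np.mean(vx**2 * (np.abs(Fs)**2 - np.abs(Ft)**2))
    return total

print("singlet-only kernel :", K_F(0.5, 0.0))   # positive -> paramagnetic
print("triplet-only kernel :", K_F(0.0, 0.5))   # negative -> diamagnetic
```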
## IV Numerical result for AH model ### Square lattice #### iv.1.1 Prerequisites Let us begin with the analysis of the AH model on the square lattice. We consider the two-sublattice structure to describe the staggered ordered phase such as the \(\eta\)-pairing. While the superconducting states in the attractive model are interpreted in terms of the magnetic phases of the repulsive model by the particle-hole transformation in Eq. (2), the response functions such as the Meissner kernel are specific to the attractive model and have not been explored. In the following, we choose the band width \(W=1\) as the unit of energy. We fix the value of the attractive interaction \(U=-1.375\). The electron concentration is fixed as \(n_{c}=1\), and the temperature is taken to be \(T=1.0\times 10^{-3}\) unless otherwise specified. We will investigate the change of the Meissner kernel for \(\eta\)-pairing as a function of the magnetic field strength \(h=|\mathbf{h}|\). In this paper, the mean-field solutions are calculated using a \(60\times 60\) mesh in \(\mathbf{k}\)-space. The result of the Meissner kernel for \(\eta\)-pairings is calculated with a \(300\times 300\) mesh. We also checked that the behaviors remain qualitatively unchanged when these numbers are increased. The self-consistent equations in Eqs. (4)-(6) are computed by using an iterative method. In the following subsection IV.1.2, we restrict ourselves to the analysis of two-sublattice mean-field solutions, and in IV.1.3, we examine the solutions when the two-sublattice constraint is relaxed. #### iv.1.2 Two-sublattice solution Before investigating the electromagnetic stability, we clarify the regime where the \(\eta\)-pairing becomes the ground state. In this paper, we assume that the internal energy in Eq. (1) is approximately equal to the free energy in the low temperature region. The upper panel of Fig. 2 shows the internal energy of several ordered states measured from the normal-state energy as a function of the Zeeman field \(h\). Here, the \(\eta\)-pairing solution is obtained by solving the self-consistent equation while imposing the constraint of a staggered phase of the pair amplitude. A constraint is also used for the calculation of the other types of order parameters. Our calculations have not found any ordered states other than the types shown in Fig. 2 even when a random initial condition is employed. We determine the thermodynamically stable ground state by comparing the internal energies. In low magnetic fields, BCS and CDW are degenerate ground states. On the other hand, we find that the \(\eta\)-pairing becomes the ground state in the magnetic-field range \(1.063<h<1.875\). The \(\eta\)-pairing solution itself is found in a wider regime although the internal energy is not the lowest one. It has been known that the attractive Hubbard model under a magnetic field also shows the FFLO state [50], but this possibility cannot be considered when we take the two-sublattice condition. This point will be revisited in the next subsection where the two-sublattice condition is relaxed. The lower panel of Fig. 2 shows the density of states (DOS) at the Fermi level for each state. The result indicates that there is no energy gap in the \(\eta\)-pairing state, in contrast to the conventional BCS pairing state. There exists a regime where the DOS at the Fermi level for \(\eta\)-pairing is larger than that of the normal metal (\(1.25\lesssim h\lesssim 1.5\)). 
This is due to the van Hove singularity of the square lattice model as shown in FIG. 3. We also perform the calculation for the cubic lattice where the van Hove singularity is absent at zero energy and confirm in this case that the DOS is smaller than that of the normal state (see Appendix B). The stability of the \(\eta\)-pairing depends upon the magnitude of the magnetic field as seen in the Meissner response kernel \(K\) (\(=K_{xx}=K_{yy}\)) (green symbol) in Fig. 4(a). The contributions from the paramagnetic (\(K_{\text{para}}\), positive) and diamagnetic (\(K_{\text{dia}}\), negative) parts are also separately plotted in the figure. In the regime with \(h\leq 1.125\) and \(1.75\leq h\), the \(\eta\)-pairing is electromagnetically unstable, while it is stable in \(1.125<h<1.75\). In Fig. 4, the yellow shaded rectangle indicates the regime where the \(\eta\)-pairing becomes the ground state as seen from Fig. 2. We find a narrow region around \(h=1.125\) where \(\eta\)-pairing is regarded as the ground state but is not an electromagnetically stable state. From these results, we see that the \(\eta\)-pairing is not necessarily electromagnetically stable even if it becomes the ground state in a two-sublattice calculation. As we shall see later, the simple \(\eta\)-pairing in this narrow regime does not necessarily exist if we relax the two-sublattice condition of the mean-field solution. We also show in Fig. 4(a) the contributions from the even- and odd-frequency pairs defined in Eqs. (23) and (24). The negative sign of the kernel, which means the response is diamagnetic, is partly due to the odd-frequency component of the pair amplitude (\(K^{\text{OFP}}<0\)). This is in contrast to the FFLO state whose Meissner kernel is also negative due to the even-frequency component [51]. Hence, this implies that the mechanism of the diamagnetism is different between the FFLO and \(\eta\)-pairing states. In addition to the Meissner kernel, we calculate the local pair amplitudes which are shown in FIG. 4(b). Here, the left and right panels represent the spin-triplet and spin-singlet components of the local pair amplitude, respectively. The triplet component \(\sum_{\sigma\sigma^{\prime}}(\tau^{\mu}\mathrm{i}\tau^{y})_{\sigma\sigma^{\prime}}F_{\sigma\sigma^{\prime}}(\mathrm{i}\omega_{n})\) with \(\mu=x\) has a finite imaginary part and zero real part, which represents the odd-frequency pair. The other \(\mu=y,z\) components are zero. On the other hand, the singlet component has a finite real part and zero imaginary part and is the even-frequency pair. We can see that the maximum value of the spin-triplet component of the pair amplitude is largest at the magnetic field \(h=1.375\), where the magnitude of \(K^{\text{OFP}}\) is largest. It is also notable that the magnitude of the odd-frequency pair amplitude correlates with the magnitude of the DOS at zero energy as seen by comparing Figs. 3 and 4. We comment on the singular behavior of \(K^{\text{OFP}}\) at the magnetic field \(h=1.375\), although it does not affect the total Meissner kernel \(K\). This anomalous feature is related to the van Hove singularity of the DOS at zero energy as shown in FIG. 3, which shows a sharp peak at the Fermi level. Figure 3: Density of states for the \(\eta\)-pairing around magnetic field \(h=1.375\) in the square lattice model. Here \(D(\omega)\) is normalized as \(\int d\omega D(\omega)=1\). 
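As stated in the prerequisites above, the self-consistent equations (4)-(6) are solved by simple iteration. The following minimal Python sketch is an illustration of that iterative scheme only, reduced to the uniform BCS gap equation of the attractive Hubbard model at half filling and zero Zeeman field (a single order parameter instead of the full set \(v_i\), \(\mathbf{H}_i\), \(\Delta_i\) on two sublattices); the hopping is fixed so that the bandwidth is \(W=1\), and the remaining numbers are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumption): damped fixed-point iteration for the uniform BCS
# gap of the attractive Hubbard model on the square lattice at half filling
# (mu = 0, h = 0). This only illustrates the iterative scheme used for the
# self-consistent equations (4)-(6); the full calculation also updates v_i and
# H_i on two sublattices and includes the Zeeman field.
t = 1.0 / 8.0                      # hopping chosen so that the bandwidth is W = 1
U, T = 1.375, 1.0e-3               # |U| and temperature, as quoted in the text
nk, mixing = 60, 0.5               # 60 x 60 k-mesh as in the text
ks = 2 * np.pi * (np.arange(nk) + 0.5) / nk
kx, ky = np.meshgrid(ks, ks)
eps = -2 * t * (np.cos(kx) + np.cos(ky))   # dispersion measured from mu = 0

delta = 0.1                        # initial guess for the gap
for it in range(1000):
    E = np.sqrt(eps**2 + delta**2)                       # Bogoliubov spectrum
    # gap equation: Delta = |U| (1/N) sum_k Delta tanh(E/2T) / (2E)
    delta_new = U * np.mean(delta * np.tanh(E / (2 * T)) / (2 * E))
    if abs(delta_new - delta) < 1e-10:
        delta = delta_new
        break
    delta = (1 - mixing) * delta + mixing * delta_new

print(f"converged gap Delta = {delta:.5f} after {it + 1} iterations")
```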
Figure 2: (Upper panel) Magnetic-field dependence of the internal energy for each state measured from the normal state in the square lattice model. (Lower panel) Density of states (DOS) at zero energy \(D_{0}\) for each state. #### iv.1.3 Beyond two-sublattice In order to clarify the stable ordered state where the Meissner kernel is positive (paramagnetic), we investigate mean-field solutions on a finite-sized lattice where the two-sublattice condition is not imposed. We have numerically solved Eqs. (4)-(6) self-consistently by using the mean-field solutions of the \(\eta\)-pairing obtained for the two-sublattice case as an initial condition. Figure 5 shows the spatial distribution of the phase of the gap function when the number of sites is \(8\times 8\). At \(h=0.5\) in (a), where the \(\eta\)-pairing is not a ground state, the uniform BCS pairing state is realized as expected. With increasing magnetic field, longer-periodicity structures are found as shown in Figs. 5(b), (c) and (d). At \(h=1.375\) in (c), where the \(\eta\)-pairing solution has the lowest energy and the electromagnetic response is well diamagnetic, we obtain the staggered alignment of the phases. When the parameters are close to the edges of the yellow-highlighted region in Fig. 4, complex structures are formed as shown in (b) and (d). The behavior in (b) is interpreted as due to the competing effect where the simple uniform and staggered phases are energetically close to each other. We also investigate the case with the other choice of parameters: \(U=-1.25\) and \(h=1.25\). In this case, we find the staggered flux state where the phase of the pair potential is characterized by \(90^{\circ}\)-Néel ordering as in Fig. 6(a). This ordered state cannot be described in the mean-field theory with two sublattices. Owing to the non-collinear \(90^{\circ}\)-Néel ordering vector, spontaneous clockwise or counterclockwise loop currents arise via the inter-atomic Josephson effect. The current density is calculated by \[j_{ij}=-\mathrm{i}t\sum_{\sigma}\langle c_{i\sigma}^{\dagger}c_{j\sigma}-c_{j\sigma}^{\dagger}c_{i\sigma}\rangle, \tag{29}\] which is identical to the expression of the paramagnetic current in the linear response theory. We can also evaluate the flux for each plaquette, which is defined by \[\Phi=\sum_{(i,j)\in\mathrm{plaquette}}j_{ij}. \tag{30}\] This expression is similar to the flux \(\oint_{C}\mathbf{j}\cdot d\mathbf{s}=\int_{S}\mathbf{b}\cdot d\mathbf{S}\) (\(\mathbf{j}=\mathbf{\nabla}\times\mathbf{b}\)) defined in a continuum system, where \(\mathbf{b}\) is a flux density. Figure 4: (a) Magnetic field dependence of the Meissner kernel \(K(=K_{xx}=K_{yy})\) for the \(\eta\)-pairing on the square lattice. The yellow shaded rectangle indicates the range where the \(\eta\)-pairing becomes the ground state in the two-sublattice calculation. The number of wavenumbers \(\mathbf{k}\) is taken as \(300\times 300\). (b) Matsubara frequency dependence of the local pair amplitude at several magnetic fields. The left panel represents the imaginary part of \(\left[F_{\downarrow\downarrow}(\mathrm{i}\omega_{n})-F_{\uparrow\uparrow}(\mathrm{i}\omega_{n})\right]/\sqrt{2}\), and the right panel represents the real part of \(\left[F_{\uparrow\downarrow}(\mathrm{i}\omega_{n})-F_{\downarrow\uparrow}(\mathrm{i}\omega_{n})\right]/\sqrt{2}\). 
The values of the pair amplitudes are shifted by \(0.6\) at each magnetic field for visual clarity, and the gray-dotted lines are the zero axes for each magnetic field. Figure 5: Spatial distribution of the phase of the superconducting order parameter at several magnetic fields. The calculation is performed on the finite-sized lattice (\(8\times 8\)) with open boundary conditions. Small black dots are lattice points and red arrows indicate the phase of the pair potential for each lattice point. The flux is aligned in a staggered manner on a dual lattice, as indicated in Fig. 6(b). The staggered flux originating from the normal part has been studied before [20; 21; 22; 23], while the staggered flux shown in Fig. 6(b) has a different origin: it arises from the superconductivity associated with the off-diagonal part in the Nambu representation. We also comment on a feedback effect to the electromagnetic field from the supercurrent. Since the characteristic length scale for the magnetic field in a layered superconductor becomes long [52], each magnetic flux on the plaquette is smeared out with this length. Hence we expect that the net magnetic field is not created from the staggered superconducting flux. ### Triangular lattice #### iv.2.1 Mean-field solution Now we search for the \(\eta\)-pairing reflecting the characteristics of a geometrically frustrated triangular lattice at half filling (\(n_{c}=1.0\)). We choose the parameters \(U=-1.83\) and \(T=1.0\times 10^{-3}\). We consider the cases of two- and three-sublattice structures. For a usual antiferromagnet, the typical ordered state in the two-sublattice case has a stripe pattern, while in the three-sublattice case we expect a \(120^{\circ}\)-Néel state. Below we study the superconducting \(\eta\)-pairing phases within the mean-field theory. We have found four types of superconducting states reflecting the characteristics of the triangular lattice, which are referred to as the \(\eta\)-pairing I, II, III, and IV. The schematic pictures for these four states are shown in Fig. 7(a), where the arrow indicates the phase of the superconducting order parameter at each site. We make a few general remarks: the three-sublattice structure is assumed for I, II, III, while the two-sublattice structure is employed for IV. The type-I state has a non-collinear structure, while in the other \(\eta\)-pairings the vectors are aligned in a collinear manner. We also note that CDW accompanies the \(\eta\)-pairings II and III, where the local filling is indicated by the size of the filled circle symbols in Fig. 7(a). Figure 7(b) shows the internal energy of the ordered states measured from the normal state (Upper panel) and from the \(\eta\)-pairing I (Lower panel). From the lower panel of Fig. 7(b), we can identify the ground state. With increasing magnetic field, the ground state changes as BCS \(\rightarrow\) \(\eta\)-pairing II \(\rightarrow\) \(\eta\)-pairing I \(\rightarrow\) \(\eta\)-pairing IV \(\rightarrow\) normal. Figure 6: (a) Spatial distribution of the phase of the superconducting order parameter for the \(\eta\)-pairing with \(90^{\circ}\)-Néel state on the finite-sized lattice under open boundary conditions. (b) Spatial distributions of the spontaneous loop current and the flux defined on each plaquette. The color of vectors displays the magnitude of current, and the color of dots in each plaquette indicates the value of the magnetic flux defined in Eq. (30). Figure 7: (a) Schematics for the four \(\eta\)-pairings in the triangular lattice model. 
The arrows indicate the phase of the pair potential. The size of the circles shows the electron density for each sublattice. (b) Magnetic field dependence of the internal energies measured from the normal state (upper panel). The lower panel shows the internal energy measured from the \(\eta\)-pairing I. (c) Magnetic field dependence of the number of electrons and magnetization on each sublattice for the \(\eta\)-pairing II (upper panel) and IV (lower panel). Figure 7(c) shows the particle density and the \(x\)-direction magnetization \(m_{i}^{x}\) of each sublattice for the \(\eta\)-pairing II (upper panel) and \(\eta\)-pairing IV (lower panel). The values of \(m_{i}^{y}\) and \(m_{i}^{z}\) are zero because the Zeeman field \(\mathbf{h}\) is applied along the \(x\)-direction. Below, we explain the characteristic features for each \(\eta\)-pairing state. \(\eta\)_-pairing-I state.--_ The \(\eta\)-pairing I has a \(120^{\circ}\) Néel ordering vector (Green pentagon in Fig. 7(b)). The spontaneous supercurrent appears in this non-collinear state as schematically shown in Fig. 8(a). This superconducting state forms a staggered flux state, where the flux is aligned on a honeycomb dual lattice, which is similar to the \(\eta\)-pairing with the \(90^{\circ}\)-Néel ordering vector on the square lattice shown in Fig. 6(b). Figure 8(b) displays the values of the spontaneous loop current density as a function of the magnetic field. \(\eta\)_-pairing-II state.--_ The \(\eta\)-pairing II has the structure with up-up-down collinear phases plus CDW (Red hexagon in Fig. 7(b)). There is the relation \(n_{\mathrm{A}}=n_{\mathrm{B}}<n_{\mathrm{C}}\) for the electron filling at each sublattice, as shown in Fig. 7(c). We note that this site-dependent feature is characteristic of the II (and IV) state. The phases of the pair potential at the A and B sublattices are "ferromagnetic", while the phase at the C sublattice is "antiferromagnetic". The resulting ordered state is regarded as the emergence of a honeycomb lattice formed by the equivalent A and B sublattices. \(\eta\)_-pairing-III state.--_ This is the \(\eta\)-pairing with a staggered ordering vector and CDW (Magenta square in Fig. 7(b)). The order parameter \(\Delta\) at the C sublattice is zero, but the others (A,B) are finite. The electron-rich sublattices A and B form a simple bipartite \(\eta\)-pairing state on an emergent honeycomb lattice. Since this state does not become a ground state anywhere for the present choice of \(U=-1.83\), we do not further investigate this state in the following. \(\eta\)_-pairing-IV state.--_ This is the \(\eta\)-pairing with a simple stripe alignment (Cyan rhombus in Fig. 7(b)). This \(\eta\)-pairing is accompanied by CDW around \(h=1.9\), as shown in Fig. 7(c). As shown below, this stripe phase shows an anisotropic behavior in linear response coefficients, while the other \(\eta\)-pairing states are isotropic. #### iv.2.2 Meissner response Now we discuss the Meissner response. Figure 9(a,b,c) shows the Meissner kernels \(K_{xx},\ K_{yy}\) for the \(\eta\)-pairings I, II, and IV. The yellow-highlighted parts indicate the region where each \(\eta\)-pairing becomes the ground state as identified from Fig. 7(b). The result for the \(\eta\)-pairing III is not shown because it does not become a ground state at \(U=-1.83\). We confirm that the Meissner response is basically diamagnetic if the \(\eta\)-pairing becomes the ground state as shown in Figs. 9(a,b,c). 
Thus, the energetic stability and diamagnetic response are reasonably correlated. In the following, we discuss the properties of the Meissner kernel for each state. The Meissner kernels for both \(\eta\)-pairing I and \(\eta\)-pairing II shown in Figs. 9(a) and (b) satisfy the relation \(K_{xx}=K_{yy}\), which means an isotropic linear response. For the \(\eta\)-pairing I, the Meissner kernel becomes positive in the regions \(h<1.2,\ 1.95<h<2.12\), while the kernel becomes negative in the ground state region (Fig. 9(a)). Although the local current density is finite for the \(\eta\)-pairing I state, it does not affect the expression of the Meissner kernel in Eq. (10) since the total current \(\mathbf{j}(\mathbf{q}=0)\) is zero. Next we discuss the \(\eta\)-pairing IV state. Figure 8: (a) Schematic picture of the staggered flux state on the triangular lattice. The straight arrows display the phase of the pair potential at each site, and the circular arrows indicate the staggered loop current. (b) Magnetic field dependence of the magnitude of the loop current. The yellow shaded rectangle indicates the range where the \(\eta\)-pairing I becomes the ground state. Figure 9: Magnetic field dependence of the Meissner kernels \(K_{xx}\) and \(K_{yy}\) for the \(\eta\)-pairings I, II, IV on the triangular lattice. The yellow shaded rectangle indicates the regime where each \(\eta\)-pairing becomes the ground state. The symbols are the same as those in Fig. 4(a). For the \(\eta\)-pairing IV, \(K_{xx}\) and \(K_{yy}\) are separately plotted in (c1) and (c2). The Meissner kernel jumps at \(h=1.8\) due to the emergence of the CDW order parameter as shown in Fig. 9(c1,c2). It is notable that the \(\eta\)-pairing IV with the stripe pattern shows a difference between the \(x\) and \(y\) directions as shown in Figs. 9(c1,c2), respectively. This characteristic behavior can be intuitively understood from Fig. 7(a), where the current along the \(x\)-axis flows while experiencing a staggered pair potential, whereas the current in the \(y\)-direction feels a uniform pair potential. In the Meissner response, \(K_{xx}\) shows a characteristic behavior of the \(\eta\)-pairing, while \(K_{yy}\) is qualitatively the same as the kernel of the BCS state. Thus, as shown in Fig. 9(c1), the diamagnetic response in the \(x\)-axis direction is related to the odd-frequency pair, whereas the diamagnetic response in the \(y\)-axis direction, shown in Fig. 9(c2), is related to the even-frequency pair. ## V Summary and outlook We have studied the attractive Hubbard model on the square and triangular lattices by using the mean-field theory. Several types of \(\eta\)-pairing have been found in the triangular lattice where a simple bipartite pattern is not allowed. Using the formulation of the Meissner kernel for a general tight-binding lattice, we have investigated the electromagnetic stability of \(\eta\)-pairings. We have confirmed that the electromagnetic stability of the \(\eta\)-pairing correlates with the internal energy. In a narrow parameter range, we also find that the \(\eta\)-pairing state can show an unphysical paramagnetic response if we assume a two- or three-sublattice structure in the mean-field calculation. In this case, another solution with longer periodicity needs to be sought. When the current path experiences the staggered phase of the superconducting order parameter, the odd-frequency component of the pair amplitude contributes to the diamagnetic response. 
This is in contrast to the conventional BCS case in which the even-frequency component of the pair amplitude contributes to the diamagnetism. We have further clarified that one of the \(\eta\)-pairing states on the triangular lattice has a stripe pattern and shows an anisotropic Meissner response. In this case, the odd-frequency pair contributes diamagnetically or paramagnetically depending on the direction of current. We comment on some issues which are not explored in this paper. We expect that the \(\eta\)-pairing without a simple staggered phase will appear on pyrochlore, kagome and quasicrystalline lattice, whose phase-alignment could be qualitatively different from the triangular lattice. In addition, there is another model that shows \(\eta\)-pairing in equilibrium. A two-channel Kondo lattice (TCKL) is an example of a model in which \(\eta\)-pairing appears even in the absence of a Zeeman field [24]. Our preliminary calculation for the TCKL shows a number of ordered states which have similar energies. These additional studies provide more insight into the exotic superconductivity characteristic for the \(\eta\)-pairing. ## Acknowledgement This work was supported by KAKENHI Grants No. 18H01176, No. 19H01842, and No. 21K03459. ## Appendix A Self-consistent equations in mean-field theory We derive self-consistent equations for the general interacting Hamiltonian. Let us begin with the Hamiltonian \[\mathscr{H}=\sum_{12}\varepsilon_{12}c_{1}^{\dagger}c_{2}+\sum_{1234}U_{1234} c_{1}^{\dagger}c_{2}^{\dagger}c_{4}c_{3} \tag{10}\] where site-spin indices are written as \(1=(i_{1},\sigma_{1})\). The mean-field Hamiltonian is introduced as \[\mathscr{H}_{\rm MF}=\sum_{12}\Big{(}E_{12}c_{1}^{\dagger}c_{2}+\Delta_{12}c_ {1}^{\dagger}c_{2}^{\dagger}+\Delta_{12}^{*}c_{2}c_{1}\Big{)}. \tag{11}\] We assume \(\langle\mathscr{H}\rangle=\langle\mathscr{H}_{\rm MF}\rangle\) where the statistical average is taken with \(\mathscr{H}_{\rm MF}\). Then the self-consistent equation is obtained as \[E_{12} =\frac{\partial\langle\mathscr{H}\rangle}{\partial\langle c_{1}^{ \dagger}c_{2}\rangle}\] \[=\varepsilon_{12}+\sum_{34}\big{(}U_{1324}+U_{3142}-U_{1342}-U_{31 24}\big{)}\langle c_{3}^{\dagger}c_{4}\rangle \tag{12}\] \[\Delta_{12} =\frac{\partial\langle\mathscr{H}\rangle}{\partial\langle c_{1}^ {\dagger}c_{2}^{\dagger}\rangle}=\sum_{34}U_{1234}\langle c_{4}c_{3}\rangle \tag{13}\] Figure 10: (a) The difference between the DOSs of the \(\eta\)-pairing and normal states in the cubic lattice model. The values of the DOS are shifted by \(0.2\) for each magnetic field, and the gray dotted lines are the zero axes for each magnetic field. We also show the Matsubara frequency dependence of (b) the imaginary part of \(\left[F_{\downarrow\downarrow}({\rm i}\omega_{n})-F_{\downarrow\uparrow}({\rm i }\omega_{n})\right]/\sqrt{2}\) and (c) the real part of \(\left[F_{\uparrow\downarrow}({\rm i}\omega_{n})-F_{\downarrow\uparrow}({\rm i }\omega_{n})\right]/\sqrt{2}\) for each magnetic field. The values of the pair amplitudes are shifted by \(0.6\). where the Wick's theorem is used for the derivation. Although the variational principle for the free energy also gives the same equation, the above formalism gives a simple procedure to derive the self-consistent equations. ## Appendix B Attractive Hubbard model on Cubic lattice We analyze the \(\eta\)-pairing on the cubic lattice, whose DOS does not have a van Hove singularity near zero energy. 
Here we choose the parameter \(U=-1.375\) and the electron concentration is set to half filling. As a result, the DOS for the \(\eta\)-pairing around zero energy for each magnetic field on the cubic lattice is smaller than the DOS of the normal state as shown in Fig. 10(a). For reference, we also show in Figs. 10(b) and (c) the pair amplitudes similar to Fig. 4(b) in the main text. In addition, the odd-frequency pair amplitude increases when the DOS near zero energy is enhanced as seen from Figs. 10(a) and (b).
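The normalized densities of states compared in Figs. 3 and 10(a) can be obtained from quasiparticle eigenvalues by a simple broadening procedure. The sketch below is an illustration of that step only, using the eigenvalues of a plain cubic tight-binding band in the normal state rather than the BdG spectrum of the \(\eta\)-pairing solution; the broadening width and mesh are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumption): normalized density of states
#   D(w) = (1/N) sum_n (Gamma/pi) / ((w - lambda_n)^2 + Gamma^2),  int dw D(w) ~ 1,
# evaluated here for a plain cubic tight-binding band (normal state, W = 1),
# not for the BdG spectrum of the eta-pairing state shown in Fig. 10.
t, nk, Gamma = 1.0 / 12.0, 20, 0.02        # hopping chosen so the bandwidth is W = 1
ks = 2 * np.pi * np.arange(nk) / nk
kx, ky, kz = np.meshgrid(ks, ks, ks, indexing="ij")
lam = (-2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))).ravel()

w = np.linspace(-1.0, 1.0, 801)
D = np.array([np.mean(Gamma / np.pi / ((wi - lam) ** 2 + Gamma**2)) for wi in w])

dw = w[1] - w[0]
print("normalization :", round(float(np.sum(D) * dw), 3))   # close to 1 (Lorentzian tails cut off)
print("D(omega = 0)  :", round(float(D[np.argmin(np.abs(w))]), 3))
```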
2308.04717
Fully Decentralized Peer-to-Peer Community Grid with Dynamic and Congestion Pricing
Peer-to-peer (P2P) electricity markets enable prosumers to minimize their costs, which has been extensively studied in recent research. However, there are several challenges with P2P trading when physical network constraints are also included. Moreover, most studies use fixed prices for grid power prices without considering dynamic grid pricing, and equity for all participants. This policy may negatively affect the long-term development of the market if prosumers with low demand are not treated fairly. An initial step towards addressing these problems is the design of a new decentralized P2P electricity market with two dynamic grid pricing schemes that are determined by consumer demand. Furthermore, we consider a decentralized system with physical constraints for optimizing power flow in networks without compromising privacy. We propose a dynamic congestion price to effectively address congestion and then prove the convergence and global optimality of the proposed method. Our experiments show that P2P energy trade decreases generation cost of main grid by 56.9% compared with previous works. Consumers reduce grid trading by 57.3% while the social welfare of consumers is barely affected by the increase of grid price.
Hien Thanh Doan, Truong Hoang Bao Huy, Daehee Kim, Hongseok Kim
2023-08-09T05:43:06Z
http://arxiv.org/abs/2308.04717v1
# Fully Decentralized Peer-to-Peer Community Grid with Dynamic and Congestion Pricing ###### Abstract Peer-to-peer (P2P) electricity markets enable prosumers to minimize their costs, which has been extensively studied in recent research. However, there are several challenges with P2P trading when physical network constraints are also included. Moreover, most studies use fixed prices for grid power prices without considering dynamic grid pricing, and equity for all participants. This policy may negatively affect the long-term development of the market if prosumers with low demand are not treated fairly. An initial step towards addressing these problems is the design of a new decentralized P2P electricity market with two dynamic grid pricing schemes that are determined by consumer demand. Furthermore, we consider a decentralized system with physical constraints for optimizing power flow in networks without compromising privacy. We propose a dynamic congestion price to effectively address congestion and then prove the convergence and global optimality of the proposed method. Our experiments show that P2P energy trade decreases generation cost of main grid by 56.9% compared with previous works. Consumers reduce grid trading by 57.3% while the social welfare of consumers is barely affected by the increase of grid price. Decentralized electricity market, peer-to-peer energy trading, peer-to-grid, physical constraint, main grid dynamic pricing ## I Introduction In recent years, there have been significant interests in research on distributed energy resources (DERs), such as renewable energy resources (RESs) in power systems. Each node in a distribution network is equipped with smart devices capable of exchanging information and switching to appropriate software devices, such as smart meters and energy management systems (EMSs). It enables flexible scheduling, monitoring, and sharing of energy usage information in a distribution network, and encourages prosumers to participate in energy trading on a proactive basis. A variety of energy market development projects have been implemented in distribution systems, such as SonnenCommunity in Germany [1], Brooklyn in the USA [2], and Pico in the UK [3] to enable prosumers to utilize their DERs. All of these are expected to contribute to managing and optimizing energy resources in the future. In practice, prosumers often buy/sell electricity through retailers due to the daily fluctuation of grid price. The safety and the stability of the power system is also a major concern when transmitting electricity directly. However, this can change in the next generation of the power grid for a number of reasons, such as the development of EMS that enables each prosumer to manage its own risks under grid price fluctuations. Besides, with the development of a management system, each node on the power system can manage its own safety and stability, as well as service fees [4]. It can also operate with other nodes in the whole system without depending on retailers. Therefore, this study aims at designing the future electricity market that allows prosumers to transact energy directly with other prosumers and also with the main grid. A peer-to-peer electricity market (P2PEM) is a new type of market that allows surpluses and deficits among network peers to be directly traded. The P2PEM provides cost-saving, autonomy, transparency, and competition [5] to each participant. 
In order to provide cost-saving opportunities, [6] indicates that an appropriate P2PEM should encourage prosumers to remain involved. Therefore, it is essential to devise a long-term market mechanism that supports fairness and incentives for energy trading among participants. Early studies have considered different P2P trading negotiation mechanisms such as centralized, decentralized, and auction-based approaches [7]. The centralized mechanism relies on a centralized transaction process and centralized information sharing. The auction-based mechanism [8] is an approach in which prosumers relay their information to an aggregator to maximize their profits. However, relaying data to a central entity may raise privacy concerns. One solution could be using a decentralized algorithm that uses limited information exchange and matches prosumers directly [9]. In the decentralized mechanism, each prosumer negotiates with another prosumer to find an optimum solution based on their preferences, which is the focus of this paper. There has been a variety of research on decentralized P2P trading. In [10], a market is proposed for grid-connected prosumers. In [11, 12], an optimization model was presented for prosumers, using an application of the alternating direction method of multipliers (ADMM) algorithm for matching prosumers. In [12], the authors proposed an electricity market in which the optimization problem at each time is considered separately and independently. Time-coupling constraints over multiple time slots are studied in [13] for batteries. Although all of the above articles focused on grid-connected prosumers, they all have the same limitation in that the price offered by the main grid is predetermined, which will be called a _fixed_ price hereafter. Of course, the time-of-use (TOU) price changes during a day, but it is not directly related to the amount of load in the community grid in that time slot. This necessitates new grid pricing mechanisms between the main grid and the community grid to encourage or discourage electricity consumption dynamically, which is called _dynamic grid pricing_ hereafter. Indeed, dynamic grid pricing was incorporated in several studies. In [14], an energy trading scheme for different stakeholders at multiple levels was proposed. In [15, 16], demand response was applied to minimize energy costs based on the preferences of market participants and to reduce the effects of the dynamic grid price. The authors suggested new objective functions for prosumers to improve prosumer utility [16] or satisfaction [17]. Also in [17], the authors considered an equitable allocation of profits among microgrids (MGs). However, these studies only considered social welfare maximization (SWM) or energy cost minimization in a _virtual_ layer. As a consequence, physical constraints such as line losses, voltage variation, and congestion are neglected, as is fairness. Several works have focused on considering these physical constraints in P2PEM; a decentralized market was studied using Nash bargaining to solve alternating current optimal power flow (AC-OPF); a branch-flow model was relaxed in second-order cones to resolve the non-convex problem with guaranteed exactness for radial grid topologies [18]. However, the Nash bargaining based studies [19, 20] can be impractical because they require an honest report of the increased revenue of each prosumer. A recent study [21] attempted to solve both privacy concerns and congestion management but neglected reactive power and voltage constraints. 
In this regard, we address the above concerns by answering the following fundamental questions: 1) how to design an electricity market that maximizes social welfare while the main grid adjusts the price based on the community load, 2) what are the criteria for designing dynamic grid price in P2P trading, considering its impact on sustainability and stability in market development, 3) how to minimize the cost incurred in trading while keeping the privacy of each node and considering physical constraints, 4) how to impose a P2P trading fee to manage congestion, which is called _congestion pricing_ hereafter. The main contributions of this paper are as follows: * We propose a decentralized P2P community market enabling dynamic pricing in a grid-connected environment with physical constraints. We conduct a mathematical formulation of the proposed P2P market with the objective of SWM, which is solved in a decentralized manner using ADMM. * A novel two-stage operation model is proposed to guarantee physical constraints and achieve optimality. In the first stage, decentralized P2P trading in a virtual layer is performed as an iterative procedure using ADMM. Upon reaching a virtually optimal P2P matching, the second stage finds a physical way that enables the P2P transactions with minimum power losses and voltage fluctuations by solving a decentralized AC-OPF using ADMM. We prove that the proposed congestion pricing maximizes social welfare by coordinating the virtual layer and the physical layer iteratively while maintaining the privacy of each node in the power system, see _Proposition 1_. Fig. 1 summarizes the proposed framework encompassing the virtual layer and the physical layer. * We present and investigate two grid pricing schemes, unique price scheme (UPS) and differential price scheme (DPS), to guarantee fairness among participants which encourage prosumers to participate in the market actively. It is important to note that fairness is essential for the long-term market mechanism and active participation of prosumers. * Our proposed decentralized P2P market mechanism is analyzed in detail with various case studies under realistic configurations. Compared to the existing approaches, the proposed method is shown to decrease the generation cost of main grid by 56.9%. In addition, the proposed method reduces the energy consumption from non-RES, such as main grids, by 57.3%; thus, it is more environmentally friendly than other methods. Finally, the AC-OPF solution ensures the optimal power flow of the power systems, while a small number of rounds handle congestion. The remainder of this paper is organized as follows. In Section II, we present a system model for our work. In Section III, we formulate an energy trading pricing model and a congestion penalty rule. Section IV describes the decentralized market within the context of a power system and examines the operational model. Section V presents the results and analysis of the case studies while Section VI concludes this study. ## II System Model ### _Structure of community grid_ Prosumers participating in a P2P market are assumed to be non-strategic and rational. A prosumer can be a consumer when generation capacity is less than demand. Consumers purchase energy from producers or the main grid. On the other hand, prosumer can be a producer when generation exceeds demand. Producers sell energy to consumers or the main grid. 
To handle all these activities, each prosumer is assumed to have a demand response (DR)-enabled smart meter that can record the prosumer's generation data and demand, and thus manage the amount of sold or purchased energy in the P2P market. The market is an hour-ahead market where prosumers negotiate trades for the next time slot. Since we focus on a single time slot, we omit the time slot index. ### _Distribution network model_ Let us consider a low-voltage (LV) distribution grid given by the undirected connected graph \(\mathcal{G}=(\mathcal{N},\mathcal{L})\), as shown in Fig. 2. Here, \(\mathcal{N}\) denotes a set of nodes indexed by \(i=0,1,\ldots,|\mathcal{N}|\) and \(\mathcal{L}\) denotes a set of lines connecting those nodes indexed as \(\ell=1,\ldots,|\mathcal{L}|\). The slack bus (root node) has a zero index. Figure 1: Overview of electricity market systems. Each node \(i\) has a parent (ancestor), denoted as \(\mathcal{A}_{i}\). We consider a radial distribution network, and thus each node has only one parent. A set of children of node \(i\) is denoted by \(\mathcal{C}_{i}\), indexed by \(k=1,\ldots,|\mathcal{C}_{i}|\). Since we are considering a radial network, each line \(\ell\in\mathcal{L}\) can be uniquely indexed by its connected child. Hence, we use the same notation \(i\) to denote a node or a connected line towards the root node, unless \(\ell\) needs to be specified for congestion pricing later. Let \(I_{i}\) be the current flow from the parent \(\mathcal{A}_{i}\) to node \(i\). In addition, \(d_{i}^{P}\) and \(g_{i}^{P}\) represent the active power of the consumer and producer, while \(d_{i}^{Q}\) and \(g_{i}^{Q}\) represent the reactive powers of the consumer and producer, respectively. \(f_{i}^{P}\) and \(f_{i}^{Q}\) represent the active and reactive power flows, respectively, of line \(i\). Let \(V_{i}\) be the voltage at node \(i\). Then, the squared voltage at node \(i\) is represented by \(v_{i}=|V_{i}|^{2}\), and the squared current is represented by \(l_{i}=|I_{i}|^{2}\). \(v_{i}^{\min}\) and \(v_{i}^{\max}\) are the minimum and the maximum squared voltage magnitudes. Resistance \(r_{i}\) and reactance \(x_{i}\) are characterized for each line. \(S_{i}^{\max}\) is the maximum capacity of line \(i\). In an AC-OPF of the radial network, these quantities can be related using the LinDistFlow model [22]. The equations are as follows: for all \(i\in\mathcal{N}\), we have \[f_{i}^{P}+g_{i}^{P}-\sum_{k\in\mathcal{C}_{i}}(f_{k}^{P}+r_{k}l_{k})= d_{i}^{P}, \tag{1a}\] \[f_{i}^{Q}+g_{i}^{Q}-\sum_{k\in\mathcal{C}_{i}}(f_{k}^{Q}+x_{k}l_{k})= d_{i}^{Q}, \tag{1b}\] \[v_{i}+2(r_{i}f_{i}^{P}+x_{i}f_{i}^{Q})+l_{i}(r_{i}^{2}+x_{i}^{2})= v_{\mathcal{A}_{i}}, \tag{1c}\] \[(f_{i}^{P})^{2}+(f_{i}^{Q})^{2}\leq (S_{i}^{\max})^{2}, \tag{1d}\] \[(f_{i}^{P})^{2}+(f_{i}^{Q})^{2}= v_{i}l_{i}, \tag{1e}\] \[v_{i}^{\min}\leq v_{i}\leq v_{i}^{\max}. \tag{1f}\] Equation (1) defines the distribution network in the node variable set \(\{d_{i}^{P},g_{i}^{P},d_{i}^{Q},g_{i}^{Q},v_{i},l_{i}|i\in\mathcal{N}\}\). Equation (1e), being a non-convex constraint, can be relaxed to an inequality using a second-order cone [18], \[||2f_{i}^{P},2f_{i}^{Q},v_{i}-l_{i}||_{2}\leq v_{i}+l_{i}. \tag{1g}\] 
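To make the branch-flow relations in (1) concrete, the sketch below is an illustration only (the feeder data are invented for this example and do not come from the paper's case studies): it solves (1a)-(1c) and (1e) on a three-node radial feeder by a simple backward/forward sweep and then checks the line-capacity limit (1d) and the voltage limits (1f). The paper instead enforces these constraints inside a decentralized AC-OPF; this sketch only shows how the per-line quantities \(f_i^P\), \(f_i^Q\), \(l_i\), and \(v_i\) fit together.

```python
import numpy as np

# Minimal sketch (assumption): a 3-node radial feeder 0 - 1 - 2 (node 0 is the
# slack bus) solved by a backward/forward sweep so that the branch-flow
# relations (1a)-(1c) and (1e) hold, followed by a check of the line-capacity
# limit (1d) and the voltage limits (1f). All values are illustrative p.u. data.
r = {1: 0.01, 2: 0.02}           # resistance r_i of the line ending at node i
x = {1: 0.02, 2: 0.04}           # reactance x_i
dP = {1: 0.3, 2: 0.5}            # active loads d_i^P (no generation, g_i = 0)
dQ = {1: 0.1, 2: 0.2}            # reactive loads d_i^Q
S_max, v_min, v_max, v0 = 1.0, 0.9**2, 1.1**2, 1.0

v = {0: v0, 1: v0, 2: v0}        # squared voltage magnitudes v_i
l = {1: 0.0, 2: 0.0}             # squared branch currents l_i

for _ in range(50):
    # backward sweep: receiving-end flows from (1a)-(1b); node 2 has no children
    fP = {2: dP[2], 1: dP[1] + dP[2] + r[2] * l[2]}
    fQ = {2: dQ[2], 1: dQ[1] + dQ[2] + x[2] * l[2]}
    # forward sweep: voltages from (1c), then squared currents from (1e)
    for i, parent in [(1, 0), (2, 1)]:
        v[i] = v[parent] - 2 * (r[i] * fP[i] + x[i] * fQ[i]) - l[i] * (r[i]**2 + x[i]**2)
        l[i] = (fP[i]**2 + fQ[i]**2) / v[i]

for i in (1, 2):
    assert fP[i]**2 + fQ[i]**2 <= S_max**2, f"line {i} violates (1d)"
    assert v_min <= v[i] <= v_max, f"node {i} violates (1f)"
print({i: round(float(np.sqrt(vi)), 4) for i, vi in v.items()})   # |V_i| in p.u.
```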
#### II-B1 Cost function Producers engage in the market to maximize their benefits, where they try to sell their energy at a beneficial price to consumers or the main grid. Let \(C_{i}(g_{i})\) represent the cost when a prosumer \(i\) generates an amount of energy \(g_{i}\) [23].1 The formula can be expressed as Footnote 1: We use \(g_{i}\) instead of \(g_{i}^{P}\) for notational simplicity hereafter. \[C_{i}(g_{i})=b_{i}g_{i}+a_{i}g_{i}^{2}, \tag{2a}\] \[g_{i}^{\min}\leq g_{i}\leq g_{i}^{\max},\ \ \forall i\in\mathcal{N}, \tag{2b}\] where \(a_{i}\geq 0\) refers to the dynamic cost of energy generation in \(\$/MWh^{2}\), \(b_{i}>0\) refers to the producer's minimum selling price in \(\$/MWh\), and \(g_{i}^{\min}\) and \(g_{i}^{\max}\) represent the minimum and the maximum amounts of energy generation in \(MWh\). #### II-B2 Utility function Consumers are willing to pay money to purchase energy from producers or the main grid. The responses of different consumers to various scenarios can be modeled using the concept of a utility function. As done in [24], each consumer \(i\) has a level of satisfaction when it consumes an amount of energy \(d_{i}\).2 Footnote 2: We use \(d_{i}\) instead of \(d_{i}^{P}\) for notational simplicity hereafter. \[U_{i}(d_{i})=\begin{cases}\beta_{i}d_{i}-\alpha_{i}{d_{i}}^{2}&\text{if }0 \leq d_{i}\leq\frac{\beta_{i}}{2\alpha_{i}}\\ \frac{\beta_{i}^{2}}{4\alpha_{i}}&\text{if }d_{i}>\frac{\beta_{i}}{2\alpha_{i}} \end{cases}, \tag{3a}\] \[d_{i}^{\min}\leq d_{i}\leq d_{i}^{\max},\forall i\in\mathcal{N}, \tag{3b}\] where \(\alpha_{i}>0\) is consumer \(i\)'s satisfaction with energy consumption in \(\$/MWh^{2}\), and \(\beta_{i}>0\) is the consumer's maximum buying price in \(\$/MWh\). Consumer \(i\) can buy energy \(d_{i}\) from producers and the main grid, where \(d_{i}^{\min}\) and \(d_{i}^{\max}\) represent the minimum and the maximum required electricity of consumer \(i\) in \(MWh\), respectively. More specifically, \(d_{i}^{\min}\) represents a realistic load demand for fixed loads, whereas \(d_{i}^{\max}\) indicates the load demand for fixed and flexible loads. #### II-B3 Main grid cost function The main grid can supply or absorb power at any given time due to the mismatch between generation and demand in P2P markets. The cost of energy trading with the main grid can be modeled as a quadratic function [14, 15, 16, 17]. However, unlike the previous works, the cost function of the proposed method is calculated with coefficient parameters \(a_{0}\geq 0,b_{0}>0\) that are predefined by the main grid at each time slot. The cost for the main grid to supply an amount of power \(p_{0}\) is given by \[C_{0}(p_{0})=b_{0}p_{0}+a_{0}p_{0}^{2}, \tag{4}\] where \(a_{0}\) denotes the dynamic cost in \(\$/MWh^{2}\), and \(b_{0}\) represents the minimum price in \(\$/MWh\). Since the node \(0\) is a slack bus, we assume that \(p_{0}\) does not have maximum or minimum constraints. ## III Problem Formulation ### _Grid Dynamic Pricing_ According to (2a), (3a), and (4), the objective of the electricity market in the virtual layer can be expressed as **P1: Utility Maximization** \[\max\sum_{i\in\mathcal{N}}[U_{i}(d_{i})-C_{i}(g_{i})]-C_{0}(p_{0}), \tag{5}\] and **P1** can be solved by decentralized optimization, as in previous works [21, 25, 26]. Fig. 2: A distribution network for P2P electricity market and power flow between participants. In this study, however, we reformulate (5) by converting the quadratic cost function \(C_{0}(p_{0})\) of the main grid into a linear function, which has several benefits compared to other studies. First, in practice, the Feed-in Tariff (the selling price to the grid) is fixed at all times of the day [27] or fixed at each time slot.
In the context of this study, this corresponds to \(b_{0}>0\) and \(a_{0}=0\). Hence, the main grid can publicize the selling price \(b_{0}\) before the P2P electricity market starts, and producers can solve these problems without communicating with the main grid. Accordingly, the previous studies lack efficiency, primarily because producers are required to communicate with the grid at each iteration [21, 25, 26]. Second, the pricing scheme implemented by the main grid can be categorized into two approaches, enabling the establishment of distinct incentive prices among prosumers. This contributes to a greater level of fairness within the community grid. To do so, we first combine (2a) and (3a) into a welfare function. Let \(p_{i0}\) denote the energy transfer from prosumer \(i\) to the main grid (indexed by \(0\) as a slack bus). Similarly, let \(p_{ij}\) denote the energy transfer from prosumer \(i\) to prosumer \(j\). The energy exchanged during the P2P process of prosumer \(i\) is expressed as \[g_{i}-d_{i}=p_{i0}+\sum_{j\in\omega_{i}}p_{ij}, \tag{6}\] where \(\omega_{i}\) denotes a set of prosumers to whom prosumer \(i\) sells. Then, the welfare function of a prosumer \(i\in\mathcal{N}\) is defined as \[\begin{split} W_{i}(d_{i},g_{i},p_{i0})&=U_{i}(d_{i})-C_{i}(g_{i})\\ &\quad-\overline{\lambda}\max(p_{0i},0)+\underline{\lambda}\max(p_{i0},0),\end{split} \tag{7}\] where \(\overline{\lambda}\) is the buying price from the main grid, and \(p_{0i}\) is the amount of energy from the main grid to prosumer \(i\). Note that \(p_{0i}=-p_{i0}\) and \(p_{ij}=-p_{ji}\). Similarly, \(\underline{\lambda}\) is the selling price to the main grid. To develop dynamic grid pricing, we factor the cost function of the main grid in (4) as \[C_{0}(p_{0})=p_{0}(b_{0}+a_{0}p_{0}). \tag{8}\] Then, we see that the term (\(b_{0}+a_{0}p_{0}\)) can serve as a price, which dynamically depends on \(p_{0}\). Since \(p_{0}=\sum_{i\in\mathcal{N}}p_{0i}\), we set the dynamic grid price \(\overline{\lambda}\) as \[\overline{\lambda}=b_{0}+a_{0}\sum_{i\in\mathcal{N}}p_{0i}, \tag{9}\] which implies that the buying price from the grid varies depending on the total community load \(p_{0}\). Note that, however, all prosumers have the _same_ buying price in this case, and we call it the **unique pricing scheme (UPS)**. The application of the UPS proposed in this paper and other relevant studies [14, 23] may lead to potential market conflicts concerning the distribution of benefits between electricity consumers. Specifically, conflicts may arise between households characterized by low electricity consumption and commercial/industrial consumers with high electricity consumption. This results in a situation where low-consumption consumers are compelled to pay higher prices due to the influence of high-consumption consumers, which can seriously affect the long-term development of the community grid exchange market. Hence, we consider another pricing scheme called the **differential pricing scheme (DPS)**, where each prosumer \(i\) has its own buying price \(\overline{\lambda}_{i}\) depending on its own load \(p_{0i}\), defined as \[\overline{\lambda}_{i}=b_{0}+a_{0}p_{0i}. \tag{10}\] For realistic simulations, the selling price to the main grid \(\underline{\lambda}\) is fixed during a day [27] or fixed at each time slot. ### _Social Welfare Maximization of Community Grid_ The objective of the community grid in **P1** can be reformulated as **P2** using (1) and (7). Note that **P2** aims to maximize the social welfare of the community grid. 
**P2: Social Welfare Maximization** \[\max\sum_{i\in\mathcal{N}}W_{i}(d_{i},g_{i},p_{i0}) \tag{11}\] \[\text{s.t }(2b),(3b),(6):\text{virtual layer},\] \[(1a),(1b),(1c),(1d),(1f),(1g):\text{physical layer},\] variables: \[\{p_{ij},p_{i0},d_{i},g_{i},v_{i},l_{i}|\forall i\in\mathcal{N}\}.\] To solve the problem of **P2**, we consider the capacity of line in (1d) can be exceeded but then a penalty is imposed in proportion to the mount of violation. Consequently, prosumers seek to avoid the trading on the congested lines, and **P2** is reformulated as **P3** congestion relaxed given a congestion price \(\eta_{\ell,ij}\) as follow. **P3: Social Welfare Maximization with Congestion Relaxation** \[\max\sum_{i\in\mathcal{N}} \bigg{[}W_{i}(d_{i},g_{i},p_{i0})-\sum_{\ell\in\mathcal{L}}\sum_{j \in\omega_{i}}\eta_{\ell,ij}|p_{ij}|\bigg{]} \tag{12}\] \[\text{s.t }(2b),(3b),(6):\text{virtual layer},\] \[(1a),(1b),(1c),(1f),(1g):\text{physical layer},\] variables: \[\{p_{ij},p_{i0},d_{i},g_{i},v_{i},l_{i}|\forall i\in\mathcal{N},j\in\omega_{i}\}.\] In the next section, we will discuss how to determine and update the congestion pricing \(\eta_{\ell,ij}\) using the proposed two-stage model. ## IV Decentralized Two-stage Electricity Market In solving **P3**, we aim to solve the local problem of each prosumer using only P2P communications to ensure the data privacy of prosumers. Therefore, we propose decentralized two-stage electricity market for efficient P2P energy trading and AC-OPF based on ADMM. Once P2P matching reaches an optimal solution in a virtual layer, the second stage for AC-OPF is initiated to verify whether physical constraints are satisfied. The benefit of this approach is that a central authority is completely avoided, and the data are only shared with the corresponding neighbors. Recall that our two-stage approach consists of P2P matching in the virtual layer and P2P realization in physical layer as shown in Fig. 1. ### _Decentralized P2P Matching in Virtual Layer_ In energy dispatch, the market seeks to minimize the total cost or maximize the total profit of prosumers. To this end, we exploit the modified decentralized optimization based on ADMM in [28]. Accordingly, **P3** is decomposed into sub-problems in **P4** (virtual layer), which is iteratively solved by each prosumer \(i\) and AC-OPF problem in **P5** (physical layer), which updates the congestion pricing at each round of interaction. As shown in Fig. 3, **P3** is iteratively solved by **P4** and **P5**. Let \(n\) be the index of iterative rounds between the virtual layer (**P4**) and the physical layer (**P5**). At the \(n^{th}\) round, given \(\eta_{\ell,ij}=\eta_{\ell,ij}^{n}\), the sub-problem **P4** in virtual layer and its decentralized solution provided by prosumer \(i\) is as follows: **P4: P2P Mathing in Virtual Layer using modified ADMM** \[(\{p_{ij}\},p_{i0})^{t+1}=\operatorname*{arg\,min}_{\{p_{ij}\},p _{i0}}-W_{i}(d_{i},g_{i},p_{i0})+\frac{(p_{i0}-p_{i0}^{t})^{2}}{\rho}\] \[+\sum_{\ell\in\mathcal{L}}\sum_{j\in\omega_{i}}\eta_{\ell,ij}^{n} |p_{ij}|+\sum_{j\in\omega_{i}}\biggl{[}\frac{\rho}{2}\Bigl{(}\frac{p_{ij}^{t}- p_{ji}^{t}}{2}-p_{ij}+\frac{\lambda_{ij}^{t}}{\rho}\Bigr{)}^{2}\biggr{]}, \tag{13}\] \[\text{s.t }(2b),(3b),(6).\] Note that, given \(\eta_{\ell,ij}^{n}\), **P4**, which is identical to **P3** without physical constraints of (1a)-(1c), (1f), (1g), is iteratively solvable. 
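For intuition, a toy numerical sketch of one prosumer's local subproblem in (13) is given below, solved with a generic SLSQP solver. All data (partners, prices, congestion coefficients) are invented; the congestion term aggregates \(\sum_{\ell}\eta_{\ell,ij}\) into a single coefficient per partner, and \(d_{i}\) and \(g_{i}\) are treated as decision variables coupled through (6). This is an interpretation for illustration only, not the decentralized implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Invented data for one prosumer i with two trading partners j in omega_i.
rho = 1.0
lam_buy, lam_sell = 25.0, 20.0                    # grid buying / selling prices
a_i, b_i, alpha_i, beta_i = 1.0, 20.0, 8.0, 30.0  # cost / utility coefficients
eta = np.array([0.0, 0.3])        # sum over lines l of eta_{l,ij}, one value per partner
p_ij_t = np.array([0.10, 0.00])   # own offers from the previous iteration
p_ji_t = np.array([-0.10, 0.05])  # partners' offers from the previous iteration
lam_ij_t = np.array([0.0, 0.0])   # P2P trading prices lambda_{ij}^t
p_i0_t = 0.0
d_bounds, g_bounds = (0.1, 0.6), (0.0, 0.5)       # (3b) and (2b)

def welfare(d, g, p_i0):
    d_sat = beta_i / (2.0 * alpha_i)
    U = beta_i * d - alpha_i * d**2 if d <= d_sat else beta_i**2 / (4.0 * alpha_i)
    C = b_i * g + a_i * g**2
    return U - C - lam_buy * max(-p_i0, 0.0) + lam_sell * max(p_i0, 0.0)

def local_objective(x):
    """Objective of the per-prosumer subproblem (13): negative welfare plus the
    proximal term on p_i0, the congestion penalty, and the ADMM coupling term."""
    d, g, p_i0, p_ij = x[0], x[1], x[2], x[3:]
    admm = 0.5 * rho * np.sum(((p_ij_t - p_ji_t) / 2.0 - p_ij + lam_ij_t / rho) ** 2)
    return (-welfare(d, g, p_i0) + (p_i0 - p_i0_t) ** 2 / rho
            + np.sum(eta * np.abs(p_ij)) + admm)

# Energy balance (6): g_i - d_i = p_i0 + sum_j p_ij.
balance = {"type": "eq", "fun": lambda x: x[1] - x[0] - x[2] - np.sum(x[3:])}
x0 = np.array([0.3, 0.2, -0.1, 0.0, 0.0])          # [d_i, g_i, p_i0, p_i1, p_i2]
res = minimize(local_objective, x0, method="SLSQP",
               bounds=[d_bounds, g_bounds, (None, None), (None, None), (None, None)],
               constraints=[balance])
print(res.x)
```

In the actual algorithm, this local step is followed by the trading-price update in (14) and the residual checks in (15)-(16), described next.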
As done in [28], at \(t^{th}\) iteration in the virtual layer, each prosumer \(i\in\mathcal{N}\) finds the optimal quantities \((\{p_{ij}\},p_{i0})^{t+1}\) in (13) depending on the quantities provided by its partners \(p_{ji}^{t}\). Note that \(\lambda_{ij}^{t}\) is the price of P2P trading, which is updated by the ADMM method such as [28] \[\lambda_{ij}^{t+1}=\lambda_{ij}^{t}-\frac{\rho}{2}(p_{ij}^{t+1}+p_{ji}^{t+1}). \tag{14}\] The convergence conditions are achieved by iteratively repeating the algorithm. Conditions (15) is evaluated as follows based on primal and dual residual values. \[||r_{i}^{t+1}||_{2}\leq\epsilon_{\text{pri}}^{2},||s_{i}^{t+1}||_{2}\leq \epsilon_{\text{dual}}^{2}, \tag{15}\] where \(\epsilon_{\text{pri}}\) and \(\epsilon_{\text{dual}}\) represent the primal and dual feasibility tolerances of a model, and their local primal and dual residuals are given by [28] \[r_{i}^{t+1}=\sum_{j\in\omega_{i}}{(p_{ij}^{t+1}+p_{ji}^{t+1})}^{2},s_{i}^{t+1 }=\sum_{j\in\omega_{i}}{(p_{ij}^{t+1}-p_{ij}^{t})}^{2}. \tag{16}\] The P2P matching is summarized in **Algorithm 1**. Note that to improve the performance of ADMM, we adopt the step-size method of ADMM as described in [29] for both P2P matching and P2P realization. ### _Decentralized P2P Realization in Physical Layer_ In solving **P3**, we first solved **P4** in a virtual layer. But the congestion price \(\eta_{\ell,ij}^{n}\) should be determined, and thus we need to solve an OPF problem in physical layer. To avoid sharing network parameters, we consider a decentralized OPF, where the network is decomposed into multiple zones [30]. To enable distributed computation, squared voltage angles \(v_{i}\) are duplicated per zone, which is defined as \(\vartheta_{i}\in\mathbb{R}^{|\mathcal{N}|}\) and updated across iterations to satisfy the following consensus constraint: \[\vartheta_{i}=\overline{\vartheta}\colon\mu_{i},\forall i\in\mathcal{N}, \tag{17}\] where \(\overline{\vartheta}\in\mathbb{R}^{|\mathcal{N}|}\) represents the consensus variable and \(\mu_{i}\in\mathbb{R}^{|\mathcal{N}|}\) indicates the dual variable corresponding to the consensus constraint of each node \(i\). Note that each node \(i\) has its own zone as shown in Fig. 4. ``` 0: Given \(\eta_{\ell,ij}^{n},\forall\ell\in\mathcal{L}\) of Algorithm 3; 0:\(\{p_{ij}\},p_{i0},d_{i},g_{i}\); 0:\(t=0,p_{ij}^{0}=0,p_{i0}^{0}=0\), and \(\lambda_{ij}^{0}=0,\forall i\in\mathcal{N},j\in\omega_{i}\); 1:while (15) not satisfieddo 2:\(t\gets t+1\); 3: Each prosumer \(i\) solves (13) and sends \(p_{ij}\) to \(j\in\omega_{i}\) and sends \(p_{i0}\) to power grid; 4: Grid receives \(p_{i0}\) by each prosumer, it updates \(\overline{\lambda}\) using UPS in (9) or DPS in (10) then sends \(\overline{\lambda}\) back to prosumers; 5: Each prosumer \(i\) updates \(\lambda_{ij}^{t}\) according to (14) and the local residuals in (16); 6: Each prosumer \(i\) broadcasts its local residuals and receives the local residuals of it partners; 7:#Note that the residual in (16) is used to update ADMM step-size [29]; 8:endwhile ``` **Algorithm 1** Decentralized P2P matching in virtual layer The decomposition in (17) separates the feasibility regions for **P3** in physical layer per zone; therefore, we denote the constraint set of each zone by \(\mathcal{F}_{i}\). Then, it is necessary to optimize the following partial Lagrangian function to solve OPF per zone. 
In doing this, we define \(\mu=\{\mu_{i}|i\in\mathcal{N}\}\), \(\hat{g}=\{\hat{g}_{i}|i\in\mathcal{N}\}\) where \(\hat{g_{i}}=g_{i}+g_{i,\text{loss}}\), \(\ell=\{\ell_{i}|i\in\mathcal{N}\}\), \(\vartheta=\{\vartheta_{i}|i\in\mathcal{N}\}\), which are collection of variables of all nodes. Note that we introduce a new variable \(g_{i,\text{loss}}\), which denotes _additional_ power generation to compensate the power losses. Since \(g_{i}\) in **P2**, **P3**, **P4** is the power generation in virtual layer, power losses cannot be captured. Hence, in physical layer optimization, we consider the cost incurred for power losses compensation given by \(C_{i,\text{loss}}(g_{i,\text{loss}}^{P})\triangleq C_{i}(\hat{g_{i}}^{P})-C_{i} (g_{i}^{P})\). Figure 3: Structure of problems in two-stage electricity market. Since this term cannot be considered for P2P transaction, we assure that cost incurred from power losses compensation can be covered from a monthly fee as done in [1]. **P5: AC-OPF in Physical Layer using ADMM** \[\max_{\mu}\min_{\hat{g},l,\vartheta,\overline{\vartheta}}\mathcal{L}(\hat{g},l, \vartheta,\overline{\vartheta},\mu)=\sum_{i\in\mathcal{N}}\biggl{[}C_{i,\text {loss}}(g_{i,\text{loss}})+\mu_{i}^{\top}\left(\overline{\vartheta}-\vartheta_ {i}\right)\biggr{]}, \tag{18}\] \[\text{s.t.}\quad(1a),(1b),(1c),(1g),(1f)\in\cap_{i\in\mathcal{N}}\mathcal{F}_{ i}.\] Distributed OPF computation is given by the following ADMM algorithm [30]. \[\vartheta_{i}^{t+1}=\operatorname*{arg\,min}_{\hat{g}_{i},l_{i},\vartheta_{i} }\mathcal{L}_{i}(\hat{g}_{i},l_{i},\vartheta_{i},\overline{\vartheta}^{t},\mu_ {i}^{t})+\frac{\rho}{2}||\overline{\vartheta}^{t}-\vartheta_{i}||_{2}^{2}, \tag{19}\] \[\overline{\vartheta}^{t+1}=\operatorname*{arg\,min}_{\overline{\vartheta}} \mathcal{L}(\hat{g},l,\vartheta^{t+1},\overline{\vartheta},\mu^{t})+\frac{\rho }{2}\sum_{i\in\mathcal{N}}||\overline{\vartheta}-\vartheta_{i}^{t+1}||_{2}^{2}, \tag{20}\] \[\mu_{i}^{t+1}=\mu_{i}^{t}+\rho(\overline{\vartheta}^{t+1}-\vartheta_{i}^{t+1}). \tag{21}\] A round index \(t\) along with a penalty factor \(\rho\) is used to represent the ADMM regularization term. The process is repeated until the termination condition is reached. Condition (22) is used to evaluate the gap in the values between two iterations of squared voltage angles, and defined as follows: \[\sum_{i\in\mathcal{N}}||\overline{\vartheta}^{t+1}-\vartheta_{i}^{t+1}||_{2} \leq\epsilon. \tag{22}\] The primal and dual residual are defined as \[r_{i}^{t+1}=\sum_{i\in\mathcal{N}}||\overline{\vartheta}^{t+1}-\vartheta_{i} ^{t+1}||_{2},s_{i}^{t+1}=\sum_{i\in\mathcal{N}}||\vartheta_{i}^{t+1}-\vartheta _{i}^{t}||_{2}. \tag{23}\] The process of solving **P5** is summarized in **Algorithm 2**. 
``` 0: Given \(p_{ij},p_{0i},d_{i},g_{i}\) of Algorithm 1; 0:\(g_{i,\text{loss}},l_{i},\vartheta_{i},\overline{\vartheta},f_{i}^{P},f_{i}^{Q}\); 0:\(t=0,g_{i,\text{loss}}=0,l_{i}=0,\vartheta_{i}^{0}=0,\overline{\vartheta}^{0 }=0\), and \(\mu_{i}^{0}=0,\forall i\in\mathcal{N}\); 1:while (22) not satisfieddo 2:\(t\gets t+1\); 3: Each node \(i\) solves the local problem in (19) and sends the result to other nodes in zone; 4: Each node \(i\) updates consensus following (20) and dual-coordination signals following (21); 5: Each node \(i\) broadcasts the updated consensus and its residual in (23) to another node; #Note that the residual in (23) is used to update ADMM step-size [29]; 6:endwhile ``` **Algorithm 2** Decentralized P2P realization in physical layer ### _Congestion Price_ Finally, we present the congestion price update, which serves as the _corner_store of our two-stage algorithm. When congestion occurs on line \(\ell\in\mathcal{L}\), P2P participants must adjust their P2P matching until they are adapted to fit the power system. The proposed congestion pricing encourages each node to reduce the amount of P2P exchanged energy in the congested line \(\ell\). Congestion price is imposed on the overloading of energy when prosumers inject more energy beyond the line capacity. At the \(n^{th}\) round, let \(\kappa_{\ell}^{n}\) denote the overloading power calculated based on the deviation rate between the active power of P2P matching and the power flow of AC-OPF (active, reactive, and compensation power losses) in line \(\ell\) as shown in (24). \[\kappa_{\ell}^{n}=\left(\frac{S_{\ell}^{n}}{S_{\ell}^{\max}}-1\right)\sum_{(i, j)\in\mathcal{M}_{\ell}}p_{ij}^{n}. \tag{24}\] The first component presents the deviation rate between the power flow \(S_{\ell}^{n}\) and the maximum capacity \(S_{\ell}^{\max}\) of line \(\ell\) at the \(n^{th}\) round. The second component is the total amount of active power exchanged over line \(\ell\) at the \(n^{th}\) round where \(\mathcal{M}_{\ell}\) is a set of pairs \((i,j)\) that use line \(\ell\) to transfer energy. Note that here we focus on active power only but reactive power and losses in lines are affected by active power. Hence, a change in active power results in a change in reactive power and losses in lines as well. Thus, the congestion pricing \(\eta_{\ell,ij}^{n}\) is updated as follows \[\eta_{\ell,ij}^{n}=\eta_{\ell,ij}^{n-1}+\gamma\kappa_{\ell}^{n}, \tag{25}\] where \(\gamma\) denotes the penalty parameter set by the main grid. The proposed two-stage algorithm using congestion pricing is summarized in **Algorithm 3**. ``` 0: Prosumers request P2P market participation; 0: Market clearing solution; 0:\(\overline{\lambda}\), \(\underline{\lambda}\), \(j\in\omega_{i}\), \(n=1\), and \(\eta_{\ell,ij}^{1}=0\); 1:while true do 2: Execute Algorithm 1; 3: Execute Algorithm 2; 4: Each node examines congestion independently; 5:if congestion detected then 6: Updates congestion price based on (24) and (25); 7:else 8: Stop two-stage market; 9:endif 10: Nodes notify prosumers to update congestion price; 11:\(n\gets n+1\); 12:endwhile ``` **Algorithm 3** Decentralized two-stage electricity market ### _C Figure 4: Decentralized AC-OPF problem. 
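To make the congestion mechanism concrete, a small sketch of the overload computation (24) and the price update (25) is given below. The line flows and trade quantities are hypothetical, and in the full algorithm the update is applied only while congestion is detected (Algorithm 3).

```python
import numpy as np

def overload(S_l, S_max, p_ij_on_line):
    """Overloading power kappa_l^n in (24): the relative overload of line l times
    the total P2P active power exchanged over that line."""
    return (S_l / S_max - 1.0) * np.sum(p_ij_on_line)

def update_congestion_price(eta_prev, S_l, S_max, p_ij_on_line, gamma=0.5):
    """Congestion price update in (25), with penalty parameter gamma set by the grid."""
    return eta_prev + gamma * overload(S_l, S_max, p_ij_on_line)

# Hypothetical congested line: 0.6 MVA flow against a 0.5 MVA limit, carrying two trades.
eta = 0.0
for n in range(3):  # a few outer rounds with (artificially) unchanged flows
    eta = update_congestion_price(eta, S_l=0.6, S_max=0.5,
                                  p_ij_on_line=np.array([0.25, 0.15]))
    print(n + 1, eta)
```

As the price accumulates round by round, trading over the congested line becomes increasingly expensive in (12), which pushes prosumers toward uncongested lines.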
**Proposition 1**: _(convergence to an optimal solution) The proposed decentralized two-stage electricity market algorithm using the congestion pricing in (25) converges to an optimal solution of **P2**._ Proof: The proof is based on network utility maximization [31], but the detailed proof is omitted due to the space limitation. ## V Performance Evaluation ### _Simulation setup_ We numerically evaluate the feasibility of the designed market and the effectiveness of the proposed approach in clearing the market. For a detailed examination, we first consider a small network with a 15-bus low-voltage radial grid from [22] as illustrated in Fig. 5, with the network parameters obtained from [32]. The coefficients for consumer demand and producers of DERs are from [32]. All prosumers are listed in Table I and Table II, and the line parameters are presented in Table III. The coefficients for the main grid as a producer are \(a_{0}=1\$/MWh^{2}\) and \(b_{0}=25\$/MWh\). The congestion parameter is \(\gamma=0.5\). For ADMM, the penalty \(\rho\) for both P2P matching and P2P realization is varied from \(10^{-4}\) to \(10^{5}\). In addition, \(\epsilon_{\text{pri}}=\epsilon_{\text{dual}}=10^{-6}\) in P2P matching, while \(\epsilon=10^{-4}\) for P2P realization. Simulations are performed using Gurobi [33].

Fig. 5: Low-voltage 15-bus radial network for evaluation.

### _Decentralized OPF and congestion management_ We now present the simulation results and illustrate the voltage variation and line losses to deal with congestion. #### V-B1 Decentralized OPF The goal of AC-OPF is to minimize the cost of power losses owing to the transfer of energy in lines. Fig. 6 illustrates the voltages at all the nodes. The voltage magnitudes range from 0.96 pu to 1.03 pu at each node. The performance of the proposed decentralized method is the same as that of the centralized one, and the total losses are 0.0121 \(MWh\) in both cases. Consequently, the decentralized AC-OPF achieves the optimal solution while minimizing information exchange in a distribution network, reducing the risk of confidential information leaks.

Fig. 6: Voltage magnitude at each node in 15-bus.

#### V-B2 Congestion Pricing After solving AC-OPF, overloading of the network is determined, and congestion pricing is updated at each node. In this experiment, congestion occurs on Line 4. The change in the power flow of Line 4 during the operation of the P2P solution and AC-OPF is depicted in Fig. 7. The first round corresponds to the start of the algorithm, when congestion is first detected at the nodes. The last round indicates the point at which no congestion occurs. Our method converges in 60 rounds, and the electricity market reaches equilibrium after 94.8 seconds. As described in (25), the congestion price is determined from the power flow in the congested line and the previous congestion price. As can be seen in Fig. 7, the power flow falls below the line capacity only after 60 rounds of iterations. This causes prosumers to reduce the amounts in their matched pairs and encourages them to trade with other prosumers instead. One may wonder why the power flow curve appears to be the same, for example, from rounds 7 to 26 in Fig. 7. The reason is as follows. The benefits derived from trading on other lines are lower than those derived from trading on the congested line \(\ell\). Therefore, prosumers maintain a preference for trades on \(\ell\), even if it entails incurring additional costs. Thus, the power flow curve may remain the same even if a penalty is applied each round.
In addition, the increased congestion price changes the traded energy \(p_{ij}\), which in turn changes the social welfare, and the social welfare converges to the optimal value of **P2**, as shown in Fig. 8. The final result of congestion for all lines is shown in Fig. 9. Table IV indicates that, using the proposed technique, the average market clearing price of all consumers converges to 9.71$/\(MWh\) and that congestion increases the average price significantly. Specifically, at the first round, when congestion is not taken into account, the average price on Line 4 is 9.71$/\(MWh\). At the last round, however, the average clearing price of pairs on the congested Line 4 is increased to 12.66$/\(MWh\). As a result, the total clearing price of the P2P market changes to 10.2$/\(MWh\). Furthermore, according to the obtained results, congestion not only affects the average price but also affects the matching pairs that use the congested line. The results are presented in Table V, where the number of matched pairs has decreased from 12 to 6. Based on these results, the proposed scheme could potentially relieve grid congestion while improving P2P energy management between prosumers. The details of the clearing prices and quantities can be seen in Table VI and Table VII, respectively. To check the performance in a larger system, P2P energy trading simulations are performed with 282 participants on a modified 141-bus system [22], and the operation time is 1239 seconds for 51 rounds. This shows that the operation time of each round \(n\) increases with the number of participants. The results in Fig. 10 reveal that when prosumers have more options to choose from, they tend to trade on lines with no congestion. As a result, the 141-bus system requires fewer rounds of iterations compared to the 15-bus system.

Fig. 7: Power flow of Line 4 in all rounds in 15-bus.

Fig. 8: Social welfare in case 15-bus following rounds.

Fig. 9: Power flow of all lines in 15-bus.

Fig. 10: Social welfare in case 141-bus following rounds.

### _P2P matching analysis and efficiency comparison_ Next, we analyze the social welfare of prosumers and the main grid with and without the two proposed dynamic pricing schemes. Since this is a P2P matching analysis, we consider the virtual layer only here, and the following P2P matching is divided into two cases. **Case 1. Prosumers do not trade with the main grid**: When producers have sufficient energy to cover all consumers, no grid trading occurs; for example, this is the case during off-peak hours. As can be seen in Table VIII, the proposed method effectively achieves the same social welfare as the centralized method without requiring all user information, as it shares user information only among matching prosumers. **Case 2. Prosumers trade with the main grid**: When producers do not have sufficient surplus energy to satisfy the consumer demand, for example during on-peak hours, consumers have to purchase the remaining energy from the main grid. To model this, we modify the generation parameters at nodes 5 and 13 to 0.1\(MWh\) and 0.2\(MWh\), respectively, and the remaining nodes have zero \(g_{i}^{P}\) and \(g_{i}^{Q}\) in Table II. #### Vii-C1 Comparison between proposed community grid and traditional community As described in Section III-A, the dynamic price is updated by prosumer demand. This offers the main grid flexibility to deal with the increase/decrease in demand. Table IX compares the results of the proposed community grid with those of a traditional community [34].
Note that the buying price from the grid in [34] is assumed to be \(25\$/MWh\), equal to the minimum price \(b_{0}\) of this study. Compared with [34], the actual energy generation cost of the main grid is decreased by 56.9% and 25.2% in UPS and DPS, respectively. Furthermore, our proposed method reduces the total grid trading by 57.3% and 25.5% in UPS and DPS, respectively. Although prosumers can adjust their consumption to maximize their benefits according to the main grid's price, they still experience a small negative impact on their welfare. The reason is that our method's welfare curve increases at a lower rate due to lower energy consumption, compared to the higher rate observed when energy consumption is higher [34]. Thus, [34] has slightly better social welfare than our proposed method, as can be seen in Table IX. In summary, the use of a quadratic cost function can reduce the generation cost of the main grid by 7.21$ in UPS and 3.196$ in DPS, compared to [34]. However, this leads to losses for consumers, with a welfare drop of 0.112$ in UPS and 0.05$ in DPS. This indicates a trade-off between market participants, prioritizing the grid's gains over consumer welfare. Consequently, the grid operator must thoroughly evaluate and balance the benefits and losses experienced by consumers while determining the dynamic cost parameter \(a_{0}\) and the minimum price \(b_{0}\) in the dynamic price model. This evaluation becomes crucial for incentivizing consumers to transition from the traditional market to the proposed market. #### Vii-C2 Fairness of market participants in community grid In Section III-A, we discussed how UPS and other schemes [14, 23] are not fair to all prosumers. This subsection analyzes this issue with numerical results. Each prosumer in the market rationally decides how much energy to buy based on its satisfaction or preferences. Thus, Jain's fairness index (JFI) [35], given in (26), is used to determine the fairness of UPS and DPS; it quantifies the spread of welfare among prosumers, and a fairer scheme yields a higher JFI. \[JFI=\frac{\left(\sum_{i\in\mathcal{N}}W_{i}\right)^{2}}{\left|\mathcal{N}\right| \sum_{i\in\mathcal{N}}\left(W_{i}\right)^{2}}. \tag{26}\] When calculating the \(JFI\) from Table X, we discover that DPS is 7% fairer than UPS. This is because prosumer 11 cannot purchase any energy in UPS, resulting in zero utility and welfare. In DPS, this prosumer can purchase 0.0132\(MWh\) of energy, yielding 0.332$ in utility and 0.002$ in welfare. This comparison shows that prosumers in UPS are limited in their ability to purchase energy and experience losses due to the actions of other prosumers. In contrast, DPS does not have these limitations, which can encourage more prosumers to participate in the market. Fig. 11 shows each prosumer's buying price from the main grid for both UPS and DPS. ## VI Conclusion In this paper, we introduce a decentralized market framework for the P2P electricity market in both the virtual and physical layers, leveraging the concept of distributed utility maximization to demonstrate the optimality of the obtained solution. The virtual layer introduces a decentralized community grid, directly connecting prosumers to the main grid, thus enabling strategic price adjustments that minimize generation costs. By integrating congestion pricing, we harmonize virtual and physical requirements, resulting in improved efficiency for congestion management and P2P energy exchange.
We assess our approach on a modified IEEE 15-bus system and 141-bus system and compare it to previous methods. Our results show the enhanced efficiency and effective congestion resolution, particularly in the 141-bus system. In future work, we plan to minimize uncertainty risks in hour-ahead to day-ahead electricity markets.
2307.14381
EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence
Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power. Moreover, many deep edge intelligence applications require handling distributed data that cannot be transferred to a central server due to privacy concerns. Decentralized learning methods, such as federated learning, offer solutions where models are learned collectively by exchanging learned weights. However, they often require complex models that edge devices may not handle and multiple rounds of network communication to achieve state-of-the-art performances. This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that facilitates training heterogeneous weak models on edge and learning to ensemble them where data on edge are heterogeneously distributed. Edge models are implemented and trained independently on Field-Programmable Gate Array (FPGA) devices with various computational capacities. Learned data representations are transferred to a central server where the ensemble model is trained with the learned features received from the edge devices to boost the overall prediction performance. Extensive experiments demonstrate that the EdgeConvEns can outperform the state-of-the-art performance with fewer communications and less data in various training scenarios.
Ilkay Sikdokur, İnci M. Baytaş, Arda Yurdakul
2023-07-25T20:07:32Z
http://arxiv.org/abs/2307.14381v1
# EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence ###### Abstract Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power. Moreover, many deep edge intelligence applications require handling distributed data that cannot be transferred to a central server due to privacy concerns. Decentralized learning methods, such as federated learning, offer solutions where models are learned collectively by exchanging learned weights. However, they often require complex models that edge devices may not handle and multiple rounds of network communication to achieve state-of-the-art performances. This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that facilitates training heterogeneous weak models on edge and learning to ensemble them where data on edge are heterogeneously distributed. Edge models are implemented and trained independently on Field-Programmable Gate Array (FPGA) devices with various computational capacities. Learned data representations are transferred to a central server where the ensemble model is trained with the learned features received from the edge devices to boost the overall prediction performance. Extensive experiments demonstrate that the EdgeConvEns can outperform the state-of-the-art performance with fewer communications and less data in various training scenarios. ## I Introduction Edge computing is vital for next-generation solutions in various industries such as automotive [1], energy [2], and agriculture [3]. Edge intelligence offers innovative frameworks for Internet of Things (IoT) applications. Sensors are used in data collection for safe driving [1], anomaly detection [2], real-time and predictive analytics [3]. The data-driven problems usually require computation-intensive solutions, e.g., Deep Neural Network (DNN) training. Deep edge intelligence, where DNNs are employed on edge, has become imperative to address complex edge intelligence applications such as image classification [4], speech recognition [5], and natural language processing [6]. However, centralized training with millions of parameters is essential to attain state-of-the-art performances. In contrast, edge intelligence face problems with distributed data and limited computational resources. Consequently, two main challenges emerge in deep edge intelligence applications; data cannot be centralized due to privacy concerns and limited bandwidth, and edge devices may not supply sufficient memory and computational requirements to train complex DNNs [7]. One prominent approach to the distributed and heterogeneous data challenge is federated learning, a popular approach in IoT solutions [8]. However, centralized and decentralized federated learning requires multiple communication rounds to update model parameters increasing the data transfer rate. Furthermore, federated learning frameworks, particularly with centralized orchestrator setup, require the clients to have similar architectures with a sufficient number of parameters to solve complex tasks. Therefore, the federated learning paradigm does not directly address the lack of computational resources at the edge of networks. Tiny Machine Learning (TinyML) is one of the recent techniques to address the low-resource device challenge. 
A typical TinyML framework comprises training a machine learning model on a high-performance computer, compressing the learned model to reduce memory requirements, and deploying it on an embedded system for inference [7]. Although TinyML solutions can mitigate the memory requirements, the data privacy and on-device model training challenges remain. This study aims to improve overall prediction performance under the distributed data setup by learning to fuse the knowledge acquired from on-device training on edge devices with low computational resources. The proposed solution, EdgeConvEns in Figure 1, combines weak learning at the edge devices with convolutional ensemble learning at the server. Unlike traditional federated learning, edge devices can train different models in the EdgeConvEns framework due to device limitations. Only the learned feature representations are transferred without the requirement for global updates. The data need not be equally available at the edge devices. Some classes can be observed by one device more frequently than others. For example, a camera viewing the right lane of a highway observes heavy vehicles more frequently than a camera on the left lane. Unlike TinyML solutions, the proposed framework trains edge models on edge devices with their local data instead of creating a compressed global model and sending it back to the edge.

Fig. 1: Deep edge intelligence setting covered in this study. Cameras represent edge devices: processing capacity is proportional to the size. Five different colored shapes represent different classes: the bigger the size is, the more frequently it is observed by each edge device.

The contributions of this study are outlined as follows: * Limited training of configurable weak models on heterogeneous edge devices is achieved on Field-Programmable Gate Array (FPGA) fabric. Autonomously generating a partially parallelized task-level streaming architecture constrained by the available resources on various edge devices enables acceleration of neural network operations. * Variational Auto-Encoder (VAE) models are trained on the server for each edge model to generate features statistically similar to the ones learned on edge, replacing the features that are missing due to disconnection problems in edge intelligence. * A convolutional ensemble learning scheme that fuses the underlying information in the representations learned on edge is proposed to boost overall prediction performance. EdgeConvEns can attain satisfactory performances compared to the state-of-the-art with a much smaller number of parameters. Communication and memory utilization of the proposed framework are investigated under various data distribution and training scenarios with benchmark datasets in image classification and regression. This work is organized as follows. Selected previous works in edge intelligence, FPGA acceleration in neural networks, federated learning, and ensemble learning are analyzed in the next section. Section III presents details of the proposed design and its components. Section IV contains the implementation details, experimental results, and a comparative analysis with earlier works. The last section concludes this work with a review of the achievements of the study. ## II Related Work The edge intelligence literature showcases various ways of deploying model training and inference. In some applications, training and inference are on a cloud server, whereas other applications divide training and inference between the cloud server and the edge, respectively.
Zhou _et al._ proposes a data offloading scale of seven levels based on the design choice of edge intelligence applications given in Figure 2[9]. Volume and path length of data unloading decrease as the level increases. The proposed EdgeConvEns can be placed between Levels 6 and 7 since the models on edge are independently trained, and their inference is also independently done on individual edge devices. However, the features of the inference data obtained from the edge devices are required on the server device for making the final prediction. Thus, the proposed approach results in less transmission latency of data offloading, higher data privacy, and lower communication bandwidth. EdgeConvEns can be categorized as Device-to-Edge (D2E), except that edge models do not receive any information from the server for training and inference. When the edge model training is completed on the edge device, data offload between the device and server can be less compared to other D2E approaches [10]. As edge intelligence has grown more prevalent for applications like IoT, decentralized learning techniques on edge have become imperative. One of these techniques is Federated Learning (FL), a mainstream decentralized learning method commonly used in IoT. The FL methods encounter several challenges, such as the high number of communication rounds and weight transfer due to the iterative updates of local and global models [8]. In addition, all clients and the central server have the same model architecture, which dictates a minimum resource requirement for training the models on each edge device [8, 11, 12, 13, 14, 15, 16, 17]. On the other hand, the EdgeConvEns is based on ensemble learning discarding the exchange of model parameters that causes a communication bottleneck. In the proposed approach, there is only a one-way transfer from edge devices to the server in which the overall prediction is boosted. In recent FL studies [18, 19, 20], training is also done with either a one-way model parameter or embedding vector transfer, but EdgeConvEns yields a better prediction accuracy with a lower number of parameters according to our comparative experimental analysis. Ensemble methods are broadly used in deep learning. Some recent studies propose IoT frameworks based on ensemble models [21], where traditional sampling strategies, such as bagging and boosting, are used. Knowledge distillation is also utilized in ensemble frameworks for edge intelligence. Such methods employ an ensemble of logit outputs and gradients of larger models to train smaller models [22, 23]. EdgeConvEns proposes convolution-based ensemble learning that fuses feature vectors transferred from edge for an improved classification performance compared to the traditional and knowledge distillation-based methods. Field-Programmable Gate Arrays (FPGA) have recently been employed in edge intelligence applications to accelerate computationally expensive operations [24]. The training time of Convolutional Neural Networks (CNN) can be reduced by accelerating matrix multiplication. These studies focus on the parallelization of the calculations by certain levels only, such Fig. 2: Edge intelligence levels by data offloading [9]. as fully connected layer calculations [25], convolution layer calculations [26], and parallelization on minibatch level [27]. 
The proposed EdgeConvEns partially parallelizes the computations at each layer by chosen array dimensions, including minibatch level, channel, and filter level for convolutional operations, and chosen partition level for fully connected layer operations. It also allows for the adjustment of resource utilization by changing the factor of parallelization. Some works also accelerate the training phase by binarizing [28], quantizing and using smaller data types [29] for the weights. Even though the same optimizations can also be preferred in EdgeConvEns, the acceleration of the training is achieved without using any quantization method or smaller data types in this work. ## III Proposed System EdgeConvEns, illustrated in Figure 3, consists of a server and edge devices with varying hardware capacities that presumably have medium computational power. The edge models in this study are shallow neural networks with poor evaluation performances trained for image recognition and regression. The server ensemble model aims to boost the overall prediction performance by learning to fuse the edge model embeddings. The server has no computational hardware restrictions for training a complex ensemble model. The learned ensemble model makes the final predictions without broadcasting any learned model back to the edge devices. First, we independently train the edge models with their training datasets, which are subsets acquired by random sampling with replacement from a more extensive training set of independent and identically distributed (i.i.d.) samples. Thus, edge datasets might have overlapping and distinct samples. After the training of edge models, learned embeddings of each edge's training set are transferred to the server. On the server, we first train a VAE for each edge model with respective embeddings to capture their distributions. The trained VAEs are utilized for imputing the missing embeddings by generating one from the corresponding edge's VAE. Finally, the ensemble model is trained with the training embeddings, either the original embeddings from the edge or generated from the respective edge's VAE. ### _Edge Models_ In the proposed framework, the edge models do not need to share the same model architecture and are trained independently. However, the dimensionality of the learned embeddings that are transferred to the server, denoted by \(L_{com}\) in Figure 3, should be the same since the ensemble model on the server applies convolution on the matrix of edge embeddings. The edge model denoted by \(EM_{i}\) is trained with a subset of the training data denoted by \(D_{train}^{EM_{i}}\), where \(D_{train}\) is the main training data. After training, embeddings of \(D_{train}^{EM_{i}}\) denoted by \(F_{train}^{EM_{i}}\in\mathbb{R}^{L_{com}}\) is obtained. The embeddings, \(F_{train}^{EM_{i}}\), are then transferred to the server as illustrated with red-dashed lines in Figure 3. The ensemble model expects to receive an embedding from each edge model to learn how to fuse them to improve the prediction performance. However, there may be several problems in acquiring the embedding from an edge model in real-world applications, such as connection problems between edge and server and data availability on edge. Hence, every edge model may not produce an embedding vector to transfer to the server for the same input data. Fig. 4: Two-dimensional t-distributed stochastic neighbour embedding (t-SNE) plots of the average feature vectors extracted from 20 edge models. 
The feature vectors are obtained from the training set of the CIFAR-10 dataset for two different classes such as (a) automobile and (b) bird. Each symbol represents a different model. Fig. 3: Illustration of the system for the image classification task. Red-dashed lines denote the successful transfer of edge features. Missing features are filled by VAE outputs (blue-dashed lines). Green-dashed lines represent the complete feature set to train the CNN-based ensemble model. The system is constructed the same for regression problems. ### _Server Models_ #### Iii-B1 Variational Auto-Encoder (VAE) Models This study proposes to use a generative approach to fill the necessary feature vectors not received from an edge model \(EM_{i}\) to obtain the final prediction of the ensemble model. In Figure 4, the dimensions of the feature vectors obtained from the layers of 20 edge models for CIFAR-10 are reduced to two by using the t-distributed Stochastic Neighbour Embedding (t-SNE) method [30] and their average positions by edge models are plotted. It can be seen that the edge outputs to be used in the ensemble model follow different distributions. EdgeConvEs leverages VAE models for imputing the missing embeddings. VAE models are trained and deployed on the server for each edge model to generate embeddings sampled from a similar distribution as the edge embeddings'. In Figure 3, the red-dashed line from our edge device to the VAE model shows the transfer of the edge embeddings \(F_{train}^{EM_{i}}\). Each VAE model, \(VAE_{i}\), is independently trained with respective edge embeddings, \(F_{train}^{EM_{i}}\), to capture the edge models' embedding distributions. For this purpose, embeddings received from the edge models are encoded into a distribution over latent space. When a random sample from the encoder's distribution is decoded to reduce the reconstruction error, the VAE's decoder starts exhibiting a generative nature. Thus, the trained VAE's decoder can be used to generate embeddings, \(VAE_{i}^{dec}(z)\), from a random variable in the latent space, \(z\in\mathbb{R}^{d}\). #### Iii-B2 Ensemble Model Since the weak models on edge devices cannot yield state-of-the-art performances, a convolutional ensemble approach shown by the green model in Figure 3 is proposed for learning a fusion of embeddings to improve the final predictive performance. Let \(K_{train}\) contain the indices of the samples for the full training dataset \(D_{train}\) and \(K_{train}^{EM_{i}}\) contain the indices for the edge model's (\(EM_{i}\)) training dataset, \(D_{train}^{EM_{i}}\). Since \(D_{train}^{EM_{i}}\subseteq D_{train}\), it can be deduced that \(K_{train}^{EM_{i}}\subseteq K_{train}\). The training dataset for the ensemble model, \(D_{train}^{Ens}\), is constructed as follows. \[D_{train}^{Ens}[k,i]=\left\{\begin{matrix}F_{train}^{EM_{i}}&if\ k\in K_{train }^{EM_{i}}\\ VAE_{i}^{dec}(z)&if\ k\notin K_{train}^{EM_{i}}\end{matrix}.\right. \tag{1}\] Thus, the ensemble model's training data, \(D_{train}^{Ens}\), of size \(n(D_{train})\times N\times L_{com}\) is constructed by stacking the embeddings. Figure 3 shows the ensemble training data construction with blue and green-dashed lines. Figure 5 illustrates the matrix formed by stacking the representations obtained from either edge models or VAE for a single sample. The red two-dimensional frame shows the convolutional kernel to extract a unified representation of each sample. 
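A minimal NumPy sketch of the ensemble-input construction in (1) is given below; the sizes, index sets, and the stand-in decoder are invented for illustration and do not correspond to the actual trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, N, L_com = 6, 4, 8       # toy sizes: 6 samples, 4 edge models, 8-dim features

def vae_decoder(edge_idx, size):
    """Stand-in for the trained decoder VAE_i^dec(z); here it simply returns
    Gaussian noise of the right shape instead of decoded latent samples."""
    return rng.standard_normal(size=(size, L_com))

# Indices K_train^{EM_i} seen by each edge model, and the transferred features.
K_train = [np.array([0, 1, 2, 5]), np.array([1, 3]), np.arange(6), np.array([0, 4, 5])]
F_train = [rng.standard_normal((len(k), L_com)) for k in K_train]

# Eq. (1): use the transferred features where available, VAE-generated ones otherwise.
D_ens = np.empty((n_samples, N, L_com))
for i, (k_idx, feats) in enumerate(zip(K_train, F_train)):
    D_ens[:, i, :] = vae_decoder(i, n_samples)   # default: imputed features
    D_ens[k_idx, i, :] = feats                   # overwrite with received features

print(D_ens.shape)  # (n_samples, N, L_com): the stack fed to the convolutional ensemble
```

Each sample thus becomes an \(N\times L_{com}\) matrix over which the ensemble kernel slides.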
The motivation behind the convolutional ensemble is to learn a fusion that complements each edge model's information by investigating the inter-edge interactions in the embeddings. Hence, the kernel size directly affects the prediction performance. In this study, the architecture of the ensemble model comprises a convolutional layer with a kernel of size \((\frac{N}{2},\frac{L_{com}}{2})\) and a hidden fully-connected layer. ## IV Experimental Results ### _Implementation_ #### Iv-A1 Partially Parallelized Streaming in FPGA-based Edge Devices Inference and training of DNNs heavily require matrix multiplications. Acceleration of matrix multiplication is performed by the proposed task-level parallelism, which allows the designer to choose the level of acceleration to ensure targeted hardware utilization. The proposed method transfers data using _data stream_ method [31] where the data flow from the source to the destination module via First-In-First-Out (FIFO) buffers. FIFO buffers allow the consumer module to use the data when the source module sends the data providing accelerated computations. The data interface is chosen as AXI4-Stream [32] for all modules. The task-level streaming method has constraints, such as reading and writing an array element only once, in addition to the in-order access to the buffers. In EdgeConvEns, all of the arrays used in training are manually tiled in preprocessing phase by the factors decided by the designer. Then, the arrays are partitioned into tiles during synthesis by the factorized part of the chosen dimension. The factors need to be decided to fit the edge model into the chosen edge device by the designer. Assume that the matrices \(M_{1}[X][Y]\) and \(M_{2}[Y][Z]\) are aimed to be multiplied into the result matrix \(M_{3}[X][Z]\). It can be seen that the \(Y\) is the common dimension of multiplication. Assume that \(Y=f*y\) where \(f\) and \(y\) are positive integers. Then, the matrix multiplication can be written as \[M_{3_{i,j}}=\sum_{k=1}^{f}\sum_{l=1}^{\frac{Y}{f}}M_{1}[i][k][l]*M_{2}[k][l][j]. \tag{2}\] In this form, the matrices \(M_{1}\) and \(M_{2}\) are manually tiled into the \(f\) factor on the \(Y\) dimension. After this modification, the follow-up arrays can be automatically partitioned by the new factor dimension \(f\). Then, the partitioned independent memory parts are streamed through the system in FIFO buffers. In Figure 6, the factorization operation used in EdgeConvEns is illustrated. In the first step, the tensors \(I\) and \(K\), are factorized by manually selected factors, namely \(BS_{f}\) and \(F_{f}\). Calculation operations require multiple uses of the same data which violates data streaming; therefore, required parts are duplicated in the second step. In the third step, the duplicated tensors are streamed for parallel computation. In the fourth step, the aimed calculations are done in parallel. Finally, the Fig. 5: Illustration of the convolutional ensemble operation for a single image. \(F^{EM_{i}}\): the feature vectors transferred from the \(i\)-th edge model. \(VAE_{i}^{dec}(z)\): the feature vectors generated by \(i\)-th VAE model for the same image. acquired partial results are aggregated into the result tensor in the last step. In this study, training of edge models is implemented on Xilinx FPGAs using high-level synthesis (HLS) [33]. Vitis HLS offers several directives which enable array partitioning, streaming, and parallel computation. 
In the algorithm, completely partitioning the manual tiles is executed with the "#pragma HLSARRAY_PARTITION complete" command where the "variable=" parameter specifies the array and the "dim=" parameter is used for the dimensions to partition. The arrays are also streamed inside the module, and pipelining is applied with the "#pragma HLS DATAFLOW" command. In addition, the task-level parallelism on computations is applied with the "#pragma HLS UNROLL" command. The same method applies to fully-connected layer computations with appropriate inputs and outputs. The mini-batch size and the input channel size, convolution filters, and length of the fully connected layers need to be manually factorized into tiles for CNN calculations. The other factorizable dimension of the array is automatically factorized based on the manual factorization of the weight of the previous layer. The proposed Algorithm 1 automatically sets the tiling factor of the layer on the corresponding dimension based on the manually tiled dimension of the previous layer. In the algorithm, dimensions of the \(i\)-th convolution kernel are denoted by \([F_{f}^{i},C_{f}^{i}][F_{p}^{i},C_{p}^{i}][KW^{i}][KH^{i}]\) for \(F^{i}=F_{f}^{i}\times F_{p}^{i}\) number of filters and for \(C^{i}=C_{f}^{i}\times C_{p}^{i}\) number of channels. Dimensions for \(i\)-th fully connected layer weights are denoted by \([L1_{f}^{i},L2_{f}^{i}][L1_{p}^{i},L2_{p}^{i}]\) as \(L1^{i}=L1_{f}^{i}\times L1_{p}^{i}\) and \(L2^{i}=L2_{f}^{i}\times L2_{p}^{i}\). The \(i\)-th convolutional layer output dimensions are \([BS_{f},C_{f}^{i}][BS_{p},C_{p}^{i}][W^{i}][H^{i}]\), where \(BS=BS_{f}\times BS_{p}\) is the minibatch size. The \(i\)-th fully connected layer outputs dimensions are \([BS_{f},L2_{f}^{i}][BS_{p},L2_{p}^{i}]\). ``` Batch of images: \(I[BS_{f},C_{f}][BS_{p},C_{p}][W][H]\), Edge model: \(EM\) Manual factorizations of layers: \(f\) for\(i<=n(l)\)do\(\triangleright\) Forward Pass if\(l^{i}=\) Convolution then if\(i=1\)then \(dim(w^{i})=[f^{i},C_{f}][\frac{F^{i}}{f^{i}},\frac{C_{f}^{i}}{C_{f}^{i}}][KW^{i}][ KH^{i}]\) elseif\(i>1\)then \(dim(w^{i})=[f^{i},f^{i-1}][\frac{F^{i}}{f^{i}},\frac{F^{i-1}}{f^{i-1}}][KW^{i}][ KH^{i}]\) \(dim(o^{i})=[BS_{f},f^{i}][BS_{p},\frac{F^{i}}{f^{i}}][W^{i}][H^{i}]\) elseif\(l^{i}=\) Fully Connected Layer then if\(i=1\)then \(dim(w^{i})=[C_{f},f^{i}][\frac{L1^{i}}{C_{f}},\frac{L2^{i}}{f^{i}}]\) elseif\(i>1\)then \(dim(w^{i})=[f^{i-1},f^{i}][\frac{L1^{i}}{f^{i-1}},\frac{L2^{i}}{f^{i}}]\) \(dim(o^{i})=[BS_{f},f^{i}][BS_{p},\frac{L2^{i}}{f^{i}}]\) for\(i<=n(l)\)do\(\triangleright\) Backward Pass if\(l^{i}=\) Fully Connected Layer then \(dim(\partial\mathcal{L}^{i})=[BS_{f},f^{i}][BS_{p},\frac{L2^{i}}{f^{i}}]\) \(dim(gr^{i})=[f^{i-1},f^{i}][\frac{L1^{i}}{f^{i-1}},\frac{L2^{i}}{f^{i}}]\) elseif\(l^{i}=\) Convolution then \(dim(\partial\mathcal{L}^{i})=[BS_{f},f^{i}][BS_{p},\frac{L2^{i}}{f^{i}}]\) \(dim(gr^{i})=[f^{i},f^{i-1}][\frac{F^{i}}{f^{i}},\frac{F^{i-1}}{f^{i-1}}][KW^{i }][KH^{i}]\) ``` **Algorithm 1** Automation process of the tiling. In Algorithm 1, \(l\) denotes the layers, \(w\) and \(o\) represent the weights and the outputs of the corresponding layers. \(\partial\mathcal{L}\) stands for the loss sent back from the next layer, and \(gr\) shows the corresponding gradients of the layers. Here, \(dim()\) outputs the dimensions of the corresponding array, and \(f\) is the manually selected tiling factorizations. 
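The correctness of the tiling in (2) can be checked with a short NumPy sketch: splitting the shared dimension \(Y\) into \(f\) independent tiles, computing the per-tile partial products (the quantities the streaming kernels can compute in parallel), and aggregating them reproduces the full matrix product. The sizes below are arbitrary toy values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z, f = 4, 6, 5, 3                 # toy sizes; Y = f * y with y = 2
y = Y // f
M1, M2 = rng.standard_normal((X, Y)), rng.standard_normal((Y, Z))

# Manual tiling of the shared dimension Y into f tiles of length y, as in (2).
M1_t = M1.reshape(X, f, y)              # M1[i][k][l]
M2_t = M2.reshape(f, y, Z)              # M2[k][l][j]

# Each tile k yields an independent partial product; these are the pieces that can
# be streamed and computed in parallel before the final aggregation step.
partial = np.einsum('ikl,klj->kij', M1_t, M2_t)
M3 = partial.sum(axis=0)

print(np.allclose(M3, M1 @ M2))         # True: tiling + aggregation is exact
```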
#### Iii-A2 Experimental Setup As mentioned in Section I, the data is assumed to be distributed as some edge devices observe some classes more frequently than others. Two metrics, namely \(\alpha\) and \(\delta\), are defined to represent this behavior. The \(\alpha\) parameter denotes the minimum percentage of the whole training data of classes used in the training edge models. For example, when \(\alpha=0.25\), then at least 25% of the training data of each class is randomly sampled for each edge device. The second metric \(\delta\) denotes the maximum discrepancy rate between the percentage of sampled training data and sampled test data for a class. For example, if 10% of the whole training data of class 1 is used for training an edge model and \(\delta=0.5\), then the percentage of test data of class 1 that the edge model accesses is between 5% and 15%. The experiments are conducted by taking random subsets of the whole train and test data. \(K_{train}^{EM_{i}}\) is chosen as \(\bigcup_{i=1}^{c}\{k_{i}\subset K_{train},\frac{|k_{i}|}{|K_{train}|}=X\sim \mathcal{U}_{[\alpha,1]}\) and \(k_{i}\) is chosen randomly\(\}\). In addition, \(K_{test}^{EM_{i}}\) is chosen as \(\bigcup_{i=1}^{c}\{k_{i}\subset K_{test},\frac{|k_{i}|}{|K_{test}|}=max\{min\{|K_{ train}|*(1+X),1\},0\}\) and \(X\sim\mathcal{U}_{[-\delta,\delta]}\). Different values of \(\alpha\) and \(\delta\) remarkably change the class data percentage for train and test data of edge devices, which affects the overall prediction accuracy with timing and memory requirements. The classification experiments are conducted on four different image classification benchmark datasets, CIFAR-10 [34], CIFAR-100 [34], MNIST [35], and Fashion MNIST [36]. The regression performance of the method is evaluated on Boston Housing [37], California Housing [38], and Pecan Street [39] Fig. 6: Illustration of duplication, operation, and aggregation of arrays in dataflow parallelization. datasets. Experiments on classification utilize train and test data randomly generated for each edge device based on \(\alpha\) and \(\delta\) values. The continuous values of the targets in the regression datasets are divided into ten groups in training data, where all groups have the same density of data to mimic the classification behavior. Test data are also grouped based on the training grouping. After creating classes for regression datasets, \(\alpha\) is set to 0.05 in the regression experiments. #### Iv-A3 Edge and Server Models In the experiments, each weak edge model is arbitrarily designed. For the image classification task, the networks are built with structures such as one convolutional layer with \(5\times 5\) kernel size and \(f_{e}\) filters, one \(2\times 2\) max pooling layer, one convolutional layer with \(5\times 5\) kernel size and \(f_{e}\) filters, one or two Fully Connected Layers (FCL) with 64-dimensional output, and an output FCL with c-dimensional output where \(f_{e}\in\{1,2,4\}\) and \(c\) is the number of classes. For the regression task, the networks are built with structures such as one or two Fully Connected Layers (FCL) with 64-dimensional output and an output FCL with 1-dimensional output. Also, the training data and training epochs are randomly sampled for each edge model. The minimum, mean and maximum statistics of the training epoch, model accuracy, and the total number of model parameters are given in Table I for each classification dataset. 
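The per-class subsampling controlled by \(\alpha\) and \(\delta\) in the experimental setup above can be sketched as follows. This is an illustrative re-implementation with toy class sizes; the clipping of the test fraction follows our reading of the stated formula and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, delta = 0.05, 0.5

def sample_edge_indices(train_idx_by_class, test_idx_by_class):
    """Per-class subsampling for one edge device: the train fraction of each class is
    drawn from U[alpha, 1], and the test fraction deviates from it by at most delta."""
    train_sel, test_sel = [], []
    for tr_idx, te_idx in zip(train_idx_by_class, test_idx_by_class):
        frac_train = rng.uniform(alpha, 1.0)
        frac_test = np.clip(frac_train * (1.0 + rng.uniform(-delta, delta)), 0.0, 1.0)
        train_sel.append(rng.choice(tr_idx, size=int(frac_train * len(tr_idx)), replace=False))
        test_sel.append(rng.choice(te_idx, size=int(frac_test * len(te_idx)), replace=False))
    return np.concatenate(train_sel), np.concatenate(test_sel)

# Toy setting: 3 classes with 100 training and 20 test samples each.
train_classes = [np.arange(c * 100, (c + 1) * 100) for c in range(3)]
test_classes = [np.arange(c * 20, (c + 1) * 20) for c in range(3)]
K_train_EM, K_test_EM = sample_edge_indices(train_classes, test_classes)
print(len(K_train_EM), len(K_test_EM))   # class-imbalanced subsets for this device
```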
It can be seen that the accuracy values of the edge models vary quite a lot because edge models differ in structure, and some edge models are trained for a few epochs. Due to their relatively small and shallow structures, the edge models do not produce a good accuracy. The edge models are trained independently from each other and the models that are on the server. The cross-entropy, given as \(-\sum_{i=1}^{c}y_{i}log(\hat{y}_{i})\), and mean squared error (MSE), given as \(\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}\), are used as the loss functions during edge model training for image classification and regression problems, respectively. In the loss functions, \(y_{i}\) and \(\hat{y}_{i}\) denote the ground truth label and the prediction for the \(i\)-th class out of \(c\) classes for cross-entropy loss, where they denote \(i\)-th prediction and observation in the \(n\)-sized sample for MSE loss, respectively. The loss function used in the training of VAE models is given as \(\left\|F_{train}^{EM_{i}}-VAE_{i}^{dec}(z)\right\|^{2}+KL[\mathcal{N}(\mu_{ enc},\sigma_{enc}),\mathcal{N}(0,1)]\), where \(z\) denotes a random sample vector from the latent space with the distribution \(\mathcal{N}(\mu_{enc},\sigma_{enc})\), \(\mu_{enc}\) and \(\sigma_{enc}\) denote mean and standard deviation terms of the encoding, \(KL\) denotes the Kullback-Leibler (KL) divergence and \(\mathcal{N}\) denotes normal distribution. The ensemble model is trained with the same loss functions as the edge models. It should be highlighted that the edge and ensemble models are implemented using float (32-bit) datatype. ADAM optimizer is used to train VAE and ensemble models with a learning rate of \(10^{-4}\). For edge models, Stochastic Gradient Descent (SGD) optimizer is used with \(10^{-4}\) learning rate. ### _Training Scenarios_ The training of the proposed system can be done in three different scenarios based on the feature vector transfer scheme from the edge to the server. These three scenarios bring advantages and disadvantages to the training process and stand on a trade-off between training time, memory requirement, and overall accuracy. The appropriate scenario can be chosen according to the device choice and specifications of the transfer medium. In all scenarios, \(Ep^{EM}\), \(Ep^{VAE}\), and \(Ep^{Ens}\) denote the training epochs of edge models, VAE models, and the ensemble model, respectively. #### Iv-B1 Abundant Memory on the Server Scenario 1 is considered when all feature vectors obtained from the inference of training data are transferred from the edge models to the server at once. Therefore, VAE training and ensemble learning can start once the feature vector transfer is completed. In this scenario, the memory requirements on the server side are the highest among the three scenarios since all the feature vectors must be stored on the server device. It also achieves the highest accuracy among all the scenarios discussed in this study. Firstly, edge models are trained independently using their training data for \(Ep^{EM}\) epochs. After their training, feature vectors, \(F_{train}^{EM_{i}}\), with \(L_{com}\) length are obtained by inference. After the inference, the feature vectors are transferred to the server in only one transfer. When the transfer is done, the VAE model of each edge model is trained for \(Ep^{VAE}\) epochs. Then, missing feature vectors are replaced via these trained VAE models for each edge model. 
Ultimately, the final ensemble model is trained using all feature vectors received and replaced. #### Iv-B2 Limited Memory on the Server In Scenario 2, only a mini-batch of ensemble training data consisting of feature vectors is sent to the server. This scenario remarkably reduces the memory requirement on the server side. The total ensemble training epoch \(Ep^{Ens}\) is divided by a factor \(Ep^{Ens}\). The stored mini-batch is repeatedly used in training for \(\frac{Ep^{Ens}_{t}}{Ep^{Ens}_{t}}\) epoch. Every mini-batch is transferred to the server for \(Ep^{Ens}_{d}\) times. This scenario requires less data load per transfer but more communication. Training the ensemble learner using the same mini-batch repeatedly causes bias in training hence, degrading the accuracy. However, this degradation can be alleviated to a certain level by decreasing the repeated use of a mini-batch in training. For example, increasing the number of \(Ep^{Ens}_{d}\) alleviates such bias in training at the cost of increasing the number of transfers. The training of VAE models is also deteriorated by low \(Ep^{Ens}_{d}\). It yields lower accuracy in the feature generation than in Scenario 1. A special case of Scenario 2, where \(Ep^{Ens}=Ep^{Ens}_{d}\), is named Scenario 3. In this case, a mini-batch is used in training for one epoch at one communication. On the server, extra storage memory is not required for keeping the mini-batch for consecutive use in training. Since each mini-batch is used in training for one epoch, the accuracy acquired is close to the first scenario. The disadvantage of this scenario is the increased number of one-way communication. ### _Quantitative Analysis_ For the experiments, \(L_{com}\) is set to 64 and edge models are trained for 30 epochs. The ensemble model is trained for 100 epochs. The transfer rate between the edge and server is assumed to be 450 Mbps. Memory and communication evaluations are done for the CIFAR-10 dataset. Moreover, 60%, 70%, and 87% of the full training set are used when \(\alpha\) values are set to 0.05, 0.3, and 0.7 for memory and communication evaluations. The VAE and the ensemble model architecture are the same for all datasets and training scenarios. VAE model consists of an FCL with 64 neurons in encoder and decoder parts and the latent space is taken as \(\mathbb{R}^{32}\). Total number of parameters used for the encoder and decoder parts of each \(VAE_{i}\) is 260K. VAE models are trained for 50 epochs. Ensemble model consists of 64 \(\frac{N}{2}\times\frac{L_{com}}{2}\) kernels and an FCL with 64 neurons. The ensemble model has 182K parameters for all datasets and training scenarios. The accuracy values are given for \(\alpha=0.05\) and \(\delta=0\) unless mentioned otherwise. The FPGA experiments are conducted on Xilinx Artix-7 AC701 Evaluation Platform using CIFAR-10 dataset. The synthesis and implementation of the system are done using Vitis HLS 2022.2 and Vivado 2022.2 platforms. The VAE and ensemble model experiments are conducted on GeForce RTX 2080 Ti device. Firstly, the effectiveness of the proposed partial task-level streaming implemented on FPGA devices is presented. Different model layers are parallelized in the experiments with the given parallelization factors. The training is done with mini-batches of size two. The results can be seen in Table II. 
The structure of the edge model used in this experiment is one convolutional layer with a \(5\times 5\) kernel size and two filters, one \(2\times 2\) max pooling layer, one convolutional layer with a \(5\times 5\) kernel size and two filters, one Fully Connected Layer (FCL) with a 64-dimensional output, and an output FCL with a 10-dimensional output. In the table, "Without Optimizations" refers to training the model without any task-level parallelism applied. \(BS_{f}\) and \(F_{f}\) denote the parallelization factors of the mini-batch dimension and the filter dimension of the data for edge training, respectively. Therefore, \(BS_{f=x}\)-\(F_{f=y,z}\) indicates that the mini-batch dimension is parallelized by factor \(x\), the first convolutional operation is parallelized by factor \(y\), and the second convolutional operation is parallelized by factor \(z\). The utilization values are taken from the post-implementation phase. Latency values are taken from the C/RTL Cosimulation results. Experimental results show that task-level streaming yields remarkable acceleration even without factorizing different layer calculations. Also, parallelizing different layers with different factors has varying effects on acceleration and resource utilization. These results show that the proposed method can be configured for devices with various hardware capacities. The importance of replacing the missing feature vectors with appropriate values can be seen in Table III. The dataset used in this experiment is CIFAR-10. The average edge model accuracy is 40%, and 20 edge models are used. The results show that replacing the missing feature vectors with trivial values such as zero, the mean, and the maximum element of the successfully transferred feature vectors degrades the overall accuracy of the ensemble model. The feature vectors generated with the VAE yield remarkably more accurate results in terms of overall ensemble accuracy. Examining the loss and accuracy plots of the training of the proposed ensemble model is essential. Figure 7 shows that the ensemble model converges to a solution as the training continues. The plots are given for \(\alpha=0.05\) and \(\delta=0.5\) for the CIFAR-10 and Pecan Street datasets under Scenario 1.

Fig. 7: The loss plots of the ensemble model trained for (a) the CIFAR-10 dataset for image classification, and (b) the Pecan Street dataset for regression tasks.

Figure 8 shows the box plot of the performance metrics of accuracy, the area under the ROC curve, precision, and recall values calculated for each class in the test data for the CIFAR-10 and MNIST datasets. The values are computed based on one-hot encodings for the classes. Even though the edge models are not fully trained for all classes, the ensemble model performs at a similar level for all classes in terms of prediction. In Figure 9, the accuracy of the ensemble model for different choices of \(\alpha\) and \(\delta\) values and training scenarios is given for CIFAR-10. The figure shows that the ensemble accuracy increases as the \(\alpha\) and \(\delta\) values decrease. As expected, training with Scenario 1 and Scenario 3 yields better accuracy than Scenario 2. That means the model learns better as the heterogeneity of training data observed by edge devices increases, and the prediction performance is preserved as the heterogeneity in test data stays similar. It is also shown that the accuracy of the ensemble model is limited by the accuracy of the edge models.
For CIFAR-10, prediction accuracies of 91.14%, 93.47%, and 94.23% are obtained using 20 edge models with mean accuracies of 31.60%, 37.46%, and 48.41%, respectively. The effect of changing the number of edge models and the transferred feature vector size \(L_{com}\) is also investigated in Table IV for the CIFAR-10 and CIFAR-100 datasets. When varying the number of edge models, \(L_{com}\) is taken as 64; when varying \(L_{com}\), the number of edge models is taken as 20. It can be seen that increasing the number of edge models also increases the final ensemble accuracy. The same effect can be observed for changing \(L_{com}\), but the increase ceases to be material beyond a certain \(L_{com}\) value, which is 128 in this example. Comparisons with recent edge intelligence studies regarding final accuracy for image classification and regression problems are presented in Table V and Table VI. The best-reported values are taken from the compared studies and our study. In the image classification comparison, the accuracy results for CIFAR-10 and CIFAR-100 of EdgeConvEns are taken from Table IV. The MNIST and Fashion MNIST results of EdgeConvEns are given for \(N=20\) and \(L_{com}=64\). In the regression comparison, \(N=50\) is taken for our results. Tables V and VI show that EdgeConvEns produces state-of-the-art accuracy for image classification and regression problems even though it uses less accurate edge models than the ones used in the compared studies. For example, the edge models used in [20] have 37.67% and 78.94% accuracy on CIFAR-100 and CIFAR-10, respectively. Table I shows that the corresponding average edge model accuracies of EdgeConvEns are 8.43% and 37.46%, respectively. In Figure 10(a), the time requirements for feature vector transfer (\(T_{transfer}^{EM}\)), VAE training (\(T_{train}^{VAE_{EM}}\)), and ensemble model training (\(T_{train}^{Ens}\)) are shown. The results are invariant with respect to the dataset used for training since \(L_{com}\) is the same in all edge models. The total transfer time for an edge model is the largest for Scenario 3, as a mini-batch is repeatedly transferred to the server. The number of communications is calculated as 39063 for Scenario 3. It is lower in Scenario 2 due to the repeated use of the same mini-batch to complete the ensemble training epochs. The number of communications is calculated as 7813 for Scenario 2. It is the smallest for Scenario 1 because the feature vectors are sent to the server only once. The ensemble learning time slightly increases from Scenario 1 to Scenario 3 due to repeated VAE decoding operations, as the same mini-batch is received by the server multiple times. It can also be seen that the latency for feature vector transfer and VAE training visibly decreases for decreasing \(\alpha\) values, as expected. The cumulative amount of feature vector data (\(S_{transfer}^{cum}\)) transferred to the server is shown in Figure 10(b). The largest amount is in Scenario 3 due to repeated transfers, while the smallest amount of feature vector data is transferred in Scenario 1 due to the one-time transfer. End-to-end latency is approximately 0.05 seconds for inference in all training scenarios. Table VII shows the size of the transferred feature vectors in a one-way communication from one edge device to the server (\(M_{transfer}^{EM}\)), the required storage memory on the server (\(M_{memory}^{Server}\)), and the required storage memory on an edge device (\(M_{memory}^{EM}\)) by training scenarios and \(\alpha\) values.
It shows that the largest memory is needed for Scenario 1 because the whole feature vector data of each edge model is transferred to the server once and stored there. However, the required memory is remarkably reduced when the system is trained with Scenario 2 or Scenario 3. Scenario 3 requires no memory space allocated for the transferred feature vector data, whereas Scenario 2 requires storage for only a single mini-batch.

## V Conclusion

This study proposes EdgeConvEns, a convolutional ensemble learning framework for deep edge intelligence. The EdgeConvEns framework aims to boost the system's overall classification and regression performance by learning a unified representation from the collective information extracted by weak edge models accessing heterogeneously distributed data. Thus, we aim to reduce the computational requirements on the edge while offering performance comparable to the state of the art. The proposed framework also tackles missing information due to failures in network communication with a VAE-based feature imputation approach. Moreover, EdgeConvEns provides customizable acceleration for training DNNs on Xilinx FPGA devices. Thus, the training can be accelerated using different parallelization factors such that the hardware limits are met for different target devices. EdgeConvEns also makes the training of the system feasible for devices with differing hardware capacities, thanks to the different training scenarios. The experiments conducted with benchmark datasets and various training scenarios demonstrate that the proposed EdgeConvEns outperforms conventional ensemble learning and standard federated learning techniques.
2303.06272
Chromatic numbers of Cayley graphs of abelian groups: Cases of small dimension and rank
A connected Cayley graph on an abelian group with a finite generating set $S$ can be represented by its Heuberger matrix, i.e., an integer matrix whose columns generate the group of relations between members of $S$. In a companion article, the authors lay the foundation for the use of Heuberger matrices to study chromatic numbers of abelian Cayley graphs. We call the number of rows in the Heuberger matrix the dimension, and the number of columns the rank. In this paper, we give precise numerical conditions that completely determine the chromatic number in all cases with dimension $1$; with rank $1$; and with dimension $\leq 3$ and rank $\leq 2$. For such a graph without loops, we show that it is $4$-colorable if and only if it does not contain a $5$-clique, and it is $3$-colorable if and only if it contains neither a diamond lanyard nor a $C_{13}(1,5)$, both of which we define herein. In a separate companion article, we show that we recover Zhu's theorem on the chromatic number of $6$-valent integer distance graphs as a special case of our theorem for dimension $3$ and rank $2$.
Jonathan Cervantes, Mike Krebs
2023-03-11T01:19:00Z
http://arxiv.org/abs/2303.06272v1
# Chromatic numbers of Cayley graphs of abelian groups: Cases of small dimension and rank ###### Abstract A connected Cayley graph on an abelian group with a finite generating set \(S\) can be represented by its Heuberger matrix, i.e., an integer matrix whose columns generate the group of relations between members of \(S\). In a companion article, the authors lay the foundation for the use of Heuberger matrices to study chromatic numbers of abelian Cayley graphs. We call the number of rows in the Heuberger matrix the _dimension_, and the number of columns the _rank_. In this paper, we give precise numerical conditions that completely determine the chromatic number in all cases with dimension \(1\); with rank \(1\); and with dimension \(\leq 3\) and rank \(\leq 2\). For such a graph without loops, we show that it is \(4\)-colorable if and only if it does not contain a \(5\)-clique, and it is \(3\)-colorable if and only if it contains neither a diamond lanyard nor a \(C_{13}(1,5)\), both of which we define herein. In a separate companion article, we show that we recover Zhu's theorem on the chromatic number of \(6\)-valent integer distance graphs as a special case of our theorem for dimension \(3\) and rank \(2\). _Keywords--_ graph, chromatic number, abelian group, Cayley graph, circulant graph ## 1 Introduction Given an \(m\times r\) integer matrix \(M\), let \(H\) be the set of all linear combinations of the columns of \(M\) with integer coefficients. Let \(\mathbb{Z}\) denote the group of integers under addition, and let \(\mathbb{Z}^{m}\) denote the \(m\)-fold direct product of \(\mathbb{Z}\) with itself. Let \(e_{j}\in\mathbb{Z}^{m}\) denote the \(m\)-tuple (regarded as a column vector) whose \(i\)th component is \(1\) if \(i=j\) and is 0 otherwise. Let \(S=\{H\pm e_{1},\ldots,H\pm e_{m}\}\). We may then form the Cayley graph whose underlying group is \(\mathbb{Z}^{m}/H\) with respect to the generating set \(S\). We denote by \(M_{X}^{\text{SACG}}\) the graph formed in this manner. We call \(M_{X}^{\text{SACG}}\) a _standardized abelian Cayley graph_, and we call \(M\) an associated _Heuberger matrix_. As discussed in [1], the study of chromatic numbers of Cayley graphs on abelian groups can be reduced to the study of standardized abelian Cayley graphs and their Heuberger matrices. Many, many particular cases of chromatic numbers of Cayley graphs on abelian groups have been studied; see the introduction to [1] for a long list of examples. Our main results (Theorems 2.9 and 2.14) give precise and easily checked numerical conditions that completely determine the chromatic number when the associated Heuberger matrix is \(2\times 2\) or \(3\times 2\). These results can be summarized as follows: Suppose such a graph does not have loops. Then it is 4-colorable if and only if it does not contain a 5-clique, and it is 3-colorable if and only if it contains neither a diamond lanyard (see Def. 2.2) nor a \(\text{Cay}(\mathbb{Z}_{13},\{\pm 1,\pm 5\})\). Whether such subgraphs occur, we show, can be ascertained quickly from the entries of the Heuberger matrix. After excluding some trivial exceptional cases, one first puts the matrix into a certain standardized form (lower triangular form with positive diagonal entries for \(2\times 2\) matrices, "modified Hermite normal form" for \(3\times 2\) matrices) without changing the associated graph. Theorems 2.9 and 2.14 then provide formulas for the chromatic number of graphs with matrices in this form. We briefly sketch the method of proof. 
For \(2\times 2\) matrices, the main result (Thm. 2.9) follows quickly by combining Heuberger's theorem on chromatic numbers of circulant graphs with the methods of [1]. The principal result for \(3\times 2\) matrices (Thm. 2.14) requires more work. The main idea is to apply the graph homomorphisms of [1] to obtain upper bounds on the chromatic number, utilizing the previous results from the \(2\times 2\) case. In [8], Zhu finds the chromatic number for an arbitrary integer distance graph of the form \(\text{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})\), where \(a,b,\) and \(c\) are distinct positive integers. Such graphs, as shown in [1], have associated \(3\times 2\) matrices. In another companion paper [2], we demonstrate how Thm. 2.14 yields Zhu's theorem as a corollary of our main theorem about \(3\times 2\) Heuberger matrices. One obvious future direction for this work will be to investigate what happens when the matrices are larger. For example, the case of a graph \(X\) with an associated \(m\times 2\) Heuberger matrix when \(m\geq 4\) seems well within reach using methods developed in this paper, and we plan to tackle that next. As we show in the proofs of our main theorems, when an abelian Cayley graph \(X\) has an associated \(2\times 2\) or \(3\times 2\) Heuberger matrix, an optimal coloring for \(X\) can always be realized as a pullback, via a graph homomorphism, of a coloring of a circulant graph. We pose the question (akin to that asked in [5]): Is that statement true for all connected, finite-degree abelian Cayley graphs? Moreover, we propose the following conjecture: Let \(X\) be a standardized abelian Cayley graph with an associated Heuberger matrix \(M_{X}\). If \(X\) does not have loops, and the determinant of every \(2\times 2\) minor of \(M_{X}\) is divisible by 3, then \(X\) is \(3\)-colorable. This article depends heavily on [1], which we will refer to frequently. The reader should assume that all notation, terminology, and theorems used but not explained here are explained there. ## 2 Main theorems In [1], we laid the groundwork for our main techniques, and we employed them to prove the Tomato Cage Theorem, which completely determines the chromatic number in the case where \(H\) has rank \(1\). In this section, we turn our attention to our main results, which concern the case where \(H\) has rank \(2\). Let \(X\) be a standardized abelian Cayley graph, and let \(m\) be the number of rows in an associated Heuberger matrix \(M_{X}\). In Subsection 2.1, we quickly dispense with the case where \(m=1\). For \(m=2\), we first apply isomorphisms to \(X\) as in [1] to put the matrix in a standard form without changing the associated graph. We then clear aside the somewhat aberrant situations where \(X\) is bipartite, \(X\) has loops, or the matrix has a zero row. Excluding these possibilities, we show that if the matrix entries are not relatively prime, then \(\chi(X)=3\); otherwise, \(X\) is isomorphic to a circulant graph, and its chromatic number is given by a theorem of Heuberger's. Subsection 2.3 contains the precise statements and proofs. For \(m=3\), we again begin by putting the matrix into a standard form we call "modified Hermite normal form," as detailed in Subsection 2.4. Again the "aberrant" situations can be handled quickly. We then determine the chromatic number in the remaining cases. To do so, we subtract rows to produce a homomorphism to a graph with an associated \(2\times 2\) matrix, for which we already have a complete theorem. 
When this fails to produce a \(3\)-coloring, we modify the mapping. We show that whenever we are unable in this manner to properly \(3\)-color \(X\), it must be that in fact \(X\) is not properly \(3\)-colorable, and our procedure instead yields a \(4\)-coloring of \(X\). These unusual cases, we show, fall into one of six families, and we state precise numerical conditions for when they occur. Subsection 2.5 contains the precise statements and proofs. As a by-product of our proofs, we show that for the class of graphs we consider, if they don't have loops, then \(K_{5}\) (the complete graph on \(5\) vertices) is the only obstacle to \(4\)-colorability, and "diamond lanyards" and "\(C_{13}(1,5)\)" (both of which we define in Subsection 2.2) are the only obstacles to \(3\)-colorability. In Subsection 2.6, we furnish a synopsis of the procedures involved: an algorithm to determine the chromatic number whenever the Heuberger matrix is of size \(m\times 1,1\times r,2\times r\), or \(3\times 2\).

### The case \(m=1\)

The case \(m=1\) can be dealt with immediately. **Lemma 2.1**.: _Suppose \(X\) is a standardized abelian Cayley graph defined by \((y_{1}\ \cdots\ y_{r})_{X}^{\text{SACG}}\) for some integers \(y_{1},\ldots,y_{r}\), not all \(0\). Let \(e=\gcd(y_{1},\ldots,y_{r})\). Then \(X\) has loops (and therefore is not properly colorable) if and only if \(e=1\); otherwise, we have that \(\chi(X)=2\) if \(e\) is even, and \(\chi(X)=3\) if \(e\) is odd._ Proof.: Applying column operations as in [1, Lemma 2.6], we can in effect perform the Euclidean algorithm so as to acquire an isomorphic standardized abelian Cayley graph \((e\ 0\ \cdots\ 0)^{\mathrm{SACG}}_{X^{\prime}}=(e)^{\mathrm{SACG}}_{X^{\prime}}\). The result follows from [1, Example 2.1]. If \(y_{1}=\cdots=y_{r}=0\), then \(\chi(X)=2\), by [1, Lemma 2.11].

### Diamond lanyards and \(C_{13}(1,5)\)

Recall that a _diamond_ is a graph with \(4\) vertices, exactly one pair of which consists of nonadjacent vertices. In other words, a diamond is a \(K_{4}\) (complete graph on \(4\) vertices) with one edge deleted. The two vertices not adjacent to one another are the _endpoints_ of the diamond. **Definition 2.2**.: An _unclasped diamond lanyard of length \(1\)_ is a diamond. Recursively, we define an _unclasped diamond lanyard \(U\) of length \(\ell+1\)_ to be the union of an unclasped diamond lanyard \(Y\) of length \(\ell\) and a diamond \(D\), such that \(Y\) and \(D\) have exactly one endpoint in common. The _endpoints_ of \(U\) are the endpoint of \(Y\) which is not an endpoint of \(D\), and the endpoint of \(D\) which is not an endpoint of \(Y\). A _(clasped) diamond lanyard of length \(\ell\)_ is obtained by adding to an unclasped diamond lanyard \(U\) of length \(\ell\) an edge between the endpoints of \(U\). We call that edge a _clasp_. Def. 2.2 does not preclude the possibility of the diamonds in the lanyard having edges in common. For example, let \(X=\mathrm{Cay}(\mathbb{Z}_{5},\{\pm 1,\pm 2\})\). Then \(X\) is a \(K_{5}\) graph with vertex set \(\{0,1,2,3,4\}\). We write an edge in \(X\) as a string of two vertices. So \(X\) contains as a subgraph a diamond with edges \(01,02,12,13,23\) and endpoints \(0\) and \(3\). It also contains as a subgraph a diamond with edges \(41,42,12,13,23\) and endpoints \(4\) and \(3\). The edge sets of these two diamonds are non-disjoint. Taking the union of these two diamonds produces an unclasped diamond lanyard of length two with endpoints \(0\) and \(4\).
Observing that \(04\) is also an edge in \(X\), indeed \(X\) contains a clasped diamond lanyard as a subgraph. We sometimes refer to a clasped diamond lanyard simply as a _diamond lanyard_. A diamond lanyard of length \(1\) is a \(K_{4}\), and a diamond lanyard of length \(2\) where the two diamonds have disjoint edges is called a _Mosers' spindle_ [7]. Figure 1 illustrates a diamond lanyard of length \(4\). (We would have preferred to have dubbed these "diamond chain necklaces," but the term "necklace graph" already has a standard meaning.) We observe that diamond lanyards are not \(3\)-colorable. For suppose we have a proper \(3\)-coloring. Note that in any proper \(3\)-coloring of a diamond, its endpoints must have the same color. Hence the endpoints of the diamond lanyard must have the same color, but they are adjacent, which is a contradiction. Thus, we have the following lemma. **Lemma 2.3**.: _Suppose \(X\) is a graph containing a diamond lanyard as a subgraph. Then \(\chi(X)\geq 4\)._ We recall that a Mosers' spindle can be drawn in the plane as a unit-distance graph; hence the chromatic number of the plane is at least \(4\). (Indeed, as discussed in the introduction of [1], it is now known to be at least \(5\).) We now turn our attention to the other object that can stand in the way of \(3\)-colorability for our graphs. Let \(C_{13}(1,5)\) be the circulant graph \(\operatorname{Cay}(\mathbb{Z}_{13},\{\pm 1,\pm 5\})\). (For more about this notation, see Def. 2.5.) Heuberger proves the following lemma in [4] using a "vertex-chasing" argument. Here we prove it by showing that the independence number is \(4\). We include this proof so that it might suggest generalizations. **Lemma 2.4**.: _The chromatic number of \(C_{13}(1,5)\) is \(4\)._ Proof.: It is straightforward to find a proper \(4\)-coloring of \(C_{13}(1,5)\). We now let \(A\) be an independent set of vertices in \(C_{13}(1,5)\). We will show that \(|A|\leq 4\). This will prove the lemma. Suppose that \(|A|\geq 5\); we will show that this leads to a contradiction. First observe that we must have \(|A|\leq 6\). For if \(|A|\geq 7\), then \(A\) would contain the adjacent vertices \(x\) and \(x+1\) for some \(x\in\mathbb{Z}_{13}\). Because \(|A|\leq 6<\frac{13}{2}\), there must be \(x\in\mathbb{Z}_{13}\) such that \(x\notin A\) and \(x+1\notin A\). From \(|A|\geq 5\) we find that \[\{x+2,x+3,x+4,x+5,x+6,x+7\}\cap A\text{ or }\{x+8,x+9,x+10,x+11,x+12\}\cap A\] must contain at least \(3\) elements. Because \(z\) and \(z+1\) are adjacent for all \(z\in\mathbb{Z}_{13}\), these three elements must be \(x+y-2,x+y\), and \(x+y+2\) for some \(y\in\{4,5,10\}\). Using the fact that \(a\pm 1\notin A\) and \(a\pm 5\notin A\) whenever \(a\in A\), we see that \[x+y+1,x+y+3,x+y+5,x+y+6,x+y+7,x+y+8,x+y+10,x+y+12\notin A.\] Because \(|A|\geq 5\), we know that \(A\) must contain at least two elements other than \(x+y-2,x+y\), and \(x+y+2\), but the only remaining elements in \(\mathbb{Z}_{13}\) are \(x+y+4\) and \(x+y+9\). However, \(x+y+4\) and \(x+y+9\) are adjacent and so cannot both be in \(A\).

Figure 1: A diamond lanyard of length \(4\)

### The case \(m=2\)

In this subsection, we completely determine \(\chi(X)\) when \(X\) has a \(2\times 2\) matrix as an associated Heuberger matrix. (Note that if the number of columns exceeds the number of rows, we can perform column operations as in [1, Lemma 2.6] to get a zero column, and then delete it. So the results of this section, together with the Tomato Cage Theorem, completely take care of all dimension 2 cases.)
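As a computational aside (it plays no role in any proof), the following Python sketch computes chromatic and independence numbers of small circulant graphs by brute force; for instance, it confirms the conclusion of Lemma 2.4 that \(\chi(C_{13}(1,5))=4\) and that a largest independent set in \(C_{13}(1,5)\) has \(4\) vertices. The function names are ours, and the approach is feasible only for very small graphs.

```python
from itertools import combinations

def circulant_graph(n, conns):
    """Adjacency sets of Cay(Z_n, {±a : a in conns})."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for a in conns:
            adj[v].add((v + a) % n)
            adj[v].add((v - a) % n)
        adj[v].discard(v)   # ignore loops for coloring purposes
    return adj

def colorable(adj, k, col=None, v=0):
    """Backtracking test: does the graph admit a proper k-coloring?"""
    if col is None:
        col = [-1] * len(adj)
    if v == len(adj):
        return True
    for c in range(k):
        if all(col[u] != c for u in adj[v]):
            col[v] = c
            if colorable(adj, k, col, v + 1):
                return True
            col[v] = -1
    return False

def chromatic_number(adj):
    k = 1
    while not colorable(adj, k):
        k += 1
    return k

def independence_number(adj):
    n = len(adj)
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            if all(u not in adj[v] for u, v in combinations(S, 2)):
                return r
    return 0

adj = circulant_graph(13, [1, 5])
print(chromatic_number(adj), independence_number(adj))   # expected output: 4 4
```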
Suppose \(X\) is a standardized abelian Cayley graph with a \(2\times 2\) Heuberger matrix. By performing row and column operations as in Lemma [1, Lemma 2.6], one can show that \(X\) is isomorphic to a standardized abelian Cayley graph \(X^{\prime}\) with an associated matrix \(M_{X^{\prime}}\) that is lower-triangular, and such that the diagonal entries of \(M_{X^{\prime}}\) are non-negative. Hence we may restrict our attention to the case where \(M_{X}\) is a \(2\times 2\) lower-triangular matrix with nonnegative diagonal entries. Next we delve into a class of graphs that play a crucial role in the main theorem for this subsection. **Definition 2.5**.: Let \(a,b\), and \(n\) be integers with \(n\neq 0\) and \(\gcd(a,b,n)=1\) and \(n\nmid a\) and \(n\nmid b\). We say that \(\operatorname{Cay}(\mathbb{Z}_{n},\{\pm a,\pm b\})\) is a _Heuberger circulant_, denoted \(C_{n}(a,b)\). \(\Box\) The condition \(\gcd(a,b,n)=1\) is equivalent to the connectedness of \(C_{n}(a,b)\), and the condition that \(n\nmid a\) and \(n\nmid b\) is equivalent to the absence of loops. In [4], Heuberger completely determines the chromatic number of all Heuberger circulants, as follows. **Theorem 2.6** ([4, Theorem 3]).: _Let \(C_{n}(a,b)\) be a Heuberger circulant. Then_ \[\chi(C_{n}(a,b))=\begin{cases}2&\text{if $a$ and $b$ are both odd, but $n$ is even}\\ 5&\text{if $n=\pm 5$ and $a\equiv\pm 2b\,(\operatorname{mod}5)$}\\ 4&\text{if $n=\pm 13,\text{ and }\text{ }a\equiv\pm 5b\,(\operatorname{mod}13)$}\\ 4&\text{if $(i)\;n\neq\pm 5,\text{ and }(ii)\;3\nmid n,\text{ and }(iii)\;a\equiv\pm 2b\,(\operatorname{mod}n)$ or $b\equiv\pm 2a\,( \operatorname{mod}n)$}\\ 3&\text{otherwise.}\end{cases}\] We note that [4] excludes the case \(a\equiv\pm b\,(\operatorname{mod}\,n)\), in which event \(C_{n}(a,b)\) is an \(n\)-cycle. However, the theorem as stated here includes this possibility as well. Observe that the third case in Theorem 2.6 is precisely Lemma 2.4. Moreover, for the fourth case, Lemma 2.3 shows that \(\chi(C_{n}(a,b))\geq 4\), for the following reason. First suppose that \(a\equiv 2b\,(\operatorname{mod}\,n)\). Observe that for all \(x\in\mathbb{Z}_{n}\), there is a diamond in \(C_{n}(a,b)\) with vertex set \(\{x,x+a,x+2a,x+3a\}\), where the endpoints are \(x\) and \(x+3a\). Because \(a\equiv 2b\,(\operatorname{mod}\,n)\) and \(\gcd(a,b,n)=1\), we must have that \(\gcd(a,n)=1\). Moreover, because \(3\nmid n\), we have that \(\gcd(3a,n)=1\). Let \(m\) be a positive integer such that \(3am\equiv 1\,(\operatorname{mod}\,n)\). Then \(C_{n}(a,b)\) contains a diamond lanyard with vertex set \(\{ja\mid 0\leq j\leq m\}\), so by Lemma 2.3, we have that \(\chi(C_{n}(a,b))\geq 4\). A similar argument works when \(a\equiv-2b\pmod{n}\) or when \(b\equiv\pm 2a\pmod{n}\). This is essentially the reasoning in [4] for the lower bounds in these cases. Hence a Heuberger circulant is \(4\)-colorable unless it is \(K_{5}\); and it is \(3\)-colorable unless it contains as a subgraph either a diamond lanyard or it equals \(C_{13}(1,5)\). (Recall that \(K_{5}\) contains a diamond lanyard, so we needn't include it in the list of obstructions to \(3\)-colorability.) Next, we show that when the entries of the first column of \(M_{X}\) are relatively prime, then with some mild additional conditions imposed, \(X\) is isomorphic to a Heuberger circulant. 
**Lemma 2.7**.: _Let \(X\) be a standardized abelian Cayley graph with associated Heuberger matrix_ \[M_{X}=\begin{pmatrix}y_{11}&y_{12}\\ y_{21}&y_{22}\end{pmatrix}.\] _Suppose that \(X\) does not have loops, that \(\det(M_{X})\neq 0\), and that \(\gcd(y_{11},y_{21})=1\). Then \(X\) is isomorphic to the Heuberger circulant \(C_{n}(a,b)\), where \(a=-y_{21}\), \(b=y_{11}\), and \(n=\det(M_{X})\)._ Proof.: Define a homomorphism \(\varphi\colon\mathbb{Z}^{2}\to\mathbb{Z}_{n}\) by \(e_{1}\mapsto a,e_{2}\mapsto b\). Let \(y_{1}\) and \(y_{2}\) be the first and second columns, respectively, of \(M_{X}\). We must show that \(\ker\varphi\) equals the \(\mathbb{Z}\)-span of \(y_{1}\) and \(y_{2}\). It is straightforward to show that \(\varphi(y_{1})=\varphi(y_{2})=0\), giving us one inclusion. We now show the reverse inclusion. Suppose \(\varphi((x_{1},x_{2})^{t})=0\). Then \(ax_{1}+bx_{2}\equiv 0\pmod{n}\). Because \(a\) and \(b\) are relatively prime, there exist integers \(q,r\) such that \(aq+br=1\). Using elementary number theory as in [6], we find that \(x_{1}=\ell nq+kb\) and \(x_{2}=\ell nr-ka\) for some integers \(\ell,k\). Hence after some computations we find that \[\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}=ky_{1}+\ell n\begin{pmatrix}q\\ r\end{pmatrix}=ky_{1}+\begin{pmatrix}\ell qy_{11}y_{22}-\ell qy_{21}y_{12}\\ \ell ry_{11}y_{22}-\ell ry_{21}y_{12}\end{pmatrix}=(k+\ell qy_{22}-\ell ry_{12})y_{1}+\ell y_{2}.\qed\] Note that after switching the columns of the matrix, Lemma 2.7 becomes a special case of [1, Example 2.5]. We have included it here separately so as to give a more elementary proof for \(2\times 2\) matrices. We remark that not every graph with an associated \(2\times 2\) Heuberger matrix is isomorphic to a circulant. The graph \(\begin{pmatrix}4&0\\ 2&4\end{pmatrix}_{X}^{\text{SACG}}\), for instance, provides a counterexample. One computes that \(X\) must have order \(16\) and so, if circulant, would be of the form \(\text{Cay}(\mathbb{Z}_{16},S)\) for some \(S\). Because \(X\) is connected and bipartite, and has degree \(4\), we see that up to isomorphism, we can assume \(S=\{\pm 1,\pm 3\}\) or \(S=\{\pm 1,\pm 7\}\). Direct arguments (for example, counting the number of paths of length \(2\) between various vertices) show that neither choice of \(S\) produces a circulant graph isomorphic to \(X\). The following lemma characterizes \(2\times 2\) matrices whose associated graphs have loops. This occurs either when the determinant is nonzero and divides a row, or else when one row is zero and the other has relatively prime entries. **Lemma 2.8**.: _Let \(X\) be a standardized abelian Cayley graph with an associated matrix_ \[M_{X}=\begin{pmatrix}y_{11}&y_{12}\\ y_{21}&y_{22}\end{pmatrix}.\] _Let \(n=\det(M_{X})\). If \(n\neq 0\), then \(X\) has loops if and only if either (i) \(n\mid y_{11}\) and \(n\mid y_{12}\) or (ii) \(n\mid y_{21}\) and \(n\mid y_{22}\). If \(n=0\), then \(X\) has loops if and only if either (a) \(y_{11}=y_{12}=0\) and \(\gcd(y_{21},y_{22})=1\), or (b) \(y_{21}=y_{22}=0\) and \(\gcd(y_{11},y_{12})=1\)._ Proof.: First suppose \(n\neq 0\). We know that \(X\) has loops if and only if \(e_{1}\) or \(e_{2}\) is in the \(\mathbb{Z}\)-span \(H\) of the columns of \(M_{X}\). We have \(e_{1}\in H\) if and only if \(M_{X}^{-1}e_{1}\in\mathbb{Z}^{2}\). But \[M_{X}^{-1}e_{1}=\frac{1}{n}\begin{pmatrix}y_{22}&-y_{12}\\ -y_{21}&y_{11}\end{pmatrix}e_{1}=(y_{22}/n,\ -y_{21}/n)^{t}.\] Similarly for \(e_{2}\). Now suppose \(n=0\). If (a) holds, then \(e_{2}\in H\).
If (b) holds, then \(e_{1}\in H\). Conversely, suppose that \(X\) has loops, so that \(e_{1}\in H\) or \(e_{2}\in H\). First suppose \(e_{1}\in H\). Then there exist \(r,s\in\mathbb{Z}\) such that \[r\begin{pmatrix}y_{11}\\ y_{21}\end{pmatrix}+s\begin{pmatrix}y_{12}\\ y_{22}\end{pmatrix}=e_{1}, \tag{1}\] so \(ry_{21}+sy_{22}=0\). Because \(n=0\), the left and right columns of \(M_{X}\) are linearly dependent over \(\mathbb{Q}\). Then there exist \(\alpha,\beta\in\mathbb{Z}\), not both \(0\), such that \[\alpha\begin{pmatrix}y_{11}\\ y_{21}\end{pmatrix}=\beta\begin{pmatrix}y_{12}\\ y_{22}\end{pmatrix}. \tag{2}\] We can assume \(\gcd(\alpha,\beta)=1\), for if not, replace them with \(\alpha/\gcd(\alpha,\beta)\) and \(\beta/\gcd(\alpha,\beta)\), respectively. Then \(r\beta y_{21}+s\beta y_{22}=0\), so \(r\beta y_{21}+s\alpha y_{21}=0\), so \(r\beta+s\alpha=0\) or \(y_{21}=0\). Case 1: Suppose \(r\beta+s\alpha=0\). From (1) we also have \(ry_{11}+sy_{12}=1\), so \(\gcd(r,s)=1\). We have \(r\beta=-s\alpha\), so \(\alpha\mid r\) and \(\beta\mid s\). Also \(r\mid\alpha\) and \(s\mid\beta\). So either \(\alpha=r\) and \(\beta=-s\), or \(\alpha=-r\) and \(\beta=s\). Either way, (1) and (2) now contradict one another. Case 2: Suppose \(y_{21}=0\). Then \(0=\alpha y_{21}=\beta y_{22}\). If \(y_{22}=0\), then using (1), we find that (b) holds. If not, then \(\beta=0\), so \(\alpha\neq 0\), so \(y_{11}=y_{21}=0\). Equation (1) then implies that (b) holds. A similar argument shows that if \(e_{2}\in H\), then (a) must hold. We now have all the pieces in place needed to find \(\chi(X)\) whenever \(X\) has an associated \(2\times 2\) matrix. **Theorem 2.9**.: _Let \(X\) be a standardized abelian Cayley graph defined by_ \[\begin{pmatrix}y_{11}&0\\ y_{21}&y_{22}\end{pmatrix}_{X}^{\mathrm{SACG}}.\] _Suppose that \(y_{11}\geq 0\) and \(y_{22}\geq 0\). Let \(d=\gcd(y_{11},y_{21})\) and \(e=\gcd(y_{11},y_{21},y_{22})\). Then:_ 1. _If either (i)_ \(y_{22}=1\) _or (ii)_ \(y_{11}=1\) _and_ \(y_{22}|y_{21}\) _or (iii)_ \(y_{11}=0\) _and_ \(\gcd(y_{21},y_{22})=1\)_, then_ \(X\) _has loops and is not properly colorable._ 2. _If both_ \(y_{11}+y_{21}\) _and_ \(y_{22}\) _are even, then_ \(\chi(X)=2\)_._ 3. _If (i) neither of the conditions in the previous statements holds, and (ii)_ \(y_{11}=0\) _or_ \(y_{22}=0\) _or_ \(e>1\) _or_ \(y_{22}|y_{21}\)_, then_ \(\chi(X)=3\)_._ 4. _If none of the conditions in the previous statements hold, take_ \(q\in\mathbb{Z}\) _such that_ \(\gcd(y_{11},y_{21}+qy_{22})=1\)_. (Such a_ \(q\) _necessarily exists, as we can let_ \(q\) _be the product of all primes_ \(p\) _such that_ \(p|y_{11}\) _but_ \(p\nmid d\)_. Here we adopt the convention that if there are no such primes, let_ \(q=1\)_.) Then_ \(\chi(X)=\chi(C_{n}(a,b))\)_, where_ \(a=-y_{21}-qy_{22}\)_,_ \(b=y_{11}\)_, and_ \(n=y_{11}y_{22}\)_._ Proof.: Lemma 2.8 implies both Statement (1) and its converse. Statement (2) follows from [1, Lemma 2.11]. We now prove Statement (3). Assume that conditions (i) and (ii) in that statement both hold. Condition (i) implies that \(X\) does not have loops, and that \(X\) is not bipartite. If \(y_{11}=0\), then we can delete the top row as per [1, Lemma 2.8] without affecting the chromatic number. Lemma 2.1 now gives us \(\chi(X)=3\). If \(y_{22}=0\), then we can delete the second column as per [1, Lemma 2.6(4)] without changing \(X\). We then find that \(\chi(X)=3\), by [1, Theorem 2.15]. If \(e>1\), then \(\chi(X)=3\), by [1, Lemma 2.12].
If \(y_{22}|y_{21}\), then after performing a column sum as in [1, Lemma 2.6(3)] to eliminate \(y_{21}\), by [1, Lemma 2.7], we have that \(\chi(X)=3\). Finally, we prove Statement (4). By [1, Lemma 2.6(3)], we have that \[\begin{pmatrix}y_{11}&0\\ y_{21}&y_{22}\end{pmatrix}_{X}^{\mathrm{SACG}}=\begin{pmatrix}y_{11}&0\\ y_{21}+qy_{22}&y_{22}\end{pmatrix}_{X}^{\mathrm{SACG}}\.\] The result follows from Lemma 2.7. It follows from Thm. 2.9 and our previous observations about Heuberger circulants that for a standardized abelian Cayley graph \(X\) with an associated \(2\times 2\) Heuberger matrix, if \(X\) does not have loops, then \(X\) is \(4\)-colorable unless it contains \(K_{5}\) as a subgraph; and it is \(3\)-colorable unless it contains as a subgraph either a diamond lanyard or \(C_{13}(1,5)\). We will see in SS2.5 that this statement holds for the \(3\times 2\) case as well. When \(M_{X}\) is a \(2\times 2\) matrix, we perform row and column operations to create a \(2\times 2\) lower triangular matrix \(M_{X^{\prime}}\) for which \(X^{\prime}\) is isomorphic to \(X\). Observing the effect of these operations on the determinant, we find that \(|\det\ M_{X}|=|\det\ M_{X^{\prime}}|\). Note that in Theorem 2.9, whenever \(X\) has no loops and \(\chi(X)>3\), we have that \(|\det\ M_{X}|\) is not divisible by \(3\). Thus we have the following corollary, which provides an easily checked sufficient condition for \(3\)-colorability. **Corollary 2.10**.: _Let \(X\) be a standardized abelian Cayley graph with an associated \(2\times 2\) matrix \(M_{X}\). If \(X\) has no loops and \(3\,|\det\ M_{X}\), then \(\chi(X)\leq 3\)._ We note that Cor. 2.10 fails in general for larger matrices. For example, we have by Thm. 2.9, Thm. 2.6, [1, Lemma 2.7], and Example [1, 2.1] that \[\chi\left(\left(\begin{array}{ccc}1&0&0\\ -2&5&0\\ 0&0&3\end{array}\right)_{X}^{\text{SACG}}\right)=5.\] However, observe that not every \(2\times 2\) minor of the matrix above has determinant divisible by \(3\). As mentioned in the introduction, we conjecture that if this stronger condition holds, and \(X\) does not have loops, then \(X\) is \(3\)-colorable. ### Modified Hermite normal form In Subsection 2.3, we saw that it was useful to deal only with \(2\times 2\) Heuberger matrices in a certain convenient format. The same goes for \(3\times 2\) Heuberger matrices. The crux of this idea is drawn from [4], where Hermite normal form is used. For \(3\times 2\) matrices, we refine the requirements slightly for our purposes, so as to further reduce the number of exceptional cases. The purpose of the present subsection is to define this "modified Hermite normal form" for \(3\times 2\) matrices and to show that with very few exceptions every standardized abelian Cayley graph with an associated \(3\times 2\) Heuberger matrix is isomorphic to one with a matrix in this form. We do not attempt here to generalize these definitions to matrices of arbitrary size, as we do not know yet what restrictions will prove to be most useful when the rank or dimension is larger. **Definition 2.11**.: Let \[M=\begin{pmatrix}y_{11}&y_{12}\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}\] be a \(3\times 2\) matrix with integer entries such that no row of \(M\) has all zero entries. We say \(M\) is in _modified Hermite normal form_ if the following conditions hold: 1. \(y_{11}>0\), and 2. \(y_{12}=0\), and 3. \(y_{11}y_{22}\equiv y_{11}y_{32}\ (\text{mod}\ 3)\), and 4. \(y_{22}\leq y_{32}\), and 5. \(|y_{22}|\leq|y_{32}|\), and 6. 
Either (i) \(y_{22}=0\) and \(-\frac{1}{2}|y_{32}|\leq y_{31}\leq 0\), or else (ii) \(-\frac{1}{2}|y_{22}|\leq y_{21}\leq 0\). There are some departures here from the usual Hermite normal form. For example, \(|y_{21}|\) cannot be more than half of \(y_{22}\), a more stringent requirement than being less than \(y_{22}\), as in ordinary Hermite normal form. As we shall see, we can impose this narrower condition because we have both row and column operations at our disposal, not just column operations. Moreover, this form may be the transpose of what some readers are accustomed to, but we have adopted this convention so as to be consistent with [4]. The third condition has no analogue with Hermite normal form but will turn out to be rather useful in the succeeding subsection. We next show that every standardized abelian Cayley graph with a \(3\times 2\) Heuberger matrix of rank \(2\) without zero rows is isomorphic to one with a Heuberger matrix in modified Hermite normal form. **Lemma 2.12**.: _Let \(X\) be a standardized abelian Cayley graph with a \(3\times 2\) Heuberger matrix \(M_{X}\). Suppose that \(M_{X}\) has no zero rows, and that the columns of \(M_{X}\) are linearly independent over \(\mathbb{Q}\). Then \(X\) is isomorphic to a standardized abelian Cayley graph \(X^{\prime}\) with a \(3\times 2\) Heuberger matrix \(M_{X^{\prime}}\) in modified Hermite normal form._ Proof.: The proof is constructive. We give an explicit algorithm for row and column operations to perform on \(M_{X}\), as per [1, Lemma 2.6], so as to result in the desired matrix \(M_{X^{\prime}}\). * Let \(\alpha,\beta\), and \(\gamma\) be the determinants of the \(2\times 2\) minors of \(M_{X}\). Two of \(\alpha,\beta,\gamma\) must be congruent to each other or negatives of each other modulo \(3\). Using this fact, we can permute rows and/or multiply rows by \(-1\) as needed so that the determinant of the submatrix formed by the top two rows is congruent modulo \(3\) to the submatrix formed by the first and third rows. This property will be preserved by all subsequent steps and will eventually lead to satisfaction of the third condition in Def. 2.11. Let \(M_{X_{1}}\) be the resulting matrix. * If the first entry of any column of \(M_{X_{1}}\) is negative, multiply that column (or those columns) by \(-1\). Let \(M_{X_{2}}\) be the resulting matrix. The top row of \(M_{X_{2}}\) has no negative entries. * If the first entry of the first column of \(M_{X_{2}}\) is \(0\), then permute the two columns; otherwise, do nothing. Let \(M_{X_{3}}\) be the resulting matrix. The first entry of the first column of \(M_{X_{3}}\) is strictly positive. (Here is where we use the assumption that \(M_{X}\) has no zero rows.) If the first entry of the second column is \(0\), then let \(M_{X_{4}}=M_{X_{3}}\) and skip to Step Four. * Both entries of the top row of \(M_{X_{3}}\) are strictly positive. Let \(e\) be the greatest common divisor of these two entries. Repeatedly applying [1, Lemma 2.6(3)], we perform column operations that effectuate the Euclidean algorithm on the entries in the top row of \(M_{X_{3}}\), so that the top row of the resulting matrix has two entries, one of which is \(e\), the other \(0\). If the first entry of the first column is \(0\), then permute the two columns. Let \(M_{X_{4}}\) be the resulting matrix. The top row of \(M_{X_{4}}\) has a strictly positive first entry, and \(0\) for its second entry. 
The matrix \(M_{X_{4}}\) now meets the first three conditions in the definition of modified Hermite normal form, and these will be preserved by all subsequent steps. Step Four: Let \(z\) and \(w\) be the \((3,1)\) and \((3,2)\) entries of \(M_{X_{4}}\), respectively. If \(z\leq w\) and \(|z|\leq|w|\), then do nothing. If \(z>w\) and \(|z|\leq|w|\), then multiply the second column by \(-1\). If \(z\leq w\) and \(|z|>|w|\), then switch the bottom two rows and multiply the second column by \(-1\). If \(z>w\) and \(|z|>|w|\), then switch the bottom two rows. Whatever action was taken, let \(M_{X_{5}}\) be the resulting matrix. The first five conditions in Def. 2.11 are now (and will continue to be) met. * Let \(a\) and \(b\) be the \((2,1)\) and \((2,2)\) entries of \(M_{X_{5}}\), respectively. If \(b=0\), then apply to the third rather than second row the procedure described in the rest of Step Five as well as in Step Six. (Here is where we use that the columns of \(M_{X}\) are linearly independent over \(\mathbb{Q}\); this guarantees that if \(b=0\), then the \((3,2)\) entry of \(M_{X_{5}}\) is not zero.) If \(b\neq 0\), by the division theorem, there exist integers \(q\) and \(r\) such that \(r=a-q|b|\), where \(-|b|<r\leq 0\). Applying [1, Lemma 2.6(3)], perform a column operation to replace the first column with the first column plus \(\pm q\) times the second column, so that the second entry in the first column becomes \(r\). Let \(M_{X_{6}}\) be the resulting matrix. * Let \(c\) be the \((2,1)\) entry of \(M_{X_{6}}\). We still have that \(b\) is the \((2,2)\) entry of \(M_{X_{6}}\). Suppose that \(-\frac{|b|}{2}>c\). We then add \(b/|b|\) times the second column to the first column; then multiply the first column by \(-1\); and then multiply the first row by \(-1\). Let \(M_{X^{\prime}}\) be the resulting matrix. The matrix \(M_{X^{\prime}}\) will then satisfy all conditions in the definition of modified Hermite normal form. Suppose we have a \(3\times 2\) Heuberger matrix \(M_{X}\). If \(M_{X}\) has a zero row, then by [1, Lemma 2.8], we can delete it without affecting the chromatic number \(\chi\), whereupon Thm. 2.9 can be used to find \(\chi\). If the columns of \(M_{X}\) are linearly dependent over \(\mathbb{Q}\), then as per [1, Lemma 2.6] appropriate column operations that do not change \(X\) will produce a zero column, which can be deleted without changing \(X\). [1, Thm. 2.15] can then be used to find \(\chi(X)\). Otherwise, in light of Lemma 2.12, we lose no generality by assuming that \(M_{X}\) is in modified Hermite normal form. We next show that when \(M_{X}\) is in modified Hermite normal form, we can determine immediately whether \(X\) has loops. Recall that \(e_{j}\) is the \(j\)th standard basis vector, with a \(1\) as its \(j\)th entry and \(0\) for every other entry. **Lemma 2.13**.: _Let \(X\) be a standardized abelian Cayley graph with a Heuberger matrix \(M_{X}\). Suppose that \(M_{X}\) is a \(3\times 2\) matrix in modified Hermite normal form. Then \(X\) has loops if and only if either the first column of \(M_{X}\) is \(e_{1}\), or the second column of \(M_{X}\) is \(e_{3}\)._ Proof.: Let \(H\) be the \(\mathbb{Z}\)-span of the columns of \(M_{X}\). The graph \(X\) has loops if and only if \(\pm e_{j}\in H\) for some \(j\). From the definition of modified Hermite normal form, we see that this can occur if and only if either the first column of \(M_{X}\) is \(e_{1}\) or the second column of \(M_{X}\) is \(e_{3}\).
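For readers who wish to experiment, the following is a small Python sketch (ours, purely illustrative) that checks whether a given \(3\times 2\) integer matrix with no zero rows satisfies the six conditions of Def. 2.11 and then applies the loop criterion of Lemma 2.13. The sample matrix is an instance (with \(b=1\)) of one of the families appearing in Theorem 2.14 in the next subsection.

```python
def is_modified_hnf(M):
    """Conditions 1-6 of Def. 2.11 for M = [[y11, y12], [y21, y22], [y31, y32]]."""
    (y11, y12), (y21, y22), (y31, y32) = M
    return (y11 > 0                                          # condition 1
            and y12 == 0                                     # condition 2
            and (y11 * y22 - y11 * y32) % 3 == 0             # condition 3
            and y22 <= y32                                   # condition 4
            and abs(y22) <= abs(y32)                         # condition 5
            and ((y22 == 0 and -abs(y32) / 2 <= y31 <= 0)    # condition 6(i)
                 or (-abs(y22) / 2 <= y21 <= 0)))            # condition 6(ii)

def has_loops(M):
    """Lemma 2.13: in modified Hermite normal form, X has loops iff the
    first column of M is e_1 or the second column is e_3."""
    col1, col2 = [r[0] for r in M], [r[1] for r in M]
    return col1 == [1, 0, 0] or col2 == [0, 0, 1]

M = [[1, 0], [0, -1], [3, 2]]            # instance (b = 1) of the family (1 0; 0 -1; 3b 2)
print(is_modified_hnf(M), has_loops(M))  # True False
```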
### Chromatic numbers of graphs with \(3\times 2\) matrices in modified Hermite normal form In this section we prove the following theorem, which completely determines the chromatic number of an arbitrary standardized abelian Cayley graph with a \(3\times 2\) Heuberger matrix in modified Hermite normal form. **Theorem 2.14**.: _Let \(X\) be a standardized abelian Cayley graph with a Heuberger matrix_ \[M_{X}=\begin{pmatrix}y_{11}&0\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}\] _in modified Hermite normal form._ 1. _If the first column of_ \(M_{X}\) _is_ \(e_{1}\) _or the second column of_ \(M_{X}\) _is_ \(e_{3}\)_, then_ \(X\) _has loops and cannot be properly colored._ 2. _If_ \(y_{11}+y_{21}+y_{31}\) _and_ \(y_{22}+y_{32}\) _are both even, then_ \(\chi(X)=2\)_._ 3. _If_ \[M_{X}=\begin{pmatrix}1&0\\ 0&1\\ \pm 3k&1+3k\end{pmatrix}\;or\;M_{X}=\begin{pmatrix}1&0\\ 0&-1\\ \pm 3k&-1+3k\end{pmatrix}\;or\;M_{X}=\begin{pmatrix}1&0\\ -1&2\\ -1-3k&2+3k\end{pmatrix}\;or\;M_{X}=\begin{pmatrix}1&0\\ -1&-2\\ -1+3k&-2+3k\end{pmatrix}\] _for some positive integer_ \(k,\;or\;M_{X}=\begin{pmatrix}1&0\\ 0&-1\\ 3b&2\end{pmatrix}\) _for some integer_ \(b,\;or\)__ \[M_{X}=\begin{pmatrix}1&0\\ -1&a\\ -1&a+3(k-1)\end{pmatrix}\;for\;some\;integer\;a\;with\;3\nmid a\;and\;some\; positive\;integer\;k,\] then_ \(\chi(X)=4\)_._ 4. _If none of the above conditions hold, then_ \(\chi(X)=3\)_._ The main idea behind the proof of Theorem 2.14 is to add or subtract two rows as per [1, Lemma 2.10] to obtain a homomorphism from \(X\) to a graph with a \(2\times 2\) Heuberger matrix. Theorem 2.9 and its corollary provide conditions under which this latter graph (and hence \(X\)) is \(3\)-colorable. If those conditions are not met, then we get information about \(M_{X}\). In this case we can repeat the procedure using some other homomorphism to further narrow down the possibilities for those \(M_{X}\) for which \(\chi(X)>3\). In particular, we are led in this way to consider three special types of matrices \(M_{X}\): those with \(y_{22}=0\) (we call these "L-shaped" matrices); those with \(y_{11}=y_{22}=1\) and \(y_{21}=0\) (we call these "\(I\) on top" matrices); and those with \(y_{11}=y_{21}=y_{31}=1\) (we call these "first column all ones" matrices). Every exceptional case traces back ultimately to one of these three. Hence, to lay the groundwork for the proof of Theorem 2.14, we first prove three technical lemmas which compute the chromatic numbers in these three circumstances. The proofs of all three use the same main idea: Add or subtract rows to map to a graph with a \(2\times 2\) matrix. If such maps fail to produce a \(3\)-coloring, then show that \(X\) contains a diamond lanyard. We begin with "first column all ones" matrices. **Lemma 2.15**.: _Suppose we have_ \[\begin{pmatrix}1&0\\ 1&y_{22}\\ 1&y_{32}\end{pmatrix}_{X}^{SACG}.\] _Then \(X\) has loops if and only if \(\{y_{22},y_{32}\}\) is \(\{0,-1\}\) or \(\{0,1\}\) or \(\{-1\}\) or \(\{1\}\). Otherwise,_ \[\chi(X)=\begin{cases}3&\text{ if }y_{32}\equiv-y_{22}\pmod{3}\\ 4&\text{ if }y_{32}\not\equiv-y_{22}\pmod{3}.\end{cases}\] Proof.: The statement about loops is straightforward. Now suppose that \(X\) does not have loops. Let \(M_{X}\) be the matrix in the lemma statement. The first column's sum is odd, so \(X\) cannot be bipartite, by [1, Lemma 2.11]. If \(y_{32}\equiv-y_{22}\pmod{3}\), then we get \(\chi(X)=3\) by applying [1, Lemma 2.12]. 
Now assume that \(y_{32}\not\equiv-y_{22}\pmod{3}\), in other words that \(y_{32}=-y_{22}\pm 1+3\ell\) for some integer \(\ell\). First we show that \(\chi(X)>3\), by showing that \(X\) contains a diamond lanyard. Let \(H\) be the subgroup of \(\mathbb{Z}^{3}\) generated by the columns of \(M_{X}\). Recall that vertices of \(X\) are of the form \((a,b,c)^{t}+H\), which we denote by \(\overline{(a,b,c)^{t}}\). The vertices \(\overline{(0,0,0)^{t}},\overline{(0,1,0)^{t}},\overline{(0,0,-1)^{t}}\), and \(\overline{(0,1,-1)^{t}}\) form a diamond in \(X\). Shifting this diamond \(x-1\) times by \(\overline{(0,1,-1)^{t}}\) and concatenating, we produce an unclasped diamond lanyard \(L_{1}\) of length \(x\) with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,x,-x)^{t}}\). In a similar vein, we have an unclasped diamond lanyard of length \(2\) formed by one diamond with vertices \(\overline{(0,0,0)^{t}},\overline{(0,1,0)^{t}},\overline{(0,1,1)^{t}}\), and \(\overline{(0,2,1)^{t}}\) and another diamond with vertices \(\overline{(0,2,1)^{t}},\overline{(0,2,0)^{t}},\overline{(0,3,1)^{t}}\), and \(\overline{(0,3,0)^{t}}\). Its endpoints are \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,3,0)^{t}}\). Assume for now that \(\ell\geq 1\). Shifting \(\ell-1\) times by \(\overline{(0,3,0)^{t}}\) and concatenating, we produce an unclasped diamond lanyard \(L_{2}\) of length \(2\ell\) with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,3\ell,0)^{t}}\). Conjoining \(L_{1}\) and \(L_{2}\) gives us an unclasped diamond lanyard of length \(x+2\ell\) with endpoints \(\overline{(0,x,-x)^{t}}\) and \(\overline{(0,3\ell,0)^{t}}\). Taking \(x=\pm 1+y_{22}+3\ell\) produces an edge between \(\overline{(0,x,-x)^{t}}\) and \(\overline{(0,3\ell,0)^{t}}\), and thus we have a clasped diamond lanyard in \(X\). By Lemma 2.3, we have that \(\chi(X)\geq 4\). A similar procedure gives the same result when \(\ell\leq 0\). Now we show that \(\chi(X)\leq 4\). By [1, Lemma 2.10] we have a homomorphism \[\begin{pmatrix}1&0\\ 1&y_{22}\\ 1&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\oplus}\begin{pmatrix}1&0\\ 2&y_{22}+y_{32}\end{pmatrix}_{Y}^{\text{SACG}}.\] The results of §2.3 imply that we fail to get a \(4\)-coloring if and only if \(y_{22}+y_{32}\in\{-5,-2,-1,1,2,5\}\). First we consider the case where \(y_{22}+y_{32}=-5\). Then \[\begin{pmatrix}1&0\\ 1&y_{22}\\ 1&-y_{22}-5\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes}\begin{pmatrix}2&-y_{22}-5\\ 1&y_{22}\end{pmatrix}^{\text{SACG}}\cong\begin{pmatrix}1&0\\ 2&3y_{22}+5\end{pmatrix}_{Y^{\prime}}^{\text{SACG}}.\] Here we use [1, Lemmas 2.10 and 2.6], and from now on we shall do this sort of thing without referring to these lemmas each time. By Theorem 2.9, we have that \(Y^{\prime}\) is \(4\)-colorable unless \(y_{22}=-1\). But in that case we have: \[\begin{pmatrix}1&0\\ 1&-1\\ 1&-4\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes}\begin{pmatrix}2&-1\\ 1&-4\end{pmatrix}^{\text{SACG}}\cong C_{7}(1,4),\] which is \(4\)-colorable. Here we use Theorems 2.9 and 2.6 as well as Lemma 2.7. In the sequel, usually we will simply compute chromatic numbers of graphs with \(2\times 2\) matrices using the results of §2.3 without referring to the specific theorems and lemmas used. The other cases, where \(y_{22}+y_{32}\) equals \(-2\) or \(-1\) or \(1\) or \(2\) or \(5\), can each be dealt with in a similar way. We omit the proofs here, but complete details can be found in our authors' notes, which are housed on the second author's website [3].
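Again purely as an illustrative check (no step of the proof relies on it), the following Python sketch tests adjacency in \(X\) directly from the definitions: two cosets are adjacent exactly when their difference is congruent to some \(\pm e_{j}\) modulo \(H\), where here \(H\) is generated by \((1,1,1)^{t}\) and \((0,y_{22},y_{32})^{t}\). It verifies that the four cosets listed at the start of the proof of Lemma 2.15 form a diamond, for the sample values \(y_{22}=2\), \(y_{32}=5\). The helper names are ours.

```python
from itertools import combinations

def in_H(v, y22, y32):
    """Is v an integer combination of (1,1,1) and (0, y22, y32)?"""
    a, b, c = v
    p, q = b - a, c - a        # need one integer t with t*y22 == p and t*y32 == q
    if y22 == 0 and y32 == 0:
        return p == 0 and q == 0
    if y22 != 0:
        return p % y22 == 0 and (p // y22) * y32 == q
    return p == 0 and q % y32 == 0

def adjacent(u, v, y22, y32):
    """u ~ v in X iff u - v is congruent to some ±e_j modulo H."""
    d = [u[i] - v[i] for i in range(3)]
    shifts = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    return any(in_H((d[0] - s * e[0], d[1] - s * e[1], d[2] - s * e[2]), y22, y32)
               for e in shifts for s in (1, -1))

def is_diamond(quad, y22, y32):
    """Four vertices span a diamond when exactly 5 of the 6 pairs are edges."""
    return sum(adjacent(u, v, y22, y32) for u, v in combinations(quad, 2)) == 5

quad = [(0, 0, 0), (0, 1, 0), (0, 0, -1), (0, 1, -1)]
print(is_diamond(quad, 2, 5))   # True: the first diamond in the proof, with y22 = 2, y32 = 5
```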
Next we tackle "L-shaped" matrices. By performing row and column operations not unlike those in §2.4, it suffices to consider only matrices with some additional restrictions imposed. **Lemma 2.16**.: _Suppose we have_ \[\begin{pmatrix}y_{11}&0\\ y_{21}&0\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}},\] _where \(y_{11},y_{21},y_{32}>0\) and \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\)._ _Then:_ 1. _We have that_ \(X\) _has loops if and only if_ \(y_{32}=1\)_._ 2. _We have that_ \(\chi(X)=2\) _if and only if_ \(y_{11}+y_{21}+y_{31}\) _and_ \(y_{32}\) _are both even._ 3. _We have that_ \(\chi(X)=4\) _if and only if_ \(y_{11}=y_{21}=-y_{31}=1\) _and_ \(3\nmid y_{32}\) _and_ \(y_{32}>1\)_._ 4. _Otherwise,_ \(\chi(X)=3\)_._ Proof.: The first two statements are straightforward to prove. Now suppose that \(X\) does not have loops (i.e., that \(y_{32}\geq 2\)) and is not bipartite. Let \(M_{X}\) be the matrix in the lemma statement. We have that \(3\) divides either \(y_{11}\), \(y_{21}\), \(y_{11}+y_{21}\), or \(y_{11}-y_{21}\), and we divvy into cases accordingly. First suppose that \(3\,|\,y_{11}\). We have \[\begin{pmatrix}y_{11}&0\\ y_{21}&0\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\stackrel{{\text{$\emptyset$}}}{{\rightarrow}}\begin{pmatrix}y_{11}&0\\ y_{21}+y_{31}&y_{32}\end{pmatrix}_{Y}^{\text{SACG}}.\] We see that \(3\,|\,\det M_{Y}\), where \(M_{Y}\) is the Heuberger matrix for \(Y\) shown above. By Cor. 2.10, it follows that \(Y\) is \(3\)-colorable unless it has loops. But this cannot happen, because \(3\,|\,y_{11}\) and \(y_{11}>0\) and \(y_{32}\geq 2\). The case where \(3\,|\,y_{21}\) is handled similarly, as is the case where \(3\,|\,y_{11}+y_{21}\); in this latter case, begin with a homomorphism that "collapses" the top two rows by adding them. Finally, suppose that \(3\,|\,y_{11}-y_{21}\). We may assume without loss of generality that \(y_{11}\geq y_{21}\), for if not, then swap the top two rows. We have \[\begin{pmatrix}y_{11}&0\\ y_{21}&0\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\stackrel{{\text{$\emptyset$}}}{{\rightarrow}}\begin{pmatrix}y_{11}-y_{21}&0\\ y_{31}&y_{32}\end{pmatrix}_{Y}^{\text{SACG}}.\] By Cor. 2.10, it follows that \(Y\) is \(3\)-colorable unless it has loops. Because \(y_{32}\geq 2\) and \(3\,|\,y_{11}-y_{21}\), this occurs if and only if (i) \(y_{11}-y_{21}=1\) and \(y_{32}\,|\,y_{31}\), or (ii) \(y_{11}-y_{21}=0\) and \(\gcd(y_{31},y_{32})=1\). Suppose (i) occurs. By our assumption that \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\), we must have that \(y_{31}=0\). But then by [1, Lemma 2.7, Theorem 2.15, and Example 2.1], we have that \(X\) is \(3\)-colorable. Now suppose that (ii) occurs. Let \(\alpha,\beta\in\mathbb{Z}\) such that \(\alpha y_{31}+\beta y_{32}=1\). Multiply \(M_{X}\) on the right by the unimodular matrix \[\begin{pmatrix}\alpha&-y_{32}\\ \beta&y_{31}\end{pmatrix}\] to get \[\begin{pmatrix}y_{11}&0\\ y_{11}&0\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}\alpha y_{11}&-y_{11}y_{32}\\ \alpha y_{11}&-y_{11}y_{32}\\ 1&0\end{pmatrix}_{X}^{\text{SACG}}\stackrel{{\text{$\emptyset$}}}{{\rightarrow}}\begin{pmatrix}1&0\\ -2\alpha y_{11}&2y_{11}y_{32}\end{pmatrix}_{Y^{\prime}}^{\text{SACG}}\] Provided \(Y^{\prime}\) does not have loops, \(Y^{\prime}\) is isomorphic to the Heuberger circulant \(C_{n^{\prime}}(a^{\prime},b^{\prime})\) with \(n^{\prime}=2y_{11}y_{32}\), \(a^{\prime}=2\alpha y_{11}\), \(b^{\prime}=1\).
So \(Y^{\prime}\) is \(3\)-colorable unless it has loops or one of the exceptional cases in Theorem 2.6 occurs. We now deal with these possibilities one at a time. Suppose \(Y^{\prime}\) has loops. Because \(y_{11}>0\) and \(y_{32}\geq 2\), this occurs if and only if \(2y_{11}y_{32}\,|\,2\alpha y_{11}\), which happens if and only if \(y_{32}\,|\,\alpha\). But then using column operations, we get that \(X\) has loops, contrary to our assumptions. Observe that \(n^{\prime}=\pm 5\) and \(n^{\prime}=\pm 13\) cannot happen, because \(n^{\prime}\) is even. In the remaining cases, we show \(y_{11}=1\), then we proceed from there. Suppose \(a^{\prime}\equiv\pm 2b^{\prime}\pmod{n^{\prime}}\). So \(2\alpha y_{11}\equiv\pm 2\pmod{2y_{11}y_{32}}\). So \(\alpha y_{11}\equiv\pm 1\pmod{y_{11}y_{32}}\). So \(y_{11}\mid\pm 1\), which implies \(y_{11}=1\). Suppose \(2a^{\prime}\equiv\pm b^{\prime}\pmod{n^{\prime}}\). So \(4\alpha y_{11}\equiv\pm 1\pmod{2y_{11}y_{32}}\). So \(y_{11}\mid\pm 1\), which implies \(y_{11}=1\). Thus we have: \[\begin{pmatrix}1&0\\ 1&0\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\stackrel{{\emptyset}}{{\to}}\begin{pmatrix}1&0\\ y_{31}+1&y_{32}\end{pmatrix}_{Y^{\prime\prime}}^{\text{SACG}}\] Then \(Y^{\prime\prime}\) has loops if and only if \(y_{32}\mid y_{31}+1\). But then \(y_{31}=ky_{32}-1\) for some integer \(k\). From our assumption that \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\), we must have that \(k=0\) and \(y_{31}=-1\). The conclusion of the lemma now follows in this case from Lemma 2.15. So now we can assume that \(y_{31}<-1\). Then by Lemma 2.7, we have that \(Y^{\prime\prime}\) is isomorphic to the Heuberger circulant \(C_{n^{\prime\prime}}\big{(}a^{\prime\prime},b^{\prime\prime}\big{)}\) with \(n^{\prime\prime}=y_{32}\), \(a^{\prime\prime}=-y_{31}-1\), \(b^{\prime\prime}=1\). Suppose \(Y^{\prime\prime}\) is not \(3\)-colorable. So one of the exceptional cases in Theorem 2.6 occurs. We now deal with these possibilities one at a time. \(\flat\) Suppose that \(n^{\prime\prime}=5\) and \(a^{\prime\prime}\equiv\pm 2b^{\prime\prime}\pmod{5}\). Then \(y_{32}=5\) and \(a^{\prime\prime}\equiv 2\) or \(3\) mod \(5\). Hence \(y_{31}=-2\), using that \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\) as well as that \(y_{31}<-1\). But then \[\begin{pmatrix}1&0\\ 1&0\\ -2&5\end{pmatrix}_{X}^{\text{SACG}}\to\begin{pmatrix}1&0\\ -1&5\end{pmatrix}^{\text{SACG}}\] gives us a \(3\)-coloring of \(X\). \(\flat\) Suppose \(n^{\prime\prime}=13\) and one of \(a^{\prime\prime}\) or \(b^{\prime\prime}\) is congruent to \(\pm 5\) times the other modulo \(13\). Then \(y_{32}=13\) and \(a^{\prime\prime}\equiv 5\) or \(8\) modulo \(13\). From \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\) and \(y_{31}<-1\) and \(a^{\prime\prime}=-y_{31}-1\), we get \(y_{31}=-6\). But then \[\begin{pmatrix}1&0\\ 1&0\\ -6&13\end{pmatrix}_{X}^{\text{SACG}}\to\begin{pmatrix}2&0\\ -6&13\end{pmatrix}^{\text{SACG}}\to\begin{pmatrix}1&0\\ -3&13\end{pmatrix}^{\text{SACG}}\] gives us a map to a \(3\)-colorable Heuberger circulant. \(\flat\) Suppose \(a^{\prime\prime}\equiv 2b^{\prime\prime}\pmod{n^{\prime\prime}}\). Then \(y_{32}\mid y_{31}+3\). Note we cannot have \(y_{31}=-2\), since then \(y_{32}\mid 1\), but \(y_{32}>1\). If \(y_{31}=-3\), then we have \[\begin{pmatrix}1&0\\ 1&0\\ -3&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\to\begin{pmatrix}2&0\\ -3&y_{32}\end{pmatrix}_{Z}^{\text{SACG}}\] The graph \(Z\) cannot have loops. Let \(a=3,b=2,n=2y_{32}\).
Then \(Z\) is isomorphic to the Heuberger circulant \(C_{n}(a,b)\). We cannot have \(n=5\) or \(n=13\), because \(n\) is even. If \(a\equiv 2b\pmod{n}\), then \(2y_{32}\mid-1\), which cannot happen. If \(a\equiv-2b\pmod{n}\), then \(2y_{32}\mid 7\), which cannot happen. If \(2a\equiv b\pmod{n}\), then \(2y_{32}\mid 4\), which implies that \(y_{32}=2\), but this violates \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\). If \(2a\equiv-b\pmod{n}\), then \(2y_{32}\mid 8\), which implies that \(y_{32}=2\) or \(y_{32}=4\), both of which violate \(-\frac{y_{32}}{2}\leq y_{31}\leq 0\). Now assume \(y_{31}<-3\), which implies that \(y_{31}+3<0\). So from \(y_{32}\mid y_{31}+3\), we get that \(y_{32}\leq-3-y_{31}\leq-3+\frac{y_{32}}{2}\). But \(y_{32}>0\), so this cannot happen. * Suppose that \(a^{\prime\prime}\equiv-2b^{\prime\prime}\pmod{n^{\prime\prime}}\). Then \(y_{32}\mid y_{31}-1\). So \(y_{32}\leq 1-y_{31}\leq 1+\frac{y_{32}}{2}\). But \(y_{32}>1\). * Suppose that \(2a^{\prime\prime}\equiv b^{\prime\prime}\pmod{n^{\prime\prime}}\). Then \(y_{32}\mid 2y_{31}+3\). Because \(y_{31}\leq-2\), we have \(2y_{31}+3<0\). Hence \(y_{32}\leq-2y_{31}-3\leq y_{32}-3\), which is a contradiction. * Suppose that \(2a^{\prime\prime}\equiv-b^{\prime\prime}\pmod{n^{\prime\prime}}\). Then \(y_{32}\mid 2y_{31}+1\). So \(y_{32}\leq-2y_{31}-1\leq y_{32}-1\), which is a contradiction. In our final preparatory step, we contemplate "\(I\) on top" matrices. As before, we obtain upper bounds by mapping to graphs with \(2\times 2\) matrices, and we obtain a lower bound in some cases by finding diamond lanyards as subgraphs. **Lemma 2.17**.: _Suppose \(X\) is a standardized abelian Cayley graph with an associated Heuberger matrix_ \[M_{X}=\begin{pmatrix}1&0\\ 0&1\\ y_{31}&y_{32}\end{pmatrix},\] _where \(y_{31},y_{32}>0\) and \(y_{31}\leq y_{32}\). Then_ \[\chi(X)=\begin{cases}2&\text{if $y_{31}$ and $y_{32}$ are both odd}\\ 4&\text{if $y_{31}=2$ and $3\mid y_{32}$}\\ 4&\text{if $1\not\equiv y_{31}\pmod{3}$ and $y_{32}=1+y_{31}$}\\ 3&\text{otherwise}.\end{cases}\] Before embarking on the proof, we note that by [1, Example 2.4], we have that \(X\) is isomorphic to the distance graph \(\operatorname{Cay}(\mathbb{Z},\{\pm 1,\pm y_{31},\pm y_{32}\})\). Hence Lemma 2.17 is a special case of Zhu's theorem, as discussed in [2]. We offer here an alternative proof using Heuberger matrices. Proof.: [1, Lemma 2.11] implies that \(\chi(X)=2\) if and only if \(y_{31}\) and \(y_{32}\) are both odd. To show that \(X\) is \(4\)-colorable, consider \[\left(\begin{array}{cc}1&0\\ 0&1\\ y_{31}&y_{32}\end{array}\right)_{X}^{\text{SACG}}\stackrel{{\text{ \tiny SACG}}}{{\Rightarrow}}\left(\begin{array}{cc}1&0\\ y_{31}&1+y_{32}\end{array}\right)_{Y}^{\text{SACG}}\] By Lemma 2.8 and 2.7, we see that \(Y\) does not contain loops and is isomorphic to the Heuberger circulant \(C_{1+y_{32}}(1,y_{31})\). So \(Y\) is \(4\)-colorable unless \(y_{32}=4\) and \(y_{31}\in\{2,3\}\). But in these cases respectively take \[\left(\begin{array}{cc}1&0\\ 0&1\\ 2&4\end{array}\right)_{X}^{\text{SACG}}\stackrel{{\text{\tiny SACG }}}{{\Rightarrow}}\left(\begin{array}{cc}1&0\\ 2&3\end{array}\right)^{\text{SACG}}\text{ and }\left(\begin{array}{cc}1&0\\ 0&1\\ 3&4\end{array}\right)_{X}^{\text{SACG}}\stackrel{{\text{\tiny SACG }}}{{\Rightarrow}}\left(\begin{array}{cc}1&-1\\ 3&4\end{array}\right)^{\text{SACG}}.\] Indeed, \(Y\) is \(3\)-colorable unless one of the six exceptional cases in Theorem 2.6 occurs. 
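(For reference, the piecewise formula of Lemma 2.17, equivalently Zhu's theorem specialized to the distance graph \(\operatorname{Cay}(\mathbb{Z},\{\pm 1,\pm y_{31},\pm y_{32}\})\), can be tabulated as follows. The sketch is purely illustrative and simply restates the lemma.)

```python
def chi_three_distance(y31, y32):
    """Chromatic number of Cay(Z, {±1, ±y31, ±y32}) as given by Lemma 2.17,
    assuming integers 0 < y31 <= y32.  Illustrative transcription only."""
    assert 0 < y31 <= y32
    if y31 % 2 == 1 and y32 % 2 == 1:
        return 2                       # all generators odd: color by parity
    if y31 == 2 and y32 % 3 == 0:
        return 4
    if y31 % 3 != 1 and y32 == y31 + 1:
        return 4
    return 3

print([chi_three_distance(a, b) for a, b in [(1, 3), (2, 6), (3, 4), (2, 5)]])
# -> [2, 4, 4, 3]
```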
Those cases each place restrictions on \(y_{31}\) and \(y_{32}\), whereupon we can modify the mapping appropriately to try to get a \(3\)-coloring. One can show that this procedure will produce a \(3\)-coloring unless either \(y_{31}=2\) and \(3\mid y_{32}\), or else \(1\not\equiv y_{31}\pmod{3}\) and \(y_{32}=1+y_{31}\). The logic is quite similar to that in the proofs of Lemmas 2.15 and 2.16, so we omit it here. Complete details can be found in our authors' notes, which are housed on the second author's website [3]. Finally, we show that if either \(y_{31}=2\) and \(3\mid y_{32}\), or else \(1\not\equiv y_{31}\pmod{3}\) and \(y_{32}=1+y_{31}\), then \(X\) contains a diamond lanyard. By Lemma 2.3, this will show that \(\chi(X)\geq 4\) in these cases. We note that this is essentially what Zhu does in [8] to find a lower bound on the fractional chromatic number of distance graphs such as these. Let \(H\) be the subgroup of \(\mathbb{Z}^{3}\) generated by the columns of \(M_{X}\). We denote by \(\overline{(a,b,c)^{t}}\) the vertex \((a,b,c)^{t}+H\) of \(X\). Suppose \(y_{31}=2\) and \(3\mid y_{32}\). We have that \(y_{32}=3k\) for some positive integer \(k\). There is a diamond in \(X\) with vertices \(\overline{(0,0,0)^{t}},\overline{(0,0,1)^{t}},\overline{(0,0,2)^{t}}\), and \(\overline{(0,0,3)^{t}}\). Shifting this \(k-1\) times by \(\overline{(0,0,3)^{t}}\) and concatenating, we obtain a diamond lanyard of length \(k\) with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,0,3k)^{t}}\). Now suppose that \(1\not\equiv y_{31}\pmod{3}\) and \(y_{32}=1+y_{31}\). So either \(y_{31}\) or \(y_{32}\) equals \(3k\) for some positive integer \(k\). We have a diamond in \(X\) with vertices \(\overline{(0,0,0)^{t}},\overline{(0,0,1)^{t}},\overline{(0,0,y_{32})^{t}}\), and \(\overline{(0,0,y_{32}+1)^{t}}\). Shifting this by \(\overline{(0,0,y_{32}+1)^{t}}\), we obtain an unclasped diamond lanyard of length two with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,0,2y_{32}+2)^{t}}\). Append to this a diamond with vertices \(\overline{(0,0,2y_{32}+2)^{t}},\overline{(0,0,y_{32}+3)^{t}},\overline{(0,0,y_{32}+2)^{t}}\), and \(\overline{(0,0,3)^{t}}\). We thus obtain an unclasped diamond lanyard of length three with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,0,3)^{t}}\). Shifting this \(k-1\) times by \(\overline{(0,0,3)^{t}}\) and concatenating, we obtain a diamond lanyard of length \(3k\) with endpoints \(\overline{(0,0,0)^{t}}\) and \(\overline{(0,0,3k)^{t}}\). Finally, we turn our attention to proving Theorem 2.14. The essence of the proof is to show that if we do not have a homomorphism from \(X\) to a \(3\)-colorable graph with a \(2\times 2\) matrix, then \(M_{X}\) must be in a form where (perhaps after some manipulations) one of the preceding three lemmas applies. Proof of Theorem 2.14.: The first statement follows from Lemma 2.13, and the second follows from [1, Lemma 2.11]. Now suppose that \(M_{X}\) is one of the six types of matrices listed in the third statement. Lemma 2.17 shows that if \(M_{X}\) is of the form \(\begin{pmatrix}1&0\\ 0&1\\ 3k&1+3k\end{pmatrix}\), then \(\chi(X)=4\). For the other five, we can perform row and column operations as per [1, Lemma 2.6] to obtain a matrix for an isomorphic graph so that either Lemma 2.17 or 2.15 proves the third statement of the theorem. For example, suppose \(M_{X}\) is of the form \(\begin{pmatrix}1&0\\ -1&2\\ -1-3k&2+3k\end{pmatrix}\) for an integer \(k\). 
Add the second column to the first and then multiply the third row by \(-1\) to produce the matrix \(\begin{pmatrix}1&0\\ 1&2\\ 1&-2-3k\end{pmatrix}\), whereupon Lemma 2.15 applies. We leave the computations in the other four cases to the reader. Finally, assume that none of the first three statements apply. We will show that \(X\) has a \(3\)-coloring. Take the following mapping: \[\begin{pmatrix}y_{11}&0\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes}\begin{pmatrix} y_{11}&0\\ y_{21}-y_{31}&y_{22}-y_{32}\end{pmatrix}_{Y}^{\text{SACG}}\] Let \(M_{Y}\) be the \(2\times 2\) matrix given above for \(Y\). From Def. 2.11 we have that \(3\mid\det M_{Y}\). So by Cor. 2.10, we have that \(Y\) (and hence \(X\)) is \(3\)-colorable unless \(Y\) has loops. (We remark that we imposed the third condition in Def. 2.11 specifically so that we can use Cor. 2.10 right here.) By Lemma 2.8 and Def. 2.11, we have that either (i) \(y_{22}-y_{32}=-1\), or (ii) \(y_{11}=1\) and \(y_{22}-y_{32}\mid y_{21}-y_{31}\). Suppose (i) holds. Because \(3\mid\det M_{Y}\), we must have that \(3\mid y_{11}\). Now consider \[\begin{pmatrix}y_{11}&0\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes}\begin{pmatrix} y_{11}&0\\ y_{21}+y_{31}&y_{22}+y_{32}\end{pmatrix}^{\text{SACG}}\] By Cor. 2.10, this produces a \(3\)-coloring unless the target graph has loops. But by Lemma 2.8, this occurs if and only if \(y_{22}+y_{32}=\pm 1\), which gives us that \(y_{22}=0\) and \(y_{32}=1\). But then the first statement in the theorem holds, contrary to assumption. Thus (ii) holds. Because \(3\mid\det M_{Y}\), we must have \(y_{32}=y_{22}+3k\) for some integer \(k\geq 0\). (Here we use that \(y_{32}\geq y_{22}\).) Also \(\ell(y_{22}-y_{32})=y_{21}-y_{31}\) for some integer \(\ell\), which gives us that \(y_{31}=y_{21}+3k\ell\). So we have \[M_{X}=\begin{pmatrix}1&0\\ y_{21}&y_{22}\\ y_{21}+3k\ell&y_{22}+3k\end{pmatrix}. \tag{3}\] Take the mapping \[\begin{pmatrix}1&0\\ y_{21}&y_{22}\\ y_{21}+3k\ell&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes} \begin{pmatrix}1&0\\ 2y_{21}+3k\ell&2y_{22}+3k\end{pmatrix}_{Y^{\prime}}^{\text{SACG}}.\] Either \(Y^{\prime}\) has loops, or else by Lemma 2.7 we have that \(Y^{\prime}\) is isomorphic to the Heuberger circulant \(C_{n^{\prime}}(a^{\prime},b^{\prime})\) with \(n^{\prime}=2y_{22}+3k\) and \(a^{\prime}=-2y_{21}-3k\ell\) and \(b^{\prime}=1\). \(\flat\) First suppose that \(Y^{\prime}\) has loops. By Lemma 2.8, this occurs if and only if \(2y_{22}+3k\mid 2y_{21}+3k\ell\). Then \(2y_{21}+3k\ell=(2y_{22}+3k)q\) for some \(q\in\mathbb{Z}\). So \(y_{21}=qy_{22}+\frac{3}{2}k(q-\ell)\). 
Letting \(t=q-\ell\), by various column operations we have \[\begin{pmatrix}1&0\\ y_{21}&y_{22}\\ y_{21}+3k\ell&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ qy_{22}+\frac{3}{2}k(q-\ell)&y_{22}\\ qy_{22}+\frac{3}{2}k(q+\ell)&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ \frac{3}{2}k(q-\ell)&y_{22}\\ \frac{3}{2}k(-q+\ell)&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\] \[=\begin{pmatrix}1&0\\ \frac{3}{2}kt&y_{22}\\ -\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\otimes}\begin{pmatrix}\frac{3}{2}kt&y_{22}\\ -1-\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{Y^{\prime\prime}}^{\text{SACG}}\] Either \(Y^{\prime\prime}\) has loops, or else by Lemma 2.7 we have that \(Y^{\prime\prime}\) is isomorphic to the Heuberger circulant \(C_{n^{\prime\prime}}(a^{\prime\prime},b^{\prime\prime})\) with \(n^{\prime\prime}=(3kt+1)y_{22}+\frac{9}{2}k^{2}t\) and \(a^{\prime\prime}=\frac{3}{2}kt+1\) and \(b^{\prime\prime}=\frac{3}{2}kt\). \(\flat\)\(\flat\)\(\flat\) First suppose \(Y^{\prime\prime}\) has loops. Let \(M_{Y^{\prime\prime}}\) be the given matrix for \(Y^{\prime\prime}\). By Lemma 2.8 we have that either the top or bottom row of \(M_{Y^{\prime\prime}}\) is zero, or else \(n^{\prime\prime}\) divides every entry in a row of \(M_{Y^{\prime\prime}}\). If the first row is zero, then \(kt=0\) and \(X\) had loops to begin with, contrary to assumption. If the second row is zero, then we have \(-1-\frac{3}{2}kt=0\) and \(y_{22}+3k=0\), so \[\begin{pmatrix}1&0\\ \frac{3}{2}kt&y_{22}\\ -\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ -1&-3k\\ 1&0\end{pmatrix}_{X}^{\text{SACG}}\cong\begin{pmatrix}1&0\\ 1&3k\\ 1&0\end{pmatrix}^{\text{SACG}},\] which is 3-colorable by Lemma 2.15. Now suppose that \(n^{\prime\prime}\) divides every entry in either the first or second row of \(M_{Y^{\prime\prime}}\). We will work out here the details of the former case; for the latter, which is similar, see the authors' notes at [3]. We have that \(n^{\prime\prime}\mid\frac{3}{2}kt\) and \(n^{\prime\prime}\mid y_{22}\). Observe that because \(X\) does not have loops, we have \(t\neq 0\) and \(k\neq 0\) (and therefore \(k>0\)). We split into cases according to whether \(t\) is positive or negative. First suppose \(t>0\). From \(n^{\prime\prime}\,\left|\,\frac{3}{2}kt\right.\) we get \[-\frac{3}{2}kt\leq(3kt+1)y_{22}+\frac{9}{2}k^{2}t\leq\frac{3}{2}kt\] \[\frac{-\frac{3}{2}kt-\frac{9}{2}k^{2}t}{3kt+1}\leq y_{22}\leq\frac{\frac{3}{2}kt-\frac{9}{2}k^{2}t}{3kt+1}\qquad\text{because $t>0$}\] \[-\frac{1}{2}-\frac{3}{2}k<y_{22}\leq-\frac{1}{2}-\frac{3}{2}k+\frac{-\frac{1}{2}+\frac{3}{2}k}{3kt+1}<-\frac{3}{2}k\] This cannot happen, because both \(k\) and \(y_{22}\) are integers. Now suppose \(t<0\). From \(n^{\prime\prime}\,\left|\,\frac{3}{2}kt\right.\) after some calculations we get that \[\frac{1}{2}-\frac{3}{2}k>y_{22}\geq-\frac{3}{2}-\frac{3}{2}k.\] Using the fact that \(y_{22}\) and \(k\) are integers, this tells us that \(y_{22}=-\frac{3}{2}k+\epsilon\) where \(\epsilon\in\{0,-\frac{1}{2},-1,-\frac{3}{2}\}\). We will work out here only the cases where \(\epsilon=0\) and \(\epsilon=-1\); the other two cases are similar, and details can be found at [3]. \(\flat\)\(\flat\)\(\flat\)\(\flat\) Because \(q\) is an integer, this gives us that \(q=-3\) or \(q=-4\). If \(q=-3\), then from (5) and from the fact that \(k\) is an odd positive integer, we get that \(k=1\). 
But then \(y_{22}=-\frac{3}{2}k+\epsilon=-2\) and \(y_{32}=y_{22}+3k=1\), violating the condition \(|y_{22}|\leq|y_{32}|\) from Def. 2.11. So \(q=-4\). But this contradicts (4). \(\flat\) Now suppose that \(Y^{\prime}\) does not have loops, so \(Y^{\prime}\) is isomorphic to the Heuberger circulant \(C_{n^{\prime}}(a^{\prime},b^{\prime})\) with \(n^{\prime}=2y_{22}+3k\) and \(a^{\prime}=-2y_{21}-3k\ell\) and \(b^{\prime}=1\). We have that \(Y^{\prime}\) (and hence \(X\)) is \(3\)-colorable unless one of the six exceptional cases in Thm. 2.6 occurs. We work out here with some granularity only one of these cases, namely where \(3\nmid n^{\prime}\) and \(a^{\prime}\equiv 2b^{\prime}\;(\bmod\,n^{\prime})\). The other five cases are handled similarly. Complete details can be found in our authors' notes, which are housed on the second author's website [3]. From \(3\nmid n^{\prime}\) we get that \(3\nmid y_{22}\). From \(a^{\prime}\equiv 2b^{\prime}\;(\bmod\,n^{\prime})\) we get that \(y_{21}=qy_{22}-1+\frac{3}{2}k(q-\ell)\) for some integer \(q\). Let \(t=q-\ell\). Note \(k\) or \(t\) must be even. We have: \[\begin{pmatrix}1&0\\ y_{21}&y_{22}\\ y_{21}+3k\ell&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ qy_{22}-1+\frac{3}{2}k(q-\ell)&y_{22}\\ qy_{22}-1+\frac{3}{2}k(q-\ell)+3k\ell&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\] \[=\begin{pmatrix}1&0\\ qy_{22}-1+\frac{3}{2}k(q-\ell)&y_{22}\\ qy_{22}-1+\frac{3}{2}k(q+\ell)&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ -1+\frac{3}{2}k(q-\ell)&y_{22}\\ -1+\frac{3}{2}k(-q+\ell)&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}=\begin{pmatrix}1&0\\ -1+\frac{3}{2}kt&y_{22}\\ -1-\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\] Suppose \(t=0\). Then \(y_{21}=qy_{22}-1\). From Def. 2.11 we have that \(-\frac{|y_{22}|}{2}\leq y_{21}\leq 0\). Suppose \(y_{22}>0\). If \(y_{22}=1\), then \(y_{21}=0\), and in this case the theorem now follows from Lemma 2.17. So we may assume that \(y_{22}\geq 2\). Then from \(qy_{22}-1\leq 0\) we get that \(q\leq 0\). From \(-\frac{y_{22}}{2}\leq qy_{22}-1\) we then get that \(q=0\). From \(t=q-\ell\) we then get \(\ell=0\). So \(y_{21}=y_{31}=-1\). After multiplying the bottom row of \(M_{X}\) by \(-1\), the theorem now holds by Lemma 2.15. The same is true for similar reasons if \(y_{22}<0\). Hence we may assume that \(t\neq 0\). A similar argument shows that we may assume that \(k\neq 0\). We divide now into cases according to whether \(t>0\) or \(t<0\). We write here only about the case \(t<0\), the case \(t>0\) being similar. Consider the mapping \[\begin{pmatrix}1&0\\ -1+\frac{3}{2}kt&y_{22}\\ -1-\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\oplus}\begin{pmatrix}\frac{3}{2}kt&y_{22}\\ -1-\frac{3}{2}kt&y_{22}+3k\end{pmatrix}_{Y^{\prime\prime}}^{\text{SACG}}\] Either \(Y^{\prime\prime}\) has loops, or else by Lemma 2.7 we have that \(Y^{\prime\prime}\equiv C_{n^{\prime\prime}}(a^{\prime\prime},b^{\prime\prime})\) where \(a^{\prime\prime}=1+\frac{3}{2}kt\) and \(b^{\prime\prime}=\frac{3}{2}kt\) and \(n^{\prime\prime}=(3kt+1)y_{22}+\frac{9}{2}k^{2}t\). Let \(M_{Y^{\prime\prime}}\) be the Heuberger matrix for \(Y^{\prime\prime}\) given above. Because \(kt\neq 0\) and \(kt\in\mathbb{Z}\), it follows that neither row of \(M_{Y^{\prime\prime}}\) is a zero row. 
Lemma 2.8 then tells us that \(Y^{\prime\prime}\) has loops if and only if \(n^{\prime\prime}\) divides every entry in either the top or bottom row of \(M_{Y^{\prime\prime}}\). Otherwise, we have that \(Y^{\prime\prime}\) (and hence \(X\)) is \(3\)-colorable unless one of the six exceptional cases in Theorem 2.6 holds. This gives us a total of eight cases to consider. Of those, we write here only about the possibility that \(n^{\prime\prime}\) divides both \(-1-\frac{3}{2}kt\) and \(y_{22}+3k\). The other seven cases can be managed using the same sort of techniques we've employed throughout this subsection; a full exposition can be found in [3]. To recap, we now assume that \(n^{\prime\prime}\mid-1-\frac{3}{2}kt\) and \(n^{\prime\prime}\mid y_{22}+3k\) and \(t<0\) and \(k>0\). So \[1+\frac{3}{2}kt\leq(3kt+1)y_{22}+\frac{9}{2}k^{2}t\leq-1-\frac{3}{2}kt.\] Solving for \(y_{22}\) we find that \[\frac{1}{2}-\frac{3}{2}k>y_{22}\geq-1-\frac{3}{2}k.\] Because \(k,y_{22}\in\mathbb{Z}\), we have that \(y_{22}=-\frac{3}{2}k+\epsilon\) where \(\epsilon\in\{-1,-\frac{1}{2},0\}\). We write here only about the case where \(\epsilon=-1\). The cases where \(\epsilon=-\frac{1}{2}\) or \(\epsilon=0\) use the same sorts of techniques we've seen previously. So assume \(y_{22}=-\frac{3}{2}k-1\). Then \(n^{\prime\prime}=-3k(\frac{1}{2}+t)-1\) and \(y_{22}+3k=\frac{3}{2}k-1>0\). So from \(n^{\prime\prime}\mid y_{22}+3k\) we have that \[-3k\left(\frac{1}{2}+t\right)-1\leq\frac{3}{2}k-1.\] Solving for \(t\), we find that \(t\leq-1\). Because \(t\) is a negative integer, this implies that \(t=-1\). Recall that \(y_{21}=qy_{22}-1+\frac{3}{2}kt=(q+1)y_{22}\). By Def. 2.11 we have that \(\frac{y_{22}}{2}\leq y_{21}=(q+1)y_{22}\leq 0\). Here we use that \(y_{22}<0\). Dividing by \(y_{22}\) we get that \(\frac{1}{2}\geq q+1\geq 0\), so \(q=-1\), because \(q\) is an integer. Then \(y_{21}=0\). From \(t=q-\ell\) we get \(\ell=0\), so \(y_{21}=0\). But then \(X\) had loops, contrary to assumption. The operations performed to put a \(3\times 2\) matrix \(M\) into modified Hermite normal form do not affect the gcd of the determinants of the \(2\times 2\) minors of \(M\). Hence we have the following corollary to Theorem 2.14, in analogy to Corollary 2.10. **Corollary 2.18**.: _Suppose \(X\) is a standardized abelian Cayley graph with an associated \(3\times 2\) Heuberger matrix \(M_{X}\). If \(X\) does not have loops, and if \(3\) divides the determinant of every \(2\times 2\) minor of \(M_{X}\), then \(X\) is \(3\)-colorable._ Proof.: If \(M_{X}\) has a zero row, we may delete it without affecting the chromatic number, so in this case, the result follows from Cor. 2.10. If the columns of \(M_{X}\) are linearly dependent over \(\mathbb{Q}\), then after appropriate column operations we obtain a zero column, whereupon the result follows from [1, Thm. 2.15]. Otherwise, as noted just before the statement of this corollary, we may assume that \(M_{X}\) is in modified Hermite normal form. But for each of the six types of matrices \(M_{X}\) in the third statement in Theorem 2.14 for which \(\chi(X)>3\), at least one \(2\times 2\) minor has a determinant not divisible by \(3\). The result follows. It would be interesting to know whether Cor. 2.18 holds for matrices of arbitrary size. We conjecture that it does. Indeed, we remark that Thm. 2.14 can be recast entirely so that one can determine the chromatic number directly from an arbitrary Heuberger matrix \(M\), not necessarily in modified Hermite normal form. 
Namely, let \(\alpha,\beta,\gamma\) be the absolute values of the determinants of the \(2\times 2\) minors of \(M\), such that \(\alpha\leq\beta\leq\gamma\). The six exceptional cases in Thm. 2.14 occur precisely when (i) one row of \(M\) has relatively prime entries, and (ii) \(\alpha>0\), and (iii) either \(\{\alpha,\beta,\gamma\}=\{1,2,3k\}\) for some positive integer \(k\), or else \(\gamma=\alpha+\beta\) and \(\alpha\not\equiv\beta\pmod{3}\). The other cases (bipartite, loops, zero row, etc.) can easily be characterized directly from \(M\). We previously noted that for a standardized abelian Cayley graph \(X\) with an associated \(2\times 2\) Heuberger matrix \(M_{X}\): (*) If \(X\) does not have loops, then \(X\) fails to be \(4\)-colorable if and only if it contains \(K_{5}\) as a subgraph, and it fails to be \(3\)-colorable if and only if it contains either \(C_{13}(1,5)\) or a diamond lanyard as a subgraph. Consequently, (*) holds for a \(3\times 2\) matrix with a zero row, as the corresponding graph equals a box product of a doubly infinite path graph and a graph with a \(2\times 2\) matrix. Each of the six exceptional cases in the third statement in Theorem 2.14 contains a diamond lanyard as a subgraph, as we saw in the proofs of Lemmas 2.15, 2.16, and 2.17. Thus (*) holds also for every standardized abelian Cayley graph \(X\) with an associated \(3\times 2\) Heuberger matrix \(M_{X}\). An algorithm to find the chromatic number for \(1\times r,m\times 1,2\times r\), or \(3\times 2\) matrices In this subsection, we provide a "quick-reference guide" to the results of this section. Specifically, we spell out a procedure to determine the chromatic number of a standardized abelian Cayley graph with a Heuberger matrix \(M_{X}\) of size \(1\times r,m\times 1,2\times r\), or \(3\times 2\), where \(m\) and \(r\) are positive integers. This procedure can easily be converted into code or pseudocode. Indeed, an implementation of this algorithm in _Mathematica_ can be found at [3]. \(\bullet\) If \(M_{X}\) is of size \(m\times 1\), apply [1, Theorem 2.15]. \(\bullet\) If \(M_{X}\) is of size \(1\times r\), apply Lemma 2.1. \(\bullet\) If \(M_{X}\) is of size \(2\times 2\), apply column operations as per [1, Lemma 2.6] to produce a lower-triangular matrix. Then apply Theorem 2.9; if the last statement in that theorem holds, use Theorem 2.6 to complete the final step in the computation. \(\bullet\) If \(M_{X}\) is of size \(2\times r\) for \(r>2\), perform column operations as per [1, Lemma 2.6] to produce a zero column. Delete that column, and iterate this procedure until you have a \(2\times 2\) matrix. Then use the procedure from the previous bullet point. \(\bullet\) If \(M_{X}\) is of size \(3\times 2\), do the following. If \(M_{X}\) has a zero row, delete that row, then use the procedure for \(2\times 2\) matrices to find the chromatic number of the graph with the resulting matrix. If the columns of \(M_{X}\) are linearly dependent over \(\mathbb{Q}\), perform column operations as per [1, Lemma 2.6] to produce a zero column. Delete that column, and then apply [1, Theorem 2.15]. Otherwise, use row and column operations as per Lemma 2.12 to find an isomorphic graph \(X^{\prime}\) with a Heuberger matrix \(M_{X^{\prime}}\) in modified Hermite normal form. Then apply Theorem 2.14. ## Acknowledgments The authors wish to thank Tim Harris for his careful read of an early version of this manuscript and for his many helpful suggestions.
2303.09141
On a fundamental problem in the analysis of cancer registry data
In epidemiology research with cancer registry data, it is often of primary interest to make inference on cancer death, not overall survival. Since cause of death is not easy to collect or is not necessarily reliable in cancer registries, some special methodologies have been introduced and widely used by using the concepts of the relative survival ratio and the net survival. In making inference of those measures, external life tables of the general population are utilized to adjust the impact of non-cancer death on overall survival. The validity of this adjustment relies on the assumption that mortality in the external life table approximates non-cancer mortality of cancer patients. However, the population used to calculate a life table may include cancer death and cancer patients. Sensitivity analysis proposed by Talb\"{a}ck and Dickman to address it requires additional information which is often not easily available. We propose a method to make inference on the net survival accounting for potential presence of cancer patients and cancer death in the life table for the general population. The idea of adjustment is to consider correspondence of cancer mortality in the life table and that in the cancer registry. We realize a novel method to adjust cancer mortality in the cancer registry without any additional information to the standard analyses of cancer registries. Our simulation study revealed that the proposed method successfully removed the bias. We illustrate the proposed method with the cancer registry data in England.
Sho Komukai, Satoshi Hattori, Bernard Rachet
2023-03-16T08:09:49Z
http://arxiv.org/abs/2303.09141v1
# On a fundamental problem in the analysis of cancer registry data ###### Abstract In epidemiology research with cancer registry data, it is often of primary interest to make inference on cancer death, not overall survival. Since cause of death is not easy to collect or is not necessarily reliable in cancer registries, some special methodologies have been introduced and widely used by using the concepts of the relative survival ratio and the net survival. In making inference of those measures, external life tables of the general population are utilized to adjust the impact of non-cancer death on overall survival. The validity of this adjustment relies on the assumption that mortality in the external life table approximates non-cancer mortality of cancer patients. However, the population used to calculate a life table may include cancer death and cancer patients. Sensitivity analysis proposed by Talb\(\check{\mbox{a}}\)ck and Dickman to address it requires additional information which is often not easily available. We propose a method to make inference on the net survival accounting for potential presence of cancer patients and cancer death in the life table for the general population. The idea of adjustment is to consider correspondence of cancer mortality in the life table and that in the cancer registry. We realize a novel method to adjust cancer mortality in the cancer registry without any additional information to the standard analyses of cancer registries. Our simulation study revealed that the proposed method successfully removed the bias. We illustrate the proposed method with the cancer registry data in England. Keywords: Cancer registry; Integral equation; Life table; Net survival; Relative survival ratio Introduction Cancer registries provide comprehensive and useful information on cancer and are utilized to conduct various epidemiology research including nation-wide comparisons of cancer survival and estimation of change in cancer survival. Angelis et al. (2014); Allemani et al. (2018) Cancer survival is an important measure. However, collecting reliable and consistent information on the cause of death is challenging. To make inference on survival from cancer without relying on the cause of death information (i.e., within the relative survival data setting), special survival analysis techniques have been then developed and widely used in analyses of cancer registry data. Several cancer survival measures using such techniques help to describe the survival experience of cancer patients, including relative survival ratio, net survival, or crude probabilities of death. Ederer, Axitell, and Cutler (1961); Cronin and Feuer (2000); Perme, Stare, and Esteve (2012); Perme, Esteve, and Rachet (2016); Belot et al. (2019) The techniques to estimate such measures are based on the assumption that the hazard of death can be decomposed into the hazard of death due to the cancer of interest and that due to other causes. In the absence of reliable information on the cause of death, as proposed in contexts other than cancer registries Breslow et al. (1983), an option is to borrow external information, in order to estimate the hazard of death due to other causes from the general population to which the patient belongs. Mortality hazards for the general population are available in population life tables based on demographic statistics, which are published in most countries at least by age, sex and calendar period. 
However, it assumes that the cancer deaths contained in these population life tables are too few to affect the estimation of the mortality hazard due to other causes. This assumption which may not always hold has been discussed by a few authors. Ederer, Axitell, and Cutler (1961); Esteve, Benhamou, and Raymond (1994); Talback and Dickman (2011) Ederer, Axitell, and Cutler (1961) claimed that since the sizes of age-, gender- and site-specific subpopulations of cancer patients was much smaller than their counterparts in the general population, the impact of cancer deaths contained in the general population was negligible. However, it may not be true for all cancer types and subpopulations. Talb\(\acute{\text{a}}\)ck and Dickman (2011) utilized uncommon life tables, which contained individual information on cancer patients included in these life tables. Considering the date of cancer diagnosis as censoring, they were able to estimate the non-cancer mortality hazard in the general population. They concluded that the presence of cancer deaths in the life tables hardly impacted the cancer survival estimation in most situations, but they also observed some bias in a few subpopulations. Given the general unavailability of such life tables (i.e., with individual information), Talb\(\acute{\text{a}}\)ck and Dickman (2011) also introduced a model-based method to conduct a sensitivity analysis without such individual-level information. However, their model for sensitivity analysis required the number of cancer deaths in the general population and did not account for inclusion of cancer patients in the general population. Since the number of cancer deaths cannot be obtained from cancer registries or is not reliably known in this setting, the model-based sensitivity analysis method by Talb\(\acute{\text{a}}\)ck and Dickman (2011) might not be easily applicable. In this paper, we propose a method to estimate cancer survival measures in the relative survival setting, while accounting for the potential enrollment of cancer patients and cancer deaths in the life tables. To this end, we only rely on additional information on cancer incidence, which is usually publicly available. Even if unavailable in public, incidence rates can be calculated with vital statistics coupled with the cancer registry. Thus, we do not need to take much efforts to gather additional information to apply our method and then eliminate biases due to cancer deaths in the life table. The key idea of our development is to adjust survival of cancer patients in the life-table by borrowing information from the cancer registry. We illustrate the method with the most used cancer survival measure, net survival, which is the survival probability of cancer patients in the hypothetical situation of individuals who can only die from their cancer. More specifically, we describe the method in the application to a non-parametric estimator of the net survival called Pohar-Perme estimator. Perme, Stare, and Esteve (2012) The organization of the rest of this paper is as follows. In Section 2.1, we introduce the net survival measure in the relative survival setting and its Pohar-Perme estimator. In Section 2.2, we discuss the assumptions implicitly made when using life tables in the relative survival setting. Section 3 presents the notations of the quantities, which are used in our approach, from the life tables. 
Section 4 details the key components of our approach: in Section 4.1, we introduce the incidence rate used in our method and explain how to estimate it from cancer registry data if unavailable in public; in Section 4.2, we summarize the assumptions to utilize the correspondence between cancer registry and life tables. In Section 4.3, we introduce an integral equation to obtain the non-cancer survival for the general population with adjustment for the cancer deaths. In Section 4.4, we show how to solve an empirical version of the integral equation. We evaluate the proposed method by the simulation studies in Section 5 and illustrate it on real cancer registry data in England in Section 6. Conclusions and some discussions are given in Section 7. Detailed formulas and all theoretical details are given in Appendixes. ## 2 Estimation of the net survival ### Pohar-Perme estimator Let \(Z_{D}\) be a vector of the baseline covariates recorded in the cancer registry such as age at diagnosis, year of diagnosis, gender and cancer stage at diagnosis. The subscript "\(D\)" is attached to covariates that are observed at the time of cancer diagnosis. Denote the time from diagnosis to death due to any causes by \(T_{O}\). We assume that \(T_{O}\) can be right-censored by the potential censoring time \(C\). Thus, the observed components are \(T=\min(T_{O},C)\), the indicator of censoring \(\Delta=I(T_{O}\leq C)\), and the covariates \(Z_{D}\), where \(I(\cdot)\) is the indicator function. Let \(T_{E}\) and \(T_{P}\) be the time to death due to cancer and that due to any causes other than cancer, respectively, from the date at cancer diagnosis. Note that \(T_{O}=\min(T_{E},T_{P})\). We assume that \(T_{E}\) and \(T_{P}\) are continuous. We suppose we observe \(n\) i.i.d. copies of \((T,\Delta,Z_{D})\), and \((T_{i},\Delta_{i},Z_{D,i})\) is the observation for the \(i\)th subject (\(i=1,2,...,n\)). We make inference based on these observations. For any random variable, we use the subscript \(i\) for representing its counterpart for the \(i\)th subject. The survival function for \(T_{O}\) is denoted by \(S_{O}(t)=\Pr(T_{O}>t)\) and the corresponding hazard and cumulative hazard functions are denoted by \(\lambda_{O}(t)\) and \(\Lambda_{O}(t)\), respectively. The survival, hazard and cumulative hazard functions conditional on \(Z_{D}\) are denoted by \(S_{O}(t|Z_{D})\), \(\lambda_{O}(t|Z_{D})\) and \(\Lambda_{O}(t|Z_{D})\), respectively. The corresponding quantities for \(T_{E}\), \(T_{P}\), and \(C\) are denoted in a similar way with the subscript "\(E\)", "\(P\)", and "\(C\)", respectively. The net survival is defined as \(S_{E}(t)=P(T_{E}>t)\), which is the marginal survival function of \(T_{E}\) and the estimand of interest. The \(PP\) estimator is defined by \[\hat{\Lambda}_{E}^{PP}(t)=\int_{0}^{t}\frac{\sum_{i=1}^{n}\frac{1}{S_{P}(u|Z_{ D,i})}\left\{dN_{i}(u)-Y_{i}(u)d\Lambda_{P}(u|Z_{D,i})\right\}}{\sum_{j=1}^{n} \frac{Y_{j}(u)}{S_{P}(u|Z_{D,j})}}. \tag{1}\] where \(N_{i}(t)\) and \(Y_{i}(t)\) are the counterpart for the \(i\)th subject of the counting process \(N(t)=I(T\leq t,\Delta=1)\) and at-risk process \(Y(t)=I(T>t)\), respectively.Perme, Stare, and Esteve (2012) Assuming that \(S_{P}(t|Z_{D})\) and \(\Lambda_{P}(t|Z_{D})\) are known for any \(Z_{D}\) and \(t\), Perme, Stare, and Esteve (2012) showed that the \(PP\) estimator consistently estimates \(\Lambda_{E}(t)\) under the conditions (A-1) \(T_{E}\perp T_{P}|Z_{D}\) and (A-2) \(C\perp\{T_{E},T_{P},Z_{D}\}\) (independent censoring). 
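To fix ideas, a rough numerical sketch of (1) is given below. It is illustrative only; production analyses would rely on established implementations such as the one in the R package relsurv. The arguments `S_P` and `lam_P` stand for the expected survival and hazard extracted from the life table, and the drift integral is approximated by a simple quadrature rule. In the method proposed in Section 4, the only change is that the adjusted \(\hat{S}_{P}\) and its hazard are passed in place of the raw life-table quantities.

```python
import numpy as np

def pohar_perme_net_survival(T, delta, Z, S_P, lam_P, t_eval, n_quad=2000):
    """Illustrative numerical sketch of the Pohar-Perme estimator (1);
    not the implementation used in the paper or in standard software.

    T, delta, Z : follow-up time, death indicator (1 = died) and covariate
                  value Z_D for each of the n patients
    S_P(u, z)   : expected (population) survival at time u for covariates z
    lam_P(u, z) : corresponding expected hazard
    t_eval      : time point at which the net survival S_E is evaluated
    """
    T = np.asarray(T, dtype=float)
    delta = np.asarray(delta, dtype=int)
    n = len(T)

    def weighted_at_risk(u):               # sum_j Y_j(u) / S_P(u | Z_j)
        return sum(1.0 / S_P(u, Z[j]) for j in range(n) if T[j] >= u)

    # jump part of (1): contributions of the observed deaths, dN_i(u)
    jump = sum((1.0 / S_P(T[i], Z[i])) / weighted_at_risk(T[i])
               for i in range(n) if delta[i] == 1 and T[i] <= t_eval)

    # drift part of (1): integral of the weighted expected hazard (crude quadrature)
    us = np.linspace(0.0, t_eval, n_quad)
    du = us[1] - us[0]
    drift = 0.0
    for u in us[1:]:
        denom = weighted_at_risk(u)
        if denom > 0.0:
            num = sum(lam_P(u, Z[i]) / S_P(u, Z[i]) for i in range(n) if T[i] >= u)
            drift += du * num / denom

    return np.exp(-(jump - drift))          # estimated net survival S_E(t_eval)
```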
### Extracting \(S_{p}(t|Z_{D})\) from a life table In practice, as \(S_{P}(t|Z_{D})\) is unknown, a life table for the general population is used to calculate \(S_{P}(t|Z_{D})\) by extracting the survival function of the general population with the same covariates from the life table. The information contained in the life table is defined by some socio-demographic variables such as age, calendar year, and gender. Suppose that \(Z_{D}\) has no cancer-specific variable. More specifically, let \(Z_{D}=(Z_{D}^{(1)},Z_{D}^{(2)},Z_{D}^{(others)tr})^{tr}\), where \(Z_{D}^{(1)}\) and \(Z_{D}^{(2)}\) are the age at cancer diagnosis and the year of cancer diagnosis, respectively, both being time-dependent variables, \(Z_{D}^{(others)}\) be a column vector of other time-invariant demographic covariates, such as gender and race, and for any column vector \(V\), \(V^{tr}\) indicates the transpose of \(V\). From the definition, \(S_{P}(t|Z_{D})\) is the survival function for \(T_{P}\), which is the time to non-cancer death if the subject would not die from cancer since the date of their cancer diagnosis. Although extracting the survival function corresponding to \(S_{P}(t|Z_{D})\) from life tables is a widely used practice, we would like to discuss its appropriateness more carefully. Life tables provide annual mortality rates for the population of specific age, calendar year, and \(Z_{D}^{(others)}\). With a series of life tables, we can construct a lexis diagram as shown in Figure 1\((b)\). Note that we consider \(Z_{D}^{(others)}\) fixed to a single value since this Lexis diagram is created for each value of \(Z_{D}^{(others)}\). For cancer patients of interest with covariates \(Z_{D}\) (say, 50 years old in 1990, as seen in Figure 1\((a)\)), the corresponding life table (matching on both age and year) is presented by the plain circle in Figure 1\((b)\). The survival function of the population by age and calendar year can be then extracted from the series of life tables on the diagonal line. We pretend there is a cohort of the population with this survival function. The validity of extracting \(S_{P}(t|Z_{D})\) with this survival function from the life-table is justified if **(i)**: No cancer patients are included in the cohort underlying the extracted survival function and the non-cancer subjects in this cohort do not die from cancer. **(ii)**: The survival function for the time to non-cancer death of the cancer patients included in the cancer registry data is the same to the survival function from the life table, given the same background covariates. Even supposing the assumption (ii), the assumption (i) may be questionable in reality; some cancer patients can be included in the cohort underlying the extracted survival function, and they are more likely to die of cancer, whereas some non-cancer subjects can be diagnosed with cancer after being included in that cohort and can die of cancer. ## 3 Formulating the life table and revisiting the current practice In this section, notations of the random variables related to the life tables are introduced, because we distinguish them from the notations applying to random variables related to the cancer registry. For the cancer registry, we use the notations introduced in Section 2. For the life table, a tilde is systematically added. Let \(\tilde{Z}_{L}=(\tilde{Z}_{L}^{(1)},\tilde{Z}_{L}^{(2)},\tilde{Z}_{L}^{(others) tr})^{tr}\) be a vector of covariates in the life table. 
Note that \(\tilde{Z}_{L}\) has the same components as \(Z_{D}\) and both of \(Z_{D}\) and \(\tilde{Z}_{L}\) vary only yearly. Suppose we consider a patient in the cancer registry, for example, who is diagnosed at age 50 in 1990. That is, \(Z_{D}^{(1)}=50\) and \(Z_{D}^{(2)}=1990\). See Figure 1\((a)\). The patient is matched with the life table of the corresponding covariates \((\tilde{Z}_{L}^{(1)},\tilde{Z}_{L}^{(2)},\tilde{Z}_{L}^{(others)tr})=(50,1990,Z_{D }^{(others)tr})\), which is represented by a plain circle in Figures 1\((a)\) and 1\((b)\). As mentioned in Subsection 2.2, the corresponding survival function can be extracted from a series of life tables on the diagonal line through this plain circle, assuming the existence of a cohort underlying this survival function. This is illustrated with two specific subjects of this cohort in Figure 1\((c)\); one had been diagnosed as a cancer at the age of 50 (\(\tilde{X}_{L}=1\); the upper panel of Figure 1\((c)\)) and the other had not (\(\tilde{X}_{L}=0\); the lower panel of Figure 1\((c)\). To describe these individuals, we introduce \(\tilde{t}_{D}\), which is the age at diagnosis. Define \(\tilde{Z}_{D}=(\tilde{Z}_{D}^{(1)},\tilde{Z}_{D}^{(2)},\tilde{Z}_{D}^{(others) tr})^{tr}\) be a covariate vector at \(\tilde{t}_{D}\). For notational convenience, set \(\tilde{t}_{L}=\tilde{Z}_{L}^{(1)}\) at \(\tilde{Z}_{L}^{(2)}\) (\(\tilde{Z}_{L}^{(2)}=1990\) for the above illustrative patient). Let \(\tilde{X}_{L}\) be a binary random variable, with the value 1 if, at \(\tilde{t}_{L}\), subject had already been diagnosed with a cancer and the value 0 otherwise. To link \(\tilde{Z}_{L}\) and \(\tilde{Z}_{D}\), we use the notation \(\tilde{Z}_{L\pm s}\) representing \(\tilde{Z}_{L}\) after/before \(s\) years from \(\tilde{t}_{L}\). That is, \(\tilde{Z}_{L\pm s}=(\tilde{Z}_{L}^{(1)}\pm s,\tilde{Z}_{L}^{(2)}\pm s,\tilde{ Z}_{L}^{(others)tr})^{tr}=(age\pm s,year\pm s,\tilde{Z}_{L}^{(others)tr})^{tr}\). If a subject with \(\tilde{Z}_{L}\) is diagnosed with a cancer after/before \(s\) years from \(\tilde{t}_{L}\), \(\tilde{t}_{D}=\tilde{t}_{L}\pm s\), then \(\tilde{Z}_{D}=\tilde{Z}_{L\pm s}\) holds. Similarly, we define \(\tilde{X}_{L\pm s}\) as the information on \(\tilde{X}_{L}\) at the time of \(\tilde{t}_{L}\pm s\). Let \(\tilde{T}_{L\to O}\) be the time to death due to any cause from the date of \(\tilde{t}_{L}\). In the subscript "\(L\to O\)", "\(L\)" in the left-hand side of the arrow means the origin and the right component corresponds to the event. Thus \(\tilde{T}_{L\to E}\) and \(\tilde{T}_{L\to P}\) are defined as the time-to-death due to cancer and non-cancer causes, respectively, from the date of \(\tilde{t}_{L}\). The random variables for the time-to-death from cancer diagnosis such as \(\tilde{T}_{D\to O}\), \(\tilde{T}_{D\to E}\) and \(\tilde{T}_{D\to P}\) are defined in a similar way. We also consider a random variable \(\tilde{T}_{D\to L}=\tilde{t}_{L}-\tilde{t}_{D}\), which the time elapsed between the cancer diagnosis and \(\tilde{t}_{L}\) for individuals whose cancer was diagnosed before the date \(\tilde{t}_{L}\) (that is, \(\tilde{X}_{L}=1\)). For the non-cancer subjects, the corresponding random variable at the date of registration into the life table (i.e. \(\tilde{X}_{L}=0\)), is defined in the same way \(\tilde{T}_{L\to D}=\tilde{t}_{D}-\tilde{t}_{L}\). Let \(\alpha(\tilde{z}_{L})=\Pr(\tilde{X}_{L}=1|\tilde{Z}_{L}=\tilde{z}_{L})\). 
Denote \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})=P(\tilde{T}_{L\to O}>t| \tilde{Z}_{L}=\tilde{z}_{L})\). The corresponding survival functions for \(\tilde{T}_{L\to E}\) and \(\tilde{T}_{L\to P}\) are denoted in a similar way with the subscript "\(L\to E\)" and "\(L\to P\)", respectively. Let \(\tilde{F}_{D\to L}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=1)=\Pr(\tilde{T}_ {D\to L}\leq t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=1)\) and \(\tilde{F}_{L\to D}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=\Pr(\tilde{ T}_{L\to D}\leq t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\). In the current practice, \(S_{P}(t|Z_{D}=z)\) is extracted with \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=z)\) matching \(Z_{D}=\tilde{Z}_{L}\). If \(\alpha(\tilde{z}_{L})=0\) for any \(\tilde{z}_{L}\) (no cancer patients are included in the general population used for the life table) and \(\tilde{S}_{L\to E}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=1\) for all \(\tilde{z}_{L}\) (non-cancer subjects included in the life table do not die of cancer), \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=\tilde{S}_{L \to P}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\). Then, the assumption (i) in Section 2 holds. The assumption (ii) in Section 2 can be described as \(\tilde{S}_{L\to P}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=\tilde{S}_{L \to P}(t|\tilde{Z}_{L}=\tilde{z}_{L})=S_{P}(t|Z_{D}=\tilde{z}_{L})\) under the assumption (i). Then, the current practice is justified. ## 4 Estimation of the net survival in the presence of cancer death in the life table ### Incidence rate As described in Section 2, the standard analysis of cancer registry data requires the cancer registry data and the life table. In addition to these two datasets, we suppose that the information on the annual cancer incidence rate for each \(\tilde{Z}_{L}\) is available. Let \(f_{\tilde{t}_{D}}(u|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\) be the probability density function of \(\tilde{t}_{D}\) conditional on \(\tilde{Z}_{L}=\tilde{z}_{L}\) and \(\tilde{X}_{L}=0\). The annual cancer incidence rate for the population with \(\tilde{Z}_{L}=\tilde{z}_{L}\) is defined by \[IR(\tilde{z}_{L})=\int_{\tilde{t}_{L}}^{\tilde{t}_{L}+1}f_{\tilde{t}_{D}}(u| \tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)du. \tag{2}\] In practice, \(IR(\tilde{z}_{L})\) are calculated as the number of new cancer patients diagnosed within a year divided by the number of person-years (within a year) in the general population with \(\tilde{Z}_{L}=\tilde{z}_{L}\). The number of new cancer patients is calculated from the cancer registry, and the number of person-years (within a year) in the general population is calculated from the vital statistics. Thus the \(IR(\tilde{z}_{L})\) for each cancer type, even if unavailable in public, can be calculated from the cancer registry data and the vital statistics. ### Assumptions Although some assumptions were mentioned in the previous sections, we summarize all the assumptions for quantities in the cancer registry, those in the life table, and the relationship among them. 
Let **(A-1)**: \(T_{E}\perp T_{P}|Z_{D}\) **(A-2)**: \(C\perp\{T_{E},T_{P},Z_{D}\}\) **(B-1)**: \(\tilde{T}_{L\to E}\perp\tilde{T}_{L\to P}|\{\tilde{Z}_{L},\tilde{X}_{L}=0\}\) **(B-2)**: \(\tilde{T}_{L\to D}\perp\tilde{T}_{D\to E}|\{\tilde{Z}_{L},\tilde{X}_{L}=0\}\) **(B-3)**: \(\tilde{T}_{D\to L}\perp\tilde{T}_{L\to O}|\{\tilde{Z}_{L},\tilde{X}_{L}=1\}\) **(C-1)**: \(S_{E}(t|Z_{D}=\tilde{z})=\tilde{S}_{D\to E}(t|\tilde{Z}_{D}=\tilde{z},\tilde{ X}_{L}=0)\) **(C-2)**: \(S_{P}(t|Z_{D}=\tilde{z})=\tilde{S}_{L\to P}(t|\tilde{Z}_{L}=\tilde{z},\tilde{ X}_{L}=0)\). **(C-3)**: \(S_{O}(t|Z_{D}=\tilde{z})=\tilde{S}_{D\to O}(t|\tilde{Z}_{D}=\tilde{z})=\tilde{ S}_{D\to O}(t|\tilde{Z}_{D}=\tilde{z},\tilde{X}_{L}=1)\) The assumptions (A-1) and (A-2) apply to the cancer registry data and are required by Poher-Perme estimator (Perme, Stare, and Esteve, 2012). The assumptions (B-1) to (B-3) apply to the life table. The assumption (B-1) corresponds to (A-1). As argued in Section 3, each cancer patient in the cancer registry is matched with a subject with the corresponding baseline characteristics in the cohort underlying in the life table. The corresponding survival function is then extracted (see Section 3 and Figure 1). Assumptions (B-2) and (B-3) describe a kind of non-informativeness for extracted \(T_{P}\) and \(T_{E}\); once the baseline characteristics are matched, a subject in the life table is selected regardless of their natural history of cancer. The assumptions from (C-1) to (C-3) establish the correspondences between the cancer registry and life table data. The assumption (C-1) implies that the survival functions of the time to cancer death from diagnosis are the same between the cancer patients registered in the cancer registry and those in the life table if they have the same covariates at diagnosis. The assumption (C-2) means that the survival functions of the time to non-cancer death are common among the cancer patients and the non-cancer subjects as long as the baseline covariates are same. The assumption (C-2) guarantees assumption (ii) of Section 2. The assumption (C-3) implies that cancer patients included in the life table are assumed to be similar to those in the cancer registry once diagnosed as cancer. ### Integral equation for \(S_{p}(t|Z_{D})\) Recall that \(\alpha(\tilde{z}_{L})=\Pr(\tilde{X}_{L}=1|\tilde{Z}_{L}=\tilde{z}_{L})\). It holds that \[\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L}) =\alpha(\tilde{z}_{L})\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L}, \tilde{X}_{L}=1)\] \[+\{1-\alpha(\tilde{z}_{L})\}\tilde{S}_{L\to O}(t|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=0). \tag{3}\] From (B-1), \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)= \tilde{S}_{L\to E}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\times \tilde{S}_{L\to P}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\). Then, by simple algebraic manipulation, the equation (3) leads to \[\frac{\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})-\alpha( \tilde{z}_{L})\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=1 )}{\{1-\alpha(\tilde{z}_{L})\}\tilde{S}_{L\to P}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)}=\tilde{S}_{L\to E}(t|\tilde{Z}_{L}=\tilde{z}_{L}, \tilde{X}_{L}=0). \tag{4}\] Under the assumption (C-2), \(\tilde{S}_{L\to P}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=S_{P}(t|Z_{ D}=\tilde{z}_{L})\). 
As presented in Appendix A, it holds that \[\tilde{S}_{L\to E}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\] \[=1-\int_{0}^{t}\left\{1-\frac{S_{O}(t-s|Z_{D}=\tilde{z}_{L+s})}{ S_{P}(t-s|Z_{D}=\tilde{z}_{L+s})}\right\}d\tilde{F}_{L\to D}(s|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=0). \tag{5}\] Recall that as defined in Section 3, \(\tilde{Z}_{L+s}\) is a time-shifted version of \(\tilde{Z}_{L}\), where \(\tilde{Z}_{L}^{(1)}\) (age) and \(\tilde{Z}_{L}^{(2)}\) (calendar year) were shifted by \(+s\). With (5), the equation (4) leads to \[\frac{\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})-\alpha( \tilde{z}_{L})\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=1 )}{\{1-\alpha(\tilde{z}_{L})\}S_{P}(t|Z_{D}=\tilde{z}_{L})}\] \[=1-\int_{0}^{t}\left\{1-\frac{S_{O}(t-s|Z_{D}=\tilde{z}_{L+s})}{ S_{P}(t-s|Z_{D}=\tilde{z}_{L+s})}\right\}d\tilde{F}_{L\to D}(s|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=0). \tag{6}\] It is regarded as an integral equation with respect to \(S_{P}(t|Z_{D})\). ### Estimation of \(S_{p}(t|Z_{d})\) by solving the empirical integral equation In this subsection, we consider an empirical version of the integral equation (6), in which all the theoretical quantities are replaced with their empirical ones. We denote these empirical ones with the superscript of hat. For example, the empirical version of \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})\) is denoted by \(\hat{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})\). In the left hand side of (6), \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})\) is obtained from the life table. With the annual cancer incidence rate \(IR(\tilde{z}_{L})\), \(\alpha(\tilde{Z}_{L})\) is estimated by the method presented in Appendix B.1. Since \(IR(\tilde{z}_{L})\) is available only in an annual basis, we consider to estimate \(S_{P}(t|Z_{D})\) only at \(t=0,1,2,\cdots\). As shown in Appendix A, \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=1)\) in the left hand side of (6) is represented as \[\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1)\] \[=1-\int_{0}^{\tilde{t}_{L}}\left\{1-S_{O}(t+s|Z_{D}=\tilde{z}_{L- s})\right\}d\tilde{F}_{D\to L}(s|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1), \tag{7}\] under the assumptions (B-3) and (C-3). To handle the integral in the right hand side of (7), \(S_{O}(t|Z_{D}=\tilde{z}_{L-s})\) for each \(s=0,1,\cdots,\tilde{t}_{L}\) should be available over \((0,t+s)\). However, depending on the follow-up duration, it is not necessarily obtained from the cancer registry. To estimate over the interval, an extrapolation method with the Kaplan-Meier estimate is proposed in Appendix C. The method to estimate \(\tilde{F}_{D\to L}(t|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1)\) is presented in Appendix B.2. Then, an estimator for \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1)\) is given by \[\hat{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1)\] \[=1-\int_{0}^{\tilde{t}_{L}}\left\{1-\hat{S}_{O}(t+s|Z_{D}=\tilde{ z}_{L-s})\right\}d\hat{F}_{D\to L}(s|\tilde{Z}_{L}=\tilde{z},\tilde{X}_{L}=1).\] Denote \(\Delta\tilde{F}_{k}=\tilde{F}_{L\to D}(k|\tilde{Z}_{L}=\tilde{z}_{L}, \tilde{X}_{L}=0)-\tilde{F}_{L\to D}(k-1|\tilde{Z}_{L}=\tilde{z}_{L}, \tilde{X}_{L}=0)\) for \(k=1,2,\cdots\) with \(\tilde{F}_{L\to D}(0|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=0\). A method to estimate \(\tilde{F}_{L\to D}(0|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)\) is presented in Appendix B.3. 
Then, \(\tilde{F}_{L\to D}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}=0)=\sum_{k:k \leq t}\Delta\tilde{F}_{k}\) and the integral equation (6) is represented by \[\frac{\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})-\alpha( \tilde{z}_{L})\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L},\tilde{X}_{L}= 1)}{\{1-\alpha(\tilde{z}_{L})\}S_{P}(t|Z_{D}=\tilde{z}_{L})}\] \[=1-\sum_{k:k\leq t}\left\{1-\frac{S_{O}(t-k|Z_{D}=\tilde{z}_{L+k} )}{\tilde{S}_{P}(t-k|Z_{D}=\tilde{z}_{L+k})}\right\}\Delta\tilde{F}_{k}. \tag{8}\] Set the right hand side of (8) as \[r(t|\tilde{z}_{L})=1-\sum_{k:k\leq t}h_{\tilde{z}_{L}}(t,k)\Delta\tilde{F}_{k},\] where \[h_{\tilde{z}_{L}}(t,k)=1-\frac{S_{O}(t-k|Z_{D}=\tilde{z}_{L+k})}{S_{P}(t-k|Z_{D} =\tilde{z}_{L+k})},\] and \(h_{\tilde{z}_{L}}(k,k)=0\) for any \(k\). Then (8) is represented as \[r(t|\tilde{z}_{L})=\frac{\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})- \alpha(\tilde{z}_{L})\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L}, \tilde{X}_{L}=1)}{\{1-\alpha(\tilde{z}_{L})\}S_{P}(t|Z_{D}=\tilde{z}_{L})}, \tag{9}\] Considering the equation (8) or (9) at \(t=1,2,\cdots,K\), one has the following system of the linear equations, \[\left(\begin{array}{c}r(1|\tilde{z}_{L})\\ r(2|\tilde{z}_{L})\\ \vdots\\ r(K|\tilde{z}_{L})\end{array}\right)=\left(\begin{array}{c}1\\ 1\\ \vdots\\ 1\end{array}\right)-\left(\begin{array}{cccc}0&0&\cdots&0\\ h_{\tilde{z}_{L}}(2,1)&0&0&\vdots\\ \vdots&\vdots&\ddots&0\\ h_{\tilde{z}_{L}}(K,1)&h_{\tilde{z}_{L}}(K,2)&\cdots&0\end{array}\right) \left(\begin{array}{c}\Delta\tilde{F}_{1}\\ \Delta\tilde{F}_{2}\\ \vdots\\ \Delta\tilde{F}_{K}\end{array}\right) \tag{10}\] The system of the linear equation (10) can be easily solved recursively replacing unknown theoretical quantities with their estimators as follows. The first equation of (10) is \(r(1|\tilde{z}_{L})=1\). Then, from the equation (9), \(S_{P}(t|Z_{D}=\tilde{z}_{L})\) at \(t=1\) is estimated by \[\hat{S}_{P}(1|Z_{D}=\tilde{z}_{L})=\frac{\tilde{S}_{L\to O}(1|\tilde{Z}_{L}= \tilde{z}_{L})-\hat{\alpha}(\tilde{z}_{L})\hat{S}_{L\to O}(1|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=1)}{\{1-\hat{\alpha}(\tilde{z}_{L})\}}.\] The second equation of (10) is \(r(2|\tilde{z}_{L})=1-h_{\tilde{z}_{L}}(2,1)\Delta\tilde{F}_{1}\). Set \[\hat{h}_{\tilde{z}_{L}}(2,1)=1-\frac{\hat{S}_{O}(1|Z_{D}=\tilde{z}_{L+1})}{ \hat{S}_{P}(1|Z_{D}=\tilde{z}_{L+1})}\ \ \mbox{and}\ \ \hat{r}(2|\tilde{z}_{L})=1-\hat{h}_{\tilde{z}_{L}}(2,1)\Delta\hat{F}_{1}.\] Then, from (9), \[\hat{S}_{P}(2|Z_{D}=\tilde{z}_{L})=\frac{\tilde{S}_{L\to O}(2|\tilde{Z}_{L}= \tilde{z}_{L})-\hat{\alpha}(\tilde{z}_{L})\hat{S}_{L\to O}(2|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=1)}{\{1-\hat{\alpha}(\tilde{z}_{L})\}\hat{r}(2| \tilde{z}_{L})}.\] \(S_{P}(t|Z_{D}=\tilde{z}_{L})\) for \(t\geq 3\) can be calculated recursively in a similar fashion by \[\hat{S}_{P}(t|Z_{D}=\tilde{z}_{L})=\frac{\tilde{S}_{L\to O}(t|\tilde{Z}_{L}= \tilde{z}_{L})-\hat{\alpha}(\tilde{z}_{L})\hat{S}_{L\to O}(t|\tilde{Z}_{L}= \tilde{z}_{L},\tilde{X}_{L}=1)}{\{1-\hat{\alpha}(\tilde{z}_{L})\}\hat{r}(t| \tilde{z}_{L})},\] where \[\hat{r}(t|\tilde{z}_{L})=1-\sum_{k:k\leq t}\hat{h}_{\tilde{z}_{L}}(t,k)\Delta \hat{F}_{k}=1-\sum_{k:k\leq t}\left\{1-\frac{\hat{S}_{O}(t-k|Z_{D}=\tilde{z}_{L+ k})}{\tilde{S}_{P}(t-k|Z_{D}=\tilde{z}_{L+k})}\right\}\Delta\hat{F}_{k}.\] Following the above procedures, we estimate \(S_{P}(t|Z_{D}=\tilde{z}_{L})\) at \(t=1,2,\cdots,K\), and the resulting estimator is denoted by \(\hat{S}_{P}(t|Z_{D}=\tilde{z}_{L})\). 
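Because the system (10) is lower triangular, the recursion above amounts to a few lines of code. The following memoized sketch is illustrative; the callables `S_LO`, `S_LO_cancer`, `alpha`, `dF`, and `S_O` stand for the empirical inputs \(\hat{S}_{L\to O}(t|\tilde{z})\), \(\hat{S}_{L\to O}(t|\tilde{z},\tilde{X}_{L}=1)\), \(\hat{\alpha}(\tilde{z})\), \(\Delta\hat{F}_{k}\), and \(\hat{S}_{O}(t|Z_{D}=\tilde{z})\) described above and in Appendices B and C, with \(\hat{S}_{O}\) already extrapolated beyond the end of follow-up.

```python
from functools import lru_cache

def make_S_P_hat(S_LO, S_LO_cancer, alpha, dF, S_O):
    """Illustrative recursive solver for the empirical integral equation (8)/(10).

    The callables stand for the empirical inputs of Section 4.4:
      S_LO(t, z)        ~ life-table survival S_{L->O}(t | Z_L = z)
      S_LO_cancer(t, z) ~ estimate of S_{L->O}(t | Z_L = z, X_L = 1), cf. (7)
      alpha(z)          ~ estimated proportion of prevalent cancer patients at z
      dF(k, z)          ~ increments Delta F_k of F_{L->D}(. | Z_L = z, X_L = 0)
      S_O(t, z)         ~ Kaplan-Meier estimate of S_O(t | Z_D = z), already
                          extrapolated beyond follow-up as in Appendix C
    z is a tuple (age, year, ...); t and k are integer numbers of years.
    """
    def shift(z, k):                  # z_{L+k}: age and calendar year advance by k
        age, year, *rest = z
        return (age + k, year + k, *rest)

    @lru_cache(maxsize=None)
    def S_P_hat(t, z):
        if t == 0:
            return 1.0
        r = 1.0                       # r_hat(t | z); the k = t term vanishes, h(k, k) = 0
        for k in range(1, t):
            zk = shift(z, k)
            r -= (1.0 - S_O(t - k, zk) / S_P_hat(t - k, zk)) * dF(k, z)
        numerator = S_LO(t, z) - alpha(z) * S_LO_cancer(t, z)
        return numerator / ((1.0 - alpha(z)) * r)

    return S_P_hat
```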
For \(t\) other than \(t=1,2,\cdots,K\), log-linear interpolation is applied. In Appendix D, a proof of consistency of \(\hat{S}_{P}(t|Z_{D}=\tilde{z}_{L})\) to \(S_{P}(t|Z_{D}=\tilde{z}_{L})\) is presented. Note that in the standard practice, \(\hat{S}_{L\to O}(t|\tilde{Z}_{L}=\tilde{z}_{L})\) is used as \(S_{P}(t|Z_{D}=\tilde{z}_{L})\) in (1). Instead, we propose to use \(\hat{S}_{P}(t|Z_{D}=\tilde{z}_{L})\) in calculating the \(PP\) estimator. ## 5 Simulation study We present the results of a simulation study to investigate the behavior of the proposed method. To generate the cancer registry data and life table, we consider the natural histories of subjects in a birth cohort as illustrated in Figure 1\((c)\). Each subject has the potential time to diagnosis of the cancer of interest after birth, \(\tilde{t}_{D}\), and the potential time to death due to other causes after birth, denoted by \(\tilde{t}_{P}\). If \(\tilde{t}_{D}\) is shorter than \(\tilde{t}_{P}\), the subject has the time to death due to cancer from the date of diagnosis, \(\tilde{T}_{D\to E}\). The cancer registry data was constructed as the population registered at \(\tilde{t}_{D}\), and it has the information of the baseline covariates at \(\tilde{t}_{D}\) and time-to-death. The time-to-death due to any causes after diagnosis was calculated by \(\tilde{T}_{D\to O}=\min(\tilde{T}_{D\to E},\tilde{t}_{P}-\tilde{t}_{D})\). From this information, the annual numbers of deaths from any cause, subjects diagnosed as cancer, and subjects in the population were calculated in each covariate. Then, the life tables and the annual cancer incidence rates were constructed. We considered a cohort of 50,000 subjects born in 1960. We generated gender from the Bernoulli distribution with the probability of 0.5. \(\tilde{t}_{D}\) and \(\tilde{t}_{P}\) were generated under the four settings as follows; \[\text{Dataset 1}:\ \tilde{t}_{D}\sim Weibull(0.5\times 10^{-2},\ 1),\ \tilde{t}_{P}\sim Weibull(1.0\times 10^{-2},\ 2)\] \[\text{Dataset 2}:\ \tilde{t}_{D}\sim Weibull(1.5\times 10^{-2},\ 1),\ \tilde{t}_{P}\sim Weibull(1.0\times 10^{-2},\ 2)\] \[\text{Dataset 3}:\ \tilde{t}_{D}\sim LN(\log 65,\ 2),\ \tilde{t}_{P}\sim LN (\log 75,\ 2)\] \[\text{Dataset 4}:\ \tilde{t}_{D}\sim LN(\log 65,\ 1),\ \tilde{t}_{P} \sim LN(\log 75,\ 2)\] where \(Weibull(\lambda,p)\) indicates the Weibull distribution with the hazard function of \(\lambda p(\lambda t)^{p-1}\)) and \(LN(\mu,\sigma^{2})\) indicates the Log-normal distribution. Datasets 1 and 3 had low cancer incidence and Datasets 2 and 4 had high cancer incidence. The covariates at the cancer diagnosis \(\tilde{Z}_{D}=(age,year,gender)^{tr}\) were calculated by \((\tilde{t}_{D},1960+\tilde{t}_{D},gender)^{tr}\). \(\tilde{T}_{D\to E}\) was generated from the exponential distribution with hazard rate \(\lambda_{E}(t|\tilde{Z}_{D})=\lambda\exp\left\{\beta^{tr}\tilde{Z}_{D}\right\}\), where \(\lambda=0.1\exp\left\{-\log 1.2\times 60/7.5-\log 0.95\times(2000-1960)/15\right\}\) and \(\beta=(\log 1.2/7.5,\log 0.95/15,\log 0.8)^{tr}\). The potential censoring time from diagnosis, \(C\), was generated from the uniform distribution on \([0,15]\). In this simulation, we focused on patients diagnosed from 60 to 74 years old, i.e. selected by \(\tilde{t}_{D}\in[60,75)\). We simulated 1,000 datasets in each setting. The true net survival function \(S_{E}(t)=E_{Z_{D}}\left[\exp(-t\lambda_{E}(t|Z_{D}))\right]\) was calculated by the average of \(\exp(-t\lambda_{E}(t|Z_{D}))\) over \(n=500,000\). 
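For concreteness, the registry part of the generating mechanism for Dataset 1 can be sketched as follows. The code is illustrative (the seed is arbitrary, and the construction of the matching life tables and incidence rates from the full birth cohort is omitted).

```python
import numpy as np

rng = np.random.default_rng(2023)          # arbitrary seed, illustration only

def rweibull(lam, p, size):
    """Weibull draws for the parametrization with hazard lam * p * (lam * t)**(p - 1)."""
    return (-np.log(rng.uniform(size=size))) ** (1.0 / p) / lam

def simulate_dataset1(n=50_000):
    """Sketch of the cancer-registry part of the data-generating process for Dataset 1."""
    gender = rng.binomial(1, 0.5, size=n)
    t_D = rweibull(0.5e-2, 1.0, n)          # age at potential cancer diagnosis
    t_P = rweibull(1.0e-2, 2.0, n)          # age at potential non-cancer death
    diagnosed = t_D < t_P                    # only these subjects enter the registry
    age, year = t_D, 1960.0 + t_D            # covariates Z_D at diagnosis
    lam0 = 0.1 * np.exp(-np.log(1.2) * 60 / 7.5 - np.log(0.95) * (2000 - 1960) / 15)
    beta = np.array([np.log(1.2) / 7.5, np.log(0.95) / 15, np.log(0.8)])
    lam_E = lam0 * np.exp(beta[0] * age + beta[1] * year + beta[2] * gender)
    T_DE = rng.exponential(1.0 / lam_E)      # time to cancer death after diagnosis
    T_DO = np.minimum(T_DE, t_P - t_D)       # all-cause time after diagnosis
    C = rng.uniform(0.0, 15.0, size=n)       # censoring time after diagnosis
    keep = diagnosed & (t_D >= 60) & (t_D < 75)
    return {"time": np.minimum(T_DO, C)[keep],
            "event": (T_DO <= C)[keep].astype(int),
            "age": age[keep], "year": year[keep], "gender": gender[keep]}
```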
Table 1 displays the number of cancer patients and events in each of the four datasets. To apply the proposed method, we calculated the Kaplan-Meier estimators for each subpopulation defined by \(Z_{D}\) to estimate \(S_{O}(t|Z_{D})\). For the log-linear extrapolation of \(S_{O}(t|Z_{D})\) beyond the end of follow-up in (7), we used \(H=4\) and \(H=10\). Table 2 shows the empirical biases and root mean squared errors (rMSEs) of estimates for 3-, 5-, 7-, and 10-year net survival. In all datasets, the \(PP\) estimator had considerable biases, particularly in the case of high incidence (Datasets 3 and 4). The proposed method had negligible biases at all time points, and the rMSEs of the proposed methods were smaller than those of the \(PP\) estimator. No substantial differences in estimation accuracy were observed between extrapolation with \(H=4\) and with \(H=10\). ## 6 Illustration We illustrate our proposed method by analyzing two cancer sites from the National Cancer Registry at the Office for National Statistics. We focused on a subgroup of adults aged 65-74 years who were diagnosed with colon or prostate cancer from 1990 to 2000 in London, England. All patients were followed up to 15 years after diagnosis. For colon cancer, 55,033 patients were included and 48,549 of them died by the end of follow-up. For prostate cancer, 71,419 patients were included, and 62,422 patients died. The data were analyzed by cancer site (colon and prostate). To apply the \(PP\) estimator, we set \(Z_{D}=(age,year,gender)^{tr}\) for colon cancer and \(Z_{D}=(age,year)^{tr}\) for prostate cancer. To calculate \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L})\), we used the population life-table of England, which gives annual mortality from 1981 to 2015 by age and gender. To calculate the incidence rate, we used population counts by age, gender, and calendar year, which are available from England Population Estimates 1971 to 2014 (https://www.ons.gov.uk/peoplepopulationandcommunity). The survival function \(S_{O}(t|Z_{D})\) was estimated by the Kaplan-Meier method applied to the subpopulations defined by \(Z_{D}\). Extrapolation of \(S_{O}(t|Z_{D})\) required in (7) was carried out by the method in Appendix C with \(H=4\) and \(H=10\). Figure 2(A) shows the net survival estimated by the \(PP\) estimator with the standard use of the population life-table and with the proposed adjustment (\(H=4\) or 10) for colon cancer. Correspondingly, the estimated net survival probabilities at selected time points are shown in Table 3. Recall that in the standard approach, \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L})\) is used as \(S_{P}(t|Z_{D})\), whereas in the proposed adjustment \(S_{P}(t|Z_{D})\) is estimated. To see the magnitude of the difference between the two approaches, in Figure 2(B), \(\tilde{S}_{L\to O}(t|\tilde{Z}_{L}=(65,1990,male))\) and \(\hat{S}_{P}(t|Z_{D}=(65,1990,male))\) are plotted, which are used as estimates of \(S_{P}(t|Z_{D}=(65,1990,male))\) in (1) in the standard method and the proposed one, respectively. Figure 2(C) shows \(\tilde{S}_{L\to E}(t|\tilde{Z}_{L}=(65,1990,male),\tilde{X}_{L}=0)\), and Figure 2(D) shows \(\alpha(\tilde{Z}_{L})\) by age. Recall that \(\tilde{S}_{L\to E}(t|\tilde{Z}_{L},\tilde{X}_{L}=0)=1\) for any \(t\) and \(\alpha(\tilde{Z}_{L})=0\) would hold if the life table included no cancer patients or cancer deaths.
Figure 2(C) indicates only a small inclusion of cancer deaths in the life tables, and Figure 2(D) shows that the inclusion of cancer patients is also minor. Correspondingly, as seen in Figure 2(B), the estimate of \(S_{P}(t|Z_{D})\) with the proposed adjustment was only slightly different. Accordingly, as seen in Figure 2(A), the \(PP\) estimates of net survival were modified by the proposed method by 0.4 to 0.6% at all time points. Results for prostate cancer are presented in Figure 3. In contrast to colon cancer, the inclusion of prostate cancer deaths in the life table had a larger impact, as seen in Figure 3(A); the \(PP\) estimates with the proposed method were 0.8 to 1.6% lower than those with the standard use of the life table. As seen in Figure 3(B), the proposed method made a non-negligible adjustment in the estimation of \(S_{P}(t|Z_{D}=(65,1990))\). As shown in Figures 3(C) and 3(D), the inclusion of cancer patients and cancer deaths in the life table is not negligible. ## 7 Discussion When analyzing population-based cancer registry data, external life tables (from the general population) are commonly utilized to estimate cancer-related survival measures (such as net survival) within the relative survival data setting. Such estimation assumes the absence of cancer patients and cancer deaths in the life tables, which cannot be fully met. Although the issue is generally ignored by assuming a minor impact on the estimation of net survival, a sensitivity analysis method to address it was also proposed by Talback and Dickman (2011). Their sensitivity analysis requires information on the number of cancer deaths in the general population, information that is not available in the data usually collected by cancer registries. In this paper, we demonstrate how to address this problem with a method based on an easily tractable integral equation. In contrast to the approach introduced in Talback and Dickman (2011), our method requires only information contained in the standard cancer registry datasets. Our method is easily extendable to various measures other than net survival, such as the relative survival ratio or the crude probability of death. The relative survival is defined as \(S_{O}(t)/S_{P}(t)\), which is the ratio between overall survival in the cancer patients and that in the general population. The Ederer I (\(E1\)) estimator (Ederer, Axtell, and Cutler, 1961; Perme, Stare, and Esteve, 2012) of \(\Lambda_{R}(t)=-\log S_{R}(t)\), a consistent estimator of the relative survival ratio, is defined by \[\hat{\Lambda}_{R}^{E1}(t)=\int_{0}^{t}\frac{\sum_{i=1}^{n}dN_{i}(u)}{\sum_{j=1}^{n}Y_{j}(u)}-\int_{0}^{t}\frac{\sum_{i=1}^{n}\tilde{S}_{L\to O}(u|Z_{D,i})d\tilde{\Lambda}_{L\to O}(u|Z_{D,i})}{\sum_{j=1}^{n}\tilde{S}_{L\to O}(u|Z_{D,j})}. \tag{11}\] The \(E1\) estimator is a consistent estimator of \(\Lambda_{R}(t)\) under the condition \(C\perp\{T_{O},Z_{D}\}\) (independent censoring). With the \(E1\) estimator, one can replace \(\tilde{S}_{L\to O}(t|Z_{D})\) and \(\tilde{\Lambda}_{L\to O}(t|Z_{D})\) in equation (11) with their adjusted versions. The crude probability of death is defined by \(F_{CPD}(t)=\int_{0}^{t}S_{O}(u)\lambda_{E}^{*}(u)du\), where \(\lambda_{E}^{*}(u)=\lim_{h\to 0}\Pr(u<T_{E}\leq u+h|T_{O}\geq u)/h\) is the cause-specific hazard due to cancer.
An estimator of the crude probability of death is defined by \(\hat{F}_{CPD}(t)=\int_{0}^{t}\hat{S}_{O}(u)d\hat{\Lambda}_{E}^{*}(u)\) (Cronin and Feuer, 2000; Perme, Stare, and Esteve, 2012; Perme and Pavlic, 2018), where \(\hat{S}_{O}(t)\) is the estimator of the overall survival whose cumulative hazard function is estimated by the Nelson-Aalen estimator, and \[\hat{\Lambda}_{E}^{*}(t)=\int_{0}^{t}\frac{\sum_{i=1}^{n}dN_{i}(u)}{\sum_{j=1}^{n}Y_{j}(u)}-\int_{0}^{t}\frac{\sum_{i=1}^{n}Y_{i}(u)d\Lambda_{P}(u|Z_{D,i})}{\sum_{j=1}^{n}Y_{j}(u)}. \tag{12}\] This estimator is consistent under the independent censoring assumption \(C\perp\{T_{O},T_{P},Z_{D}\}\). For this estimator, \(\Lambda_{P}(u|Z_{D})\) needs to be replaced with the adjusted version. Ederer, Axtell, and Cutler (1961), Esteve, Benhamou, and Raymond (1994), and others have stated that the presence of cancer deaths in the life table has a minimal impact on the estimation of these cancer survival measures. Although our illustration supported this, in particular in the case of low cancer incidence, it does not necessarily hold when the incidence rate is not low, and the incidence rates of some cancer types may increase in the future (Siegel, Miller, and Jemal, 2016). It is therefore valuable to have tools to address the issue quantitatively. Furthermore, the net survival and other related survival measures have attracted attention in the context of human immunodeficiency virus (HIV) cohorts (Marston et al., 2005, 2007; Bhaskaran et al., 2008; Marston et al., 2011) and of cardiovascular diseases (Nelson et al., 2008; Lantelme et al., 2022). Because of the dramatic improvement of prognosis among individuals infected with HIV following the widespread introduction of highly active antiretroviral therapy (HAART), it became important to account for the competing risks of death from other causes when estimating survival from HIV. In the absence of accurate information on the cause of death, methods developed for the relative survival setting can be applied. Because HIV prevalence is very high in some African regions (for example exceeding 10% in the 15-49 age group in South Africa and Botswana), it is crucial to address the presence of HIV patients and HIV deaths in the life tables. Similarly, the high prevalence and mortality of cardiovascular diseases in many populations are likely to violate the assumptions underlying survival estimation approaches within the relative survival setting. In such situations, the proposed method would play an important role. ## Acknowledgment The first author's research was partly supported by Grant-in-Aid for Early-Career Scientists (20K19754) from the Ministry of Education, Science, Sports and Technology of Japan. The second author's research was partly supported by Grant-in-Aid for Challenging Exploratory Research (16K12403) and for Scientific Research (16H06299, 18H03208) from the Ministry of Education, Science, Sports and Technology of Japan. The third author's research was partly supported by Cancer Research UK (Reference C7923/A18525). Computational calculations were performed at the Institute of Medical Science (the University of Tokyo). _Conflict of Interest_: None declared.
2310.11636
A Uniform Language to Explain Decision Trees
The formal XAI community has studied a plethora of interpretability queries aiming to understand the classifications made by decision trees. However, a more uniform understanding of what questions we can hope to answer about these models, traditionally deemed to be easily interpretable, has remained elusive. In an initial attempt to understand uniform languages for interpretability, Arenas et al. (2021) proposed FOIL, a logic for explaining black-box ML models, and showed that it can express a variety of interpretability queries. However, we show that FOIL is limited in two important senses: (i) it is not expressive enough to capture some crucial queries, and (ii) its model agnostic nature results in a high computational complexity for decision trees. In this paper, we carefully craft two fragments of first-order logic that allow for efficiently interpreting decision trees: Q-DT-FOIL and its optimization variant OPT-DT-FOIL. We show that our proposed logics can express not only a variety of interpretability queries considered by previous literature, but also elegantly allows users to specify different objectives the sought explanations should optimize for. Using finite model-theoretic techniques, we show that the different ingredients of Q-DT-FOIL are necessary for its expressiveness, and yet that queries in Q-DT-FOIL can be evaluated with a polynomial number of queries to a SAT solver, as well as their optimization versions in OPT-DT-FOIL. Besides our theoretical results, we provide a SAT-based implementation of the evaluation for OPT-DT-FOIL that is performant on industry-size decision trees.
Marcelo Arenas, Pablo Barcelo, Diego Bustamante, Jose Caraball, Bernardo Subercaseaux
2023-10-18T00:07:38Z
http://arxiv.org/abs/2310.11636v2
# A Symbolic Language ###### Abstract The recent development of formal explainable AI has disputed the folklore claim _"decision trees are readily interpretable models"_, showing different interpretability queries that are computationally hard on decision trees, as well as proposing different methods to deal with them in practice. Nonetheless, no single explainability query or score works as a "silver bullet" that is appropriate for every context and end-user. This naturally suggests the possibility of "interpretability languages" in which a wide variety of queries can be expressed, giving control to the end-user to tailor queries to their particular needs. In this context, our work presents ExplainDT, a symbolic language for interpreting decision trees. ExplainDT is rooted in a carefully constructed fragment of first-order logic that we call StratifOILed. StratifOILed balances expressiveness and complexity of evaluation, allowing for the computation of many post-hoc explanations -- both local (e.g., abductive and contrastive explanations) and global ones (e.g., feature relevance) -- while remaining in the Boolean Hierarchy over NP. Furthermore, StratifOILed queries can be written as a Boolean combination of NP-problems, thus allowing us to evaluate them in practice with a constant number of calls to a SAT solver. On the theoretical side, our main contribution is an in-depth analysis of the expressiveness and complexity of StratifOILed, while on the practical side, we provide an optimized implementation for encoding StratifOILed queries as propositional formulas, together with an experimental study on its efficiency. ## 1 Introduction Context.The increasing need to comprehend the decisions made by machine learning (ML) models has fostered a large body of research in _explainable AI_ (XAI) methods [42], leading to the introduction of numerous queries and scores that aim to explain individual predictions produced by such models. For example, several methods aim to measure the contribution of a feature, or a set of features, to the output of an ML model, thus enabling users to identify the most influential features in shaping the model's decision around a given input [19; 38; 46]. Nonetheless, it is often not a single query or score, but a combination of them, that provides the best explanation [16; 41]. Furthermore, it has been shown that some widely used explainability scores, believed to be theoretically mature and robust, may behave counterintuitively in certain situations [11; 21; 24; 32; 51]. This state of affairs has raised a call for developing _explainability languages_: general-purpose languages that allow users the flexibility to interact with an ML model, by posing different queries, in search of the best explanation. A first attempt in this direction was carried out by Arenas et al. [2], who designed a simple explainability language based on first-order logic, called FOIL for _first-order interpretability logic_, that was able to express some basic explainability queries. However, as noted by the authors of [2], the primary purpose of FOIL was not to serve as a practical explainability language but as a foundation upon which such languages could be constructed. To date, nevertheless, we have no complete understanding of why FOIL is not a good practical language for explainability, nor what needs to be added to it in order to make it a more effective tool for performing such tasks. 
To gain a deeper understanding of this issue, we introduce two desiderata that any language used for explainability queries should meet: (1) _Rich expressive power:_ The language should be able to express a broad range of explainability queries commonly used in practice; (2) _Efficiency:_ The computational complexity of the language used to express explainability queries must be manageable. Specifically, we require that queries can be evaluated with a constant number of calls to an NP oracle, which may enable the use of SAT solvers for query evaluation. SAT solvers are a mature technology that is effective in computing explanations for various ML models [27, 30, 55]. Theoretical contributions. We start by assessing the suitability of FOIL as an explainability language, with a specific emphasis on its ability to meet the first criterion in our desiderata. We show that there are crucial explainability queries that cannot be expressed in this language. For instance, the query known as the _minimum sufficient reason_ [6, 48] cannot be expressed in FOIL. This query takes an input \(\mathbf{e}\) to a machine learning model \(\mathcal{M}\) and requests a subset \(S\) of its features, with the smallest possible cardinality, that _justifies_ the output of the model on said input. In other words, the output of \(\mathcal{M}\) on inputs that coincide with \(\mathbf{e}\) on the features from \(S\) must always be the same. In view of this drawback, we propose a natural extension of the FOIL language, which we call FOIL\({}^{+}\). We demonstrate that FOIL\({}^{+}\) can express a broad range of explainability queries commonly used in practice, including the minimum sufficient reason. Importantly, FOIL\({}^{+}\) is _model-agnostic_, meaning it does not depend on the specific type of machine learning model being employed. Next, we explore the computational cost of evaluating FOIL\({}^{+}\) queries to gain understanding of the second condition in our list of requirements. Our focus is on the evaluation problem over decision trees, a class of ML models that has been extensively studied for explainability in the literature [2, 3, 6, 28, 29, 30]. It is known that there are FOIL queries that are \(\mathrm{NP}\)-hard to evaluate on decision trees [2]. However, it remains unclear whether every FOIL\({}^{+}\) query over decision trees can be evaluated with a constant number of calls to an \(\mathrm{NP}\) oracle. We show that, under some widely believed complexity assumptions, this is not the case. Specifically, we show that the FOIL\({}^{+}\) evaluation problem over decision trees is hard for each level of the polynomial hierarchy, which goes well beyond the problems that can be solved with a constant, or even polynomial, number of calls to an \(\mathrm{NP}\) oracle. Based on this negative result, we identify a language StratiFOILed that satisfies our desiderata: (a) it is able to express a broad range of explainability queries commonly used in practice; in fact, all the examples we present of explainability queries that can be expressed in FOIL\({}^{+}\) can also be expressed in StratiFOILed; and (b) every query expressible in StratiFOILed can be evaluated with a constant number of calls to an \(\mathrm{NP}\) oracle. The language StratiFOILed is so named because it is a stratified version of FOIL\({}^{+}\) that additionally allows quantification over a specific class of objects that cannot be defined in FOIL\({}^{+}\).
Unlike FOIL\({}^{+}\), the language StratiFOILed is _model-specific_, as the class of objects over which we allow this extended quantification to range is characteristic of decision trees. This allows us to define the language StratiFOILed in a simple and elegant way. Practical contributions. We present an implementation of ExplainDT, a minimalistic high-level language that can be used to interpret trained decision trees. ExplainDT queries are compiled into StratiFOILed formulas, then run through a simplifier module, and finally encoded as a CNF formula whose satisfiability captures the truth value of the StratiFOILed formula. Such CNF formulas are later fed to a SAT solver, whose answer is given back to the user, thus closing the loop. Figure 1 illustrates the intended workflow when using ExplainDT. Our entire implementation, together with the necessary code to reproduce our experiments, is available in the repository. Related work. Our work is rooted in the area of _formal_ XAI [13, 39, 40], which aims to deliver trustworthy explanations through well-defined semantics, meaning that each kind of explanation has a precise mathematical definition, thus avoiding the pitfalls of _explanations that do not make sense or are unfaithful to the underlying models_ [47]. The problem of explaining decision trees, as well as their generalization to random forests, has received significant attention within the community [2, 4, 6, 12, 28, 29, 30]. The theoretical line of work within formal XAI that is closest to our work is that of Liu and Lorini [34, 35, 36], which also concerns logic-based languages for explainability. The main differences between our work and that of Liu and Lorini are: (i) we provide an implementation of our language, (ii) we use classical first-order logic as opposed to modal logic, and (iii) the complexity of evaluation is much lower in our logic. On the other hand, the practical line of work that is closest to ours is that of libraries for computing abductive and contrastive explanations for tree-based models, such as _PyXAI_ [5] and _XReason_ [22, 24, 25]. However, while PyXAI and XReason provide support only for a finite number of pre-defined queries, ExplainDT allows writing an infinite family of different queries. Finally, our implementation based on SAT encodings is related to a wide range of work using SAT-solving for explainability [3, 28, 40, 43, 26]. ## 2 Background Models and instances. We use an abstract notion of a model of dimension \(n\), and define it as a Boolean function \(\mathcal{M}:\{0,1\}^{n}\rightarrow\{0,1\}\).1 We write \(\dim(\mathcal{M})\) for the dimension of a model \(\mathcal{M}\). A _partial instance_ of dimension \(n\) is a tuple \(\mathbf{e}\in\{0,1,\bot\}^{n}\), where \(\bot\) is used to represent undefined features. We define \(\mathbf{e}_{\bot}=\{i\in\{1,\ldots,n\}\mid\mathbf{e}[i]=\bot\}\). An _instance_ of dimension \(n\) is a tuple \(\mathbf{e}\in\{0,1\}^{n}\). That is, an instance is a partial instance without undefined features. Given partial instances \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\) of dimension \(n\), we say that \(\mathbf{e}_{1}\) is _subsumed_ by \(\mathbf{e}_{2}\) if for every \(i\in\{1,\ldots,n\}\) with \(\mathbf{e}_{1}[i]\neq\bot\), it holds that \(\mathbf{e}_{1}[i]=\mathbf{e}_{2}[i]\). That is, it is possible to obtain \(\mathbf{e}_{2}\) from \(\mathbf{e}_{1}\) by replacing some unknown values. For example, \((1,\bot)\) is subsumed by \((1,0)\), but it is not subsumed by \((0,0)\).
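These basic notions are easy to make concrete in code; the following minimal Python sketch (ours, purely illustrative) represents partial instances as tuples in which None plays the role of \(\bot\) and checks subsumption.

```python
BOT = None  # stands for the undefined value ⊥

def undefined(e):
    """e_⊥: the set of indices of undefined features of a partial instance e."""
    return {i for i, v in enumerate(e) if v is BOT}

def subsumed(e1, e2):
    """True iff e1 is subsumed by e2, i.e. they agree on every feature defined in e1."""
    return all(v1 is BOT or v1 == v2 for v1, v2 in zip(e1, e2))

assert subsumed((1, BOT), (1, 0)) and not subsumed((1, BOT), (0, 0))
```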
A partial instance \(\mathbf{e}\) can be seen as a compact representation of the set of instances \(\mathbf{e}^{\prime}\) such that \(\mathbf{e}\) is subsumed by \(\mathbf{e}^{\prime}\), where such instances \(\mathbf{e}^{\prime}\) are called the _completions_ of \(\mathbf{e}\). Footnote 1: This initial exploration focuses on Boolean machine learning models, as is often the case in formal XAI research. However, this assumption is not overly restrictive in practice for two reasons. Firstly, the extension from Boolean models to models with more general categorical features is not complicated [31, 12]. Secondly, there is substantial progress in discretizing continuous numerical features into intervals [2, 12]. Decision trees. A _decision tree_ (cf. Figure 2) over instances of dimension \(n\) is a rooted directed tree \(\mathcal{T}\) with labels on edges and nodes such that: (i) each leaf is labeled with \(\mathbf{true}\) or \(\mathbf{false}\); (ii) each internal node (a node that is not a leaf) is labeled with a feature \(i\in\{1,\ldots,n\}\); (iii) each internal node has two outgoing edges, one labeled \(0\) and the other labeled \(1\); and (iv) in every path from the root to a leaf, no two nodes on that path have the same label. Every instance \(\mathbf{e}\in\{0,1\}^{n}\) defines a unique path \(\pi_{\mathbf{e}}=u_{1}\cdots u_{k}\) from the root \(u_{1}\) to a leaf \(u_{k}\) of \(\mathcal{T}\) such that: if the label of \(u_{i}\) is \(j\in\{1,\ldots,n\}\), where \(i\in\{1,\ldots,k-1\}\), then the edge from \(u_{i}\) to \(u_{i+1}\) is labeled with \(\mathbf{e}[j]\). Further, the instance \(\mathbf{e}\) is positive, denoted by \(\mathcal{T}(\mathbf{e})=1\), if the label of \(u_{k}\) is \(\mathbf{true}\); otherwise the instance \(\mathbf{e}\) is negative, which is denoted by \(\mathcal{T}(\mathbf{e})=0\). For example, for the decision tree \(\mathcal{T}\) in Figure 2 and instances \(\mathbf{e}_{1}=(0,0,1,1)\) and \(\mathbf{e}_{2}=(0,1,1,0)\), it holds that \(\mathcal{T}(\mathbf{e}_{1})=1\) and \(\mathcal{T}(\mathbf{e}_{2})=0\). Figure 1: High-level diagram of the ExplainDT workflow. ## 3 The FOIL Logic FOIL was introduced in [2], and it is simply first-order logic over two relations on the set of partial instances of a given dimension: a unary relation Pos, which indicates the value of an instance in a model, and a binary relation \(\subseteq\) that represents the subsumption relation among partial instances. We assume familiarity with the syntax and semantics of first-order logic (see the appendix for a review of these concepts). In particular, given a vocabulary \(\sigma\) consisting of relations \(R_{1}\), \(\ldots\), \(R_{\ell}\), recall that a structure \(\mathfrak{A}\) over \(\sigma\) consists of a domain, where quantifiers are instantiated, and an interpretation for each relation \(R_{i}\). Moreover, given a first-order formula \(\varphi\) defined over the vocabulary \(\sigma\), we write \(\varphi(x_{1},\ldots,x_{k})\) to indicate that \(\{x_{1},\ldots,x_{k}\}\) is the set of free variables of \(\varphi\). Finally, given a structure \(\mathfrak{A}\) over the vocabulary \(\sigma\) and elements \(a_{1}\), \(\ldots\), \(a_{k}\) in the domain of \(\mathfrak{A}\), we use \(\mathfrak{A}\models\varphi(a_{1},\ldots,a_{k})\) to indicate that formula \(\varphi\) is satisfied by \(\mathfrak{A}\) when each variable \(x_{i}\) is replaced by element \(a_{i}\) (\(1\leq i\leq k\)). Consider a model \(\mathcal{M}\) with \(\dim(\mathcal{M})=n\).
The structure \(\mathfrak{A}_{\mathcal{M}}\) representing \(\mathcal{M}\) over the vocabulary formed by Pos and \(\subseteq\) is defined as follows. The domain of \(\mathfrak{A}_{\mathcal{M}}\) is the set \(\{0,1,\bot\}^{n}\) of all partial instances of dimension \(n\). An instance \(\mathbf{e}\in\{0,1\}^{n}\) is in the interpretation of Pos in \(\mathfrak{A}_{\mathcal{M}}\) if and only if \(\mathcal{M}(\mathbf{e})=1\), and no partial instance including undefined features is contained in the interpretation of Pos. Moreover, a pair \((\mathbf{e}_{1},\mathbf{e}_{2})\) is in the interpretation of relation \(\subseteq\) in \(\mathfrak{A}_{\mathcal{M}}\) if and only if \(\mathbf{e}_{1}\) is subsumed by \(\mathbf{e}_{2}\). Then given a formula \(\varphi(x_{1},\ldots,x_{k})\) in FOIL and partial instances \(\mathbf{e}_{1}\), \(\ldots\), \(\mathbf{e}_{k}\) of dimension \(n\), model \(\mathcal{M}\) is said to _satisfy_\(\varphi(\mathbf{e}_{1},\ldots,\mathbf{e}_{k})\), denoted by \(\mathcal{M}\models\varphi(\mathbf{e}_{1},\ldots,\mathbf{e}_{k})\), if \(\mathfrak{A}_{\mathcal{M}}\models\varphi(\mathbf{e}_{1},\ldots,\mathbf{e}_{k})\). Notice that for a decision tree \(\mathcal{T}\), the structure \(\mathfrak{A}_{\mathcal{T}}\) can be exponentially larger than \(\mathcal{T}\). Hence, \(\mathfrak{A}_{\mathcal{T}}\) is a theoretical construction needed to formally define the semantics of FOIL, but that should not be built when verifying in practice if a formula \(\varphi\) is satisfied by \(\mathcal{T}\). ### Expressing properties in FOIL We give several examples of how FOIL can be used to express some natural explainability queries on models. In these examples we make use of the following FOIL formula: \(\textsc{Full}(x)=\forall y\,(x\subseteq y\,\to\,y\subseteq x)\). Notice that if \(\mathcal{M}\) is a model and \(\mathbf{e}\) is a partial instance, then \(\mathcal{M}\models\textsc{Full}(\mathbf{e})\) if and only if \(\mathbf{e}\) is also an instance (i.e., it has no undefined features). We also use the formula \[\textsc{AllPos}(x)\;=\;\forall y\,\big{(}(x\subseteq y\wedge\textsc{Full}(y)) \,\to\,\textsc{Pos}(y)\big{)},\] such that \(\mathcal{M}\models\textsc{AllPos}(\mathbf{e})\) if and only if every completion \(\mathbf{e}^{\prime}\) of \(\mathbf{e}\) is a positive instance of \(\mathcal{M}\). Analogously, we define a formula \(\textsc{AllNeg}(x)\). Sufficient reasons.A _sufficient reason_ (SR) for an instance \(\mathbf{e}\) over a model \(\mathcal{M}\) is a partial instance \(\mathbf{e}^{\prime}\) such that \(\mathbf{e}^{\prime}\subseteq\mathbf{e}\) and each completion of \(\mathbf{e}^{\prime}\) takes the same value over \(\mathcal{M}\) as \(\mathbf{e}\). We can define SRs in FOIL by means of the following formula: \[\textsc{SR}(x,y)\;=\;\textsc{Full}(x)\wedge\,y\subseteq x\,\wedge\,\forall z \,\big{(}(y\subseteq z\wedge\textsc{Full}(z))\,\to\,(\textsc{Pos}(z)\leftrightarrow \textsc{Pos}(x))\big{)}.\] In fact, it is easy to see that \(\mathcal{M}\models\textsc{SR}(\mathbf{e},\mathbf{e}^{\prime})\) if and only if \(\mathbf{e}^{\prime}\) is a SR for \(\mathbf{e}\) over \(\mathcal{M}\). Notice that \(\mathbf{e}\) is always a SR for itself. However, we are typically interested in SRs that satisfy some optimality criterion. A common such criterion is that of being _minimal_[2, 6, 28, 48]. 
Formally, \(\mathbf{e}^{\prime}\) is a _minimal SR_ for \(\mathbf{e}\) over \(\mathcal{M}\), if \(\mathbf{e}^{\prime}\) is a SR for \(\mathbf{e}\) over \(\mathcal{M}\) and there is no partial instance \(\mathbf{e}^{\prime\prime}\) that is properly subsumed by \(\mathbf{e}^{\prime}\) and is also a SR for \(\mathbf{e}\). Let us write \(x\subsetneq y\) for \(x\subseteq y\wedge\neg(y\subseteq x)\). Then for \[\textsc{MinimalSR}(x,y)\;=\;\textsc{SR}(x,y)\,\wedge\,\forall z\,(z\subsetneq y\,\to\neg\textsc{SR}(x,z)),\] we have that \(\mathcal{M}\models\textsc{MinimalSR}(\mathbf{e},\mathbf{e}^{\prime})\) if and only if \(\mathbf{e}^{\prime}\) is a minimal SR for \(\mathbf{e}\) over \(\mathcal{M}\). Minimal SRs have also been called _prime implicant_ or _abductive_ explanations in the literature [23, 38]. Feature Relevancy. A standard global interpretability question about an ML model is to decide which features are relevant to its decisions [14, 20]. Such a notion can be defined in FOIL as follows: \[\textsc{RFS}(x)\;=\;\forall y\,\big(\textsc{SUF}(x,y)\,\to\,(\textsc{AllPos}(y)\vee\textsc{AllNeg}(y))\big), \tag{1}\] where \(\textsc{SUF}(x,y)\) is a FOIL formula (SUF stands for _same undefined features_) such that \(\mathcal{M}\models\textsc{SUF}(\mathbf{e},\mathbf{e}^{\prime})\) if and only if \(\mathbf{e}_{\bot}=\mathbf{e}^{\prime}_{\bot}\), i.e., the sets of undefined features in \(\mathbf{e}\) and \(\mathbf{e}^{\prime}\) are the same (see the appendix for a definition of this formula). Then we have that \(\mathcal{M}\models\textsc{RFS}(\mathbf{e})\) if and only if for every \(\mathbf{e}^{\prime}\) with \(\mathbf{e}_{\perp}=\mathbf{e}_{\perp}^{\prime}\), all completions of \(\mathbf{e}^{\prime}\) receive the same classification over \(\mathcal{M}\). In other words, the output of the model on each instance is invariant to the features that are undefined in \(\mathbf{e}\). We call this a _Relevant Feature Set_ (RFS). For example, for the decision tree \(\mathcal{T}\) shown in Figure 2, the set of features \(\{1,3,4\}\) is an RFS, as the output of the model does not depend on feature 2. In particular, for the instances \(\mathbf{e}_{1}=(0,0,0,0)\) and \(\mathbf{e}_{2}=(0,1,0,0)\), we have that \(\mathcal{T}(\mathbf{e}_{1})=\mathcal{T}(\mathbf{e}_{2})=1\), while for the instances \(\mathbf{e}_{3}=(1,0,1,0)\) and \(\mathbf{e}_{4}=(1,1,1,0)\), we have that \(\mathcal{T}(\mathbf{e}_{3})=\mathcal{T}(\mathbf{e}_{4})=0\), so that the output of the model on these instances does not depend on feature 2. As before, we can also express that \(\mathbf{e}\) is _minimal_ with respect to feature relevancy using the formula: \(\textsc{MinimalRFS}(x)=\textsc{RFS}(x)\,\wedge\,\forall y\,(y\subsetneq x\,\rightarrow\neg\textsc{RFS}(y))\). ### Expressiveness limitations of FOIL In some scenarios we want to express a stronger condition for SRs and RFSs: not only that they are minimal, but also that they are _minimum_. Formally, a SR \(\mathbf{e}^{\prime}\) for \(\mathbf{e}\) over \(\mathcal{M}\) is _minimum_, if there is no SR \(\mathbf{e}^{\prime\prime}\) for \(\mathbf{e}\) over \(\mathcal{M}\) with \(|\mathbf{e}_{\perp}^{\prime\prime}|>|\mathbf{e}_{\perp}^{\prime}|\), i.e., no SR \(\mathbf{e}^{\prime\prime}\) has more undefined features than \(\mathbf{e}^{\prime}\). Analogously, we can define the notion of _minimum_ RFS. One can easily observe that an RFS is minimum if and only if it is minimal. Therefore, the FOIL formula \(\textsc{MinimalRFS}(x)\) presented earlier expresses that \(\mathbf{e}\) is both a minimum and a minimal RFS.
This is however not the case for SRs; a sufficient reason can be minimal without being minimum. The following theorem demonstrates that FOIL cannot express the query that verifies if a partial instance \(\mathbf{e}^{\prime}\) is a minimum SR for a given instance \(\mathbf{e}\) over decision trees. **Theorem 1**.: _There is no formula \(\textsc{MinimumSR}(x,y)\) in FOIL such that, for every decision tree \(\mathcal{T}\), instance \(\mathbf{e}\) and partial instance \(\mathbf{e}^{\prime}\), we have that \(\mathcal{T}\models\textsc{MinimumSR}(\mathbf{e},\mathbf{e}^{\prime})\,\Leftrightarrow\,\mathbf{e}^{\prime}\) is a minimum SR for \(\mathbf{e}\) over \(\mathcal{T}\)._ In the next section, we present a simple extension of FOIL that is capable of expressing this and other interesting explainability queries that are related to comparing cardinalities of sets of features. ## 4 The \(\textsc{FOIL}^{+}\) Logic We propose the logic \(\textsc{FOIL}^{+}\), a simple extension of FOIL that includes a binary relation \(\mathrm{LEL}\) with the following interpretation: \(\mathcal{M}\models\mathrm{LEL}(\mathbf{e},\mathbf{e}^{\prime})\) if and only if \(|\mathbf{e}_{\perp}|\geq|\mathbf{e}_{\perp}^{\prime}|\). Using this language, we can easily express the idea that \(\mathbf{e}^{\prime}\) is the minimum SR for \(\mathbf{e}\) over \(\mathcal{M}\) by the following formula: \[\textsc{MinimumSR}(x,y)\,=\,\textsc{SR}(x,y)\wedge\forall z\big(\textsc{SR}(x,z)\rightarrow(\mathrm{LEL}(z,y)\rightarrow\mathrm{LEL}(y,z))\big).\] Minimum change required. Explanations that are _contrastive_ provide reasons for why a model classified a given input in a certain way instead of another. One such explanation is the _minimum change required_ (MCR) query. Given an instance \(\mathbf{e}\) and a model \(\mathcal{M}\), MCR aims to find another instance \(\mathbf{e}^{\prime}\) such that \(\mathcal{M}(\mathbf{e})\neq\mathcal{M}(\mathbf{e}^{\prime})\) and the number of features whose values need to be flipped in order to change the output of the model is minimal, which is the same as saying that the Hamming distance between \(\mathbf{e}\) and \(\mathbf{e}^{\prime}\) is minimal. It is possible to express MCR in \(\textsc{FOIL}^{+}\) as follows. In the supplementary material we show that in \(\textsc{FOIL}^{+}\) one can express a ternary formula \(\mathrm{LEH}\) such that for every model \(\mathcal{M}\) of dimension \(n\) and every sequence of instances \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) of dimension \(n\), it holds that: \(\mathcal{M}\models\mathrm{LEH}(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) if and only if the Hamming distance between \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) is less than or equal to the Hamming distance between \(\mathbf{e}_{1}\) and \(\mathbf{e}_{3}\).
By using \(\mathrm{LEH}\), we can express the condition that an instance \(y\) is obtained from an instance \(x\) by flipping the minimum number of values required to change the output of the model: \[\textsc{MinimumCR}(x,y)\,=\,\textsc{Full}(x)\wedge\textsc{ Full}(y)\wedge\neg(\textsc{Pos}(x)\leftrightarrow\textsc{Pos}(y))\,\wedge\\ \forall z\,\big{[}(\textsc{Full}(z)\wedge\neg(\textsc{Pos}(x) \leftrightarrow\textsc{Pos}(z))\big{)}\rightarrow\mathrm{LEH}(x,y,z)\big{]}.\] For example, for the decision tree \(\mathcal{T}\) depicted in Figure 2 and for the instances \(\mathbf{e}=(0,1,0,1)\) and \(\mathbf{e}^{\prime}=(0,1,1,0)\), it holds that \(\mathcal{T}\models\textsc{MinimumCR}(\mathbf{e},\mathbf{e}^{\prime})\), as \(\mathcal{T}(\mathbf{e})=1\), \(\mathcal{T}(\mathbf{e}^{\prime})=0\) and \(\mathcal{T}(\mathbf{e}^{\prime\prime})=1\) for every instance \(\mathbf{e}^{\prime\prime}\) such that the Hamming distance between \(\mathbf{e}\) and \(\mathbf{e}^{\prime\prime}\) is equal to 1. ### The evaluation problem For each query \(\varphi(x_{1},\ldots,x_{k})\) in \(\textsc{FOIL}^{+}\), we define its associated problem \(\textsc{Eval}(\varphi)\) as follows. The input to this problem is a decision tree \(\mathcal{T}\) of dimension \(n\) and partial instances \(\mathbf{e}_{1},\ldots,\mathbf{e}_{k}\) of dimension \(n\). The output is Yes, if \(\mathcal{T}\models\varphi(\mathbf{e}_{1},\ldots,\mathbf{e}_{k})\), and No otherwise. It is known that there exists a formula \(\phi(x)\) in FOIL for which its evaluation problem over the class of decision trees is \(\mathrm{NP}\)-hard, i.e., \(\textsc{Eval}(\varphi)\) is NP-hard [2]. We want to determine whether the language FOIL\({}^{+}\) is appropriate for implementation using SAT encodings. Thus, it is natural to ask whether the evaluation problem for formulas in this logic can always be reduced to a Boolean combination of \(\mathrm{NP}\) languages. However, we prove that this is not always the case. Although the evaluation of FOIL\({}^{+}\) formulas is always in the polynomial hierarchy (PH), there exist formulas in FOIL\({}^{+}\) (and even in FOIL) for which their corresponding evaluation problems are hard for every level of PH. Based on widely held complexity assumptions, we can conclude that FOIL\({}^{+}\) contains formulas whose evaluations cannot be reduced to a Boolean combination of \(\mathrm{NP}\) languages. Let us quickly recall how PH is defined. The class \(\Sigma_{k}^{\mathrm{P}}\), for \(k\geq 0\), is recursively defined as follows: \(\Sigma_{0}^{\mathrm{P}}=\mathrm{PTIME}\) and \(\Sigma_{k+1}^{\mathrm{P}}\) is the class of languages that can be solved in \(\mathrm{NP}\) with access to an oracle in \(\Sigma_{k}^{\mathrm{P}}\). We then define PH as \(\bigcup_{k\geq 0}\Sigma_{k}^{\mathrm{P}}\). A decision problem is _hard_ for \(\Sigma_{k}^{\mathrm{P}}\), if every problem in \(\Sigma_{k}^{\mathrm{P}}\) can be reduced in polynomial time to it. We can thus state the following result. **Theorem 2**.: _(i) Let \(\phi\) be a \(\textsc{FOIL}^{+}\) formula. Then there exists \(k\geq 0\) such that \(\textsc{Eval}(\phi)\) is in \(\Sigma_{k}^{\mathrm{P}}\); (ii) For every \(k\geq 0\), there is a FOIL-formula \(\phi_{k}\) such that \(\textsc{Eval}(\phi_{k})\) is \(\Sigma_{k}^{\mathrm{P}}\)-hard._ ## 5 The Stratifoiled Logic We present a logic that builds upon FOIL\({}^{+}\) and has the ability to encompass all explainability queries shown in the paper. Further, it can be evaluated through Boolean combinations of \(\mathrm{NP}\) languages. 
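Before turning to the definition of the new logic, the following brute-force Python sketch (our own illustration, exponential in the dimension and therefore usable only for toy models) makes the semantics of two of the queries discussed above concrete: minimum sufficient reasons and minimum change required, for an arbitrary Boolean model given as a Python function. The toy model at the end is unrelated to the tree of Figure 2, and all names are ours.

```python
from itertools import combinations, product

BOT = None  # the undefined value ⊥

def completions(e):
    """All instances obtained by filling in the undefined features of e."""
    slots = [(0, 1) if v is BOT else (v,) for v in e]
    return product(*slots)

def is_sr(model, e, e_prime):
    """e_prime is a sufficient reason for e: every completion gets the class of e."""
    target = model(e)
    return all(model(c) == target for c in completions(e_prime))

def minimum_sr(model, e):
    """A sufficient reason for e that is defined on as few features as possible."""
    n = len(e)
    for size in range(n + 1):                      # try smaller supports first
        for kept in combinations(range(n), size):
            cand = tuple(e[i] if i in kept else BOT for i in range(n))
            if is_sr(model, e, cand):
                return cand
    return e

def minimum_cr(model, e):
    """An instance of the opposite class at minimum Hamming distance from e."""
    return min((c for c in product((0, 1), repeat=len(e)) if model(c) != model(e)),
               key=lambda c: sum(a != b for a, b in zip(e, c)), default=None)

# Toy model (not the tree of Figure 2): positive iff feature 1 is 0 or feature 3 is 1.
model = lambda e: int(e[0] == 0 or e[2] == 1)
print(minimum_sr(model, (0, 0, 0, 0)))   # e.g. (0, None, None, None)
print(minimum_cr(model, (0, 1, 0, 1)))   # an opposite-class instance one flip away
```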
### The definition of the logic The logic Stratifoiled is defined by considering three layers: _atomic_ formulas, _guarded_ formulas, and finally the formulas from Stratifoiled itself. Atomic formulas.In the previous sections, we showed how some predicates like \(\subseteq\), Full, SUF, LEL, GLB and LEH are used to express these notions. Such predicates can be called _syntactic_ in the sense that they refer to the values of the features of partial instances, and they do not make reference to classification models. It turns out that all the syntactic predicates needed in our logical formalism can be expressed as first-order queries over the predicates \(\subseteq\) and \(\textsc{LEL}\). Moreover, it turns out that each such formulas can be evaluated in polynomial time: **Theorem 3**.: _Let \(\phi\) be a FOIL\({}^{+}\) formula defined over \(\{\subseteq,\textsc{LEL}\}\). Then \(\textsc{Eval}(\phi)\) is in \(\textsc{PTIME}\)._ The _atomic formulas_ of Stratifoiled are defined as the set of FOIL\({}^{+}\) formulas over the vocabulary \(\{\subseteq,\textsc{LEL}\}\). Note that we could not have simply taken one of these predicates when defining atomic formulas, as we show in the appendix that they cannot be defined in terms of each other. The following is an additional example of an atomic formula in Stratifoiled: \[\textsc{Cons}(x,y)\;=\;\exists z\,(x\subseteq z\wedge y\subseteq z).\] This relation checks whether two partial instances \(\mathbf{e}\) and \(\mathbf{e}^{\prime}\) are _consistent_, in the sense that features that are defined in both \(\mathbf{e}\) and \(\mathbf{e}^{\prime}\) have the same value. We use this formula in the rest of the paper. Guarded formulas.At this point, we depart from the model-agnostic approach of FOIL\({}^{+}\) and introduce the concept of _guarded_ quantification, which specifically applies to decision trees. This involves quantifying over the elements that define a decision tree, namely the nodes and the leaves in the tree. To formalize this, given a decision tree \(\mathcal{T}\) and a node \(u\) of \(\mathcal{T}\), the instance \(\mathbf{e}_{u}\)_represented_ by \(u\) is defined as follows. If \(\pi=u_{1}\cdots u_{k}\) is the unique path that leads from the root of \(\mathcal{T}\) to \(u_{k}=u\), then: (i) for every \(i\in\{1,\ldots,k-1\}\), if the label of node \(u_{i}\) is \(j\in\{1,\ldots,n\}\), then \(\mathbf{e}_{u}[j]\) is equal to the label of the edge in \(\mathcal{T}\) from \(u_{i}\) to \(u_{i+1}\); and (ii) for each \(j\in\{1,\ldots,n\}\), \(\mathbf{e}_{u}[j]=\bot\) if the label of \(u_{i}\) is different from \(j\) for every \(i\in\{1,\ldots,k-1\}\). For example, for the decision tree \(\mathcal{T}\) in Figure 2 and the nodes \(u\), \(v\) shown in this figure, it holds that \(\mathbf{e}_{u}=(\bot,0,\bot,\bot)\) and \(\mathbf{e}_{v}=(0,1,0,0)\). Then we define a predicate \(\textsc{Node}(x)\) such that, \(\mathcal{T}\models\textsc{Node}(\mathbf{e})\) if and only if \(\mathbf{e}=\mathbf{e}_{u}\) for some node \(u\) of \(\mathcal{T}\). Moreover, we define a predicate \(\textsc{PosLeaf}(x)\) such that, given a decision tree \(\mathcal{T}\) and a partial instance \(\mathbf{e}\): \(\mathcal{T}\models\textsc{PosLeaf}(\mathbf{e})\) if and only if \(\mathbf{e}=\mathbf{e}_{u}\) for some leaf \(u\) of \(\mathcal{T}\) with label \(\mathbf{true}\). With the predicates Node and PosLeaf as part of the vocabulary, the set of _guarded formulas_ is inductively defined as follows: (i) Each atomic formula is a guarded formula. 
(ii) Boolean combinations of guarded formulas are guarded formulas. (iii) If \(\phi\) is a guarded formula, then so are \(\exists x(\textsc{Node}(x)\land\phi)\), \(\forall x(\textsc{Node}(x)\to\phi)\), \(\exists x(\textsc{PosLeaf}(x)\land\phi)\) and \(\forall x(\textsc{PosLeaf}(x)\to\phi)\). This class is termed _guarded_ because every quantification is protected by a collection of nodes or leaves in the decision tree. As the decision tree comprises a linear number of nodes (in the size of the tree), and hence a linear number of leaves, it follows from Theorem 3 that every guarded formula can be evaluated within polynomial time. The following are examples of guarded formulas. These formulas will be used to express the different notions of explanation studied in this paper, and some of them, like Pos, AllPos and AllNeg, have already been shown to be needed when expressing interpretability queries. \[\textsc{Leaf}(x) = \textsc{Node}(x)\land\forall y\left(\textsc{Node}(y)\to(x\subseteq y\to y\subseteq x)\right)\] \[\textsc{NegLeaf}(x) = \textsc{Leaf}(x)\land\neg\textsc{PosLeaf}(x)\] \[\textsc{AllPos}(x) = \forall y\left(\textsc{Node}(y)\to((\textsc{Leaf}(y)\land\textsc{Cons}(x,y))\to\textsc{PosLeaf}(y))\right)\] \[\textsc{AllNeg}(x) = \forall y\left(\textsc{Node}(y)\to((\textsc{Leaf}(y)\land\textsc{Cons}(x,y))\to\textsc{NegLeaf}(y))\right)\] \[\textsc{Pos}(x) = \textsc{Full}(x)\land\textsc{AllPos}(x)\] \[\textsc{Neg}(x) = \textsc{Full}(x)\land\textsc{AllNeg}(x)\] StratiFOILed formulas. We finally have all the necessary ingredients to define the logic StratiFOILed. As in the previous cases, the formulas in this logic are defined in a recursive way: (i) Each guarded formula is a StratiFOILed formula. (ii) If \(\phi\) is guarded, then \(\exists x_{1}\cdots\exists x_{\ell}\,\phi\) and \(\forall x_{1}\cdots\forall x_{\ell}\,\phi\) are StratiFOILed formulas. (iii) Boolean combinations of StratiFOILed formulas are StratiFOILed formulas. Both predicates SR and RFS can be expressed as StratiFOILed formulas. In fact, \[\textsc{SR}(x,y)\ =\ \textsc{Full}(x)\land y\subseteq x\land(\textsc{Pos}(x)\to\textsc{AllPos}(y))\ \land\ (\textsc{Neg}(x)\to\textsc{AllNeg}(y)),\] which is a StratiFOILed formula as it is a Boolean combination of guarded formulas. For RFS the situation is a bit more complex. Observe first that we cannot use the definition of RFS\((x)\) provided in (1), as such a formula involves an alternation of unrestricted quantifiers. Instead, we can use the following guarded formula: \[\textsc{RFS}(x)\ =\ \forall y\left[\textsc{Node}(y)\to(\textsc{AllPos}(y)\ \to\ \forall z\left(\textsc{Node}(z)\to(\textsc{AllNeg}(z)\ \to\] \[\neg\exists w\left(\textsc{SUF}(x,w)\land\textsc{Cons}(w,y)\land\textsc{Cons}(w,z)\right))\right))\right]. \tag{2}\] Notice that this is a guarded formula, so a StratiFOILed formula, since the formula \(\neg\exists w\left(\textsc{SUF}(x,w)\land\textsc{Cons}(w,y)\land\textsc{Cons}(w,z)\right)\) is atomic (that is, it is defined by using only the predicates \(\subseteq\) and \(\textsc{LEL}\)).
Finally, we can see that StratiFOILed can express all the examples of explainability queries that we have studied in the paper: \[\textsc{MinimalSR}(x,y) = \textsc{SR}(x,y)\land\forall z\big(\textsc{SR}(x,z)\to(z\subseteq y\to y\subseteq z)\big)\] \[\textsc{MinimumSR}(x,y) = \textsc{SR}(x,y)\land\forall z\big(\textsc{SR}(x,z)\to(\textsc{LEL}(z,y)\to\textsc{LEL}(y,z))\big)\] \[\textsc{MinimalRFS}(x) = \textsc{RFS}(x)\land\forall y\big(\textsc{RFS}(y)\to(y\subseteq x\to x\subseteq y)\big)\] \[\textsc{MinimumCR}(x,y) = \textsc{Full}(x)\land\textsc{Full}(y)\land\neg(\textsc{Pos}(x)\leftrightarrow\textsc{Pos}(y))\land\] \[\forall z\left[(\textsc{Full}(z)\land\neg(\textsc{Pos}(x)\leftrightarrow\textsc{Pos}(z)))\to\textsc{LEH}(x,y,z)\right]\] ### The evaluation problem The evaluation problem for StratiFOILed is defined in the same way as for FOIL\({}^{+}\). Specifically, given a fixed StratiFOILed formula \(\phi(x_{1},\ldots,x_{m})\), we investigate the problem Eval(\(\phi\)), which takes a decision tree \(\mathcal{T}\) and partial instances \(\mathbf{e}_{1},\ldots,\mathbf{e}_{m}\) as input, and asks whether \(\mathcal{T}\models\phi(\mathbf{e}_{1},\ldots,\mathbf{e}_{m})\). It turns out that StratiFOILed can express formulas whose associated evaluation problem is NP-hard. An example of such a formula is \(\neg\textsc{MinimumSR}(x,y)\) [6]. Next we significantly extend this result by providing a precise characterization of the complexity of the evaluation problem for StratiFOILed. More specifically, we establish that this problem can always be solved in the _Boolean Hierarchy over \(\mathrm{NP}\)_ [10, 54], i.e., as a Boolean combination of \(\mathrm{NP}\) problems. The Boolean Hierarchy over \(\mathrm{NP}\) is denoted by \(\mathrm{BH}\), and it is defined as \(\bigcup_{k\geq 1}\mathrm{BH}_{k}\), where \(\mathrm{BH}_{k}\) for \(k\geq 1\) is recursively defined as follows: (1) \(\mathrm{BH}_{1}=\mathrm{NP}\); (2) \(\mathrm{BH}_{2k}\) is the class of problems \(L\) such that \(L=L_{1}\cap L_{2}\) with \(L_{1}\in\mathrm{BH}_{2k-1}\) and \(L_{2}\in\mathrm{coNP}\); and (3) \(\mathrm{BH}_{2k+1}\) is the class of problems \(L\) such that \(L=L_{1}\cup L_{2}\) with \(L_{1}\in\mathrm{BH}_{2k}\) and \(L_{2}\in\mathrm{NP}\). A decision problem is _hard_ for \(\mathrm{BH}_{k}\), if every problem in \(\mathrm{BH}_{k}\) can be reduced in polynomial time to it. Then: **Theorem 4**.: _(i) For each StratiFOILed formula \(\phi\), there is \(k\geq 1\) such that \(\textsc{Eval}(\phi)\) is in \(\mathrm{BH}_{k}\); (ii) For every \(k\geq 1\), there is a StratiFOILed formula \(\phi_{k}\) such that \(\textsc{Eval}(\phi_{k})\) is \(\mathrm{BH}_{k}\)-hard._ ## 6 Implementation and Experiments Our implementation of ExplainDT consists of three main components (cf. Figure 1): (i) a parser for high-level syntax, (ii) a prototype simplifier for StratiFOILed formulas, and (iii) the encoder translating StratiFOILed formulas into CNF instances. This section describes them at a high level and also presents some key experiments - the supplementary material contains a detailed exposition of our implementation, as well as more extensive experimentation. Parser and simplifier. Writing logical queries directly as in Equation (2) would be too cumbersome and error-prone for end-users; this motivates ExplainDT's high-level syntax, which is then parsed into StratiFOILed. An example is illustrated in Figure 3.
In turn, logical connectives can significantly increase the size of our resulting CNF formulas; consider for example the formula: \(\varphi(y)=\exists x\left[\neg(\neg(\neg(\neg(\neg(x\subseteq y)))))\lor(1,0, \bot)\subseteq(1,1,1)\right]\). It is clear that double-negations can be safely eliminated, and also that sub-expressions involving only constants can be pre-processed and also eliminated, thus resulting in the simplified formula \(\varphi(y)=\exists x\neg(x\subseteq y)\). We implement a _simplifier_-module that performs these optimizations. Encoder.We use standard encoding techniques for SAT-solving, for which we refer the reader to the _Handbook of Satisfiability_[1, 9]. The basic variables of our propositional encoding are of the form \(v_{x,i,s}\), indicating that the StratiFOILed variable \(x\) has value \(s\) in its \(i\)-th feature, with \(i\in\{1,\ldots,\dim(\mathcal{T})\}\), for an input decision tree \(\mathcal{T}\), and \(s\in\{0,1,\bot\}\). Then, the clauses (and further auxiliary variables) are mainly built on two layers of abstraction: a _predicate layer_, and a _first-order layer_. The predicate layer consists of individual ad-hoc encodings for each of the predicates and shorthands that appear frequently in queries, such as Cons, LEL, Full, AllPos and AllNeg. The first-order layer consists of encoding the logical connectives \((\neg,\lor,\land)\) as well as the quantifiers (with the corresponding Node and PosLeaf guards when appropriate). For two interesting examples of encoding the predicate layer, let us consider LEL and AllPos. For AllPos, we use a _reachability_ encoding, in which we create variables \(r_{x,u}\) to represent that a node \(u\) of \(\mathcal{T}\) is _reachable_ by a partial instance \(y\) subsuming \(x\). We start by enforcing that \(r_{x,\mathrm{root}(\mathcal{T})}\) is set to true, to then propagate the reachability from every node \(u\) to its children \(u\to 0\) and \(u\to 1\) depending on the value of \(x[a]\) with \(a\) the label of \(u\). Finally, by adding unit clauses stating that \((\neg r_{x,\ell})\) for every **false** leaf \(\ell\), we have encoded that no instance \(y\) subsuming \(x\) reaches a false leaf, and thus is a positive instance. For the case of LEL, we have that \[\textsc{LEL}(x,y) \equiv\ |x_{\bot}|\geq|y_{\bot}|\ \equiv\ \left(\sum_{i}^{\dim(\mathcal{T})}v_{x,i, \bot}\right)\geq\left(\sum_{i}^{\dim(\mathcal{T})}v_{y,i,\bot}\right)\] \[\ \ where \(c_{x,k}\) and \(c_{y,k}\) are the auxiliary variables from Sinz's _sequential encoding_[9, 50], thus amounting to a total of \(O(\dim(\mathcal{T})^{2})\) auxiliary variables and clauses. For the first-order layer, we implement the Tseitin transformation [52] to more efficiently handle \(\neg\) and \(\vee\), while treating the guarded-\(\forall\) as a conjunction over the \(O(|\mathcal{T}|)\) partial instances for which either \(\textsc{Node}(\cdot)\) or \(\textsc{PosLeaf}(\cdot)\) holds, which can be precomputed from \(\mathcal{T}\). An interesting problem that arises when handling negations or disjunctions is that of _consistency constraints_, e.g., for each \(i\in\{1,\ldots,\dim(\mathcal{T})\}\), the clause \((v_{x,i,\perp}\lor v_{x,i,0}\lor v_{x,i,1})\) should be true. To address this, we partition the clauses of our encoding into two sets: ConsistencyCls (consistency clauses) and SemanticCls (semantic clauses), so that logical connectives operate only over SemanticCls, preserving the internal consistency of our variables, both the original and auxiliary ones. 
Table 1 shows the size of the encoding on two queries we found to be hard: (i) deciding whether a random instance \(x\) has a sufficient reason \(y\) with \(|y_{\perp}|=0.1\times\dim(\mathcal{T})\), and (ii), deciding whether there is a partial instance \(x\) describing an RFS with \(|x_{\perp}|=0.1\times\dim(\mathcal{T})\). As Table 1 depicts, both the number of variables and clauses seem to grow at a rate that is linear in \(|\mathcal{T}|\), the number of nodes of \(\mathcal{T}\). Experiments.We perform experiments over both synthetic and real datasets (binarized _MNIST_[15] and the _Congressional Voting Records Data Set_[17]). As hardware we used a 2020 M1 MacBook Pro personal computer, and as software we used scikit-learn[44] for training decision trees, YalSAT[7] and Kissat[8] for SAT-solving. The synthetic datasets consist simply of uniformly random vectors in \(\{0,1\}^{d}\) for a parameter \(d\), whose true label is also chosen uniformly at random. Albeit meaningless by itself, the purpose of such data is to allow us to freely experiment over different dimensions and tree sizes.2 We perform experiments for all queries presented in this article, as well as a manually crafted set of queries as the one illustrated in Figure 3. Our main finding is that ExplainDT evaluates most queries in a few seconds, for trees of up to 1500 nodes3, and even high-dimensional datasets (e.g., MNIST has dimension 784, Figure 4 depicts experiments). Given that our experiments were run on a personal computer, we take these preliminary results as a signal of ExplainDT being a practically usable interpretability language, at least concerning performance. Footnote 2: Real datasets are fixed and thus only allow training decision trees with a number of nodes that is a function of the dimension and size of the training set, at least under standard training algorithms. Footnote 3: Practitioners seem to use decision trees of up to around 1000 nodes [37, 53], thus our setting seems realistic. ## 7 Limitations and Future Work A natural extension of our work is to add support for trees trained over categorical and numerical features, following the discretization approaches of Choi et al. [12] and Arenas et al. [2]. For a real adoption of ExplainDT by practitioners, we will require a more extensive and versatile high-level syntax, as well as providing tools for simplifying the interaction with the language. In terms of performance, the simplifier we have implemented only takes care of simple syntactic cases, which opens opportunities for optimizations. In particular, the area of databases has studied significantly the problem of restructuring queries or deciding evaluation plans with the goal of improving performance [18, 45, 49], which might serve as guidance for improving ExplainDT. At the theoretical level, a natural question is whether some of our theoretical results can be extended to larger classes of decision diagrams such as OBDDs; a main obstacle for such extensions is that while for decision trees there is a natural correspondence between partial instances and nodes, over OBDDs (and thus FBDDs) there can be exponentially many distinct partial instances that reach a node.
2305.03084
A Chandra X-ray Study of Supernova Remnant N63A in the Large Magellanic Cloud
We perform extensive spectroscopy of the supernova remnant N63A in the Large Magellanic Cloud, using $\sim 43$ ks {\it Chandra} archival data. By analysing the spectra of the entire remnant, we determine the abundance distributions for O, Ne, Mg, Si, and Fe. We detect evidence of enhanced O and possibly Ne and Mg in some of the central regions which might indicate an asymmetric distribution of the ejecta. The average O/Ne, O/Mg, and Ne/Mg abundance ratios of the ejecta are in plausible agreement with the nucleosynthesis products from the explosion of a $\sim40$ $M_{\odot}$ progenitor. We estimate an upper limit on the Sedov age of $\sim 5,400\pm200$ yr and explosion energy of $\sim 8.9\pm 1.6\times 10^{51}$ erg for N63A. We discuss the implications of our results for the morphological structure of the remnant, its circumstellar medium and the nature of the progenitor star.
E. Karagoz, N. Alan, S. Bilir, S. Ak
2023-05-04T18:00:13Z
http://arxiv.org/abs/2305.03084v1
# A Chandra X-ray Study of Supernova Remnant N63A in the Large Magellanic Cloud ###### Abstract We perform extensive spectroscopy of the supernova remnant N63A in the Large Magellanic Cloud, using \(\sim 43\) ks _Chandra_ archival data. By analysing the spectra of the entire remnant, we determine the abundance distributions for O, Ne, Mg, Si, and Fe. We detect evidence of enhanced O and possibly Ne and Mg in some of the central regions which might indicate an asymmetric distribution of the ejecta. The average O/Ne, O/Mg, and Ne/Mg abundance ratios of the ejecta are in plausible agreement with the nucleosynthesis products from the explosion of a \(\sim 40\)\(M_{\odot}\) progenitor. We estimate an upper limit on the Sedov age of \(\sim 5,400\pm 200\) yr and explosion energy of \(\sim 8.9\pm 1.6\times 10^{51}\) erg for N63A. We discuss the implications of our results for the morphological structure of the remnant, its circumstellar medium and the nature of the progenitor star. keywords: ISM: individual objects: N63A; ISM: supernova remnants; X-rays: ISM; galaxies: Magellanic Clouds ## 1 Introduction Supernova (SN) explosions, the most energetic stellar events known, play a crucial role in shaping the energy density, chemical enrichment, and evolution of galaxies. Core-collapse explosions of massive stars (\(M>8M_{\odot}\)) account for \(\sim 3/4\) of all supernovae (SNe) (Tsujimoto et al., 1995; Sato et al., 2007), and their remnants are precious tools for understanding recent star formation, the dynamics of supernova explosions, the composition of the interstellar medium (ISM), and the nature of the progenitor. The structure of supernova remnants (SNRs) and their interactions with the ambient medium provide insight into their origin and effects on their environment. N63A is one of the brightest SNRs in the Large Magellanic Cloud (LMC; Westerlund & Mathewson, 1966), and provides an excellent laboratory for investigating such structures and interactions. N63A, first identified as an SNR by Mathewson & Healey (1964), appears to be embedded within the H II region N63 coincident with the OB association NGC 2030 or LH83 (Lucke & Hodge, 1970; Kennicutt & Chu, 1988). For this reason, the remnant is believed to be the product of the SN explosion of one of the most massive Population I stars in the dense and complex NGC 2030 or LH83 O-B association (Lucke & Hodge, 1970; van den Bergh & Dufour, 1980; Shull, 1983; Chu & Kennicutt, 1988; Hughes et al., 1998). The core-collapse origin of N63A was also corroborated by detailed measurements of Fe K\(\alpha\) centroids (Yamaguchi et al., 2014). Oey (1996) determined that the currently most luminous star in NGC 2030 is an O7 star with a mass of \(\sim 40M_{\odot}\); therefore, the progenitor of N63A's supernova was probably more massive than this, with a main-sequence spectral type earlier than O7. Furthermore, observational studies have confirmed that N63A is the first SNR formed in the H II region (Shull, 1983). The age of the SNR is estimated to be \(\sim 4,500\) yr (Williams, Chu, & Gruendl, 2006) and 2,000-5,000 yr (Hughes et al., 1998; Warren, Hughes, & Slane, 2003), indicating that the natal gas may still be associated with N63A. The size of N63A in X-rays is \(r\sim 81^{\prime\prime}\times 67^{\prime\prime}\) or \(\sim 18\) pc in diameter, assuming a distance of 50 kpc (e.g. Dickel et al., 1993; Feast, 1999), and this size is approximately three times that of the optical remnant, which contains three prominent lobes (Mathewson et al., 1983).
The two eastern lobes with high-intensity ratios of [S II]/H\(\alpha\) (Payne et al., 2008) represent the shock-heated gas, while the third western lobe with a low-intensity ratio corresponds to the photoionized H II region (Levenson et al., 1995). All optical lobes show molecular shock features in their near-infrared colours, suggesting that shocked molecular gas dominates in the remnant (Williams et al., 2006). Subsequent detailed infrared spectroscopy verified that shock-excited molecular hydrogen lines are detected in all optical lobes (Caulet & Williams, 2012). The X-ray emission from N63A is likewise consistent with swept-up ISM (Hughes et al., 1998), and the derived abundances indicate that the shock heating results from interactions with the ISM rather than with SN ejecta (Russell & Dopita, 1990). The imaging spectroscopy of X-rays also indicates the presence of dense interstellar gas with a mass of at least \(\sim 450M_{\odot}\) (Warren et al., 2003). Using ALMA data, Sano et al. (2019) calculated that the total mass of the molecular clouds is \(\sim 800M_{\odot}\) for the shock-ionized region and \(\sim 1,700M_{\odot}\) for the photoionized region. They also found that the absorbing column densities toward the molecular clouds are \(\sim 1.5-6.0\times 10^{21}\) cm\({}^{-2}\). In previous X-ray studies, several sub-regions characteristically representing emission from the swept-up ambient gas and the shocked ejecta were examined, but an extensive X-ray study of the entire remnant has not yet been performed. In this work, based on the archival _Chandra_ data, we perform a spatially-resolved spectral analysis of the entire SNR to provide unprecedented details on the radial and azimuthal structures of N63A. We describe the X-ray data and the data reduction in Section 2. We present the X-ray imaging and spectral analysis of the SNR in Section 3. We discuss the results and compare this work to other abundance measurements in Section 4.

## 2 X-ray data and reduction

We used archival _Chandra_ data (43.35 ks) of SNR N63A obtained on 2000 October 16 with the S3 chip of the Advanced CCD Imaging Spectrometer (ACIS-S) detector array (Bautz et al. 1998). We reprocessed the observational data with CIAO version 4.13 via the chandra_repro script. We removed time intervals that show high particle background fluxes (by a factor of \(\sim 2\) higher than the mean background). After the data reduction, the total effective exposure time is \(\sim 41\) ks.

## 3 Analysis and results

### Imaging

We created an ACIS-S3 broadband image and an X-ray three-colour (RGB) image of N63A using the archival _Chandra_ data, as shown in Figure 1. The X-ray morphology of N63A shows a bright, sharp outer rim in the northwest, northeast, and southeast, while the rim is faint and diffuse in the southwest. There are also smaller diffuse "crescent-like" structures on the northern and eastern borders of the remnant. One of the remarkable structures in N63A is the faint triangular "hole-like" area just west of the geometrical centre of the remnant (Figure 1a). This area corresponds to the optical lobes, which are also coincident with the brightest emission in radio. In Figure 1b, we show the _Chandra_ RGB image with red, green and blue corresponding to the soft (0.30-1.11 keV), medium (1.11-2.10 keV), and hard (2.10-7.00 keV) energy bands, respectively.
The energy bands displayed in each colour were chosen to emphasise atomic line emission that illustrates the distribution of electron temperatures and ionization states across N63A. The major O and Ne emission lines are grouped together. We used the sub-band images with the native pixel scale of the ACIS detector (0\({}^{\prime\prime}\).49 pixel\({}^{-1}\)) and then adaptively smoothed them. Although soft X-ray emission (red) dominates N63A, the inner parts of the remnant generally show harder X-ray emission (blue). The RGB image shows that both the crescent-like regions and the entire rim of the remnant have softer spectra than average for N63A. There are also small blue clumps on the remnant that show harder-than-average X-ray emission.

### Spectroscopy

We first analysed the outer rim of SNR N63A, which represents the ISM swept up by the forward shock. For this purpose, we selected six regions and marked them as ISM1-ISM6 (Figure 2a). The shell regions contain almost 5,000 counts on average in the 0.3-3.0 keV energy band. In order to reveal the spatial structure of the X-ray emission from metal-rich ejecta gas, we selected 144 regions along seven radial directions across N63A. We also selected 12 X-ray-faint regions from the inner and outer parts of the remnant. The radial regions and directions are marked as shown in Figure 2b. Each region contains about 5,000 counts in the 0.3-3.0 keV energy band. The outer and inner faint X-ray regions, which contain almost 4,000-6,000 counts, are also marked as shown in Figure 2c.

Figure 1: (a) The broadband ACIS image (0.30-7.0 keV) and (b) the three-colour image of N63A: Red = 0.30-1.11 keV, green = 1.11-2.10 keV, and blue = 2.10-7.00 keV. The cyan triangle in the left panel represents the triangular hole-like structure. For the purposes of display, both images have been smoothed by a Gaussian kernel of \(\sigma=0^{\prime\prime}.75\).

Figure 2: Logarithmic-scale broadband image of N63A in the 0.3 – 7.0 keV photon energy band. (a) The outermost six shell regions that are used to characterise the spectral nature of the swept-up ISM are marked. (b) The radial and azimuthal regions used for the spectral analysis are marked. (c) The inner and outer faint regions used for the spectral analysis are marked. All images have been smoothed by a Gaussian kernel of \(\sigma=0^{\prime\prime}.25\).

We then extracted the spectra from the observational data for each selected region using the CIAO script specextract. We performed the background subtraction using the spectrum extracted from source-free regions outside of N63A. We binned each extracted spectrum to comprise at least 20 counts per photon energy channel. We fit each regional spectrum with a non-equilibrium ionization (NEI) plane-shock model (vpshock in XSPEC; Borkowski et al., 2001) with two foreground absorption column components, one for the Galactic (\(N_{\rm H,Gal}\)) and the other for the LMC (\(N_{\rm H,LMC}\)). We used NEI version 3.0.4 in XSPEC, associated with ATOMDB (Foster et al., 2012), which was augmented to include inner-shell lines and updated Fe-L lines (see Badenes et al., 2006). We fixed \(N_{\rm H,Gal}\) at \(1.72\times 10^{21}\) cm\({}^{-2}\) for the direction toward N63A (HI4PI Collaboration et al., 2016) with solar abundances (Anders & Grevesse, 1989). We fitted \(N_{\rm H,LMC}\) assuming the LMC abundances (Russell & Dopita, 1992; Schenck et al., 2016). We also fixed the redshift parameter at \(z=8.75\times 10^{-4}\) for the radial velocity (262.2 km s\({}^{-1}\)) of the LMC (McConnachie, 2012).
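As an illustration of this fitting setup, the sketch below shows how a single regional spectrum could be fit with the absorbed plane-shock model using PyXspec (the Python interface to XSPEC). The spectrum file name and starting values are hypothetical, only a few representative parameters are shown, and the exact component and parameter names should be checked against the installed XSPEC/ATOMDB version; this is a minimal sketch of the procedure described above, not the authors' actual fitting script.

```python
from xspec import Xset, Spectrum, Model, Fit

Xset.abund = "angr"                     # Anders & Grevesse (1989) solar abundances

spec = Spectrum("ism1_grp.pi")          # hypothetical grouped spectrum for one shell region
spec.ignore("**-0.3 7.0-**")            # keep roughly the 0.3-7.0 keV band

# Galactic absorption x LMC absorption x NEI plane-parallel shock
m = Model("phabs*vphabs*vpshock")

m.phabs.nH.values = 0.172               # Galactic column, 1.72e21 cm^-2, held fixed
m.phabs.nH.frozen = True

m.vphabs.nH.values = 0.1                # LMC column (free); starting value is a guess
m.vphabs.O.values = 0.13                # example LMC abundance (Schenck et al. 2016)
m.vphabs.O.frozen = True

m.vpshock.kT.values = 0.6               # post-shock electron temperature [keV], free
m.vpshock.O.values = 0.13               # start at the LMC value, then thaw to fit
m.vpshock.O.frozen = False
m.vpshock.Redshift.values = 8.75e-4     # LMC radial velocity, held fixed
m.vpshock.Redshift.frozen = True

Fit.statMethod = "chi"
Fit.perform()
print("chi^2 / dof =", Fit.statistic, "/", Fit.dof)
```

In practice the same model would be looped over all regional spectra, with the remaining element abundances thawed one at a time when a fixed-abundance fit is statistically unacceptable.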
#### 3.2.1 Ambient Medium

We fit the spectra of the ISM1-ISM6 regions using a one-component plane-parallel shock (phabs \(\times\) vphabs \(\times\) vpshock) model. In Figure 3, we show the spectra of the ISM1 and ISM3 regions with models and residuals as a sample. We initially fixed all elemental abundances at the LMC values, i.e. He=0.89, C=0.303, S=0.31, N=0.123, Ar=0.537, Ca=0.339, Ni=0.618 (Russell & Dopita, 1992), and O=0.13, Ne=0.20, Mg=0.20, Si=0.28, Fe=0.15 (Schenck et al., 2016), in the plane-shock model. Hereafter, abundances are with respect to solar values (Anders & Grevesse, 1989). We varied the electron temperature (\(kT\), where \(k\) is the Boltzmann constant), the ionization timescale (\(n_{\rm e}t\), where \(n_{\rm e}\) is the post-shock electron density and \(t\) is the time since the gas has been shocked), and the \(N_{\rm H,LMC}\) absorbing column in the LMC. The normalisation parameter (a scaled volume emission measure, \(EM=n_{\rm e}n_{\rm H}V\), where \(n_{\rm H}\) is the post-shock H density and \(V\) is the emission volume) was also varied. The reduced chi-square (\(\chi^{2}_{\nu}\)) values were between 1.3 and 1.9 for these model fits. Then, to improve the fits, we varied the O, Ne, Mg, Si, and Fe abundances and obtained the best-fit models for the shell regions (\(\chi^{2}_{\nu}=1.01-1.50\)). While the fitted Ne, Mg, and Si abundances are consistent (within statistical uncertainties) with the values given by Russell & Dopita (1992), the fitted abundances for O and Fe are lower than the Russell & Dopita (1992) values by a factor of \(\sim 2-3\). Besides this, all fitted elemental abundances are consistent with the Schenck et al. (2016) values within statistical uncertainties. We found that the outer rim spectrum of N63A is dominated by emission from the shocked low-abundance LMC ISM rather than by emission from shocked metal-rich ejecta gas. The best-fit model parameters of the shell regions and their median values are listed in Table 1.

#### 3.2.2 Metal-Rich Ejecta

We performed an extensive spatially resolved spectral analysis of the remnant's X-ray emission based on 144 radial and azimuthal regional spectra (see Figure 2b) to study the detailed spatial distribution of metal-rich ejecta in N63A. The outer regions of the remnant can be modelled with a single-component plane shock model, and the spectral parameters for these regions are compatible with the median shell values. The spectra of the central parts of the remnant cannot be fit by a single shock model with abundances fixed at the values that we estimated for the mean shell (i.e. \(\chi^{2}_{\nu}>2.0\)).
These spectra indicate the existence of an additional shock component, likely representing the emission from the shocked metal-rich ejecta gas, superposed with the projected shell emission. Therefore, we performed a two-component NEI shock model fit (phabs \(\times\) vphabs \(\times\) (vpshock+vpshock)) to these spectra, with one component for the underlying mean shell spectrum and the other responsible for the metal-rich ejecta component. We fixed \(N_{\rm H,LMC}\) at the mean shell value. We also fixed all model parameters, except for normalisation, of the underlying swept-up ISM component at the values for the mean shell (Table 1). For the second shock component, we first varied \(kT\), \(n_{\rm e}t\), and normalisation and fixed the elemental abundances at the mean shell values.

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline Region & \(n_{\rm H}\) & \(kT\) & \(n_{\rm e}t\) & \(EM\) & O & Ne & Mg & Si & Fe & \(\chi^{2}_{\nu}\) \\ & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{11}\)cm\({}^{-3}\)s) & (\(10^{57}\)cm\({}^{-3}\)) & & & & & & \\ \hline ISM1 & \(0.04^{+0.08}_{-0.03}\) & \(0.61^{+0.11}_{-0.14}\) & \(1.06^{+1.96}_{-0.46}\) & \(13.07^{+0.81}_{-0.32}\) & \(0.16^{+0.03}_{-0.03}\) & \(0.28^{+0.06}_{-0.05}\) & \(0.21^{+0.10}_{-0.08}\) & \(0.40^{+2.22}_{-0.14}\) & \(0.15^{+0.05}_{-0.05}\) & 1.25 \\ ISM2 & \(0.01^{+0.01}_{-0.01}\) & \(0.60^{+0.11}_{-0.10}\) & \(1.53^{+1.80}_{-0.04}\) & \(1.89^{+27.11}_{-0.75}\) & \(0.16^{+0.04}_{-0.03}\) & \(0.25^{+0.05}_{-0.05}\) & \(0.21^{+0.06}_{-0.08}\) & \(0.28^{+0.12}_{-0.12}\) & \(1.05^{+0.02}_{-0.04}\) & 1.40 \\ ISM3 & \(0.01^{+0.07}_{-0.01}\) & \(0.57^{+0.05}_{-0.07}\) & \(2.88^{+1.70}_{-0.98}\) & \(2.84^{+35.4}_{-1.95}\) & \(0.17^{+0.04}_{-0.05}\) & \(0.36^{+0.06}_{-0.08}\) & \(0.29^{+0.08}_{-0.08}\) & \(0.38^{+0.13}_{-0.11}\) & \(0.13^{+0.02}_{-0.03}\) & 1.01 \\ ISM4 & \(0.01^{+0.02}_{-0.04}\) & \(0.50^{+0.18}_{-0.18}\) & \(1.77^{+3.15}_{-0.16}\) & \(1.79^{+40.15}_{-0.16}\) & \(0.16^{+0.04}_{-0.02}\) & \(0.30^{+0.06}_{-0.07}\) & \(0.28^{+0.10}_{-0.10}\) & \(0.26^{+0.18}_{-0.14}\) & \(0.08^{+0.04}_{-0.02}\) & 1.22 \\ ISM5 & \(0.01^{+0.02}_{-0.02}\) & \(0.58^{+0.10}_{-0.10}\) & \(1.49^{+0.84}_{-0.84}\) & \(13.17^{+4.48}_{-1.46}\) & \(0.20^{+0.05}_{-0.05}\) & \(0.30^{+0.06}_{-0.05}\) & \(0.25^{+0.08}_{-0.08}\) & \(0.42^{+0.16}_{-0.16}\) & \(0.16^{+0.04}_{-0.04}\) & 1.50 \\ ISM6 & \(0.01^{+0.04}_{-0.02}\) & \(0.59^{+0.13}_{-0.10}\) & \(1.16^{+1.25}_{-0.55}\) & \(15.65^{+21.86}_{-4.46}\) & \(0.14^{+0.02}_{-0.02}\) & \(0.28^{+0.06}_{-0.05}\) & \(0.20^{+0.08}_{-0.07}\) & \(0.40^{+0.13}_{-0.15}\) & \(0.10^{+0.04}_{-0.03}\) & 1.50 \\ \hline Median & \(0.01^{+0.04}_{-0.02}\) & \(0.59^{+0.12}_{-0.10}\) & \(1.51^{+1.75}_{-0.70}\) & \(16.80^{+24.01}_{-43.43}\) & \(0.16^{+0.04}_{-0.03}\) & \(0.29^{+0.06}_{-0.05}\) & \(0.23^{+0.08}_{-0.08}\) & \(0.39^{+0.14}_{-0.14}\) & \(0.14^{+0.04}_{-0.04}\) & — \\ \hline \end{tabular} Note: Abundances are with respect to solar (Anders & Grevesse 1989). Uncertainties are at the 90% confidence level. The Galactic column \(N_{\rm H,Gal}\) is fixed at \(1.72\times 10^{21}\) cm\({}^{-2}\) (HI4PI Collaboration et al. 2016). \end{table} Table 1: Summary of spectral model fits to the six shell regions of N63A. The median shell values are given in the last line.

Figure 3: Best-fit spectral models and residuals of the X-ray spectra from the ISM1 and ISM3 regions. Several atomic emission line features are marked on the left panel.
The fit for each regional spectrum was not statistically acceptable (\(\chi^{2}_{\nu}>2.0\)) because the model was not able to reproduce the emission lines from various elements. We then thawed the elemental abundances for the second component to improve each regional spectral fit. With the abundances for O, Ne, Mg, Si, and Fe varied, our spectral model fits improved significantly (\(\chi^{2}_{\nu}<1.6\)). In Figures 4-5, we show some example spectra extracted from the regions marked in Figure 2b with best-fit models and residuals. The O, Ne, and Mg abundances in some of the central parts of the remnant in directions A and F are enhanced compared to the mean shell values. The Si and Fe abundances are generally consistent with the mean shell values within statistical uncertainties. The best-fit model parameters for the radial regions are listed in Table 2, and radial profiles of the spectral parameters are shown in Figures 7-9.

#### 3.2.3 Inner and Outer Faint Structures on N63A

We performed spectral analysis of 12 selected X-ray-faint regions from the inner and outer parts of the remnant (see Figure 2c) to investigate the detailed features of these "crescent-like" and "hole-like" structures. These regions were modelled with a single-component plane shock model by varying \(kT\), \(n_{\rm e}t\), normalisation, and the elemental abundances (O, Ne, Mg, Si, and Fe), and statistically acceptable model fits were obtained (\(\chi^{2}_{\nu}<1.6\)). In Figure 6, we show some example spectra extracted from the regions marked in Figure 2c with best-fit models and residuals. The best-fit model parameters for these regions are listed in Table 3. The spectral parameters for the outer diffuse and faint regions (Ear 1 - Ear 3 and Tail 1 - Tail 4) are compatible with the median ISM values within statistical uncertainties. The \(kT\) and \(n_{\rm e}t\) for the inner faint regions (I1-I5) of the remnant are generally higher than the median ISM values, while the elemental abundances are consistent with the ISM values within the error limits.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline Region & \(r\) & \(n_{\rm H}\) & \(kT\) & \(n_{e}t\) & \(EM\) & O & Ne & Mg & Si & Fe & \(\chi^{2}_{\nu}\) \\ & (\({}^{\prime\prime}\)) & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{11}\)cm\({}^{-3}\)s) & (\(10^{97}\)cm\({}^{-3}\)) & & & & & & \\ \hline A01 (\({}^{*}\)) & 3.46 & \(0.0^{+0.04}_{-0.02}\) & \(1.10^{+2.53}_{-0.20}\) & \(5.9^{+4.82}_{-2.47}\) & \(17.06^{+1.60}_{-1.78}\) & \(0.44^{+0.25}_{-0.44}\) & \(0.51^{+4.23}_{-0.47}\) & \(0.64^{+4.36}_{-0.28}\) & \(0.27^{+0.27}_{-0.17}\) & \(0.23^{+0.18}_{-0.23}\) \\ A02 (\({}^{*}\)) & 6.56 & \(0.01^{+0.03}_{-0.03}\) & \(1.14^{+0.08}_{-0.08}\) & \(5.23^{+4.47}_{-1.65}\) & \(18.52^{+3.47}_{-1.44}\) & \(0.82^{+0.36}_{-0.26}\) & \(0.96^{+0.98}_{-0.54}\) & \(0.80^{+0.36}_{-0.32}\) & \(0.34^{+0.30}_{-0.19}\) & \(1.38^{+0.17}_{-0.16}\) & \(1.34\) \\ A03 (\({}^{*}\)) & 8.81 & \(0.01^{+0.02}_{-0.02}\) & \(0.81^{+0.08}_{-0.05}\) & \(8.18^{+2.92}_{-1.75}\) & \(18.15^{+3.51}_{-3.26}\) & \(0.61^{+0.11}_{-0.15}\) & \(0.59^{+0.22}_{-0.18}\) & \(0.35^{+0.17}_{-0.14}\) & \(0.33^{+0.15}_{-0.11}\) & \(0.38^{+0.17}_{-0.16}\) \\ A04 (\({}^{*}\)) & 10.69 & \(0.01^{+0.01}_{-0.01}\) & \(0.81^{+0.07}_{-0.47}\) & \(9.28^{+4.44}_{-1.40}\) & \(0.40^{+0.37}_{-0.37}\) & \(0.43^{+0.18}_{-0.08}\) & \(0.82^{+0.39}_{-0.39}\) & \(0.52^{+0.24}_{-0.29}\) & \(0.61^{+0.08}_{-0.08}\) & \(1.00\) \\ A05 (\({}^{*}\)) & 12.34 & \(0.01^{+0.01}_{-0.01}\) & \(0.83^{+0.10}_{-0.37}\) & \(12.83^{+1.97}_{-4.95}\) & \(19.23^{+2.41}_{-0.64}\) & \(0.44^{+0.19}_{-0.16}\) & \(0.41^{+0.25}_{-0.16}\) & \(0.47^{+0.29}_{-0.17}\) & \(0.30^{+0.18}_{-0.13}\) & \(0.17^{+0.09}_{-0.05}\) & \(1.06\) \\ A06 (\({}^{*}\)) & 13.76 & \(0.02^{+0.02}_{-0.02}\) & \(0.75^{+0.05}_{-0.05}\) & \(8.33^{+6.33}_{-0.88}\) & \(22.43^{+3.86}_{-0.36}\) & \(0.37^{+0.10}_{-0.08}\) & \(0.31^{+0.13}_{-0.12}\) & \(0.33^{+0.15}_{-0.03}\) & \(0.32^{+0.10}_{-0.09}\) & \(0.13^{+0.03}_{-0.03}\) & \(1.22\) \\ A07 & 14.96 & \(0.01^{+0.01}_{-0.01}\) & \(0.68^{+0.06}_{-0.06}\) & \(7.82^{+5.39}_{-3.35}\) & \(23.93^{+4.88}_{-4.88}\) & \(0.28^{+0.08}_{-0.08}\) & \(0.34^{+0.13}_{-0.01}\) & \(0.20^{+0.08}_{-0.07}\) & \(0.20^{+0.09}_{-0.08}\) & \(0.12^{+0.03}_{-0.02}\) & \(1.59\) \\ A08 & 15.96 & \(0.01^{+0.01}_{-0.01}\) & \(0.72^{+0.02}_{-0.02}\) & \(4.57^{+2.00}_{-2.00}\) & \(22.91^{+3.60}_{-3.60}\) & \(0.20^{+0.06}_{-0.06}\) & \(0.21^{+0.07}_{-0.07}\) & \(0.01^{+0.08}_{-0.02}\) & \(0.02^{+0.09}_{-0.02}\) & \(1.59\) \\ A09 & 16.79 & \(0.01^{+0.01}_{-0.01}\) & \(0.66^{+0.06}_{-0.06}\) & \(7.42^{+5.28}_{-3.21}\) & \(24.61^{+3.52}_{-3.52}\) & \(0.22^{+0.07}_{-0.07}\) & \(0.25^{+0.09}_{-0.09}\) & \(0.20^{+0.07}_{-0.07}\) & \(0.27^{+0.09}_{-0.08}\) & \(0.13^{+0.03}_{-0.02}\) & \(1.38\) \\ A10 & 17.51 & \(0.01^{+0.01}_{-0.01}\) & \(0.69^{+0.03}_{-0.03}\) & \(7.8^{+2.41}_{-2.21}\) & \(22.65^{+1.77}_{-1.49}\) & \(0.70^{+0.13}_{-0.08}\) & \(0.23^{+0.16}_{-0.09}\) & \(0.16^{+0.07}_{-0.06}\) & \(0.24^{+0.08}_{-0.07}\) & \(0.16^{+0.03}_{-0.03}\) & \(1.55\) \\ A11 & 18.21 & \(0.01^{+0.01}_{-0.01}\) & \(0.71^{+0.04}_{-0.04}\) & \(7.93^{+3.62}_{-3.52}\) & \(25.45^{+3.73}_{-1.73}\) & \(0.22^{+0.11}_{-0.05}\) & \(0.15^{+0.08}_{-0.08}\) & \(0.16^{+0.07}_{-0.06}\) & \(0.24^{+0.08}_{-0.07}\) & \(0.16^{+0.03}_{-0.03}\) & \(1.55^{+0.04}_{-0.03}\) & \(0.98\) \\ A12 & 18.91 & \(0.01^{+0.01}_{-0.01}\) & \(0.66^{+0.06}_{-0.06}\) & \(7.33^{+6.62}_{-3.50}\) & \(25.52^{+3.54}_{-3.52}\) & \(0.22^{+0.11}_{-0.07}\) & \(0.23^{+0.09}_{-0.08}\) & \(0.15^{+0.07}_{-0.06}\) & \(0.23^{+0.09}_{-0.08}\) & 
\(0.15^{+0.07}_{-0.06}\) & \(0.23^{+0.09}_{-0.08}\) & \(0.14^{+0.03}_{-0.03}\) & \(1.26\) \\ A13 & 19.64 & \(0.01^{+0.02}_{-0.02}\) & \(0.6 ) \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline Region & \(r\) & \(n_{\rm H}\) & \(kT\) & \(n_{\rm e}t\) & \(EM\) & O & Ne & Mg & Si & Fe & \(\chi^{2}_{\nu}\) \\ & (\({}^{\prime\prime}\)) & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{11}\)cm\({}^{-3}\)) & (\(10^{97}\)cm\({}^{-3}\)) & & & & & & \\ \hline Co1 (*) & 3.51 & \(0.01^{+0.01}_{-0.01}\) & \(0.80^{+0.00}_{-0.05}\) & \(6.79^{+2.34}_{-0.08}\) & \(16.32^{+2.90}_{-1.93}\) & \(0.54^{+0.15}_{-0.11}\) & \(0.37^{+0.11}_{-0.16}\) & \(0.37^{+0.16}_{-0.11}\) & \(0.37^{+0.15}_{-0.11}\) & \(0.19^{+0.06}_{-0.04}\) & 1.59 \\ Co2 (*) & 6.49 & \(0.01^{+0.02}_{-0.01}\) & \(0.67^{+0.08}_{-0.08}\) & \(9.65^{+2.48}_{-3.51}\) & \(23.67^{+0.54}_{-0.54}\) & \(0.22^{+0.11}_{-0.10}\) & \(0.34^{+0.13}_{-0.10}\) & \(0.22^{+0.08}_{-0.09}\) & \(0.35^{+0.15}_{-0.11}\) & \(0.11^{+0.04}_{-0.02}\) & 1.01 \\ Co3 (*) & 8.34 & \(0.01^{+0.01}_{-0.01}\) & \(0.77^{+0.08}_{-0.08}\) & \(8.47^{+3.52}_{-0.07}\) & \(15.34^{+2.81}_{-2.81}\) & \(0.23^{+0.14}_{-0.07}\) & \(0.23^{+0.12}_{-0.02}\) & \(0.30^{+0.17}_{-0.01}\) & \(0.27^{+0.14}_{-0.11}\) & \(0.21^{+0.18}_{-0.06}\) & 1.51 \\ Co4 (*) & 10.01 & \(0.03^{+0.05}_{-0.03}\) & \(0.79^{+0.08}_{-0.07}\) & \(8.57^{+8.87}_{-4.62}\) & \(24.62^{+7.50}_{-0.27}\) & \(0.22^{+0.07}_{-0.27}\) & \(0.30^{+0.17}_{-0.10}\) & \(0.19^{+0.14}_{-0.14}\) & \(0.14^{+0.11}_{-0.05}\) & 1.36 \\ Co5 (*) & 11.66 & \(0.01^{+0.05}_{-0.05}\) & \(0.84^{+0.07}_{-0.07}\) & \(17.95^{+4.49}_{-0.49}\) & \(20.11^{+2.32}_{-0.34}\) & \(0.44^{+0.14}_{-0.14}\) & \(0.14^{+0.29}_{-0.18}\) & \(0.18^{+0.17}_{-0.24}\) & \(0.24^{+0.15}_{-0.15}\) & \(0.17^{+0.07}_{-0.07}\) & 1.00 \\ Co6 (*) & 12.97 & \(0.01^{+0.03}_{-0.01}\) & \(0.66^{+0.08}_{-0.06}\) & \(8.37^{+1.18}_{-1.18}\) & \(22.87^{+0.55}_{-0.35}\) & \(0.33^{+0.07}_{-0.01}\) & \(0.26^{+0.07}_{-0.07}\) & \(0.17^{+0.07}_{-0.09}\) & \(0.23^{+0.11}_{-0.13}\) & \(0.13^{+0.03}_{-0.03}\) & 1.06 \\ Co7 & 14.06 & \(0.01^{+0.01}_{-0.01}\) & \(0.62^{+0.05}_{-0.05}\) & \(5.15^{+0.04}_{-1.25}\) & \(27.42^{+0.44}_{-0.44}\) & \(0.16^{+0.06}_{-0.04}\) & \(0.22^{+0.06}_{-0.06}\) & \(0.14^{+0.06}_{-0.06}\) & \(0.18^{+0.02}_{-0.07}\) & \(0.10^{+0.02}_{-0.02}\) & 1.22 \\ Co8 & 15.04 & \(0.01^{+0.01}_{-0.01}\) & \(0.72^{+0.05}_{-0.07}\) & \(5.54^{+2.47}_{-2.70}\) & \(27.34^{+0.34}_{-0.22}\) & \(0.22^{+0.07}_{-0.07}\) & \(0.24^{+0.14}_{-0.09}\) & \(0.10^{+0.06}_{-0.06}\) & \(0.18^{+0.08}_{-0.07}\) & \(0.10^{+0.04}_{-0.03}\) & 1.51 \\ Co9 & 15.92 & \(0.01^{+0.01}_{-0.01}\) & \(0.71^{+0.08}_{-0.06}\) & \(6.14^{+0.01}_{-0.24}\) & \(22.10^{+0.39}_{-0.99}\) & \(0.30^{+0.04}_{-0.09}\) & \(0.34^{+0.13}_{-0.13}\) & \(0.23^{+0.07}_{-0.09}\) & \(0.11^{+0.03}_{-0.03}\) & 1.23 \\ C10 & 16.71 & \(0.01^{+0.01}_{-0.01}\) & \(0.72^{+0.08}_{-0.06}\) & \(6.41^{+0.04}_{-3.02}\) & \(32.36^{+0.49}_{-0.31}\) & \(0.19^{+0.15}_{-0.06}\) & \(0.15^{+0.15}_{-0.06}\) & \(0.21^{+0.07}_{-0.08}\) & \(0.08^{+0.08}_{-0.09}\) & \(1.10^{+0.03}_{-0.02}\) & 1.27 \\ C11 & 17.40 & \(0.01^{+0.01}_{-0.01}\) & \(0.73^{+0.08}_{-0.01}\) & \(1.36^{+0.10}_{-0.18}\) & \(22.28^{+0.20}_{-0.20}\) & \(0.36^{+0.14}_{-0.24}\) & \(0.28^{+0.16}_{-0.16}\) & \(0.28^{+0.16}_{-0.10}\) & \(0.15^{+0.05}_{-0.06}\) & \(0.21^{+0.09}_{-0.09}\) & \(0.14^{+0.02}_{-0.02}\) & 1.15 \\ C12 & 18.03 & \(0.01^{+0.01}_{-0.01}\) & \(0.78^{+0.03}_{-0.01}\) & \(3.86^{+0.39}_{-0.39}\) & \(19.01^{+0.26}_{-0.26}\) & \(0.19^{+0.14}_{-0.26}\) & 
\(0.19^{+0.14}_{-0.14}\) & \(0.18^{+0.07}_{-0.07}\) & \(0.22^{+0.08}_{-0.09}\) & \(0.07^{+0.07}_{-0.08}\) & \(0.06^{+0.03}_{-0.03}\) & 1.46 \\ C13 & 18.60 & \(0.01^{+0.01}_{-0.01}\) & \(0.70^{+0.09}_{-0.07}\) & \(4.39^{+4.80}_{-1.90}\) & \(20.00^{+0.08}_{-0.05}\) & \(0.16^{+0.10}_{-0.06}\) & \(0.2 \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline Region & \(r\) & \(n_{\rm H}\) & \(kT\) & \(n_{\rm e}t\) & \(EM\) & O & Ne & Mg & Si & Fe & \(\chi_{\nu}^{2}\) \\ & (\({}^{\prime\prime}\)) & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{11}\)cm\({}^{-3}\)s) & (\(10^{57}\)cm\({}^{-3}\)) & & & & & & \\ \hline E01 (\({}^{*}\)) & 3.26 & \(0.01^{+0.01}_{-0.01}\) & \(0.67^{+0.04}_{-0.06}\) & \(7.76^{+5.31}_{-1.48}\) & \(20.42^{+3.81}_{-2.90}\) & \(0.34^{+0.07}_{-0.10}\) & \(0.42^{+0.08}_{-0.13}\) & \(0.35^{+0.08}_{-0.14}\) & \(0.17^{+0.06}_{-0.10}\) & \(0.18^{+0.02}_{-0.04}\) & 1.33 \\ E02 (\({}^{*}\)) & 5.91 & \(0.01^{+0.01}_{-0.01}\) & \(0.56^{+0.04}_{-0.03}\) & \(8.09^{+13.38}_{-2.19}\) & \(24.64^{+5.33}_{-5.33}\) & \(0.23^{+0.06}_{-0.05}\) & \(0.33^{+0.08}_{-0.07}\) & \(0.31^{+0.05}_{-0.08}\) & \(0.41^{+0.14}_{-0.12}\) & 0.14\({}^{+0.03}_{-0.02}\) & 1.36 \\ E03 (\({}^{*}\)) & 7.76 & \(0.01^{+0.01}_{-0.01}\) & \(0.63^{+0.01}_{-0.01}\) & \(12.87^{+12.87}_{-0.34}\) & \(25.70^{+6.38}_{-3.04}\) & \(0.18^{+0.09}_{-0.03}\) & \(0.24^{+0.07}_{-0.07}\) & \(0.26^{+0.02}_{-0.13}\) & 0.12\({}^{+0.10}_{-0.08}\) & 0.14\({}^{+0.01}_{-0.04}\) & 1.22 \\ E04 (\({}^{*}\)) & 9.41 & \(0.01^{+0.02}_{-0.02}\) & \(0.62^{+0.03}_{-0.04}\) & \(5.71^{+6.47}_{-1.11}\) & \(23.14^{+5.78}_{-5.10}\) & \(0.22^{+0.05}_{-0.06}\) & \(0.29^{+0.06}_{-0.07}\) & \(0.22^{+0.07}_{-0.08}\) & \(0.21^{+0.10}_{-0.09}\) & 0.15\({}^{+0.02}_{-0.05}\) & 1.44 \\ E05 (\({}^{*}\)) & 10.93 & \(0.01^{+0.01}_{-0.01}\) & \(0.65^{+0.06}_{-0.06}\) & \(5.15^{+0.87}_{-0.07}\) & \(21.50^{+3.37}_{-0.21}\) & \(0.22^{+0.08}_{-0.08}\) & \(0.25^{+0.09}_{-0.02}\) & \(0.25^{+0.12}_{-0.02}\) & 0.30\({}^{+0.13}_{-0.03}\) & 0.16\({}^{+0.05}_{-0.07}\) & 0.97 \\ E06 (\({}^{*}\)) & 12.31 & \(0.01^{+0.01}_{-0.01}\) & \(0.72^{+0.08}_{-0.08}\) & \(7.30^{+1.62}_{-1.59}\) & \(23.91^{+3.51}_{-0.51}\) & \(0.18^{+0.09}_{-0.09}\) & \(0.22^{+0.11}_{-0.10}\) & \(0.13^{+0.09}_{-0.08}\) & 0.08\({}^{+0.04}_{-0.08}\) & 0.12\({}^{+0.04}_{-0.03}\) & 1.08 \\ E07 & 13.59 & \(0.01^{+0.01}_{-0.01}\) & \(0.64^{+0.09}_{-0.05}\) & \(5.67^{+4.04}_{-2.57}\) & \(25.60^{+4.21}_{-4.46}\) & \(0.22^{+0.10}_{-0.06}\) & \(0.25^{+0.08}_{-0.08}\) & \(0.14^{+0.08}_{-0.03}\) & 0.08\({}^{+0.10}_{-0.02}\) & 1.48 \\ E08 & 14.76 & \(0.01^{+0.01}_{-0.01}\) & \(0.72^{+0.10}_{-0.12}\) & \(2.81^{+1.97}_{-1.97}\) & \(19.32^{+4.29}_{-3.73}\) & \(0.19^{+0.09}_{-0.09}\) & \(0.22^{+0.08}_{-0.04}\) & \(0.17^{+0.07}_{-0.06}\) & \(0.25^{+0.10}_{-0.09}\) & 0.11\({}^{+0.05}_{-0.03}\) & 1.59 \\ E09 & 15.81 & \(0.01^{+0.01}_{-0.01}\) & \(0.65^{+0.09}_{-0.06}\) & \(3.49^{+2.39}_{-2.52}\) & \(24.42^{+1.11}_{-0.11}\) & \(0.04^{+0.06}_{-0.06}\) & \(0.21^{+0.07}_{-0.06}\) & \(0.26^{+0.10}_{-0.09}\) & 0.11\({}^{+0.03}_{-0.02}\) & 1.41 \\ E10 & 16.84 & \(0.01^{+0.01}_{-0.01}\) & \(0.59^{+0.04}_{-0.04}\) & \(2.58^{+4.09}_{-1.39}\) & \(24.26^{+4.11}_{-0.11}\) & \(0.04^{+0.06}_{-0.06}\) & \(0.21^{+0.07}_{-0.06}\) & \(0.20^{+0.07}_{-0.08}\) & \(0.21^{+0.10}_{-0.08}\) & 0.11\({}^{+0.03}_{-0.02}\) & 1.14 \\ E11 & 17.86 & \(0.01^{+0.01}_{-0.01}\) & \(0.63^{+0.18}_{-0.18}\) & \(2.77^{+1.20}_{-1.25}\) & \(22.77^{+3.13}_{-1.43}\) & \(0.13^{+0.04}_{-0.04}\) & \(0.26^{+0.06}_{-0.06}\) & \(0.17^{+0.06}_{-0.06}\) & \(0.31^{+0.11}_{-0.02}\) 
& 0.12\({}^{+0.04}_{-0.02}\) & 1.47 \\ E12 & 18.84 & \(0.01^{+0.01}_{-0.01}\) & \(0.60^{+0.08}_{-0.08}\) & \(2.82^{+1.60}_{-1.08}\) & \(23.47^{+4.12}_{-0.14}\) & \(0.14^{+0.04}_{-0.03}\) & \(0.25^{+0.06}_{-0.05}\) & \(0.20^{+0.07}_{-0.06}\) & \(0.30^{+0.12}_{-0.10}\) & 0.14\({}^{+0.02}_{-0.02}\) & 1.56 \\ E13 & 19.77 & \(0.01^{+0.01}_{-0.01}\) & \(0.58^{+0.06}_{-0.08}\) & \(3.45^{+2.24}_{-1.30}\) & \(26.35^{+4.70}_{-4.44}\) \\ \hline \end{tabular} \end{table}

## 4 Discussion

### Morphology of the Remnant

The two main morphological features of N63A are distinguished by imaging: the "crescent-like" and the "hole-like" structures. Warren et al. (2003) suggested that the origin of the "crescent-like" structures located beyond the main shell in the X-ray emission, which resemble similar features seen in other SNRs (e.g., the Vela SNR), is the interaction of high-speed clumps of SN ejecta with the ambient medium. Wang & Chevalier (2002) use two-dimensional hydrodynamic simulations to model the formation of such structures. They find that dense, high-velocity clumps of ejecta may protrude beyond the forward shock as the remnant interacts with the surrounding medium. Shocks moving through the clump at first crush it and then cause it to expand laterally, eventually taking on the shape of a crescent, such as is seen in N63A. The roughly "triangular hole" seen in the _Chandra_ X-ray data near the location of the optical lobes, which is also coincident with the brightest emission in radio, is the result of N63A engulfing a cloud that then absorbs the X-ray emission (Sano et al., 2019).

### Nature of the Ambient Medium of N63A

We studied the X-ray spectra from the outermost shell regions to characterise the features of the swept-up ISM (see Figure 2a).

\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline Region & \(n_{\rm H}\) & \(kT\) & \(n_{e}t\) & \(EM\) & O & Ne & Mg & Si & Fe & \(\chi^{2}_{\nu}\) \\ & (\(10^{22}\)cm\({}^{-2}\)) & (keV) & (\(10^{11}\)cm\({}^{-3}\)s) & (\(10^{57}\)cm\({}^{-3}\)) & & & & & & \\ \hline Ear 1 & \(0.04^{+0.08}_{-0.03}\) & \(0.57^{+0.10}_{-0.09}\) & \(0.85^{+0.73}_{-0.34}\) & \(14.90^{+4.17}_{-4.68}\) & \(0.19^{+0.03}_{-0.03}\) & \(0.39^{+0.08}_{-0.07}\) & \(0.34^{+0.12}_{-0.10}\) & \(0.62^{+0.28}_{-0.22}\) & \(0.21^{+0.07}_{-0.05}\) & 1.55 \\ Ear 2 & \(0.01^{+0.01}_{-0.01}\) & \(0.57^{+0.10}_{-0.09}\) & \(0.65^{+0.59}_{-0.26}\) & \(14.22^{+5.96}_{-3.46}\) & \(0.18^{+0.02}_{-0.03}\) & \(0.27^{+0.06}_{-0.05}\) & \(0.24^{+0.10}_{-0.08}\) & \(0.25^{+0.18}_{-0.15}\) & \(0.25^{+0.09}_{-0.07}\) & 1.30 \\ Ear 3 & \(0.01^{+0.07}_{-0.01}\) & \(0.65^{+0.14}_{-0.09}\) & \(0.76^{+0.57}_{-0.34}\) & \(24.00^{+7.79}_{-6.65}\) & \(0.16^{+0.02}_{-0.02}\) & \(0.26^{+0.04}_{-0.04}\) & \(0.20^{+0.07}_{-0.06}\) & \(0.26^{+0.11}_{-0.10}\) & \(0.18^{+0.07}_{-0.04}\) & 1.58 \\ Tail 1 & \(0.01^{+0.03}_{-0.01}\) & \(0.54^{+0.17}_{-0.11}\) & \(1.29^{+6.23}_{-0.76}\) & \(27.33^{+15.48}_{-10.08}\) & \(0.16^{+0.03}_{-0.02}\) & \(0.27^{+0.07}_{-0.07}\) & \(0.17^{+0.06}_{-0.06}\) & \(0.34^{+0.12}_{-0.12}\) & \(0.06^{+0.04}_{-0.02}\) & 1.59 \\ Tail 2 & \(0.01^{+0.01}_{-0.01}\) & \(0.56^{+0.54}_{-0.08}\) & \(2.20^{+3.80}_{-1.88}\) & \(28.94^{+3.08}_{-17.17}\) & \(0.18^{+0.03}_{-0.02}\) & \(0.26^{+0.01}_{-0.01}\) & \(0.05^{+0.07}_{-0.03}\) & \(0.17^{+0.03}_{-0.15}\) & \(0.01^{+0.05}_{-0.01}\) & 1.57 \\ Tail 3 & \(0.01^{+0.01}_{-0.01}\) & \(0.77^{+0.12}_{-0.07}\) & \(0.98^{+0.71}_{-0.34}\) & \(24.40^{+7.62}_{-4.99}\) & \(0.13^{+0.01}_{-0.01}\) & \(0.32^{+0.04}_{-0.03}\) &
\(0.17^{+0.04}_{-0.02}\) & \(0.22^{+0.07}_{-0.06}\) & \(0.09^{+0.01}_{-0.01}\) & 1.51 \\ Tail 4 & \(0.01^{+0.02}_{-0.01}\) & \(0.68^{+0.10}_{-0.11}\) & \(0.53^{+0.39}_{-0.16}\) & \(17.90^{+6.78}_{-2.85}\) & \(0.17^{+0.02}_{-0.02}\) & \(0.26^{+0.04}_{-0.04}\) & \(0.12^{+0.06}_{-0.06}\) & \(0.19^{+0.12}_{-0.11}\) & \(0.08^{+0.02}_{-0.03}\) & 1.49 \\ \hline I1 & \(0.16^{+0.06}_{-0.05}\) & \(0.71^{+0.05}_{-0.04}\) & \(13.06^{+17.2}_{-7.0}\) & \(20.14^{+5.01}_{-4.38}\) & \(0.42^{+0.32}_{-0.19}\) & \(0.37^{+0.31}_{-0.18}\) & \(0.48^{+0.25}_{-0.16}\) & \(0.39^{+0.17}_{-0.12}\) & \(0.16^{+0.05}_{-0.04}\) & 1.07 \\ I2 & \(0.23^{+0.10}_{-0.07}\) & \(0.73^{+0.05}_{-0.05}\) & \(0.62^{+50.0}_{-4.92}\) & \(0.17^{+5.47}_{-3.60}\) & \(0.84^{+0.29}_{-0.29}\) & \(0.34^{+1.37}_{-0.85}\) & \(10.66^{+0.57}_{-0.41}\) & \(0.34^{+0.26}_{-0.17}\) & \(0.15^{+0.08}_{-0.05}\) & 1.14 \\ I3 & \(0.30^{+0.08}_{-0.07}\) & \(0.79^{+0.08}_{-0.06}\) & \(28.64^{+6.2}_{-20.2}\) & \(21.30^{+4.56}_{-4.08}\) & \(0.40^{+0.44}_{-0.21}\) & \(0.36^{+0.35}_{-0.24}\) & \(0.37^{+0.22}_{-0.15}\) & \(0.40^{+0.26}_{-0.12}\) & \(0.09^{+0.03}_{-0.02}\) & 1.18 \\ I4 & \(0.34^{+0.07}_{-0.06}\) & \(0.75^{+0.03}_{-0.03}\) & \(36.98^{+23.4}_{-11.18}\) & \(41.96^{+8.01}_{-23}\) & \(0.43^{+0.20}_{-0.20}\) & \(0.84^{+0.28}_{-0.27}\) & \(0.43^{+0.10}_{-0.13}\) & \(0.33^{+0.09}_{-0.10}\) & \(0.05^{+0.02}_{-0.01}\) & 0.86 \\ I5 & \(0.25^{+0.09}_{-0.08}\) & \(0.77^{+0.19}_{-0.07}\) & \(32.68^{+43.5}_{-30.1}\) & \(15.72^{+4.00}_{-31.2}\) & \(10.19^{+0.26}_{-0.15}\) & \(0.35^{+0.24}_{-0.16}\) & \(0.20^{+0.13}_{-0.16}\) & \(0.20\) \\ \hline \end{tabular} \end{table}

The mean values of our fitted model parameters for the outermost shell regions are consistent with the Schenck et al. (2016) values within statistical uncertainties. Our results show that the Ne, Mg, and Si abundances are consistent (within statistical uncertainties) with the values of Russell & Dopita (1992), while O and Fe are lower than the Russell & Dopita (1992) values by a factor of \(\sim 2-3\). The median LMC absorbing column (\(N_{\rm H,LMC}\)) calculated for the swept-up ISM is \(0.01\times 10^{22}\) cm\({}^{-2}\).

Figure 4: A set of sample best-fit models and residuals of X-ray spectra from selected regions shown in Figure 2b.

Figure 5: A set of sample best-fit models and residuals of X-ray spectra from selected regions shown in Figure 2b.

Figure 6: A set of sample best-fit models and residuals of X-ray spectra from selected regions shown in Figure 2c.

### Spatial and Chemical Structure of Ejecta

The spectral analysis of the observed X-ray spectra for 144 sub-regions in N63A shows that O, Ne, and Mg are enhanced in the central parts of the remnant for some directions, while Si and Fe generally show no enhancement (Figures 7-9). The O abundance is enhanced in radial regions A, B, and F at the more than 2 sigma level. However, there is no evidence of an enhanced abundance of O in radial regions C, D, and E. Overall, the Ne and Mg abundances show no significant enhancement: there is weak evidence for an enhancement of Ne and Mg in the radial regions A and F, while
the abundances of Ne and Mg are not enhanced in directions B-E and are generally consistent with the mean shell values. The elemental abundances (O, Ne, Mg) for the A and F directions are above the mean shell values up to \(r\sim 15^{\prime\prime}\), while the abundances for the other directions, except the innermost regions, are consistent with the mean shell values. One possibility is that the ejecta gas expanded more in the A and F directions than in the other four directions. Another possibility is that the reverse shock has reached the ejecta sooner in these directions due to the interaction with the molecular cloud.
Figure 8: Same as spectral parameters in Figure 7 but for the C and D directions of N63A.

Figure 9: Same as spectral parameters in Figure 7 but for the E and F directions of N63A.

The fact that the elemental abundance distributions of the spatially selected regions in N63A are not uniform, and that the abundances in the A and F directions are higher than those measured in the other directions (B-E), may be caused by an asymmetric explosion as well as by the circumstellar medium (CSM). Complex structures in the CSM could have affected the structure of the shocked ejecta. For instance, numerical simulations show that, when the outer blast wave encounters a dense wind shell, a reflected shock develops (along with the shock transmitted into the shell, e.g., Dwarkadas, 2005). Such a shock propagating back toward the geometric centre of the X-ray emission of the SNR might produce an inwardly increasing density structure. The shock transmitted into the dense shell will enhance the X-ray emissivity of the shocked CSM. The limb brightening at the northeast, northwest, and southeast boundaries (see Figure 1), in contrast to the faint and diffuse outer boundary in the southwest, might suggest such an interaction between the blast wave and a dense wind shell. Alternatively, the non-standard structure of the shocked ejecta might have been caused by inherent complexity in the ejecta. For all radial directions, although the \(kT\) is slightly higher in the central parts of the remnant, it generally does not show a significant gradient and can be considered constant. The \(n_{e}t\) is also generally higher in the inner parts of the remnant and decreases beyond \(r\sim 18^{\prime\prime}-20^{\prime\prime}\); it also does not show a remarkable gradient. The \(EM\) exhibits a distribution compatible with the broadband image of the remnant.

### Inner and Outer Faint Structures on N63A

The "crescent-like" features (Ear 1-3, Tail 1-4) located beyond the main shell in the X-ray emission resemble structures seen in the _ROSAT_ image of the Vela supernova remnant (Aschenbach, Egger, & Trumper, 1995), which have been interpreted as arising from high-speed clumps of SN ejecta interacting with the ambient medium (Warren et al., 2003). The origin of the "crescent-like" structures as ejecta is supported by Miyata et al. (2001), who found an overabundance of Si in their study of Vela shrapnel A. _ASCA_ and _Chandra_ data of Vela bullet D show strong O, Ne, and Mg emission (Slane et al., 2001; Plucinsky et al., 2001), suggesting a dense knot of SN ejecta as the source. Nonetheless, Plucinsky et al. (2001) argue that a nearly solar-abundance plasma far from ionization equilibrium is a more likely explanation for their observations. We find that none of the crescent regions in N63A shows strongly enhanced abundances. Only in the northern crescent region (Ear 1) are the Ne, Mg, and Si abundances slightly higher than in the other regions. The "crescent-like" structures of N63A are generally consistent with the median ISM values within the error limits. We can confidently conclude that the "crescent-like" structures are not dominated by ejecta. These structures, which appear softer than the N63A average in the RGB image, show the features of shocked ISM. Thus, for the high-speed ejecta clump scenario to be correct, the clumps in N63A must be significantly mixed with the ISM, perhaps indicating the onset of their disintegration and destruction.
The elemental abundances calculated from the spectral analyses of the "hole-like" regions in the western inner parts of the remnant (I1-I5) are generally higher than those of the "crescent-like" structures and the median ISM values, but they can be considered compatible within the statistical uncertainties. The \(n_{\rm e}t\) and \(kT\) are higher than the median ISM values. These \(kT\) and \(n_{\rm e}t\) values, which are higher than the ISM values, indicate that the shock waves generated by the explosion heat and ionize these regions in the inner parts of the remnant. The fact that the elemental abundances are consistent with the mean shell values also indicates that ISM, rather than ejecta, dominates these regions. On the other hand, the \(N_{\rm H}\) parameters obtained for these "hole-like" regions are approximately 20-40 times higher than the ISM values (Sano et al., 2019). These X-ray-faint structures in the western inner parts of N63A correspond to active molecular clouds and ionized hydrogen regions in the optical band of the electromagnetic spectrum (Sano et al., 2019). The higher \(N_{\rm H}\) values calculated for these regions support this finding and are also consistent with the measurements by Warren et al. (2003).

### Progenitor Feature and SNR Dynamics

Based on the mean values of the ejecta elemental abundances measured from the individual ejecta regions (calculated by considering the first six regions, \(r\sim 15^{\prime\prime}\), in all directions), we estimate abundance ratios of O/Ne=\(5.7^{+3.8}_{-3.6}\), Ne/Mg=\(4.2^{+3.2}_{-2.5}\), O/Mg=\(23.0^{+17.8}_{-12.7}\), and O/Si=\(26.3^{+16.1}_{-13.7}\). These abundance ratios are in plausible agreement with the core-collapse supernova and hypernova nucleosynthesis models for a \(40M_{\odot}\) progenitor with solar or sub-solar (Z = 0.004) metallicity (Nomoto et al., 2006). This progenitor mass is consistent with that estimated by Oey (1996). To estimate the explosion energy and the age of the SNR, we apply self-similar Sedov solutions (Sedov, 1959). For these purposes, based on the volume emission measure values (\(EM=n_{\rm e}n_{\rm H}V\)) estimated from the best-fit spectral models of the shell regions, we calculate the post-shock electron density (\(n_{\rm e}\)). For this estimation, we calculated the X-ray-emitting volume (\(V\)) for each region; these volumes are listed in Table 4. For all shell regions we also assumed a \(\sim 0.56\) pc path length (roughly corresponding to the angular thickness of each shell region at 50 kpc) along the line of sight. For a mean charge state with normal composition, we assumed \(n_{\rm e}\sim 1.2n_{\rm H}\) (where \(n_{\rm H}\) is the H number density) and calculated electron densities for all shell regions of \(n_{\rm e}\sim 21.1-36.8f^{-1/2}\,{\rm cm}^{-3}\), where \(f\) is the volume filling factor of the X-ray emitting gas. The pre-shock H densities \(n_{0}\) are listed in Table 4, assuming a strong adiabatic shock where \(n_{\rm H}=4n_{0}\). Under the assumption of electron-ion temperature equipartition for N63A, the gas temperature is related to the shock velocity (\(V_{\rm s}\)) as \(T=3\hat{m}V_{\rm s}^{2}/16k\) (where \(\hat{m}\sim 0.6m_{\rm p}\) and \(m_{\rm p}\) is the proton mass). Using the electron temperatures, we calculated shock velocities of \(V_{\rm s}\sim 652-721\) km s\({}^{-1}\) and Sedov ages of \(\tau_{\rm sed}\sim 5,262-5,812\) yr for the shell regions (see Table 4). The median shock velocity and Sedov age calculated from the six shell-region values are \(706\pm 25\) km s\({}^{-1}\) and \(5,375\pm 200\) yr, respectively.
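The chain of estimates above (electron density from the emission measure, shock velocity from the electron temperature, and Sedov age and explosion energy from the shock velocity and pre-shock density) can be reproduced with a short script. The sketch below is illustrative only: the shock radius of 9 pc, the filling factor of unity, and the standard \(\gamma=5/3\) Sedov constant (1.15) are assumptions stated in the code rather than values taken from the paper, and the derived age and energy are sensitive to those choices, so the output will not match Table 4 exactly.

```python
import numpy as np

# Physical constants (cgs)
m_p = 1.673e-24        # proton mass [g]
keV = 1.602e-9         # 1 keV in erg
pc  = 3.086e18         # 1 parsec in cm
yr  = 3.156e7          # 1 year in s

# Assumptions made for this sketch (not taken from the paper's tables):
R_s = 9.0 * pc         # adopted shock radius (~18 pc diameter at 50 kpc)
f   = 1.0              # volume filling factor of the X-ray emitting gas

def shell_dynamics(kT_keV, EM, V, mu_hat=0.6):
    """Sedov-type estimates for one shell region.

    kT_keV : best-fit post-shock electron temperature [keV]
    EM     : volume emission measure n_e * n_H * V [cm^-3]
    V      : X-ray emitting volume of the region [cm^3]
    """
    # Post-shock densities, assuming n_e ~ 1.2 n_H
    n_e = np.sqrt(1.2 * EM / (f * V))
    n_H = n_e / 1.2
    n_0 = n_H / 4.0                         # strong-shock compression, n_H = 4 n_0

    # Shock velocity from kT = 3 mu_hat m_p V_s^2 / 16 (electron-ion equipartition)
    v_s = np.sqrt(16.0 * kT_keV * keV / (3.0 * mu_hat * m_p))

    # Sedov age from R_s = (2/5) v_s * t
    t_sed = 2.0 * R_s / (5.0 * v_s)

    # Explosion energy from the Sedov relation R_s ~= 1.15 (E_0 t^2 / rho_0)^(1/5)
    rho_0 = 1.4 * m_p * n_0                 # ~1.4 m_p per H nucleus (He included)
    E_0 = rho_0 * R_s**5 / (1.15**5 * t_sed**2)

    return n_0, v_s / 1.0e5, t_sed / yr, E_0

# Example with ISM1-like inputs: kT = 0.61 keV, EM = 13.07e57 cm^-3, V = 2.19e55 cm^3
n_0, v_s_kms, age_yr, E_0 = shell_dynamics(0.61, 13.07e57, 2.19e55)
print(f"n_0 = {n_0:.2f} cm^-3, V_s = {v_s_kms:.0f} km/s, "
      f"age = {age_yr:.0f} yr, E_0 = {E_0:.2e} erg")
```

With these ISM1-like inputs the sketch recovers \(n_{0}\approx 5.6\) cm\({}^{-3}\) and \(V_{\rm s}\approx 721\) km s\({}^{-1}\), close to the tabulated values, while the age and explosion energy depend more strongly on the adopted radius and Sedov constant and therefore differ from Table 4 at the tens-of-percent level.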
Our shock-velocity estimate is only a conservative lower limit; therefore, our SNR age estimate is an upper limit. Although our age upper limit is not tightly constraining, it is generally consistent with previous estimates of \(\sim 4,500\) yr (Williams, Chu, & Gruendl, 2006) and 2,000-5,000 yr (Hughes et al., 1998; Warren, Hughes, & Slane, 2003). We calculated a corresponding explosion energy of \(E_{0}\sim 6.09-10.43\times 10^{51}\) erg for N63A (see Table 4). The median explosion energy is \(E_{0}\sim 8.9\pm 1.6\times 10^{51}\) erg, which is compatible with the explosion energy values given for core-collapse supernovae in the literature (Nomoto et al., 2006; Vink, 2020). In addition, the calculated high explosion energy indicates that the remnant may also originate from a hypernova (Janka, 2012).

\begin{table} \begin{tabular}{c c c c c c} \hline Shell Region & \(V\) & \(n_{0}\) & \(V_{\rm s}\) & \(\tau_{\rm sed}\) & \(E_{0}\) \\ & (\(10^{55}\) cm\({}^{3}\)) & (cm\({}^{-3}\)) & (km s\({}^{-1}\)) & (yrs) & (\(\times 10^{51}\)erg) \\ \hline ISM1 & 2.19 & 5.57 & 721 & 5,262 & 8.10 \\ ISM2 & 2.19 & 6.71 & 715 & 5,306 & 9.59 \\ ISM3 & 2.19 & 7.67 & 697 & 5,444 & 10.43 \\ ISM4 & 1.94 & 6.94 & 652 & 5,812 & 8.28 \\ ISM5 & 3.54 & 4.40 & 703 & 5,397 & 6.09 \\ ISM6 & 1.71 & 6.91 & 709 & 5,351 & 9.72 \\ \hline \end{tabular} \end{table} Table 4: Assumed \(V\) and \(n_{0}\) values for the Sedov solutions, and derived dynamic parameters (\(V_{\rm s}\), \(\tau_{\rm sed}\), and \(E_{0}\)) of N63A.

## 5 Summary & Conclusion

We present the results of our extensive analysis of the _Chandra_ archival data of the core-collapse SNR N63A in the LMC. Our detailed spatially-resolved spectral analysis reveals radial profiles of the elemental abundances for O, Ne, Mg, Si, and Fe, as well as the plasma parameters. We detect an asymmetric structure of the central metal-rich ejecta material. The asymmetric distribution of N63A gas is likely caused by an asymmetric explosion of the progenitor, but it should not be ruled out that the ejecta may undergo non-uniform expansion in interstellar material with different densities. We estimate an explosion energy of \(E_{0}\sim 8.9\pm 1.6\times 10^{51}\) erg. This explosion energy estimate is compatible with a Type II SN or hypernova explosion. We also estimate a Sedov age of \(\sim 5,400\pm 200\) yr for N63A.

## Acknowledgements

We thank the anonymous referee for his/her insightful and constructive suggestions, which significantly improved the paper. This study was funded by the Scientific Research Projects Coordination Unit of Istanbul University. Project number: FOA-2018-30716. We would like to thank Dr Jayant Bhalerao for his contributions. This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This study is a part of the master's thesis of Emre Karagoz.

## Data Availability

The X-ray data on N63A as described in Section 2 include Chandra ACIS-S observations, and the data are available in the Chandra archive ([https://asc.harvard.edu/cda/](https://asc.harvard.edu/cda/)). Processed data products underlying this article will be shared on reasonable request to the authors.
2301.09738
Security of Electrical, Optical and Wireless On-Chip Interconnects: A Survey
The advancement of manufacturing technologies has enabled the integration of more intellectual property (IP) cores on the same system-on-chip (SoC). Scalable and high throughput on-chip communication architecture has become a vital component in today's SoCs. Diverse technologies such as electrical, wireless, optical, and hybrid are available for on-chip communication with different architectures supporting them. Security of the on-chip communication is crucial because exploiting any vulnerability would be a goldmine for an attacker. In this survey, we provide a comprehensive review of threat models, attacks, and countermeasures over diverse on-chip communication technologies as well as sophisticated architectures.
Hansika Weerasena, Prabhat Mishra
2023-01-23T21:58:53Z
http://arxiv.org/abs/2301.09738v2
# Security of Electrical, Optical and Wireless On-Chip Interconnects: A Survey

###### Abstract.

The advancement of manufacturing technologies has enabled the integration of more intellectual property (IP) cores on the same system-on-chip (SoC). Scalable and high throughput on-chip communication architecture has become a vital component in today's SoCs. Diverse technologies such as electrical, wireless, optical, and hybrid are available for on-chip communication with different architectures supporting them. Security of the on-chip communication is crucial because exploiting any vulnerability would be a goldmine for an attacker. In this survey, we provide a comprehensive review of threat models, attacks and countermeasures over diverse on-chip communication technologies as well as sophisticated architectures.

network-on-chip security, communication security
communication. Bus-based architectures have an inherent advantage: once bus arbitration is done, the full bandwidth of the wires can be utilized and data transfer timing is predictable. Bus-based architectures are the most widely used on-chip communication topology for small-scale SoCs. As SoCs become more sophisticated, the concept of routing packets instead of wires is used [33]. This leads to the introduction of the Network-on-Chip (NoC). A NoC isolates and manages the communication requirements of the whole SoC. Figure 1.c shows a NoC with ring topology. Some topologies, such as the mesh topology, can be used across every technology used in on-chip communication, while others are tightly coupled with the underlying technology. There is a multitude of other topologies that can be chosen by evaluating design requirements such as power, performance, heat dissipation, connectivity, and cost. Some interesting 3-D topologies have been proposed by [59] by stacking IPs. The main architectural difference is that a 3-D NoC has vertical links to communicate across layers. 3-D NoCs have lower average hop counts, which leads to improvements in performance and power consumption.
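To make the hop-count argument concrete, the following illustrative Python sketch compares the average hop count of a flat 2-D mesh with a two-layer 3-D arrangement of the same node count under minimal routing. The mesh sizes and the two-layer split are hypothetical choices made only for illustration.

```python
from itertools import product

def avg_hops(dims):
    """Average hop count between distinct nodes of a mesh with the given
    dimensions, assuming minimal routing: the hop count is the Manhattan
    distance between the two coordinates."""
    nodes = list(product(*[range(d) for d in dims]))
    total, pairs = 0, 0
    for src in nodes:
        for dst in nodes:
            if src != dst:
                total += sum(abs(a - b) for a, b in zip(src, dst))
                pairs += 1
    return total / pairs

# 64 nodes arranged as a flat 8x8 mesh versus a stacked two-layer 4x8x2 design.
print("2-D 8x8 mesh  :", round(avg_hops((8, 8)), 2))      # ~5.33 hops
print("3-D 4x8x2 mesh:", round(avg_hops((4, 8, 2)), 2))   # ~4.44 hops
```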
Early NoC architectures are based on electrical (copper) interconnects that connect IPs together. Due to inherent technology limitations, electrical interconnects do not scale well for large SoCs with 100+ IPs. Their multi-hop communication structure increases average latency and power consumption, and studies such as Shacham et al. [73] show that electrical interconnects cannot scale to adequate performance with acceptable area and power overheads in relatively large SoCs. In recent years, two technology alternatives, wireless and optical interconnects, have been adapted from the traditional computer networking domain to address the scalability issues of electrical NoCs. These two technologies, together with electrical interconnects, can be used in a hybrid manner to meet specific design, power, performance, and cost goals. For example, DNN accelerators tend to use wireless multicast/broadcast for spatial reuse. In wireless NoCs, the copper links of electrical NoCs are replaced by a wireless medium; the inherent multicast/broadcast nature of wireless transmission is advantageous for NoCs that carry many multicast messages. Optical communication technologies have long been used in traditional computer networks to achieve high data transmission rates. Optical NoCs provide high-throughput data transmission while reducing power consumption at the same time: they encode data as light and use waveguides to transfer it throughout the chip [15]. Data transfer through optical interconnects does not need any repeaters or buffers, which results in low power consumption and low latency [73].

Figure 1. Evolution of on-chip communication: bus topology, ad hoc mix of buses, and ring-based NoC [50].

### Security Vulnerabilities and Challenges

The security of NoCs can be discussed from multiple perspectives. The NoC can be used by an attacker sitting outside its boundary to access and modify transferred data; conversely, the NoC can be used to detect and defend against attacks on the SoC. For example, FlexNoC [10] provides a Resilience Package with hardware-based data protection to protect the SoC. However, the components of the NoC itself can be malicious, which can lead to attacks. Complex and large MPSoCs have made it easier for an attacker to hide malicious implants inside an SoC. Sophisticated SoC design has led to a significant increase in design and verification complexity. Reusable hardware IP-based SoC design has become the industry norm to reduce cost and meet critical time-to-market constraints. Typically, only a few IPs are designed in-house for any SoC design, while the others are outsourced from third-party vendors. For example, the FlexNoC interconnect is used by four out of the top five Chinese fabless companies to facilitate their on-chip communication (Zhou et al., 2017). These third-party IPs may come with malicious implants, such as Hardware Trojans (HTs), due to the long and potentially untrusted supply chain. These malicious implants can be inserted into the RTL or netlist with the intention of launching attacks (Zhou et al., 2017). On-chip communication is responsible for sharing resources and critical information, so exploiting attack vectors on a NoC is a goldmine for any attacker. As a practical scenario, a cloud computing infrastructure that provides virtual machines on the same hardware has to guarantee that it will not leak critical information to other tenants using the same hardware infrastructure.
Therefore, the communication infrastructure shared across different users of the same hardware should be secured. Securing on-chip communication has its own unique challenges, as follows.

**Diverse Architectures and Technologies:** NoCs involve a mixture of different architectures and technologies, and security attacks can be innovative in exploiting the underlying properties or design limitations of each. For example, the inherent broadcasting property of a wireless NoC makes it easier for an attacker to gather messages than in an electrical NoC. It is therefore difficult to devise security countermeasures that are generic across all technologies and architectures. On the other hand, countermeasures can also exploit the underlying properties of a technology to defend against attacks.

**Resource-constrained Environment:** On-chip communication inherits many similarities from the traditional computer networking domain. Although there are well-established security countermeasures in traditional computer networks, they cannot be applied directly to NoCs due to the resource-constrained nature of NoC-based SoCs. Specifically, area, power, and real-time execution requirements are the three main constraints. For example, the Advanced Encryption Standard (AES) is widely used for encryption in computer networks (Kumar et al., 2017); however, it can introduce unacceptable power, area, and performance overhead in resource-constrained NoCs. Security countermeasures should therefore focus on the trade-off between security and performance.

### Major Differences with Existing Surveys

There are existing surveys on Network-on-Chip technologies, architectures, and security. (Kumar et al., 2017) describes the fundamental concepts, system-level modeling, and design of NoCs. A comprehensive study on wireless NoCs covers topology, routing, flow control, antennas, and reliability (Werner et al., 2017). Werner et al. (Werner et al., 2017) present a detailed review of existing optical NoC architectures, and details on optical interconnection technologies and the underlying physics can be found in (Kumar et al., 2017). (Kumar et al., 2017) provides an extensive survey of NoC attacks and countermeasures that focuses only on the security of electrical NoCs. A recent survey (Zhou et al., 2017) explores security on wired, wireless, as well as 3-D NoCs. However, it has two major limitations: it does not discuss security attacks and countermeasures on optical on-chip interconnects or bus-based architectures, and it does not describe technology-specific attacks, focusing instead on countermeasures. This paper makes three major contributions compared to existing surveys. First, it provides a comprehensive survey of all the existing communication technologies, namely electrical, wireless, and optical. Next, it gives equal emphasis to threat models and attacks as well as their countermeasures. Finally, it provides an executive summary of the basic architectures and technologies in on-chip communication, which is essential to understand the attacks and countermeasures.

### Survey Outline

There are various ways of classifying security attacks and countermeasures. Our survey considers a classification in two layers, as shown in Figure 2. First, the attacks are categorized under the technologies, which are electrical, wireless, and optical NoCs.
Even though some attacks are possible in multiple technologies, the primary discussion is on the technology specified in the published literature. Next, we discuss the following six security concepts for each communication technology.

**Confidentiality:** Confidentiality during communication ensures that there is no unauthorized disclosure of secret information. Eavesdropping, snooping, and the use of side channels or covert channels to leak sensitive data are common attacks on confidentiality. Encryption is the most widely used solution to ensure confidentiality under eavesdropping or snooping attacks.

**Integrity:** Integrity ensures no unauthorized modification or destruction of information. Message tampering and message removal are common attacks on integrity; a message authentication code is the most popular solution against them.

**Authenticity:** Authenticity ensures that the data received by the receiver originated from the intended sender. Spoofing is the most common attack, and various signature schemes can be used to defend against it.

**Availability:** Availability ensures that data and the system are available for users whenever they need them. Denial-of-Service (DoS) is the most common attack on availability; traditional DoS attacks can be further categorized into bandwidth- or resource-consumption DoS attacks.

**Anonymity:** Anonymity ensures that there is no unauthorized disclosure of information about the communicating parties (source and destination).

**Freshness:** Freshness ensures that messages passing through the system are up to date. The replay attack is the main attack violating data freshness.

The remainder of the survey is organized as follows. Section 2 describes the fundamentals of on-chip communication architectures under different communication technologies. Sections 3-5 provide a comprehensive survey of NoC security for electrical, wireless, and optical NoCs, respectively; specifically, they cover threat models, attacks, and countermeasures for each technology. Finally, Section 6 concludes the survey.

Figure 2. NoC security survey for three communication technologies under six security concepts.

## 2. On-Chip Communication Architectures

On-chip communication architecture separates the communication layer from other IPs, which has allowed for diverse technologies and architectures with different performance goals. Understanding the fundamental building blocks of NoC architecture is essential to review their security. This section gives an overview of communication protocols and existing architectures under the different technologies.

### Networking Models and Communication Protocols

Bus-based communication protocols are simple and have few primitives such as master, slave, arbiter, decoder, and bridge. Bus architectures use a communication protocol in which one device or process (the master) has total control over one or more other devices or processes (the slaves). The master initiates a communication session, and a slave listens and responds to incoming transfers. A bus arbiter controls access to a shared bus through an arbitration scheme such as Time Division Multiple Access, and the decoder determines the intended recipients. More sophisticated hierarchical bus architectures have bridges to connect two buses together. A bus architecture enables the transfer of the following three types of information.
* Address: carries the address of the destination node.
* Data: carries data between the source and the destination.
* Control: carries control message requests and acknowledgements before transfers.

When a source wants to send a message through the bus, it must first use the bus arbiter to acquire the bus. Once the bus is acquired, the address of the destination is placed on the address lines. Once the address is received, the corresponding decoder reads the data on the data lines. Data transfer in the other direction happens similarly. Most definitions of the NoC networking model are based on electrical NoCs, so there are minor deviations from these basic networking concepts in other technologies. Figure 3 shows a simple mesh topology for a NoC-based SoC. A typical NoC has the following basic components.

* Node: any IP or cluster of IPs that has a bus for internal communication.
* Link: connects two nodes physically. A link may have one or more logical or physical channels.
* Network Interface: IPs and the NoC communicate through the network interface. It may include conversion of data between the two mediums and decouples communication from computation.
* Router: forwards data according to pre-defined routing protocols.

Figure 3. NoC with 4x4 mesh topology. Each IP is connected to the NoC via a network interface followed by a router. Node S is sending packets to D via the NoC. A typical micro-architecture of a NoC router is also shown.

A packet is the basic unit of transfer in a NoC. The idea of communication in a NoC is to route packets through a series of nodes between the source and destination. The packet is further subdivided into basic flow control units, called flits, and the flits are transferred between the routers. A typical NoC protocol uses two types of packets: control packets and data packets. Figure 4 shows the structure of a data packet. As shown in Figure 3, the following steps are followed when the source node (S) wants to request data from the destination node (D). First, node S generates a control packet requesting data from node D with the necessary headers and injects it into the network interface of S. Then, the network interface of S divides the message into a head flit, multiple body flits, and a tail flit, and injects the flits into the neighbouring router. The flits hop from one router to the next (using X-Y routing in this example) until they reach the destination router. The final router injects the flits into the network interface of D, which collects all the flits and provides D with the assembled message. Finally, D generates a data packet and injects it into its network interface. Similar steps commence, and S receives the data as the response.

#### 2.1.1. Routing Protocol

The routing protocol defines the path for moving data from source to destination. There are some basic characteristics of a routing protocol:

**Circuit versus Packet Switching:** In circuit switching, a path is first established and dedicated to a session, followed by the data transfer through it. In packet switching, individual packets are forwarded independently per hop; therefore, each packet carries routing information, including the source and destination, in its header.

**Deterministic versus Adaptive Routing:** Deterministic routing considers only the source and destination when deciding the route path. Adaptive routing considers other factors, such as link utilization and congestion, when deciding the next hop of the route.
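As an illustration of deterministic routing, the sketch below implements dimension-ordered XY routing, which the next paragraph discusses. The coordinate convention and port names are hypothetical choices made only for this example.

```python
def xy_next_port(cur, dst):
    """Dimension-ordered (XY) routing on a 2-D mesh.

    cur, dst: (x, y) coordinates of the current router and the destination.
    The X dimension is fully resolved before the Y dimension, so the route
    is unique and deterministic."""
    cx, cy = cur
    dx, dy = dst
    if dx > cx:
        return "EAST"
    if dx < cx:
        return "WEST"
    if dy > cy:
        return "NORTH"
    if dy < cy:
        return "SOUTH"
    return "LOCAL"          # arrived: deliver to the attached IP

def xy_path(src, dst):
    """Full hop-by-hop path of a head flit from src to dst."""
    hops, cur = [src], src
    while cur != dst:
        port = xy_next_port(cur, dst)
        step = {"EAST": (1, 0), "WEST": (-1, 0),
                "NORTH": (0, 1), "SOUTH": (0, -1)}[port]
        cur = (cur[0] + step[0], cur[1] + step[1])
        hops.append(cur)
    return hops

# Example: route from S = (0, 0) to D = (3, 2) in a 4x4 mesh.
print(xy_path((0, 0), (3, 2)))   # all X links first, then the Y links
```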
XY routing is the most commonly used routing in mesh-based electrical NoCs. XY routing is simple: it takes all the X links first, followed by the Y links. There are also adaptive routing algorithms such as odd-even, west-first, north-last, and negative-first routing.

### Electrical Interconnects

Electrical NoCs follow the networking model discussed in Section 2.1. Both NoC-based and bus-based electrical interconnects can be seen in commercial designs. A typical NoC uses virtual channels for flow control. Figure 3 shows the micro-architecture of a virtual-channel router, which is commonly used in electrical NoCs. The following are the main components and their functionalities in electrical NoCs.

* Input and Output Buffers: store flits for incoming and outgoing communication. There are multiple buffers for multiple virtual channels in a single port.
* Route Computation Unit: determines the output port for the flit according to the routing protocol. Route computation is done for the head flit; the body flits and the tail flit follow the same route.
* VC Allocator: assigns a free virtual channel buffer at the downstream router for the outgoing port.
* Switch Allocator: arbitrates between two or more flits that want to use the same output port via the switch simultaneously.
* Crossbar Switch: allows flits to reach the output buffers according to the previous calculations. The crossbar switch is capable of supporting multiple flits simultaneously.

IBM's CoreConnect (Wang et al., 2013) and ARM's AMBA (Advanced Micro-controller Bus Architecture) bus (Birsh et al., 2013) can be considered the most popular bus-based communication architectures used in complex SoCs. Oracle SPARC T5 (2013) has 16 multi-threaded cores and 8 L2 banks connected by a crossbar NoC. Similarly, the Intel Single-Chip Cloud Computer (2009) has 24 tiles with 2 cores each, connected through a mesh-based NoC.

### Wireless Interconnects

The inherent limitations of electrical NoCs paved the way for wireless on-chip communication. In a wireless NoC, the copper links between nodes are replaced by wireless links. Wireless NoC topologies can be purely wireless, with only wireless communication, or a hybrid of electrical and wireless connections. Modern wireless NoCs (Shi et al., 2012; Wang et al., 2013) tend to use hybrid architectures for three main reasons: (1) electrical multi-hop communication between nodes that are far apart has higher packet latency, which can be reduced by introducing wireless links; (2) electrical communication between two neighbouring nodes is more robust and faster than wireless communication between them; and (3) the limited number of channels in the wireless medium (Wang et al., 2013) hinders scalability in fully wireless topologies. Therefore, hybrid wireless NoC architectures use wireless links as the highway for long-distance communication between two IPs (Shi et al., 2012), while electrical connections are used for short-distance communication. Figure 6 shows a simple 4x4 wireless NoC architecture (Wang et al., 2013) that uses multi-channel wireless links for communication. Similar to the router connected to every node in a mesh-based electrical NoC, this architecture has wireless routers with wireless transceivers and antennas. There is a low-bandwidth wired control network to support fast channel arbitration. The transmission radius of a router determines the average number of hops for a packet, but increasing the transmission radius increases channel arbitration time because more nodes compete for a limited number of channels.
Figure 5. Common NoC topologies: ring, star, mesh, and torus.

Recent work has proposed multiple hybrid wireless NoC topologies [12, 48, 53, 58]. Figure 7 shows an example hybrid wireless NoC architecture by Wang et al. [79]. The network is divided into four subnets, and a wireless hub is placed in the middle of each subnet. Wireless hubs have antennas and transceivers that use frequency division multiple access (FDMA) for multi-channel communication. There can be electrical interconnects between the nodes, similar to a mesh-based NoC. Imagine a scenario where node 2 wants to send a message to node 60. First, the packet uses the electrical interconnect to reach wireless hub 1, taking the path Node1 -> NodeX -> wireless hub 1. Then, the wireless hub uses a wireless link to transfer the packet to wireless hub 4. Finally, the packet uses the electrical interconnect again to reach node 60. Although Figure 7 has a mesh-based electrical topology, a small-world based topology [58] allows subnets to be completely independent, with different topologies interconnected only by wireless hubs. Traditional routing algorithms for electrical NoCs, such as X-Y routing, can be used in wireless NoCs, but there are also routing algorithms that utilize special characteristics of wireless NoC architectures, such as location-based routing [87]. The antennas used in wireless NoCs can be categorized as silicon integrated antennas, UWB antennas [85], and CNT antennas [40]. These multiple topologies and diverse components raise a different set of security vulnerabilities in wireless NoCs. Although the shared wireless medium naturally supports broadcast/multicast message passing, it is a liability in terms of security and opens up new attack vectors to adversaries.

### Optical Interconnects

Optical on-chip interconnects are another promising alternative to address the bandwidth and energy limitations of electrical interconnects. Optical NoCs transfer data across the SoC as optical signals, and the energy consumption of the signals is relatively distance-independent. Figure 8 represents a simple bus-based optical interconnect with only a sender and a receiver. An off-chip laser acts as the source of optical signals at different wavelengths. The waveguide can be seen as the dual of the copper link in an electrical NoC and is responsible for carrying optical signals. Unlike electrical links, a waveguide supports parallel data transmission through dense wavelength-division multiplexing. The microring resonator (MR) [13, 17] is the fundamental component inside modulators, detectors, and routers; it is made of a mixture of circular and straight waveguides. An MR is capable of removing or keeping the optical signals in the waveguide, which represents logical 0 and 1, respectively. When the sender wants to send a message, it generates a packet that is initially represented as an electrical signal. The modulator is responsible for electrical-to-optical modulation: the MR inside the modulator captures light of a particular wavelength on the waveguide and modulates it to represent a stream of logical 0s and 1s. At the receiver side, the detector continuously listens on the selected wavelength. Upon receiving data, the MR inside the detector selects the wavelength and removes it from the waveguide. The signal is then converted to an electrical signal and amplified before being sent to the receiver.
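The following toy sketch mimics this wavelength-division scheme in software: each sender modulates its own wavelength, a shared waveguide carries all wavelengths in parallel, and each detector filters out only the wavelength it listens on. The wavelength labels and bit strings are hypothetical values used purely for illustration.

```python
# Toy model of dense wavelength-division multiplexing on a shared waveguide.
# Each "wavelength" carries an independent bit stream; a detector tuned to a
# wavelength removes that stream from the waveguide, mirroring how a microring
# resonator drops its resonant wavelength.

waveguide = {}                       # wavelength (nm) -> list of bits in flight

def modulate(wavelength_nm, bits):
    """Sender side: encode a bit stream onto one wavelength."""
    waveguide.setdefault(wavelength_nm, []).extend(bits)

def detect(wavelength_nm):
    """Receiver side: drop the tuned wavelength and recover its bits."""
    return waveguide.pop(wavelength_nm, [])

# Two senders share the same waveguide on different wavelengths.
modulate(1550, [1, 0, 1, 1])         # sender A
modulate(1551, [0, 0, 1, 0])         # sender B

print(detect(1550))                  # receiver A recovers [1, 0, 1, 1]
print(detect(1551))                  # receiver B recovers [0, 0, 1, 0]
```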
This optical bus can be considered the fundamental building block for many complex optical interconnect topologies (Sandel, 2010). \(\lambda\)-Router (Sandel, 2010) and Folded Crossbar (Crossbar, 2010) are two optical-only NoC architectures that route optical signals based on their wavelengths. Amon (Amon, 2013) and QuT (QuT, 2013) are two topologies that use a low-bandwidth control network to agree upon a set of wavelengths before the actual collision-free optical transmission. Similar to hybrid wireless NoCs, optical interconnects also have hybrid architectures that use optical links for long-distance communication and electrical links for short-distance communication. Metor (Metor, 2013) is a hybrid optical NoC architecture that divides an 8x8 NoC into four 4x4 clusters (Figure 9). Atac (Amon, 2013) uses a global optical crossbar that supports 32 buses with 64 wavelengths on each (Figure 10). Compared to Metor, it does not have fixed clusters; it uses a policy of taking the optical interconnect only if the destination is more than four hops away, and the electrical NoC otherwise. In 2013, optical interconnects were used in Intel's Optical PCI-X motherboard, and in 2015, Sun et al. fabricated a processor chip containing 850 photonic components that communicate using optical signals (Sun et al., 2015). Practical photonic on-chip interconnects are thus already on the roadmap of major MPSoC vendors. Technology-specific components and processes in optical interconnects introduce new attack vectors to adversaries, which will be discussed in Section 5.

Figure 8. Basic optical data transfer where the sender modulates an off-chip laser in the waveguide (Sandel, 2010).

## 3. Electrical on-Chip Communication Security

Electrical on-chip interconnects can be considered the most traditional and widely used technology of all. In this section, we survey recent research efforts on the security of electrical NoCs. First, we discuss the threat models and attacks on electrical NoCs. Next, we describe suitable countermeasures to defend against these attacks.

### Threat Models and Attacks

In this section, we explore threat models and attacks on electrical NoCs categorized into the previously defined security concepts. Some of the threat models can also be applied to other technologies such as wireless.

#### 3.1.1. Confidentiality

Side-channel attacks utilize the implementation of a computer system, rather than software or hardware vulnerabilities, to launch an attack. Some side-channel attacks require a deep understanding of the internal physical implementation of the system. Timing information, electromagnetic signals, and power analysis are some of the avenues for side-channel attacks. Multi-tenant application sharing is common in cloud environments, where the same physical hardware is shared by two different virtual machines. These applications have concurrent network usage and must compete for shared NoC resources such as links. Latency and throughput variation in such scenarios can be used as a timing side channel to leak sensitive information. An adversary can also use the timing channel as a covert channel to bypass any security mechanisms in the NoC and leak sensitive data.
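To make the timing covert channel concrete, the sketch below shows the basic principle in simplified form: a colluding sender encodes bits by either congesting or not congesting a shared link in fixed time slots, and a receiver recovers the bits by timing its own probe packets. All latency numbers, the slot length, and the threshold are hypothetical values chosen only to illustrate the mechanism.

```python
import random

BASE_LATENCY = 10      # cycles for a probe packet on an idle shared link (assumed)
CONGESTION_DELAY = 15  # extra cycles when the sender floods the link (assumed)
THRESHOLD = 18         # receiver decides "1" if its probe is slower than this

def sender_slot(bit):
    """Covert sender: congest the shared link during the slot to signal a 1."""
    return CONGESTION_DELAY if bit else 0

def receiver_slot(extra_delay):
    """Covert receiver: time one probe packet through the shared link."""
    observed = BASE_LATENCY + extra_delay + random.randint(-2, 2)  # measurement noise
    return 1 if observed > THRESHOLD else 0

secret = [1, 0, 1, 1, 0, 0, 1]
received = [receiver_slot(sender_slot(b)) for b in secret]
print(secret)
print(received)   # matches the secret bits in this simplified, low-noise setting
```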
We will first focus on side-channel attacks on confidentiality. Wang and Suh [81] discuss how side channels and covert channels can be used to compromise the confidentiality of data flowing through the NoC. The attacker can use the timing channel of the network interface to leak sensitive information, or use a covert channel to leak information to a colluding malicious application; for example, a program with a high security profile could leak information to a lower-security-profile program via a covert channel. The specific threat model assumes multiple security levels, such as high and low security zones. The attacker controls software on multiple processing cores along with its placement at scheduling time. The attacking program is assumed to be in the low security zone, but it can introduce packets into the NoC that affect the timing characteristics of high-security-zone traffic. Furthermore, the attacker is assumed to know the placement, scheduling, and traffic patterns of high-security-zone programs. The attacker uses the throughput and timing changes of its own packets as the side channel to leak sensitive information from the high-security traffic flow. Sepulveda et al. described a timing-based side-channel attack to steal critical information communicated through the NoC. The attack is conducted by infecting a component of the SoC with a hardware Trojan and uses the performance degradation observed by the malicious IP to infer critical information about a concurrent process running on the SoC. Imagine a scenario where a source and destination are performing a symmetric-key cryptographic function such as AES [32]. When the source requests a particular piece of data, it first looks in the local cache; if the requested data is not available, it uses the NoC to access the shared L2 cache or main memory. Node A is the attacker that lies in the routing path of this communication and injects packets continuously into the NoC. The performance degradation experienced by A leaks information about the crypto process and reduces the search space for the secret key. In 2016, Sepulveda et al. [69] described a Denial-of-Service attack with minor changes to this threat model. Although the attack is similar, the HT-affected IP in the NoC is replaced by a malicious program, and the authors point out different ways a malicious software can infect an IP, the most common being a buffer overflow attack by a malicious program. The malicious program can leak sensitive information such as secret keys through the timing side channel as discussed previously. Boraten and Kodi [21] use a similar threat model where the adversary uses interference from contending applications to launch a timing side-channel attack and steal sensitive information. The authors further elaborate on using a covert channel to leak the sensitive data captured from the timing side-channel attack. They point out that protecting against side-channel attacks is challenging since a malicious application can hinder detection by artificially inducing interference. Reinbrecht et al. [63] propose a stronger adversary for side-channel attacks, which can be considered an extension of the previous attacks. This adversary uses a distributed timing side-channel attack to leak sensitive information, with the advantage of reduced computation and storage requirements for the adversary. They introduce two types of malicious IPs: (1) injectors, which inject traffic at a high rate to increase congestion on the NoC, and (2) observers, which inject traffic at low rates while monitoring their own performance degradation. The attack described in the paper has three stages. The first stage is infection, where the malicious IP uses malware to position the injectors and observers.
At the calibration stage, the traffic injection rates are adjusted to avoid unnecessary throughput degradation. During execution and cryptanalysis, the attack is conducted and a mathematical algorithm is used to infer sensitive data from the distributed side-channel data. Side-channel and covert-channel attacks have also been discussed for traditional bus-based architectures. Shao et al. [74] describe an attack where a run-time HT in the bus can monitor communication between the master and the slave. The HT can be activated by an external input; once activated, it uses a power side channel to leak address information on the bus. The next six attacks on confidentiality share a similar threat model with subtle differences. The basic threat model is snooping by malicious components of the NoC colluding with a local or remote malicious application. The local colluding application can be either a colluding software application or an HT-compromised local IP. The remote colluding application resides outside the boundary of the SoC and can be assumed to have higher processing power than its local counterpart. Ancajas et al. [7] describe a confidentiality violation via snooping in the presence of a malicious NoC. The authors highlight that this threat is critical for multi-tenant cloud computing using MPSoCs. The malicious NoC has an embedded hardware Trojan that is capable of activating a covert backdoor and works with a colluding malicious application running on a normal processing core. Figure 11 shows an example scenario in a mesh NoC where a malicious application is running on node X in a compromised NoC while S and D are communicating. The attack has four phases. In the design phase, a third-party NoC IP provider fabricates an HT into the NoC. Then the colluding application running on the SoC uses a dirty cache bit to activate the HT when necessary. In the attacking phase, the malicious software requests the compromised NoC to eavesdrop on and duplicate packets of a specific communication, and the compromised NoC uses covert channels to transfer the duplicated classified information from the legitimate communication between S and D. Finally, the attacker deactivates the HT after the attack. The compromised NoC with the HT has 4.62% area and 0.28% power overhead compared to the baseline NoC; the authors highlight that the HT is hard to detect due to this low overhead and due to the HT being deactivated after an attack. Charles et al. [23] describe a snooping attack with a malicious NoC using a similar threat model to [7], with the exception of not using covert channels. Similar threat models are explored in [71, 82], where a malicious router snoops packets inside the NoC. Hussain and Guo [44] also discuss packet leaks in a NoC where the NoC is a third-party IP; here, a hardware Trojan in the NoC is capable of changing the destination and source of request and response packets, respectively, to leak sensitive data. Charles and Mishra [27] also describe a threat model in a mesh topology where the NoC IP is compromised. Specifically, they assume that some routers and IP cores in the middle are malicious, and the trusted IPs are introduced as secure IPs. When two secure IPs are communicating, a malicious router in the middle can leak sensitive information to a malicious IP in the same chip through the NoC. Raparti and Pasricha [62] propose a harder-to-detect threat model for a data snooping attack on the NoC. The HT is placed in the Network Interface (NI), where NoC packets are generated from SoC data.
There can be multiple malicious NIs that snoop packets and send them to a malicious program listening at a specific IP. The packetizer is the module responsible for creating NoC packets, adding header information such as destination ID and virtual channel ID, immediately followed by a set of flits. The flits are stored in a circular buffer where pointers define the start and end of the buffer. The HT is capable of tampering with these pointers to re-send a duplicate packet whose new destination ID is the IP running the malicious program. The authors point out that the area overhead of this HT is only 1.3% compared to the baseline NI. Ahmed et al. [4] proposed a lightweight and naive HT that can leak traffic patterns to an adversary so that the adversary can perform traffic analysis. The HT periodically counts the number of packets in a time window and leaks that information to the adversary, and it can reside in either a router block or an interconnection switch. For example, in Figure 12, when source S is communicating with D, the router with the HT in the middle of the path counts the packets and sends the count to the attacking application. The HT is a 16-bit counter and the count is packetized every 5000 cycles. This is a passive attack where the data path and delay of the original packets are not affected, making detection of the HT close to impossible. There may be multiple such HTs with counters; the area and power overhead of an HT with a small counter is insignificant in a large MPSoC design, so they are hard to detect. The adversary can apply data mining techniques to the collected data to infer information such as which application is running on the system. This attack uses eavesdropping to extract user behavior and violates the privacy of the user. Ahmed et al. [3] discuss what a remote access Hardware Trojan can do to the confidentiality of NoC traffic, highlighting the severity of the attack's impact on multi-tenant servers. A remote access Hardware Trojan has a relatively small area footprint compared to a typical HT and can be used by a remote attacker to steal confidential information. The HT can have a functionality as simple as leaking the packet counts of a particular switch in a time window; after stealing the information, it packetizes and transfers it to the remote attacker for further processing. Similar to [4], the attacker uses an ML technique, specifically an artificial neural network, to infer properties of the application running on the system and to reverse engineer the architectural design. The authors show that it can reveal information such as the router micro-architecture, cache organization, processor platform, and NoC configuration. Four remote access Hardware Trojans in a 64-node mesh can reveal the running application with an accuracy of 80%-98% in different scenarios.
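The essence of these traffic-analysis attacks is that even a bare packet count per time window is a usable feature for inference. The sketch below is a simplified illustration: a Trojan-style counter emits per-window counts, and the attacker matches the observed count profile against reference profiles of known applications with a nearest-neighbour distance. The window length, profiles, and application names are hypothetical.

```python
# Illustrative traffic-analysis pipeline built on leaked per-window packet counts.

def count_packets(timestamps, window=5000):
    """Trojan side: number of packets observed in each window of `window` cycles."""
    if not timestamps:
        return []
    counts = [0] * (max(timestamps) // window + 1)
    for t in timestamps:
        counts[t // window] += 1
    return counts

def classify(observed, reference_profiles):
    """Attacker side: nearest-neighbour match of the observed count profile."""
    def dist(a, b):
        n = min(len(a), len(b))
        return sum((x - y) ** 2 for x, y in zip(a[:n], b[:n]))
    return min(reference_profiles, key=lambda app: dist(observed, reference_profiles[app]))

# Hypothetical reference profiles (packets per 5000-cycle window) and a leaked trace.
profiles = {"streaming": [40, 42, 41, 39], "crypto": [5, 90, 5, 88], "idle": [2, 1, 2, 1]}
leaked = count_packets(list(range(0, 20000, 125)))   # ~40 packets per window
print(classify(leaked, profiles))                    # -> "streaming"
```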
#### 3.1.2 Integrity

Integrity of electrical NoCs is less addressed compared to confidentiality. Sepulveda et al. [71] assume that the NoC IP is malicious. The paper describes three stages of the HT: (1) Trojan design and insertion, (2) malicious behaviour activation, and (3) execution of the attack. Network interfaces are not considered malicious because integration of the network interface needs in-house development. The malicious NoC consists of compromised routers in both partially deactivated and fully deactivated modes. The malicious router is capable of replacing a portion of incoming packets using the information in a _malicious data_ register [71]. The HT that tampers with and modifies packets in the router results in overheads of 1.3%, 0.1%, and 0.3% in area, power, and performance, respectively. Apart from attacking integrity, the authors present two other HTs in the router to spoof packets and launch replay attacks.

#### 3.1.3 Availability

In this section, we review attacks on availability that utilize compromised NoC components. JS et al. [46] discuss a bandwidth denial-of-service attack that disrupts the availability of resources for programs running on an MPSoC and directly affects their performance. Some applications' performance depends heavily on specific IPs in the MPSoC; for example, a memory-intensive task depends heavily on the memory controller. The proposed DoS attack tries to hinder the traffic flow to such hotspots and cause application performance degradation. The authors show the proposed attack degrades packet latency in the range of 14.5% to 72%, depending on the severity of the attack. The HT is inserted in the standard four-stage virtual channel router, with activation methods based on software-hardware collusion, time-based triggers, and traffic-characteristic-based triggers. The victim node selection logic of the HT selects a victim IP whose disruption will be noticeable, using heuristics such as a high ingress/egress rate. Finally, the traffic flow manipulation module of the HT affects the arbitration and allocation stages of the packets to slow down traffic and hinder performance. The HT results in negligible overheads of 4.32% and 0.014% in area and power, respectively. A similar threat model is used in the discussion of [37]. Daoud and Rafla [34] propose an HT that launches a DoS attack by misrouting packets, causing deadlocks and virtual link failures. The HT can be implemented in a malicious router with only 0.2% additional overhead, which is small enough to make detection difficult. The HT has inactive and waiting states, which are even harder to detect. In the attacking state, the router simply misroutes a packet to an incorrect output port; for example, if the packet needs to be switched to the south output port according to XY routing, the HT directs it to the east output port. The experimental results show that the attack reduces the number of packets received by the destination. Manju et al. [55] also describe a DoS attack that misroutes packets at routers and attacks a selected set of nodes. The misrouting also affects flow control and results in injection suppression, which eventually freezes communication in the NoC. Similar to the threat model used in [34], this paper also assumes a malicious router; furthermore, the authors assume there is only one such infected router, to avoid a significant change in power consumption that would lead to detection of the HT. The HT is inserted into the router at the pre-silicon stage, either by an adversary with access to the design or by an untrusted CAD tool. The HT maliciously assigns the head flit of a packet to a wrong output port, and the rest of the packet follows the misrouted path due to wormhole routing. Misrouted packets carrying cache coherence messages significantly degrade performance. When running SPEC CPU benchmarks [43], the HT-infected router in the NoC results in only an 80% delivery rate, while 20% of packets are lost ping-ponging inside the NoC.
The attack also increases average packet latency by 87% compared to the baseline without the HT. JYV et al. [47] propose a novel HT-induced DoS attack on the NoC. Unlike previous HTs triggered by special timing or external triggers, the proposed HT is triggered by a special bit pattern in the message. The paper discusses four HTs that can be integrated into the router, which can change the flit count, address, head bit, and tail bit. Once a flit enters the router through a buffer, the HT modifies a field or bit in the flit, and the router reacts differently depending on the field changed. For example, the flit quantity Trojan can change the header field that represents the number of flits in the packet; a mismatch between the actual number of flits and the value in the header field results in abandoning the packet and re-transmitting. The HTs changing the tail and head bits result in 63% and 71% throughput reduction, respectively, and experimental results show that address changes and flit count changes cause even more throughput reduction. The results show a similar trend in average packet latency and the total number of packets received. Rather than placing HTs at routers or network interfaces, Boraten and Kodi [19] introduce a novel DoS attack on the NoC via lightweight HTs at compromised links. This HT is capable of inspecting packets and injecting faults so that the error correction mechanism is triggered through the error correction code. Errors are injected into the NoC such that the error detection mechanism can detect the error but is unable to correct it, resulting in re-transmission; single error correction with double error detection is an example of such a simple error correction code. Repetitive and frequent fault injection results in a DoS since the majority of the available bandwidth is consumed by re-transmissions, causing back-pressure and network resource starvation. The proposed HT has an externally controlled kill-switch to enable the attack and reduce the chances of detection during the verification process. A single HT in a link incurs less than 1% of the total power, which makes multiple compromised links feasible; experimental results show that even if all 48 links include an HT, they add only 2% overhead compared to the whole NoC. A single HT can deadlock 81% of the injection ports and at least one link on 68% of the routers in a small number of cycles. All previously discussed attacks on availability utilize a compromised NoC. For the next few attacks, we focus on threat models where IPs are malicious. Sudusinghe et al. [75] use a threat model of a flooding-based DoS attack for their proposed countermeasure. The malicious IP targets a component critical for SoC performance, such as the memory controller. For example, a node with malicious intent can flood packets to a node neighbouring a memory controller. This creates hot-spots around the frequently used and shared memory controller, leading to a DoS. The authors point out the increase in traffic rate of the routers in the path of communication, which results in reduced performance, missed deadlines, and inefficient energy consumption. The next set of efforts focuses on threat models that use multiple malicious IPs working collaboratively to launch attacks on availability instead of a single compromised component. Charles et al. [25, 26] elaborate on distributed DoS (DDoS) attacks on the NoC.
Multiple malicious IPs flood the NoC with useless packets to eat up the bandwidth and disrupt normal communication. Most of the time this flooding targets a critical and shared IP, such as the memory controller, to increase the impact of the attack. Figure 13 shows several scenarios of a DDoS attack in a 3x3 mesh network where the paths of the flooding packets may or may not overlap. The flooded packets fill up buffers and eat up processing time at the routers in the path, causing extremely high latency for legitimate packets. Several flooding-based DoS attack scenarios in a mesh-based NoC have been implemented and evaluated in [36]. The authors present a novel congestion-based attack model to evaluate bandwidth-based congestion attacks on the NoC, using two attack scenarios with two and four malicious nodes, respectively, and exploring different placements of the malicious IPs. The malicious node is capable of generating constant-bit-rate traffic into the mesh-based NoC. They evaluate the impact of these DoS attacks on XY routing and four other adaptive routing mechanisms, namely odd-even, west-first, north-last, and negative-first. The experimental results show that XY routing performs better at lower packet injection rates, while adaptive routing performs better at higher rates; they also reveal higher degradation of network performance with an increasing number of malicious IPs flooding the network. Charles et al. [24] discuss a slightly different threat model that uses multiple malicious IPs for a DoS attack. IPs in the SoC are categorized into secure and non-secure zones based on the trustworthiness of the IPs, and a message authentication code based authentication system is used to preserve packet integrity. When a packet traverses the non-secure zone, its content can be tampered with by a malicious IP, leading to failed message authentication code verification. This results in dropping of the initial packets and re-transmission of new packets; a malicious node can tamper with either the request or the response to force re-transmission. Tampering with a large number of packets in a short interval can lead to a DoS attack, and tampering by multiple malicious IPs can be recognized as a distributed DoS attack.

#### 3.1.4. Authenticity

There are no recent efforts on explicitly attacking authenticity in electrical NoCs. However, the confidentiality countermeasures with authenticated encryption ensure authenticity, as will be discussed in Section 3.2.

#### 3.1.5. Anonymity

Anonymity in electrical mesh NoCs has been studied by [24]. The threat model assumes that the NoC is malicious; specifically, there is an HT in a compromised router that is capable of sniffing the packets routed through it. This HT can be triggered by a malicious program running on another core. It can steal critical information from the data when the payload is not encrypted; if the payload is encrypted, it can gather packets that have the same source and destination. An intelligent attacker can use the gathered data to launch complex cryptanalysis attacks. Sarihi et al. [67] use a similar threat model to [24]. Furthermore, they highlight that header information needs to be kept in plaintext for a router to process requests quickly without costly per-hop decryption. This leads to various kinds of attacks, such as differential cryptanalysis attacks that collect packets between the same endpoints.
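The sketch below illustrates why plaintext headers matter for anonymity: even with encrypted payloads, a compromised router can group sniffed packets by their (source, destination) header fields and hand an attacker per-flow ciphertext collections for cryptanalysis. The packet format and field names are hypothetical.

```python
from collections import defaultdict

def group_flows(sniffed_packets):
    """Compromised-router view: payloads are encrypted, but the plaintext
    (src, dst) header fields still let the attacker bin packets per flow."""
    flows = defaultdict(list)
    for pkt in sniffed_packets:
        flows[(pkt["src"], pkt["dst"])].append(pkt["payload"])
    return flows

# Hypothetical sniffed traffic: payloads are opaque ciphertext blobs.
traffic = [
    {"src": (0, 0), "dst": (3, 2), "payload": b"\x9a\x11..."},
    {"src": (1, 3), "dst": (2, 0), "payload": b"\x4f\xe2..."},
    {"src": (0, 0), "dst": (3, 2), "payload": b"\x77\x03..."},
]

for flow, ciphertexts in group_flows(traffic).items():
    print(flow, len(ciphertexts), "ciphertexts collected")
```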
#### 3.1.6. Freshness

There are no recent efforts on attacking freshness in electrical NoCs.

Figure 13. Four scenarios of distributed DoS attack on NoC in the presence of malicious IPs [26].

### Countermeasures

This section surveys effective countermeasures to defend against the attacks outlined in Section 3.1. Specifically, we discuss the countermeasures in the following six major categories.

#### 3.2.1. Confidentiality

We first review the countermeasures against side-channel attacks and then survey countermeasures to defend against other types of attacks. Wang and Suh (Wang and Suh, 2018) propose an efficient countermeasure against timing side-channel attacks that steal critical information. The solution uses multi-level security modeling of the SoC, with IPs categorized into high and low security zones, and proposes a priority bandwidth allocation scheme enforcing one-way non-interference from the high security zone to the low security zone. This ensures that low-priority traffic does not affect high-priority traffic, so an attacker cannot use throughput variations of high-priority traffic to leak sensitive information. Furthermore, the proposed solution ensures that low-priority traffic is not starved or driven into a DoS by introducing static lower bounds on low-priority traffic bandwidth. The countermeasure can be implemented in routers with minimal power and area overhead while successfully eliminating the one-way side channel from high- to low-security-zone traffic. The authors also propose two physical networks with spatial or temporal network partitioning to fully eliminate the timing side channel in both directions. Two mechanisms to avoid timing side-channel attacks by a malicious IP have also been proposed. One inserts random dummy data among the valid data; the slave can distinguish the actual data using the random number, and the interleaved dummy data obfuscates power values, making it hard to correctly guess the actual data values using power side-channel analysis. So far, we have discussed countermeasures against side-channel attacks; we now focus on security countermeasures against other types of attacks. Ancajas et al. [7] proposed Fort-NoC, a solution for snooping and eavesdropping attacks in a compromised NoC. The solution uses a series of techniques to provide both reactive and proactive multi-level protection. The data scrambling technique scrambles the critical data in SoC firmware before handing it over to the NoC's network interface; this hampers the backdoor activation of the HT and also makes the leaked data incomprehensible to the attacker. The packet certification technique simply has the SoC firmware add an encrypted tag at the end of the packet; packets with invalid tags are discarded, making it harder for covert communication initiated by the compromised NoC. The node obfuscation technique decouples and hides the source and destination of a given communication. These three Fort-NoC techniques have minimal performance overheads of 3.8%, 2%, and 0.01% in terms of packet latency, highlighting that multi-level protection can mitigate snooping attacks in a compromised NoC with low overhead.
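A minimal sketch of the packet-certification idea follows, assuming a keyed tag shared between the sender and receiver firmware: the sender appends a short tag computed over the packet, and the receiver discards any packet whose tag does not verify. Python's standard HMAC is used here only as a stand-in for whatever lightweight tagging scheme a real NoC would implement in hardware; the key and packet contents are hypothetical.

```python
import hmac, hashlib

KEY = b"per-soc-firmware-key"        # hypothetical key provisioned at boot

def certify(packet: bytes) -> bytes:
    """Sender-side firmware: append a truncated keyed tag to the packet."""
    tag = hmac.new(KEY, packet, hashlib.sha256).digest()[:4]
    return packet + tag

def validate(certified: bytes):
    """Receiver-side firmware: drop packets whose tag does not verify."""
    packet, tag = certified[:-4], certified[-4:]
    expected = hmac.new(KEY, packet, hashlib.sha256).digest()[:4]
    return packet if hmac.compare_digest(tag, expected) else None

wire = certify(b"MEM_READ addr=0x40001000")
print(validate(wire))                                  # valid packet passes
print(validate(wire[:-1] + bytes([wire[-1] ^ 1])))     # tampered tag -> None
```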
Charles and Mishra [27] propose a lightweight encryption scheme using incremental cryptography. They highlight that most memory request and response communication inside the NoC differs by only a few bits, and incremental encryption compares two consecutive packets and encrypts only the difference. The authors utilize this feature to minimize the data bits needed for encryption and decryption, using the Hummingbird-2 cipher [35] as the encryption algorithm. The authors observed up to 57% (30% on average) performance improvement over the traditional mechanism with only 2% overhead. Sepulveda et al. [71] use traditional encryption with the counter mode of the Advanced Encryption Standard (AES-CTR) to protect against eavesdropping attacks. They use a unique key for each encryption and use linear feedback shift registers to generate an initialization vector for each instance; the encryption of the initialization vector and counter generates the unique keystream for encryption. A lightweight encryption scheme that utilizes the two basic concepts of chaffing and winnowing [66] and the all-or-nothing transform [65] is proposed by Weerasena et al. [82]. The all-or-nothing transform has a lightweight quasigroup-based implementation, and the chaffing and winnowing process utilizes inherent traffic characteristics of the NoC to speed up the overall encryption. The implementation of this encryption scheme at the network interface is fast and lightweight compared to the AES-CTR based countermeasure [71]. These results validate the fact that traditional security protocols do not adapt well to resource-constrained NoCs. Although traditional authenticated encryption mitigates eavesdropping attacks, it can introduce unacceptable overhead in a resource-constrained NoC. Charles et al. [23] propose a lightweight digital watermarking scheme to detect eavesdropping attacks. This solution replaces costly authentication tag generation while maintaining confidentiality through the existing encryption scheme. A watermark is embedded into every packet stream; both the watermark encoding and decoding logic are implemented in the NI, with the sender encoding and the receiver decoding. The receiver identifies a packet stream as invalid in case of an attack, or valid otherwise. Hussain and Guo [44] introduced another countermeasure to detect attacks that alter the packet source and destination to leak packets. The proposed lightweight authentication scheme generates a tag by authenticating both the source and destination, and the tag is scrambled with the packet data. The tag inserted by the source processing element is verified by a packet-leak detection unit in the destination processing element; if the tag or either address is altered, the destination detects and invalidates the packet. A two-tier protection mechanism is introduced by [62] to protect against eavesdropping attacks. A snooping invalidation module is the first tier: it is implemented in the output queue of every NI and discards packets with invalid header flits, using additional encoding information provided by the processing element to detect snoop packets. The second tier of protection goes after the source of the attack, a malicious program running on a processing core; the detector, implemented at the interface between the NI and the processing element, needs to observe traffic ratios for a few hours to detect the source of the attack. This two-tier countermeasure protects the NoC-based SoC from snooping attacks while reducing application execution time. A Simulated Annealing based randomized routing mechanism is proposed by [4] to overcome traffic analysis attacks. Since fully randomized routing degrades the performance of the NoC, the authors used parameterized Simulated Annealing that can balance security against traffic analysis with performance.
The key idea of the countermeasure is to obfuscate traffic flows so that the attacker cannot launch successful data mining techniques. Simulated Annealing based randomized routing makes the path of the packet unpredictable so that the features for the Machine Learning(ML) model is obfuscated. This method reduces the user profile identification accuracy from 98% to 15%. An attack proposed by [3] has the similar basic intuition as [4]. Therefore, the solution presented by [3] to overcome ML-based attacks on eavesdropped data can be used. The solution is routing obfuscation through adaptive routing mechanism. This obfuscation will confuse the external adversary who conducts data mining techniques on these data. Similar to previous works, this countermeasure has configurablity to trade-off between security and performance. #### 3.2.2 Integrity Sepulveda et al. [71] proposed a tunnel based solution that protects electrical NoC against message modification attacks. The solution assumes that the network interface is trustworthy. The authors use SipHash-2-4 as the algorithm to generate the message authentication code. Siphash [11] is popular for generating message authentication code for shorter inputs. The proposed countermeasure generates 64-bit message authentication code and appends it to the message. The receiver regenerates the tag from that end and compares the tag for possible tampering of the message. Any mismatch between the two tags will indicate unauthorized modification of the message content. Furthermore, the authors makes the countermeasure configurable by controlling Siphash rounds. #### 3.2.3 Authenticity Although there are no threat models solely focused on authentication in recent literature, the solutions for confidentiality that provide authenticated encryption also ensures authenticity. Although the SipHash-2-4 implementation in [71] focuses on integrity, the inclusion of the source field in hash calculation guarantees that the data in the source field is authentic. Several other attempts to develop lightweight authenticated encryption can be found in the literature. The packet certification technique with an XoR cipher in [7] uses a tag to be validated by the receiver. The sender will append a tag to the message which can only be validated by the receiver. Another authentication scheme is proposed by Boraten and Kodi [20]. It is a reconfigurable packet validation and authentication scheme by merging two robust error detection schemes, namely, algebraic manipulation detection and cyclic redundancy check. Intel's TinyCrypt [45] is a cryptographic library targeting resource-constrained IoT and embedded devices. It provides fundamental cryptographic building blocks consisting of hash functions and message authentication codes that can be used in ensuring authenticity in NoC. #### 3.2.4 Anonymity A lightweight anonymous routing protocol to ensure anonymity inside NoC is introduced by [24]. The proposed technique initiates a tunnel between the sender and receiver through a three-way handshake. The handshake uses per hop encryption and decryption to ensure a secure tunnel creation. After the tunnel creation, a router in the path only knows about the preceding and following routers. Therefore, the data transfer in tunnel ensures anonymity. The proposed anonymous routing ensures anonymity with only 4% impact on performance while traditional onion routing implementation introduces 1.5X performance degradation. Sarihi et al. [67] propose an anonymous routing mechanism in NoC. 
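As an aside before detailing Sarihi et al.'s scheme, the route-randomization idea behind the traffic-analysis defenses of [3, 4] above can be pictured with a much-simplified sketch: each packet still takes a minimal path on the mesh, but the interleaving of X and Y moves is randomized with a tunable probability that plays the role of the security/performance knob. This is only a conceptual stand-in (the cited works use parameterized simulated annealing and adaptive routing, respectively), and the function names and the 0.7 default are assumptions.

```python
import random

def obfuscated_route(src, dst, p_random=0.7, rng=random):
    """Return a minimal-length route from src to dst on a 2D mesh.
    With probability p_random the X/Y moves are interleaved in a random order
    (traffic-analysis obfuscation); otherwise plain XY ordering is kept."""
    (sx, sy), (dx, dy) = src, dst
    xmoves = [(1 if dx > sx else -1, 0)] * abs(dx - sx)
    ymoves = [(0, 1 if dy > sy else -1)] * abs(dy - sy)
    moves = xmoves + ymoves
    if rng.random() < p_random:
        rng.shuffle(moves)               # random interleaving makes the path unpredictable
    path, (x, y) = [src], src
    for mx, my in moves:
        x, y = x + mx, y + my
        path.append((x, y))
    return path

route = obfuscated_route((0, 0), (3, 2))
assert route[0] == (0, 0) and route[-1] == (3, 2) and len(route) == 6
```

Because every candidate path is still minimal, the obfuscation probability can be lowered toward zero to recover deterministic-routing performance when the security requirement is relaxed.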
Sarihi et al.'s routing mechanism uses an encrypted destination address and prevents any malicious router along the path from collecting the packets of the same flow. Instead of a plain-text source and destination, approximate routes and turns are embedded into the packet together with the encrypted destination. Hummingbird-2 is used as the lightweight encryption needed in the scheme. The packets in the NoC are divided into secure and non-secure packets. A secure packet ensures that its destination and source are not leaked to any unauthorized party through the proposed routing mechanism. This solution incurs a minimal area overhead of 1% and a power overhead of 10%.

#### 3.2.5. Availability

JS et al. [46] propose a runtime detection technique for bandwidth DoS attacks. The authors present RLAN (Runtime Latency Audition for NoCs), an auditor that monitors traffic characteristics of the system. RLAN does not need any support from the NoC. The countermeasure is implemented in the firmware module that interfaces the NI with the processing element. RLAN performs this in two steps: (1) RLAN carefully injects selected packets into the network, and the NoC firmware monitors anomalies in the packet transfer delays; (2) RLAN looks for comparable latencies of two packets that overlap in their paths (spatial similarity) within the same time frame (temporal similarity). RLAN generates a probe packet by slightly altering the destination and source so that it has the same hop count as the legitimate packet. Before sending the packet for packetization, RLAN tags the packet with a timestamp to establish latency thresholds. An adequate sample size can then be used to detect a DoS attack by comparing the latencies between an RLAN-injected packet and its original counterpart. RLAN has overheads of 12.73%, 9.34% and 5.4% in area, power and network latency, respectively. A monitoring system to avoid bandwidth-based DoS attacks is proposed by [37]. The denial-of-service probe collects traffic statistics of packets generated by processing elements. Any unnatural condition in the traffic flags a potential DoS attack. The authors point out the need for an effective way to minimize false positives, i.e., incorrectly classifying normal traffic anomalies as a DoS attack. Daoud and Rafla [34] propose a method for real-time detection and localization of DoS attacks. They also provide a prevention technique with a new routing protocol that avoids interaction with malicious nodes by detouring traffic around them. Detection of the attack is straightforward: the downstream router detects a routing-path violation by comparing the incoming packet against the XY routing protocol decision. Once the downstream router detects a misrouted packet, it informs the operating system and the neighbouring routers, which then flag the upstream router of that packet as malicious. This can be visualized as the malicious router being covered by a shield ring of neighbouring routers. The neighbouring routers of the malicious router detour the packets, avoiding any interaction with the malicious router while transferring them. This rerouting module has only 0.4% area overhead compared to the base router.
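The detection step of Daoud and Rafla's scheme reduces to a deterministic check, sketched below under the assumption of a 2D mesh with XY routing; a downstream router flags its upstream neighbour whenever the packet could not have been sent to it by a correct XY decision. Names and coordinates are illustrative only.

```python
def xy_next_hop(router, dest):
    """Deterministic XY routing on a 2D mesh: move along X first, then along Y."""
    (x, y), (dx, dy) = router, dest
    if x != dx:
        return (x + (1 if dx > x else -1), y)
    if y != dy:
        return (x, y + (1 if dy > y else -1))
    return router                      # already at the destination (local delivery)

def is_misrouted(upstream, me, dest):
    """A downstream router flags the upstream router if the packet should not have come here."""
    return xy_next_hop(upstream, dest) != me

# A packet heading to (3, 2) that arrives at (1, 1) from router (1, 0) violates XY routing,
# because (1, 0) should have forwarded it along X to (2, 0) first.
assert is_misrouted(upstream=(1, 0), me=(1, 1), dest=(3, 2)) is True
assert is_misrouted(upstream=(1, 2), me=(2, 2), dest=(3, 2)) is False
```

In the full scheme, this flag is what triggers the operating-system notification and the shield ring of detours described above.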
The countermeasure increases packet latency due to detouring of the packet and it will incur 0.6% additional power overhead. Manju et al. [55] describes Trojan aware routing mechanism for DoS attacks. This mechanism is implemented in all the routers. Trojan aware routing has 3 stages. The Trojan detection stage detects whether the neighbouring/upstream router is Trojan infected by the current router. The detection module of Trojan aware routing will detect violation of routing protocol and indicates it using two flags (Boolean alert flag and direction flag) in the current router. The dynamic shielding phase will create a virtual shield surrounding the malicious router. The router who detects the Trojan will send alert to the neighbouring routers of the malicious router so that the neighbouring routers update their flags. The bypass routing stage is a modified XY routing. At each router, it will look at the two flags and activate detour if the next hop is a malicious router, otherwise the normal XY routing will continue. Trojan aware routing shows reduction of 38% packet latency compared to HT infected router and has only 7% increase of packet latency compared to baseline NoC. Trojan aware routing has only 6% reduction of throughput compared to baseline. In terms of hardware overhead, Trojan aware routing has 2.78% of area and 3% of leakage power overhead. Charles et al. [25, 26] present a mechanism to detect and localize distributed DoS attacks by multiple malicious IPs by monitoring communication patterns. At design time, the communication patterns are analyzed and parameterized as packet arrival curve at each router and packet latency curve at each IP. The routers will store packet arrival curve and use it to detect a real-time violation by comparing it with upper-bound from the curve. Once a router flags a detection of DDoS attack, the corresponding IP of that router is responsible for localizing the attack. The IP will compare with packet latency curve upper bound to determine abnormal latencies. Once it sort out the delayed packet, the IP will communicate with the routers in the path for congestion information and use that information to localize the malicious IPs. The experimental results show that attack detection is faster when more malicious IPs are available. Experimental results also reveal fast localization time. The proposed approach has 6% and 4% area and power overhead, respectively. Fang et al. [36] talk about robustness of the DDoS attacks on mesh-based NoC with different malicious IP numbers and placements. Among five routing algorithms (XY routing and four adaptive routing), XY routing performs better at traffic injection rates less than 0.65. However, the authors suggest that for higher traffic injection rates adaptive routing algorithms perform better. They also propose a set of design guidelines to elevate system performance in a DoS attack scenario. They suggest a hybrid scheme of deterministic routing and adaptive routing that will switch based on the traffic injection rates. Fang et al. [36] propose a heuristic for DoS attack initiated by exploiting error correction mechanism in NoC. The threat detection mechanism implemented in the router will detect threat by probing and monitoring links with transient and frequent faults. The detector will examine all incoming flits in terms of whether they are having a fault and whether the fault is seen in similar locations earlier. 
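One plausible reading of the fault-location check just described is sketched below: the router compares each received flit with its ECC-corrected version and flags the link once faults recur at the same bit positions, since random transient errors rarely repeat in place. The bit width, threshold and interface are illustrative assumptions rather than the exact heuristic of [36].

```python
from collections import Counter

class LinkFaultMonitor:
    """Flags a link when bit-faults keep appearing at the same flit positions,
    which is characteristic of an HT-injected fault rather than a random transient."""
    def __init__(self, repeat_threshold=3):
        self.position_counts = Counter()
        self.repeat_threshold = repeat_threshold

    def check_flit(self, received: int, corrected: int, width: int = 32) -> bool:
        fault_bits = received ^ corrected             # positions repaired by the ECC stage
        for bit in range(width):
            if fault_bits & (1 << bit):
                self.position_counts[bit] += 1
                if self.position_counts[bit] >= self.repeat_threshold:
                    return True                       # suspected deliberate fault on this link
        return False

mon = LinkFaultMonitor()
flagged = [mon.check_flit(received=0x0F0F ^ (1 << 7), corrected=0x0F0F) for _ in range(3)]
assert flagged == [False, False, True]                # same bit faulted three times -> flag
```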
After a successful detection, the authors propose multiple link obfuscation techniques to mitigate the effects of the proposed DoS attack. When an artificial fault in the packet is tagged by the detection module, the mitigation module will look for the fault location (whether the fault is in the header, payload or both). Next, it will use either shuffle, scramble or invert bits to obfuscate the target of the compromised link. The mitigation module may have to run multiple times to choose correct obfuscation technique. The experiment results show successful mitigation of the proposed attack with only 2% and 6% area and power overhead, respectively. Finally, the authors proposed optimized algebraic manipulation detection as a solution to maintain integrity in malicious links. The authors in [47] propose a mitigation technique inside the router that may have Trojan to alter header fields (no of flits, destination, head flag bit, and tail flag bit) to cause DoS attacks. The basic idea is to shuffle the bit fields among themselves and other fields to obfuscate information. Furthermore, the authors propose a single bit error code correction on top of the obfuscation to mitigate effects from Trojans. Immediately after the flit enters the router, the shuffle encoder will shuffle header fields. The shuffling pattern is determined by the last few bits of the actual payload of the packet. For example, there will be eight shuffling patterns determined by the last three bits of the payload. After the router goes trough it's all stages, the de-shuffle returns the header fields to the normal bit pattern. Since the HT is inside the router, it cannot do meaningful attack by bit alterations from shuffled bits. For example, the HT is unaware of the position of the bits representing number of flits in the packet. Experimental results show that the proposed methodology is able to recover 67% and 45% in changing number of flits and destination address, respectively. The mitigation technique has area overhead of 21% compared to baseline router. Sudusinghe et al. [75] observe that the real benchmarks in general purpose system may have unpredictable traffic patterns, and therefore, simple statistical patterns would not adapt well for them. So they propose a ML based approach for DoS attack detection. They have explored several supervised ML approaches for the DoS detection. During the design time, the data for both normal and attack scenarios are collected using known applications. A ML model is trained and stored in a dedicated core for security. At run-time, the probes are used to collect data from routers. Separate physical NoC is used to send data to the security core for prediction. The security core will predict the traffic pattern as attack or normal scenario. They have used 20 distinct features for this ML detection. Two physical NoCs have 6% and 7% area and power overhead, respectively. They have compared performance of 12 different ML models - XGBoost algorithm performs better with an accuracy of 99% in successfully detecting DoS attack in 4x4 mesh based NoCs. Charles et al. [24] propose a trust aware routing mechanism to overcome DoS attack in the presence of multiple malicious IPs. The authors model trust of routers which calculates the trust based on feedback from neighbouring routers and propagate the values through NoC. A neighbouring router will keep on trusting of the upstream router if the packet was not tampered and reduce it's trust if packet is tampered. 
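The neighbour-feedback trust bookkeeping of Charles et al. [24] can be captured in a few lines, as sketched below; the update step sizes, the initial score and the threshold are illustrative assumptions, and in the actual mechanism the scores are also propagated through the NoC rather than kept purely local.

```python
class TrustTable:
    """Per-router trust scores for its neighbours, updated from tamper-check feedback."""
    def __init__(self, neighbours, initial=1.0, threshold=0.5):
        self.trust = {r: initial for r in neighbours}
        self.threshold = threshold

    def feedback(self, upstream, packet_ok):
        """Raise trust when a forwarded packet verifies, lower it when tampering is detected."""
        step = 0.05 if packet_ok else -0.25           # asymmetric update (illustrative values)
        self.trust[upstream] = min(1.0, max(0.0, self.trust[upstream] + step))

    def untrusted(self):
        return {r for r, t in self.trust.items() if t < self.threshold}

table = TrustTable(neighbours=["north", "east", "south", "west"])
for ok in [True, False, False, True, False]:          # "east" repeatedly delivers tampered packets
    table.feedback("east", ok)
assert table.untrusted() == {"east"}
```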
When a particular IP wants to send data, an adaptive trust-aware routing algorithm is then used to route the packets while avoiding untrusted routers. This lightweight mechanism can be integrated with any of the existing authentication protocols to mitigate DoS attacks that exploit authentication checking.

#### 3.2.6. Freshness

There are no recent efforts on attacks or defenses related to freshness in electrical NoCs.

Table 1 shows the summary of the surveyed papers attacking confidentiality, integrity, availability, authenticity and freshness in electrical NoCs and the corresponding countermeasures. The first column shows the main security concept. The second column outlines the threat model. The third and fourth columns provide the countermeasure and the associated overhead in terms of area and power.

## 4. Wireless On-Chip Communication Security

Wireless NoC is promising to mitigate the routing challenges associated with multi-hop communication in electrical NoCs. However, wireless NoCs introduce inherent vulnerabilities due to wireless communication. In this section, we survey recent research efforts in securing wireless NoCs. First, we discuss the threat models and attacks on wireless NoCs. Next, we describe the countermeasures to defend against these attacks.

### Threat Models and Attacks

The attacks on wireless NoCs follow similar threat models as the electrical NoC, with subtle differences due to wireless communication. In this section, we explore the attacks on wireless NoCs in the following categories.

#### 4.1.1. Confidentiality

Lebiednik et al. (Lebiednik et al., 2017) consider multiple attacks in their threat model, and one of them is the eavesdropping attack. The authors consider a single point of attack and avoid scenarios where an unbounded attacker can disrupt the system. They assume that their HT cannot affect the physical layer since the HT is placed in the digital circuit. Furthermore, since the chip is covered with a metallic box, Radio Frequency (RF) signals cannot leak outside and an attacker cannot inject RF signals from the outside. Figure 14 shows a typical wireless chip with a metallic cover.

Figure 14. Wireless NoC based SoC with metallic cover hinders access to an external attacker (Lebiednik et al., 2017).

The authors highlight that the broadcasting nature of messages in a wireless NoC makes it inherently vulnerable to eavesdropping attacks. Pereniguez-Garcia and Abellan (Pereniguez-Garcia and Abellan, 2018) describe multiple attacks in a hybrid NoC. The target system has shared L3 banks distributed over 16 nodes and a private L2 cache at each node. The authors focus on the broadcast cache coherence message communication between L2 and L3 for all the attacks. They label L3 as the sender and the L2 caches as receivers, while the communication is done via the wireless NoC. The authors assume an ideal condition of all L2 and L3 nodes having wireless receivers and transceivers. The electrical NoC is assumed to be secure while the wireless medium is not. In the proposed eavesdropping attack, the attacker captures messages over the wireless medium, which can lead to leaking sensitive information such as passwords and keys. Vashist et al. (Vashist et al., 2017) describe eavesdropping attacks from both external and internal adversaries. The attack is passive and hard to detect in both cases. This is done by the attacker tuning into the unprotected wireless channel. An external attacker needs an external receiver with enough sensitivity tuned to the wireless band used inside the wireless NoC.
An internal attacker will forward eavesdropped packet to a malicious IP. #### 4.1.2. Integrity Pereniguez-Garcia and Abellan (Pereniguez-Garcia and Abellan, 2018) describe unauthorized modification of packets in wireless medium in a hybrid wireless NoC. The attacker changes the content of the message and forwards it to the intended destination. The attacker is capable of full or partial modification of the content compared to the real version of the message. The authors point out the need for novel hashing mechanism with less than 30 cycles to harness fast broadcasting of cache coherence messages via wireless NoCs. #### 4.1.3. Availability We discussed the general attack model proposed by Lebiednik et al. (Lebiednik et al., 2017) earlier which also discusses about DoS attacks. The authors point out that misconfiguration in media access control protocol can lead to DoS attack. For example, two nodes transmitting in the same channel can cause collision and corruption of messages. A rouge node can transfer packets out of turn violating fundamental rule of collision-free media access control protocol. Therefore, repetitive collision can eat-up bandwidth which can lead to bandwidth DoS attack. The proposed attack can lead to throughput drop over 70% in the presence of a selfish node that unfairly consumes bandwidth. DoS attacks on wireless NoC is first discussed by Ganguly et al. (Ganguly et al., 2018). The threat model assumes HT in a processing core injecting dummy packets into the network. These garbage packets will occupy majority of the virtual channels and output ports of a switch. This will result in disturbance of traffic in and out of the switch. Further propagation of this congestion to neighbouring switches will result in DoS attack. Persistent jamming-based DoS attacks by both internal and external attackers have been studied in (Vashist et al., 2017). During the attack period, the proposed attack is capable of making interference on wireless communication. This causes bit errors in contiguous bits known as burst errors. The attack will continue for a long period of time resulting in long contiguous bit errors that cannot be corrected by existing error correction codes. This will result in re-transmission that can eventually lead to DoS attack. The internal attack is initiated by a HT affected wireless interface. Vashist et al. (Vashist et al., 2017) also talks about persistent jamming based DoS attacks. They highlight the fact that it is impractical to use traditional channel-hopping technique as a countermeasure on wireless NoC due to use of limited number of channels. This attack has similar effect of burst errors as discussed in (Vashist et al., 2017). Ahmed et al. (Ahmed et al., 2018, 2019) also outline a similar threat model in wireless NoC for persistent jamming based DoS. The attack was conducted on multicore-multichip topology where two wireless interfaces are available in a chip for inter-chip communication. #### 4.1.4. Authenticity Lebiednik et al. (Lebiednik et al., 2017) discuss spoofing attack on wireless NoCs. Wireless being the broadcast medium, it is inherently vulnerable to spoofing attacks. Malicious cores use this to impersonate other cores by changing the source address of flits. The authors highlight that spoofing can be used to get access to unauthorized regions of memory or steal unauthorized information. 
Furthermore, the spoofing attack on authenticity can lead to attack on availability by responding with wrong information to legitimate signals by a rouge node. Lebiednik et al. (Lebiednik et al., 2017) discussed this attack in detail. The spoofing attacks are conducted from inside since the chip package blocks external signals. The HT is in the digital circuit of wireless NoC. The attacker sends spoof cache invalidations. The authors show that even 10% such spoof invalidations can lead to 27% drop in NoC performance. Pereniguez-Garcia and Abellan (Pereniguez-Garcia and Abellan, 2018) also talks about impersonation in hybrid wireless NoCs. The attacker simply tricks other IPs of the wireless NoC that they are having legitimate communication with actual IP while they are actually communicating with a malicious IP. #### 4.1.5. Anonymity There are no recent efforts on attacking anonymity in wireless NoCs. #### 4.1.6. Freshness Pereniguez-Garcia and Abellan (Pereniguez-Garcia and Abellan, 2018) describe replay attack. The attacker will store a valid message exchanged over the wireless NoC. Then the attacker will inject the same packet without modification. The receiver will waste time or conduct an unwanted action for such artificially delayed redundant messages. ### Countermeasures This section surveys countermeasures to defend against the attacks discussed in Section 4.1 categorized by security concepts. #### 4.2.1. Confidentiality Lebiednik et al. (Lebiednik et al., 2017) propose a set of hardware solutions against multiple attacks including eavesdropping attack. The solution for eavesdropping attack is included as a module in the network interface. The authors use stream ciphers as the low cost solution for eavesdropping attack and highlight the fact that Advanced Encryption Standard (AES) (Lebiednik et al., 2017) is not suitable for resource-constrained wireless NoCs. The proposed solution will take bit-wise decision of flipping bits using symmetric keys. The proposed Py (Vashist et al., 2018) algorithm will take only 2.85 cycles compared to 20 cycles of AES. The area and power overhead of Py is minimal while having less network saturation compared to AES. However, Py is vulnerable to liner distinguishing attacks. Although (Pereniguez-Garcia and Abellan, 2018) provides solutions for different kind of attacks they do not consider eavesdropping. They reasoned that all broadcast messages through wireless NoC only contain memory address and destination node ID and they are not critical information to hide. Many researchers argue that eavesdropping and collecting broadcasting messages can lead to complex attacks to disrupt confidentiality and privacy of SoC through machine learning based techniques. Vashist et al. (Vashist et al., 2018) propose eavesdropping attack prevention technique embedded to each transceiver. It performs XOR-based data scrambling based encoding in each sensitive data transmission. The header bits are kept as plaintext for faster routing. This method is extremely lightweight and fast due to XOR operation. For internal eavesdropping attack, the authors propose a low complexity rule checker on destination address at wireless interface. #### 4.2.2. Integrity Usage of a hash function is proposed by (Pereniguez-Garcia and Abellan, 2018) to overcome integrity issue in wireless NoC. The sender appends a tag with hash value at the end of the packet while receiver validates the tag upon receiving. 
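The append-and-validate flow just described can be sketched as follows. The 11-byte (88-bit) tag length mirrors the SPONGENT configuration discussed next, but the keyed HMAC-SHA-256 used here is only a stand-in for the cited hash, and the shared key is an assumption.

```python
import hmac, hashlib

KEY = b"process-bound-key"     # assumed shared secret; the cited scheme's keying is not detailed

def tag(payload: bytes) -> bytes:
    # 88-bit tag to mirror the SPONGENT output length; HMAC-SHA-256 is only a stand-in here.
    return hmac.new(KEY, payload, hashlib.sha256).digest()[:11]

def send(payload: bytes) -> bytes:
    return payload + tag(payload)                      # sender appends the tag to the packet

def receive(packet: bytes) -> bytes:
    payload, t = packet[:-11], packet[-11:]
    if not hmac.compare_digest(t, tag(payload)):
        raise ValueError("integrity check failed: packet was modified in the wireless medium")
    return payload

assert receive(send(b"coherence: invalidate line 0x2A")) == b"coherence: invalidate line 0x2A"
```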
The authors use a lightweight hash function of SPONGENT family (Girard et al., 2018) as the most reliable and efficient solution for wireless NoC. Specifically, they have used SPONGENT configuration which results in a 88-bit length hash value and operates using 8-bit blocks. The authors assume that it will take 450 cycles for hash function to generate output in 1GHz frequency. #### 4.2.3. Availability Ganguly et al. [38] propose a design methodology to mitigate effect of DoS attack. The authors propose a small-world topology which is known for its inherent resilience to DoS attacks. The small-world topology has many short-distance links and a small number of long distance shortcuts. The topology has both wired and wireless links. The authors have optimized the defence against DoS attack by simulated annealing to mitigate spreading of DoS attack. Experimental results show that the proposed methodology assists high data transfer rate with low power dissipation. A DoS prevention using unfairness detection is proposed as a module in Prometheus [51]. The module monitors possible collisions violating collision avoidance media access control protocol protocol. After a successful detection of collision, the wired NoC of hybrid wireless NoC is used to send the message to suppress transmissions of the malicious node. Furthermore, the operating system will turn off the communication of the malicious node by ID. In a case of collision-based media access control protocol where collision is part of the protocol, the detection is hard. The authors define an unfairness ratio and a configurable threshold of it's value to identify malicious nodes. Vashist et al. [77] propose an attack detection and mitigation technique for jamming-based DoS attack. They use an ML-based model to distinguish random burst errors from burst errors by a DoS attack. After a successful detection, a defense unit at each transceiver is used to mitigate the attack depending on whether the attack is internal or external. In case of an external attacker, all the wireless interfaces will be disabled from data routing, instead wired links will be used. In case of an internal attacker, the power supply will be removed from the specific Trojan infected transceiver. The authors in [78] propose a modification of existing Built-In-Self-Test (BIST) framework used for testing to monitor jamming-based DoS attacks. The defense mechanism is similar to [77]. The proposed method is able to detect DoS attack with accuracy of 99.87% with only <3% communication overhead and <1% energy overhead. Ahmed et al. [2, 5] use a similar approach of utilizing BIST framework to detect persistent jamming based DoS attacks. They used both regular ML-model and adversarial ML classifier for this detection. A reconfigurable media access control protocol is used as the countermeasure to mitigate DoS attack. The reconfigurable media access control protocol is implemented at transceiver. It uses reservation-based media access control protocol in normal scenario and special attack mitigating media access control protocol otherwise. ML and adversarial ML classifiers show 99.87% and 95.95% accuracy in attack detection, respectively. Reconfigurable media access control protocol helps to mitigate the effect of DoS attack with 1.44x and 1.56x latency impact for internal and external attack, respectively. The authors have used K-nearest neighbours algorithm for the detection part of DoS attack in previous work with accuracy over 99%. 
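To make the ML-based jamming detection concrete, the toy sketch below classifies a burst-error observation with a k-nearest-neighbour vote, in the spirit of the KNN detector mentioned above; the three features and the training points are invented for illustration and are not the feature set used in the cited works.

```python
import math

# Each sample: (features, label). Features are an assumed illustration:
# (burst length in bits, bursts per 1k flits, mean gap between bursts in cycles).
TRAINING = [
    ((2,  1,  900), "random"), ((3,  2,  700), "random"), ((1,  1, 1200), "random"),
    ((18, 30,  15), "attack"), ((25, 42,  10), "attack"), ((20, 35,  12), "attack"),
]

def knn_label(sample, k=3):
    """Classify a burst-error observation by majority vote among its k nearest neighbours."""
    nearest = sorted(TRAINING, key=lambda t: math.dist(sample, t[0]))[:k]
    labels = [lbl for _, lbl in nearest]
    return max(set(labels), key=labels.count)

assert knn_label((22, 38, 11)) == "attack"     # long, frequent bursts -> persistent jamming
assert knn_label((2, 1, 1000)) == "random"     # short, sparse bursts  -> ordinary transient errors
```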
The other parts of that earlier solution are the same, with a few subtle differences.

#### 4.2.4. Authenticity

RF power analysis is used in [51, 52] to detect spoofing attacks in wireless NoCs. The authors also highlight the impracticality of using asymmetric-key signature schemes in NoCs despite their use in traditional wireless networks. The proposed solution measures the power level of the received transmission. Due to the topology, an effective source address can be derived from the observed power level. A mismatch between the effective source address and the actual address in the packet header is recognized as a potential spoofing attack. To overcome the challenge of correctly identifying nodes equidistant from a particular node, the authors propose placing attack detection modules at each corner of the NoC. Pereniguez-Garcia and Abellan (2017) observe that there is no need for both the sender (L3) and the receiver (L2) to authenticate each other using a secret key because the specific messages in the threat model are in the same application context. They suggest only one-way authentication of the sender, which is verified by the receiver. The scheme uses a key bound to a specific process to do this authentication using symmetric cryptography.

#### 4.2.5. Freshness

Pereniguez-Garcia and Abellan (2017) suggest a counter-based scheme to monitor the freshness of the communication. This is a simple counter initialized to a random value by the operating system when creating the process. When broadcasting a message, the counter is incremented by one. The receiver keeps a local variable representing the counter for each sender. Upon receiving a packet, each receiver compares its local counter value with the value inside the packet. A mismatch in the counter flags a redundant message, and it is discarded.

Table 2 shows the summary of the surveyed papers attacking confidentiality, integrity, availability, authenticity and freshness in wireless NoCs and the corresponding countermeasures. The first column shows the main security concept. The second column outlines the threat model. The third and fourth columns provide the countermeasure and the associated overhead in terms of area and power.

## 5. Optical On-Chip Communication Security

Optical NoC design and security analysis is an active research field. In this section, we survey the security of optical NoCs. First, we review the threat models and attacks on optical NoCs. Next, we survey the countermeasures to defend against these attacks.

### Threat Models and Attacks

Bus-based topologies and their variations can be considered the most common topologies in optical NoCs. Therefore, there is a different set of threat models and vulnerabilities compared to mesh-based electrical NoCs. In this section, we explore threat models and attacks on optical NoCs.

#### 5.1.1. Confidentiality

Bashir et al. [14] discuss multiple categories of attacks on optical NoCs. We focus on the general threat model and the specific attack on confidentiality in this section, while other attacks are discussed in subsequent sections. The attack model trusts only the sender and the receiver. In other words, it considers that the NoC and other IPs connected to it can be malicious.
Furthermore, the threat model assumes that an attacker cannot tamper with clock, power or ground lines. Eavesdropping attack on optical NoCs is simple. A malicious IP can simply listen into the shared waveguide and steal sensitive data without altering the original communication. According to Figure 15, station 1 sends message to station 3. Since station 2 is malicious, the HT will turn on and tune the detector (MR inside the detector) on wavelength used by station 1 and 3. Then station 2 can steal critical information intended for station 3. MRs are highly sensitive for temperature variations. Therefore, there is a control system for thermal sensing at different locations in optical NoC and corresponding tuning MRs. However, these sensed thermal data needs to be sent to collection node to calculate relevant tuning adjustments of each MR. In hybrid optical NoCs, electrical NoC is used to send these control signals. Zhou et al. [89] talks about attack on confidentiality when sending optical NoC control signal data (thermal sensing values) through insecure electrical NoC. Chittamuru et al. [29; 30] introduced a spoofing attack through HT in MR tuning circuit. A gateway is responsible for interfacing shared wave-guide of optical NoC to processing cluster. An HT in gateway is able to manipulate tuning circuit of MR to partially tune the neighbouring wavelength. Once it snoops data from ongoing communication, the data will be sent to a malicious node to extract critical information. Multiple side channel attacks caused by resource contention is discussed by Guo et al. [41]. There is a malicious switch sharing common resource such as link with a legitimate communication. The malicious switch competes and intentionally allows the legitimate communication win. The malicious switch observes timing characteristics of the flow and extracts secret information from them. The authors discuss two side-channel attacks of one-way and two-way interference that uses discussed threat model. The authors also discuss an attack that leaks critical NoC temperature related control data that can be used by other attacks. #### 5.1.2. Integrity Bashir et al. [14] use the same basic threat model as discussed in Section 5.1.1 for attack on integrity. The attacker has the ability to actively modify packets of an ongoing legitimate communication. The attacker will simply capture and delete original packet from the bus, modify the content of the packet, and finally place the packet with tampered data on bus with the same destination. Figure 16 visualizes message tampering attack by compromised station 2 on legitimate data transfer. Similar to eavesdropping attack, tuning MR to a selected wavelength by HT is the origin of the attack. Figure 15. Eavesdropping attack on optical NoCs in the presence of HT in Optical Station [14]. As discussed earlier, optical networks are sensitive to temperature since thermal sensing is integrated into optical NoCs. Zhou et al. [88; 89] outline an attack that tampers the thermal-sensing control procedure of MRs. The optical router is the focal point of this attack where the thermal-sensing measurement is happening. HT in optical router will tamper modulation voltage which results in incorrect measurements. This will lead to negative impact on performance and reliability. The authors highlight that these HTs are harder to detect due to their negligible overhead. #### 5.1.3. Availability Multiple HT-based remote DoS attacks on optical NoCs have been studied by [41]. 
The threat model assumes that every node has a optical switching cell for switching of optical signal. A blackhole attack on optical NoC is led by HT inserted in switching cell. In blackhole attack, HT will drop packets without forwarding them to correct local output port. Sinkhole attack is also led by same HT in switch. When an upstream switch wants to forward data to a downstream switch, it considers available wavelengths and modes in the downstream switch. In sinkhole attack, the HT will notify more empty wavelength and modes than what the switch actually have. Both blackhole and sinkhole attacks result in performance degradation which will leads to DoS attack. The authors also elaborate flooding attack where a malicious cell continuously injects dummy packets to induce a DoS attack. #### 5.1.4. Authenticity A spoofing attack can also be easily conducted on top of the basic threat model by [14] which was discussed in Section 5.1.1. Malicious IP/station connected to the shared optical wave-guide can tune it's MR to listen to an on-going communication and impersonate the sender or the receiver. Similar to attack on integrity by [41], attacker can capture and remove a legitimate request from sender. After that either it can respond to the request impersonating as the destination or it can send it's own request back to the wave-guide impersonating as the legitimate sender. Impersonating as the sender can result in attacker obtaining sensitive information from the destination or make destination conduct an unintended task. Zhou et al. [89] also talk about possible spoofing attack in thermal sensing control procedure. As discussed in Section 5.1.2, the optical router measures the raw data for temperature calculation. Then it needs to transfer this data to processing unit through low priority electrical NoC. The unique identifier for optical router in control packet can be changed so that the processing element can calculate and act on different optical router as intended. This will also lead to performance degradation and unreliable communication. The effects of the attack will be different in each instance making it harder to detect these attacks. #### 5.1.5. Anonymity There are no recent efforts on attacking anonymity in optical NoCs. Figure 16. Message tampering attack on optical NoCs in the presence of HT in Optical Station [14]. #### 5.1.6. Freshness The same basic attack model discussed in Section 5.1.1 can be used by an attacker for replay attack (Kumarumar et al., 2017). An HT in the middle of a legitimate communication can tune itself through MR tuning circuit. Then the attacker can store selected messages from the session internally and inject them again to the wave-guide. This can result in destination doing unintended action. Frequent injections can result in unwanted bandwidth utilization affecting legitimate communication. The authors highlight the fact that this attack is possible even with encrypted payload in the packet. ### Countermeasures This section surveys the countermeasures to defend against the attacks outlined in Section 5.1 categorized by security concepts. #### 5.2.1. Confidentiality Bashir et al. (Bashir et al., 2017) propose a layer of security in between optical station and processing elements. The authors highlight that optical networks are more sensitive to latencies in cryptographic operations. Therefore, naive implementations of exiting traditional countermeasures will not adapt well for optical NoCs. 
The authors propose a symmetric encryption based solution for eavesdropping attack. In the first stage of the solution, a key distribution algorithm is proposed which is executed during the boot time. The authors assume availability of specific hardware block for key generation. In co-existence of electrical NoC, the key distribution algorithm will use secure electrical NoC for key distribution. Otherwise, it will use low complexity public-key cryptography named BlueJay for key distribution. The proposed countermeasure uses One-Time Pad which can be executed in one cycle for encryption at sender and decryption at receiver. The First-In-First-Out (FIFO) property of optical bus is utilized for pre-computing and storing of tags for faster execution. To ensure that both sender and receiver use the same key for One-Time Pad, a minor key concat with main key is used. Incrementing of the minor key after every message and the FIFO property of the bus ensures synchronization of the keys used for One-Time Pad. The complete solution for addressing multiple security requirements incur only 1.6% area overhead and 14.2% performance overhead while eavesdropping only solution incurs even less overhead. In a hybrid optical NoC, Zhou et al. (Zhou et al., 2017) use an encryption scheme to ensure confidentiality of control messages transfer via electrical NoC. The fabrication of optical NoC introduces Process Variations. The change of resonant wavelength due to the PV of MR is used as the key for this symmetric encryption scheme. Furthermore, they use simple and lightweight NoR operation in the encryption scheme. Chittamuru et al. (Chittamuru et al., 2017) introduce a process variation based authentication signature scheme to protect photonic networks against snooping attacks in both uni-cast and multi-cast scenarios. Specifically, the packet is encrypted by the process variation profile of detector in destination gateway. Architecture-level reservation-assisted security enhancement scheme is proposed and combined with authentication scheme to further increase the security of optical NoCs. The reservation scheme introduces a secure reservation wave-guide which prevents malicious Gateway Interface stealing information about another Gateway Interface. This framework secures optical NoC against snooping attack with moderate overhead of 14.2% in average latency and of up to 14.6% in energy delay product. Guo et al. (Guo et al., 2017) divide cells into trusted and malicious cells for countermeasure against timing side channel attack. This HT countermeasure limits locally-generated traffic leaving a cell that tagged as malicious. Furthermore, the static and dynamic partitioning of cells based on the trust of each cell will try to isolate sensitive traffic inside secure cells. Both of these approaches provide protection against timing side channels by avoiding malicious traffic been collided with sensitive traffic. #### 5.2.2. Integrity Bashir et al. [14] propose an integrity checking mechanism to avoid unauthorized packet modification. First, they point out that standard hash generation and verification is not practical in optical NoCs because it consumes too many cycles. The authors propose a piggybacking of the hash. When sender wants to send a message, hash generation of a the message starts in parallel of actual sending of the message through optical NoC. The message arrives at the destination without the hash. The hash for the current message will arrive with the next message. 
The receiver will process the message until the hash arrives but commits it only if hash is verified. If verification fails, the message is discarded and already processed action will not be committed. The hashing algorithm will use the same key distributed by key distribution algorithm (BlueJay) which was discussed in Section 5.2.1. Zhou et al. [89] discuss on adding module to check optical sampling during thermal sensing. This module is implemented in the network interface of Electrical NoC because it will not affect the normal optical network communication. The module is capable of checking and correcting raw data sent by an optical router before transferring it to the relevant processing element via electrical NoC. The process can be seen as a theoretical checking and correction. In addition, they also introduce a mechanism for run-time detection of the security status of the whole NoC under the presence of discussed HTs. This scheme uses spiking neural networks for detection of the security status. This will help operating system to take more precautions in the presence of an attack. The experimental results show that the presented mitigation techniques allow secure thermal-sensing in optical NoC with overhead of 3.06% and 2.6% in average latency and energy consumption, respectively. #### 5.2.3. Availability Guo et al. [41] discuss countermeasures against DoS attack inducing HT detection, localization and mitigation. The authors assume that the optimal location for HT is hotspots in optical NoC. In the first step, a preliminary HT localization module tags hotspot locations as potential HT locations. Then, the exact localization algorithm tags HTs by identifying anomalies that surpass the upper-bound of the maximal packet count curve for that node. From initially tagged nodes, address of those abnormally delayed packets are tagged as HTs. Finally, the HT-detection stage has two countermeasures. 
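The ordering trick of the piggybacked hash in [14] is easy to mis-read, so the sketch below spells it out: message i is forwarded immediately, its hash travels with message i+1, and the receiver commits message i only after that hash verifies. The truncated SHA-256 tag and the framing are stand-ins for the actual keyed hash and flit format.

```python
import hashlib

def digest(key: bytes, msg: bytes) -> bytes:
    return hashlib.sha256(key + msg).digest()[:8]

class PiggybackSender:
    def __init__(self, key):
        self.key, self.prev = key, None

    def send(self, msg: bytes):
        frame = (msg, digest(self.key, self.prev) if self.prev is not None else None)
        self.prev = msg
        return frame                          # hash of message i travels with message i+1

class PiggybackReceiver:
    def __init__(self, key):
        self.key, self.pending, self.committed = key, None, []

    def receive(self, frame):
        msg, prev_hash = frame
        if self.pending is not None:          # commit the previous message only if its hash checks
            if prev_hash == digest(self.key, self.pending):
                self.committed.append(self.pending)
            # else: discard it and roll back the already-processed (uncommitted) action
        self.pending = msg                    # start processing msg while waiting for its hash

tx, rx = PiggybackSender(b"k"), PiggybackReceiver(b"k")
for m in (b"req A", b"req B", b"req C"):
    rx.receive(tx.send(m))
assert rx.committed == [b"req A", b"req B"]   # "req C" is still waiting for its hash
```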
Returning to the HT-detection stage of Guo et al. [41], the first countermeasure applies a randomized permutation to the data so that it is hard for the HT to change the final destination address. The second countermeasure ensures the integrity of data by detecting message removal from the system. After these three steps, there will be trusted and untrusted cells. As an extra layer of security, the authors propose static and dynamic partitioning of cells, resulting in traffic isolation.

| Security | Attack | Countermeasure | Overhead |
|---|---|---|---|
| Confidentiality | Eavesdropping via malicious optical station | One-time pad with changing keys [14] | <1.6% / NA |
| | Eavesdropping of control signals via electrical interconnect in hybrid optical NoC | Encryption scheme with process variation of optical network as key [89] | <1% / Low |
| | Snooping via HT in MR tuning circuit | Process-variation-based authentication signature [29, 30] | 14.6% energy-delay product |
| | Timing side channel by resource contention | Restricting local traffic from malicious cell & static and dynamic partitioning [41] | NA |
| Integrity | Unauthorized packet modification | Hashing with piggybacking the tag [14] | <1.6% / NA |
| | Tampering critical control data at optical router | Checking and correcting raw data and security status detection using ML [89] | 2.6% on energy |
| Availability | DoS by blackhole, sinkhole and flooding attacks | HT detection, localization and mitigation [41] | NA |
| Authenticity | Spoofing by impersonating sender and receiver via malicious IP | No countermeasure is given in [14] | NA |
| | Spoofing source identifier data in control messages | Using hash at electrical NI [89] | NA |
| Freshness | Replay attack via malicious optical station | Counter-based solution [14] | <1.6% / NA |

Table 3. Summary of attacks and countermeasures in optical NoCs categorized by security concepts.

#### 5.2.4. Authenticity

Although the spoofing attack on optical NoCs is discussed in [14], no countermeasure is provided to overcome the proposed attack. Zhou et al. [89] discuss a countermeasure for their proposed spoofing attack on the thermal-sensing process. The module for the countermeasure is implemented on the electrical NI. They propose using a hash algorithm to ensure authenticity. When sending the packet to the collection processing node from the electrical NI, the module appends a 128-bit tag.
When the collection node receives the packet, it validates the tag with the same algorithm and then proceeds with its actions. MD5 (Zhou et al., 2017) is proposed as the hashing algorithm by the authors.

#### 5.2.5. Freshness

Bashir et al. [14] ensure freshness of the messages through a counter-based solution. This solution detects if a message is removed from the system, or removed and reinserted after a considerable delay. Therefore, it can detect the proposed replay attack. This solution also utilizes the FIFO property of the optical waveguide. Every sender maintains a major and a minor counter. It sends both counter values to the destination every \(n\) cycles. If the respective counter values at the destination match, it sends back a known response message. If the sender does not receive the corresponding response within a threshold time limit, it flags a possible message removal from the system. The authors define these parameters of the algorithm based on simulation results. This solution has minimal area and performance overhead according to the experimental results.

Table 3 shows the summary of the surveyed papers attacking confidentiality, integrity, availability, authenticity and freshness in optical NoCs and the corresponding countermeasures. The first column shows the main security concept. The second column outlines the threat model. The third and fourth columns provide the countermeasure and the associated overhead in terms of area and power.

## 6. Conclusion

NoC is the core component responsible for communication in SoCs with a large number of tiles. In this paper, we perform a comprehensive survey of attacks, threat models and countermeasures across diverse on-chip communication technologies (electrical, optical, wireless, and hybrid) and architectures. The primary goal of this survey is to provide the reader with a clear insight into the technologies and recent developments in NoC communication security. The survey also provides an extensive summary of the architecture and technology primitives that are essential for understanding NoC security. The survey contains state-of-the-art security attacks and countermeasures from both industry and academic perspectives. The discussion of attacks and countermeasures is divided into six security areas. We believe there are several NoC security challenges that need to be addressed in the near future. For example, emerging technologies can introduce vulnerabilities through inherent side channels. Similarly, it is possible to attack NoC security using machine learning techniques. While communication architecture and NoC security have been independently studied, designing them together would lead to secure and robust on-chip communication architectures. Future NoCs need to support sophisticated SoCs in diverse application domains implemented using novel technologies. This will lead to opportunities for exploring new attacks and developing effective countermeasures.
2304.05904
Electrical Characteristics of in situ Mg-doped beta-Ga2O3 Current-Blocking Layer for Vertical Devices
The lack of p-type doping has impeded the development of vertical gallium oxide (Ga2O3) devices. Current blocking layers (CBL) using implanted deep acceptors has been used to demonstrate vertical devices. This paper presents the first demonstration of in situ Mg-doped beta-Ga2O3 CBLs grown using metalorganic chemical vapor deposition. Device structures were designed with in-situ Mg doped layers with varied targeted Mg doping concentrations, which were calibrated by quantitative secondary ion mass spectroscopy (SIMS). The effectiveness of the CBL is characterized using temperature dependent current-voltage measurements using n-Mg-doped-n structures, providing crucial insight into the underlying mechanisms. To further validate the experimental results, a TCAD simulation is performed and the electrically active effective doping is found to be dependent on the Mg-doping density, offering a new perspective on the optimization of CBL performance. Breakdown measurements show a 3.4 MV/cm field strength. This study represents a significant step forward in the development of Ga2O3-based devices and paves the way for future advancements in this exciting field.
Sudipto Saha, Lingyu Meng, A F M Anhar Uddin Bhuiyan, Ankit Sharma, Chinmoy Nath Saha, Hongping Zhao, Uttam Singisetti
2023-04-12T15:21:24Z
http://arxiv.org/abs/2304.05904v1
# Electrical Characteristics of _in situ_ Mg-doped \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) Current- ###### Abstract The lack of p-type doping has impeded the development of vertical gallium oxide (Ga\({}_{2}\)O\({}_{3}\)) devices. Current blocking layers (CBL) using implanted deep acceptors has been used to demonstrate vertical devices. This paper presents the first demonstration of _in situ_ Mg-doped \(\beta\)- Ga\({}_{2}\)O\({}_{3}\) CBLs grown using metalorganic chemical vapor deposition. Device structures were designed with _in situ_ Mg doped layers with varied targeted Mg doping concentrations, which were calibrated by quantitative secondary ion mass spectroscopy (SIMS). The effectiveness of the CBL is characterized using temperature dependent current-voltage measurements using n-Mg-doped-n structures, providing crucial insight into the underlying mechanisms. To further validate the experimental results, a TCAD simulation is performed and the electrically active effective doping is found to be dependent on the Mg-doping density, offering a new perspective on the optimization of CBL performance. Breakdown measurements show a 3.4 MV/cm field strength. This study represents a significant step forward in the development of Ga\({}_{2}\)O\({}_{3}\)-based devices and paves the way for future advancements in this exciting field. In recent years, there has been a growing trend towards developing ultra-wide-bandgap (UWBG) semiconductor materials for advanced power electronic applications as silicon-based technologies have approached their limitations [1, 2, 3]. With its high bandgap, Johnson's figure of merit (JFOM), and Baliga's figure of merit (BFOM), monoclinic \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) has the potential to outperform other wide bandgap devices in terms of switching efficiency and power conversion density [4, 5, 6, 7, 8, 9, 10]. These features make Ga\({}_{2}\)O\({}_{3}\) an attractive option for high voltage and high-power power switching devices and these devices could potentially operate at elevated temperatures. Furthermore, the mature melt-growth technology used for producing high-quality, large-area Ga\({}_{2}\)O\({}_{3}\) wafers and the ability to control n-doping over a large range sets it apart from other UWB materials [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Various high-performance lateral Ga\({}_{2}\)O\({}_{3}\) MOSFETs have been studied over the years with high breakdown voltages and power device figures of merit [21, 22, 23, 24, 25, 26]. However, vertical devices are preferred over lateral geometries for high-voltage and high-power applications since the peak electric field is buried in the bulk to avoid surface effects [27, 20]. Moreover, the breakdown voltage of vertical Ga\({}_{2}\)O\({}_{3}\) metal-oxide-semiconductor field-effect transistors (MOSFETs) can be tuned with the drift layer thickness, i.e., a higher blocking voltage can be achieved without sacrificing chip area [20]. Owing to the lack of shallow p-type dopants for Ga\({}_{2}\)O\({}_{3}\), the performance of vertical Ga\({}_{2}\)O\({}_{3}\) devices is still lagging behind. The high effective hole mass and high activation energy of traditional doping species cause this p-type conductivity problem in Ga\({}_{2}\)O\({}_{3}\)[28, 29, 30, 31, 32, 33]. Researchers have used ion-implanted current blocking layers (CBL) to demonstrate vertical devices [20, 34, 35, 36]. The CBL layers use deep acceptor species [36, 37] to form insulating layers, which are used to engineer the electric field in vertical MOSFETs. 
It forms a potential barrier between the source and the drain and allows current to flow through a desired aperture in the drift layer. Charge compensation by _in situ_ epitaxy offers several advantages over the _ex situ_ ion implantation process. High energy ion implantation and the subsequent thermal annealing process at high temperatures (over 1000\({}^{\circ}\) C for Ga2O3) results in displacement damage in the lattice, and thermal diffusion of the dopants and point defects [38, 39, 40, 41]. _In situ_ epitaxial insulating layer can be grown flexibly, avoiding the conflict of high diffusion of acceptors in Ga2O3 at high temperature and high temperature required for crystal recovery and dopant activation from _ex situ_ ion implantation. Among various acceptors in Ga2O3, Mg and N impurities are usually used because of their deep acceptor nature in Ga2O3 [40, 42]. Mg doping is an attractive choice because of its relatively shallow acceptor level and one of the lowest formation and activation energy compared to other cation-site acceptors from DFT calculation [43, 44]. Recent EPR (Electron Paramagnetic Resonance) measurements have determined that the acceptor transition level of magnesium is located at 0.65 eV above the valence band, instead of the theoretically predicted larger values of \(>\)1 eV [45, 46, 47, 48, 49]. Recent experimental research also shows the semi-insulating properties of Mg acceptors in Ga2O3 as Mg can effectively capture electrons from substrate/epilayer growth interface to reduce the conductivity of n-type Ga2O3 in _in situ_ Mg-doped Ga2O3 thin films [39]. However, an in-depth study of Mg incorporation in _in situ_ Ga2O3 epitaxy and its electrical properties is still lacking. In this study, we investigated the electrical properties of _in situ_ Mg-doped Ga2O3 CBL for future vertical devices. Metalorganic chemical vapor deposition (MOCVD) growth was used for the designed structures. A systematic study of the electrical properties of _in situ_ Mg-doped Ga2O3 CBL is investigated. We report a detailed study on the dependence of electrical behavior on the Mg doping concentration in Ga2O3 films, both at room temperature and as a function of elevated temperature. The _in situ_ acceptor doping of Mg in Ga2O3 will provide versatility for designing and fabricating high-performance vertical Ga2O3 power devices. Three Ga\({}_{2}\)O\({}_{3}\) films were grown on commercial Sn-doped (010) oriented Ga\({}_{2}\)O\({}_{3}\) substrates via MOCVD with three different Mg doping concentrations labeled S1, S2, and S3. The substrate surfaces were first cleaned with acetone, isopropanol, and de-ionized water prior to loading to the growth system. The UID layer and the Mg _in situ_ doped layer were grown using trimethylgallium (TMGa) as the Ga precursor and high-purity O\({}_{2}\) gas for oxidation, and Argon (Ar) as the carrier gas. The Si-doped n\({}^{+}\) layer and the Mg tail layer (UID) were grown following the previously established growth conditions using triethylgallium (TEGa) as gallium precursor [50]. Mg doping was introduced by using bis(cyclopentadienyl) magnesium (Cp\({}_{2}\)Mg) as the precursor. The growth temperature was set at 880 \({}^{\circ}\)C for the layers using TMGa and 700 \({}^{\circ}\)C for the layers using TEGa, and the growth pressure was fixed at 60 Torr. The Mg doping concentration was varied by changing the Mg precursor molar flow rate (S1: 106.6 nmol/min, S2: 25.6 nmol/min and S3: 6.4 nmol/min). 
Under these conditions, average Mg concentrations of 7.15\(\times\)10\({}^{18}\), 8.25\(\times\)10\({}^{17}\), and 2\(\times\)10\({}^{17}\) cm\({}^{-3}\) were obtained for S1, S2, and S3, respectively (Fig. 1(b)). Quantitative secondary ion mass spectroscopy (SIMS) was performed on a multi-layer stack, illustrated in Fig. 1(a), to quantitatively probe the depth profiles of Mg and other impurity elements. This stack was grown on a Fe-doped substrate under the same growth conditions as the S1, S2, and S3 samples. Fig. 1(b) shows the SIMS depth profiles of selected elements, including Mg, H, and C, for the sample layer stack. The Mg concentration in each sub-layer decreases monotonically as the Mg flow rate decreases. The peak Mg concentrations reached in the three layers of Fig. 1(a) are (from top to bottom) 1.5\(\times\)10\({}^{19}\), 1.75\(\times\)10\({}^{18}\), and 3\(\times\)10\({}^{17}\) cm\({}^{-3}\). At the growth temperature of 880 \({}^{\circ}\)C, notable Mg diffusion was observed in the SIMS depth profiles. This diffusion was taken into consideration when designing the device structures, in which a Mg tail layer was identified.

A schematic cross-section of the device structure used for electrical testing is shown in Fig. 2. The epitaxial structures for the three devices were grown on (010) \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) substrates and consisted of (from bottom up) a \(\sim\)0.4 \(\mu\)m Si-doped n\({}^{+}\)\(\beta\)-Ga\({}_{2}\)O\({}_{3}\) layer (4.0\(\times\)10\({}^{19}\) cm\({}^{-3}\)), a \(\sim\)0.25 \(\mu\)m UID layer, a \(\sim\)0.25 \(\mu\)m _in situ_ Mg-doped \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) layer, and a \(\sim\)0.15 \(\mu\)m Mg tail layer, followed by a \(\sim\)0.5 \(\mu\)m Si-doped n\({}^{+}\)\(\beta\)-Ga\({}_{2}\)O\({}_{3}\) layer (1\(\times\)10\({}^{20}\) cm\({}^{-3}\)). The only difference between the three fabricated samples is the Mg doping density: the average Mg concentration is 7.15\(\times\)10\({}^{18}\) cm\({}^{-3}\), 8.25\(\times\)10\({}^{17}\) cm\({}^{-3}\), and 2\(\times\)10\({}^{17}\) cm\({}^{-3}\) for S1, S2, and S3, respectively.

The fabrication started with backside etching using BCl\({}_{3}\) reactive-ion etching (RIE); a total of 1 \(\mu\)m of Ga\({}_{2}\)O\({}_{3}\) was etched in this step. A Ti/Au Ohmic metal stack was deposited using electron beam evaporation, followed by rapid thermal annealing (RTA) in N\({}_{2}\) at 470 \({}^{\circ}\)C for 1 minute. The top Ti/Au/Ni Ohmic contacts were then defined using electron beam lithography, followed by RTA in N\({}_{2}\) at 470 \({}^{\circ}\)C for 1 minute. A BCl\({}_{3}\)/Ar self-aligned RIE was performed to etch \(\sim\)1.2 \(\mu\)m of Ga\({}_{2}\)O\({}_{3}\) and reach the bottom n\({}^{+}\) layer, with the top Ni serving as the etch mask. Finally, bottom Ti/Au Ohmic metal stacks were deposited on the n\({}^{+}\) layer, followed by RTA in N\({}_{2}\) at 470 \({}^{\circ}\)C for 1 minute.

After device fabrication, current density-voltage (J-V) measurements were carried out with an HP 4155B semiconductor parameter analyzer. Two-probe measurements utilizing the top and back metal contacts (shown in Fig. 2(a)) were performed and are referred to as vertical and pseudo-vertical measurements. Before testing, Ohmic behavior was verified between the bottom contact and the back contact. Both the vertical and pseudo-vertical measurements exhibit similar trends in the J-V results discussed in this article.
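As a brief aside on the doping calibration described above, the sketch below divides the average Mg concentrations quoted for S1-S3 by the corresponding Cp\({}_{2}\)Mg molar flow rates. This is illustrative arithmetic on the reported numbers only, not part of the calibration procedure; it suggests that incorporation is roughly proportional to flow at the two lower flows but noticeably enhanced at the highest flow used for S1.

```python
# Cp2Mg molar flow rates (nmol/min) and average Mg concentrations (cm^-3)
# for S1, S2, and S3, as quoted in the text.
samples = {
    "S1": (106.6, 7.15e18),
    "S2": (25.6, 8.25e17),
    "S3": (6.4, 2.0e17),
}

for name, (flow, conc) in samples.items():
    # "Incorporation efficiency" in cm^-3 per (nmol/min); illustrative arithmetic only.
    print(f"{name}: {conc / flow:.2e} cm^-3 per (nmol/min)")

# S2 and S3 give ~3.1-3.2e16 cm^-3 per (nmol/min), while S1 gives ~6.7e16,
# i.e. incorporation scales roughly linearly with flow at the lower flows but
# is enhanced at the highest Cp2Mg flow.
```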
An Auriga AU-5 high-voltage pulsed current-voltage (I-V) setup with a 20 ns rise and fall time and a low duty cycle was used for the pulsed I-V measurements to study the dynamic behavior of the Mg-doped CBL. The pulse width was varied from 1 ms to 10 \(\mu\)s. Before measuring the devices, the system was calibrated at room temperature. In order to investigate the charge-carrier compensation effect in the current blocking layer, standard reverse-biased capacitance-voltage (C-V) measurements were carried out using an Agilent 4294A precision impedance analyzer.

As seen in Fig. 3(a), all the current blocking layer structures show current blocking behavior. However, the current blocking capability varies linearly with the Mg-doping concentration, as seen in Fig. 3(b). With the highest Mg doping concentration (7.15\(\times\)10\({}^{18}\) cm\({}^{-3}\)), S1 has excellent current blocking capability, with a maximum forward current blocking voltage, Fbl, of 19 V and a maximum reverse current blocking voltage, Rbl, of 22.37 V. S3, with the lowest Mg doping concentration (2\(\times\)10\({}^{17}\) cm\({}^{-3}\)), blocks only -0.83 V to +2.18 V. It is therefore evident from the experimental results that higher Mg doping in the CBL results in stronger blocking, which is conducive to the preparation of power devices.

Fig. 4 shows the temperature dependence of the current density (J) vs. voltage (V) characteristics for the S1, S2, and S3 current blocking layer structures. The J-V curves of all three CBL samples show that the forward blocking capability shifts gradually toward lower bias with increasing temperature, reflecting an increasing thermal contribution to electron transport. The reverse current densities also increase almost monotonically with temperature for all three CBLs. This is because electrons gain enough thermal energy at elevated temperatures to surmount the energy barrier, which contributes to the reduction in current blocking voltage. Therefore, as the temperature of the CBL structures increases, the blocking voltage range decreases correspondingly.

Fig. 5 shows the DC and pulsed J-V measurements for the three CBL devices, S1, S2, and S3; negligible dispersion is observed. All the structures were able to block current under pulsed conditions, which rules out the introduction of electrically active traps during _in situ_ doping. The C-V profiles of the three tested CBLs are displayed in Fig. 6(a). The current blocking portions of all the devices exhibit almost flat C-V characteristics, supporting the idea that the current blocking regions of the devices with various Mg-doping concentrations are fully compensated and the free charge concentration is very low.

To observe breakdown, reverse bias was applied to a few devices each from the S1 and S2 samples. Devices in the S1 sample show destructive breakdown between 45 V and 59 V. Similarly, S2 devices show destructive breakdown between 28 V and 83 V. Figures 6(b) and 6(c) depict the reverse current density-voltage characteristics of the S1 and S2 CBLs, respectively. The maximum destructive breakdown is seen for S1 at a breakdown voltage (Vbr) of -59 V and for S2 at Vbr = -83 V, giving average field strengths of 2.4 MV/cm and 3.4 MV/cm, respectively.

An ATLAS SILVACO simulation was carried out to understand the blocking characteristics. The material parameters used in the simulation are listed in Table I. The SRH recombination model, Auger recombination model, and Lombardi model were all utilized in the simulation.
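Before turning to the simulation results, a short sketch of where the average field strengths quoted above come from: the breakdown voltage divided by the thickness assumed to support the bias. Here the \(\sim\)0.25 \(\mu\)m Mg-doped layer is taken as that thickness, which is an assumption on our part, so small discrepancies with the reported 2.4 and 3.4 MV/cm are expected.

```python
T_CBL_CM = 0.25e-4  # assumed voltage-supporting thickness (~0.25 um Mg-doped layer), in cm

# Destructive breakdown voltages |Vbr| (V) for S1 and S2, as quoted in the text.
breakdown_v = {"S1": 59.0, "S2": 83.0}

for name, vbr in breakdown_v.items():
    e_avg_mv_cm = vbr / T_CBL_CM / 1e6  # average field in MV/cm
    print(f"{name}: |Vbr| = {vbr:.0f} V -> E_avg ~ {e_avg_mv_cm:.1f} MV/cm")

# Gives ~2.4 MV/cm for S1 and ~3.3 MV/cm for S2, consistent within the thickness
# assumption with the 2.4 MV/cm and 3.4 MV/cm values reported above.
```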
The impact ionization parameters for \(\beta\)-Ga\({}_{2}\)O\({}_{3}\) were taken from a detailed first-principles theoretical study [51]. First, the doping concentration of the Mg-doped layer was tuned so that the simulated current-blocking regions of the three structures match the experimental current-blocking capability, as shown in Fig. 8. The TCAD-simulated S1 structure with a Mg doping concentration of 1.35\(\times\)10\({}^{17}\) cm\({}^{-3}\) matches the fabricated current blocking characteristics of S1, which has a mean Mg doping concentration of 7.15\(\times\)10\({}^{18}\) cm\({}^{-3}\); this leads us to the concept of effective doping. The effective doping is only 1.88% of the targeted doping of the Mg-doped layer. A similar trend is also found for S2 and S3. The effective doping fraction is largest for S3 (11.25%) and decreases gradually as the Mg-doping concentration increases (Table II). A full understanding of the effective doping requires more careful studies; the drastic drop of the effective doping at higher Mg doping could be due to the formation of compensating donor defects [28] and the incorporation of Mg at interstitial sites.

The conduction band diagrams of the three CBL samples in Fig. 7(a) show that S1 has a higher barrier than the S2 and S3 samples. Owing to its highest Mg-doping concentration, the Fermi level of S1 is closer to the valence band than in S2 and S3, which results in a larger band bending under equilibrium when the Fermi levels of the n\({}^{+}\) and Mg-doped layers align. As shown in Fig. 2, an Mg tail region of 150 nm lies between the top n\({}^{+}\) and the Mg-doped region due to the high Mg diffusivity in Ga\({}_{2}\)O\({}_{3}\); the TCAD device structure accordingly includes a graded Mg-doped region of 150 nm. As seen in Fig. 7(b), at zero bias the band bending is most prominent at the n\({}^{+}\)/Mg-doped junction; the Mg activation is much higher at the interface than in the central region of the Mg-doped layer, where the bands are mostly flat. This trend is found in all three CBL structures. The fabricated S1 sample, with a mean Mg doping concentration of 7.15\(\times\)10\({}^{18}\) cm\({}^{-3}\), blocks -22.37 V to 19 V at room temperature. As the Mg concentration decreases, the blocking range also decreases (Fig. 8), which can be explained by the fact that at zero bias the conduction band barrier between the n\({}^{+}\) and the Mg-doped region decreases with decreasing doping density, as can be seen in Fig. 6(b), where the red, blue, and green curves represent the 7.15\(\times\)10\({}^{18}\), 8.25\(\times\)10\({}^{17}\), and 2\(\times\)10\({}^{17}\) cm\({}^{-3}\) doping concentrations, respectively.

In conclusion, this study represents the first successful fabrication of current blocking layers (CBLs) for vertical gallium oxide (Ga\({}_{2}\)O\({}_{3}\)) devices using _in situ_ Mg doping at various concentrations. The electrical behavior of the devices was investigated, and the dependence of the current-voltage characteristics on temperature and Mg doping concentration was discussed. The results clearly demonstrate that the Mg doping concentration plays a crucial role in determining the current-blocking range, with higher doping concentrations leading to improved current blocking. A TCAD SILVACO simulation was conducted to tune the Mg-doping concentration so that the simulated current-voltage characteristics match the experimental ones, and the idea of effective doping was presented.
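For reference, the effective-doping fractions discussed above follow from simple arithmetic on the quoted values; the minimal sketch below reproduces the S1 fraction and the S3 effective acceptor density. The fitted S2 value is not listed in the text (see Table II) and is therefore not computed here.

```python
# Effective vs. targeted Mg doping, using only values quoted in the text.

# S1: TCAD-fitted effective acceptor density 1.35e17 cm^-3 vs. targeted 7.15e18 cm^-3.
s1_fraction = 1.35e17 / 7.15e18
print(f"S1 effective-doping fraction: {s1_fraction:.2%}")  # ~1.9% (quoted as 1.88%)

# S3: quoted effective-doping fraction of 11.25% of the targeted 2e17 cm^-3.
s3_effective = 0.1125 * 2.0e17
print(f"S3 effective acceptor density: {s3_effective:.2e} cm^-3")  # 2.25e16 cm^-3
```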
As the targeted Mg doping concentration increases, the effective doping fraction decreases as donor-type defects become more significant. These findings indicate that _in situ_ Mg doping of the current blocking layer is a viable route to high-performance Ga\({}_{2}\)O\({}_{3}\) vertical devices and will benefit the Ga\({}_{2}\)O\({}_{3}\) device development effort. Further improvements can be obtained by optimizing the growth conditions for lower defect formation. This study is an important contribution to the power electronics community and will be of great interest to researchers and practitioners alike.

## Acknowledgments

We acknowledge support from AFOSR (Air Force Office of Scientific Research) under award FA9550-18-1-0479 (Program Manager: Ali Sayir), from NSF under award ECCS 2019749, and from the II-VI Foundation Block Gift Program. This work used the electron beam lithography system acquired through NSF MRI award ECCS 1919798.

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.